Artificial General Intelligence – are we seeing it now?

Artificial General Intelligence (‘AGI’) yet?

Are there any signs of Artificial General Intelligence in what we are seeing now in Generative AI, LLMs etc?  Any emerging signs of the holy grail? Or do we need to rethink or confirm what we mean by AGI?

AGI not yet – it seems

Interesting piece in Techcentral last week (13 Oct 2023) – featuring an interview with Prof. Barry O’Sullivan, one of Ireland’s leading AI experts. He is quoted as saying: ‘These systems are still extremely limited. The primary challenges of AI still stand. While it has been a great year, it’s not a solved problem, most certainly. Far from it. These systems can’t reason, they can’t really do mathematics, and they don’t really have an understanding of the world’. The discussion then moves on to some of the irresponsible arguments about existential threats, which arise from a lack of understanding of the current tools and their capacity. I posted previously on the contrasting views of Hinton and LeCun on existential threats from AI.

AGI now? – or what does it mean?

Almost the same day I read an interesting paper published by the Berggruen Institute, authored by Blaise Aguera y Arcas of Google and Peter Norvig of Stanford University. And their headline:

‘Artificial General Intelligence Is Already Here – Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence’.

They argue that the new ‘frontier’ models ‘have achieved artificial general intelligence in five important ways: topics, tasks, modalities, languages, instructability’ – and they flesh out each of these in the paper.

They attribute the reluctance of many commentators to acknowledge this to one or more of four reasons:

  1. A healthy skepticism about metrics for AGI
  2. An ideological commitment to alternative AI theories or techniques
  3. A devotion to human (or biological) exceptionalism
  4. A concern about the economic implications of AGI

So – what does this mean for society and/or users of these tools?

I have a sense that we are now using tools which, even with their limitations, are offering huge opportunities for progress and change. The reality of most technological change to date will continue, I suspect: there will be winners and losers. Will we come to another hiatus, another ‘AI winter’? I don’t think so this time. And we need the likes of Prof. Barry O’Sullivan, Blaise Aguera y Arcas and Peter Norvig – and many others – looking down the road to see where we are headed.

Loving the public library – learning to love 1,000s of books

Nice piece in today’s Sunday Times, ‘The library is so much more than just books’ by Sarah Breen. It reminded me of so much of my growing up and my fascination with public libraries – and their content: BOOKS.

[Image: Drumcondra library]

Every other Saturday morning Dad (sometimes Mum) took us (the older siblings) to the Drumcondra Public Library. Each of us could take out two books – and had two weeks to read them. We had to take our books from the children’s section of the library – already feeling we were missing out on something in the adult section. And of course Dad also took out books (and eventually music tapes too).

[Image: Phibsborough library]

I graduated to the adult library in Phibsborough (always liked that building) and the mobile library which used to visit the shopping centre in Cabra weekly.

My fascination with libraries continued: the school library in Belvedere College (where Ulysses was kept behind the counter). I spent many hours there – always liable to be distracted from the core subjects by the variety of the well-stocked shelves.

[Image: 1937 Reading Room]

In Trinity there were multiple libraries: the 1937 Reading Room, the Arts Block, the Law Library, the Science Library. And apart from some study and books, the social element was also critical (engineers meeting students from other callings).

As parents, we both looked to instill a love and appreciation of libraries (and their BOOKS) in all of our children – with varying success. But all of them would have spent significant time studying in libraries – and I suspect there has always been a social side too.

As Sarah Breen says: ‘For adults, you can use your library card to access newspapers and magazines, listen to audiobooks, learn a language and join a club. It’s a place to meet people or avoid them, connect or disconnect entirely’. The service has moved on with the times – and can be of great benefit to all ages.

In his recent book ‘Knowing What We Know – The Transmission of Knowledge: From Ancient Wisdom to Modern Magic’, Simon Winchester dedicates Section 2 (‘Gathering the Harvest’) to a review of the great libraries of world history – and the attempts by various invaders to destroy them. If you read this you may think of your local library with a great deal more respect.

Psychological safety at work – 3 musts to achieve the basics

Just came across this piece by McKinsey: ‘What is psychological safety?’ It rang a few bells for me – in the context of change, post COVID, business reorganisation, major projects. And also in the context of pursuits outside work, e.g. finding your level in a cycling group, coaching a football team, building new relationships. Psychological safety at work is simply a sine qua non.

Definition

‘Psychological safety means feeling safe to take interpersonal risks, to speak up, to disagree openly, to surface concerns without fear of negative repercussions or pressure to sugarcoat bad news’. That seems a reasonable definition for many different settings. I think the responsibility of the manager or supervisor to be available to facilitate ‘speaking up’ – in different situations – is often overlooked. Maintaining a very busy status all the time is tantamount to killing the safe psychological space.

What is the reality?

Per McKinsey: ‘Psychological safety is not a given and it is not the norm in most teams’. If you believe that psychological safety is important for the individual and important to the development and sustainability of the organisation, then this assessment should be of great concern to any organisation finding itself in this position.

Leadership development

I have always thought the first basic requirement for any effective manager is to take an interest in team members. There should be time to ask how things are going, how the weekend was, how the family are – or whatever works for some genuine interaction and listening. I think McKinsey is right on the requirement for ongoing leadership development:

    • Go beyond one-off training programs and deploy a scaled system of leadership development. 

    • Invest in leadership development experiences that are emotional, sensory, and create moments of realisation.

    • Build mechanisms to make development a part of leaders’ day-to-day work.

Again, if people are your number one asset, and if providing a psychologically safe environment and experience is a priority, then failing to invest in the development of these skills across the leadership team is, simply, failure.

Mental health

It now seems to be on everyone’s agenda. Some of the stigma associated with talking about mental health challenges seems to be dissipating (though it is far from gone). McKinsey identify a number of practical steps – and I think the ongoing changes post COVID, the shift to hybrid work and the impact of AI will all drive greater requirements to understand and manage mental health.

Lower earners

Lots of good sense in this paper from McKinsey. But this last piece really caught my attention. A workforce is made up of people of different abilities, education, ages, career directions and earnings. But all need psychological safety – all are needed to make the business work. And perhaps in the lower-earning group there are greater challenges and insecurities – we need to be aware of this and act accordingly. In a different setting, the backs may not be making the money the forwards make, but you need the whole team. In fact, when the backs let you down the cookie crumbles pretty quickly.

Hinton on AI and the existential threat

Is AI as smart as us?

[Image: Geoffrey Hinton]

Interesting recent interview with Geoffrey Hinton (the ‘godfather of AI’): ‘they still can’t match us but they are getting close… they can do little pieces of reasoning’. It’s not ‘just autocomplete or statistics’. ‘It’s inventing features and interactions between features to tell what comes next’. He reviews the dangers and, in particular, the existential threat.

‘We are just a machine… just a neural net… no reason artificial nets cannot do what we do’. We are much more efficient in our use of energy, but the machines are more efficient at learning from data.

Differing views – Hinton and Lecun

We are entering a period of huge uncertainty. Yann LeCun has a different view to Hinton. If the machines end up smarter than us and decide they want to take control – then we are in trouble. For LeCun, AI has been built by humans and will have a bias towards good; per Hinton, it depends on whether it is made by good people.

If you can send battle robots to attack, it becomes easier for rich countries to attack or invade poorer countries.

Hinton the socialist?

Large language models will cause a big increase in productivity. Hinton gave the example of answering letters and complaints for a health service: one person can do five times as much work. If we get a big increase in productivity, the wealth will go to making the rich richer – particularly in a society that does not have strong unions.

Big chatbots will replace people whose jobs involve producing text. How do we know more jobs will be created than lost?

The jobs that survive will be those where you have to be very adaptable and skilful, e.g. plumbing (working in awkward spaces). And what about reasoning?

Multimodal AI

The most impactful development in AI over the next five years will be multimodal language models – extending to video (e.g. YouTube videos). Yann LeCun would say that language alone is too limited. Models will soon combine multiple modalities, attaching visual AI to text AI (cf. Gemini at Google).
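
To make ‘attaching visual AI to text AI’ concrete, here is a minimal NumPy sketch of one common pattern: project a vision encoder’s patch features into the language model’s embedding space and feed them in alongside the text tokens. Every dimension and weight here is made up for illustration – this shows the general shape of the idea, not Gemini’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a vision encoder emitting 196 patch features of
# width 768, and a language model whose token embeddings have width 1024.
n_patches, d_vision, d_model = 196, 768, 1024

image_features = rng.normal(size=(n_patches, d_vision))  # stand-in for a vision encoder's output
W_proj = rng.normal(size=(d_vision, d_model)) * 0.02     # learned projection (random here)

# Project the image patches into the language model's embedding space...
image_tokens = image_features @ W_proj

# ...and prepend them to the embedded text tokens, so the model attends
# over both modalities as a single sequence.
text_tokens = rng.normal(size=(12, d_model))             # stand-in for an embedded text prompt
multimodal_sequence = np.concatenate([image_tokens, text_tokens], axis=0)

print(multimodal_sequence.shape)  # (208, 1024): image and text in one stream
```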

Thoughts on development of AI

The Transformer architecture was invented at Google and announced in a paper in 2017 (‘Attention Is All You Need’). Bard was delivered by Google some years later – and Hinton himself took a couple of years to realise the significance of the Transformer.
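
The core computation announced in that 2017 paper is scaled dot-product attention: each position’s query is compared against every key, and the resulting weights mix the values. A toy NumPy sketch (single head, no masking, random inputs) of just that operation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from 'Attention Is All You Need' (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_k = 5, 16
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 16)
```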

If we keep training AI on data created by AI, what will the impact be? Hinton says he does not know the answer. It would be much easier if all fake data were marked as fake data.
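
As a toy illustration of why that marking would help: if every document carried a provenance flag (the ‘synthetic’ field below is hypothetical – real web data rarely has one, which is exactly the problem), keeping AI-generated text out of a training corpus would be trivial.

```python
# Tiny made-up corpus with a hypothetical provenance flag per document.
corpus = [
    {"text": "Report drafted by a journalist.", "synthetic": False},
    {"text": "Article generated by a chatbot.", "synthetic": True},
    {"text": "Forum post written by a person.", "synthetic": False},
]

# Filtering synthetic text out of the training set becomes a one-liner.
human_only = [doc["text"] for doc in corpus if not doc["synthetic"]]
print(human_only)
```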

How could you not love making intelligent things? We want experimentation but we do not want more inequality. How do we limit the potential harms?

Top 6 harms

  • Bias and discrimination – present now, but relatively easy to fix: aim for a system which is significantly less biased than the system it is replacing; analyse the bias and correct it (see the sketch after this list).
  • Battle robots – will be built by defence departments. How do you stop them? Some form of Geneva Convention?
  • Joblessness – try to ensure the increase in productivity helps the people who lose their jobs. Some form of socialism may be needed.
  • Warring echo chambers – big companies want you to click on things and to make you more indignant, so you begin to believe conspiracy theories. This is a problem to do with AI generally – not just LLMs.
  • Existential risk – it is important to understand that this is not just science fiction or fearmongering. If you have something a lot smarter than you which is very good at manipulating people, do people stay in control? We have a very strong sense of wanting to achieve control, and an AI may derive control-seeking as a way of achieving its other goals. Yann LeCun argues that the good people will have more resources than the bad people. Hinton is not so sure – not convinced that good AI will win over bad AI.
  • Fake news – we need to be able to mark everything that is fake as fake, and to look at how to do this with AI-generated material.
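
On the first point above, ‘analyse the bias and correct’ starts with measurement. A minimal sketch of one standard first check – comparing selection rates across groups (demographic parity) – using made-up loan decisions; the helper and data are illustrative only:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group: a simple first-pass bias check."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        total[group] += 1
        approved[group] += outcome
    return {g: approved[g] / total[g] for g in total}

# Hypothetical decisions: (group, 1 = approved, 0 = rejected).
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = selection_rates(decisions)
print(rates)                                   # {'A': 0.75, 'B': 0.25}
print("parity gap:", rates["A"] - rates["B"])  # a large gap flags bias to correct
```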

Managing the risks

How can we limit the risk? Before AI becomes super-intelligent we can do empirical work to understand how it might go wrong or take control away. Governments could encourage companies to put more resources into this. Hinton has left Google to participate in this discussion and research.

How do we make AI more likely to be good than bad? There are great uses: medicine, climate change, etc. But we need to put effort into understanding the existential risks.