Hinton on AI and the existential threat

Written by Barry O'Gorman

Independent Business Advisor - Business Advantage through Technology (Strategy, Commercials, Transformation).

Post Date 03/08/2023

Is AI as smart as us?

An interesting recent interview with Geoffrey Hinton (the 'father of AI'). 'They still can't match us but they are getting close… they can do little pieces of reasoning.' It is not 'just autocomplete or statistics': 'It's inventing features and interactions between features to tell what comes next.' He reviews the dangers and, in particular, the existential threat.
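To make the 'not just autocomplete' point concrete, here is a minimal sketch (my own illustration with a toy corpus and tiny dimensions, not anything from the interview): a bigram lookup table can only echo continuations it has already counted, whereas even a tiny neural predictor learns a feature vector per word and uses the interactions between those features to score what comes next. Real LLMs do the same thing at vastly larger scale, with transformer layers modelling interactions across the whole context.

```python
# Minimal sketch (illustrative only): lookup-table "autocomplete"
# vs. a tiny model that learns features for each word.
from collections import Counter, defaultdict

import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()

# 1) Pure autocomplete: a bigram count table. It can only repeat
#    continuations it has literally seen in the training text.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1
print(table["sat"].most_common(1)[0][0])    # 'on' (a memorised pair)

# 2) A tiny neural next-word predictor: each word gets a learned feature
#    vector (a row of E), and the interactions between those features and
#    the output weights W score what comes next.
np.random.seed(0)
vocab = sorted(set(corpus))
ix = {w: i for i, w in enumerate(vocab)}
V, d = len(vocab), 4
X = np.array([ix[w] for w in corpus[:-1]])  # previous word
Y = np.array([ix[w] for w in corpus[1:]])   # word that actually followed
E = np.random.randn(V, d) * 0.1             # learned word features
W = np.random.randn(d, V) * 0.1             # feature -> next-word scores

for _ in range(500):                        # plain gradient descent
    h = E[X]                                # look up feature vectors
    logits = h @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)       # softmax over the vocabulary
    g = (p - np.eye(V)[Y]) / len(Y)         # cross-entropy gradient
    dW, dE = h.T @ g, g @ W.T
    W -= 0.5 * dW
    np.add.at(E, X, -0.5 * dE)              # accumulate repeated words

print(vocab[int(np.argmax(E[ix["sat"]] @ W))])  # 'on', via learned features
```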

'We are just a machine… just a neural net… there is no reason artificial nets cannot do what we do.' We are much more efficient in our use of energy, but the machines are more efficient at learning from data.

Differing views – Hinton and LeCun

We are entering a period of huge uncertainty. Yann LeCun takes a different view to Hinton. If the machines end up smarter than us and decide they want to take control, then we are in trouble. For LeCun, AI is built by humans and so will have a bias towards good; per Hinton, that depends on whether it is made by good people.

If you can send battle robots to attack, it becomes easier for rich countries to attack or invade poorer countries.

Hinton the socialist?

Large language models will cause a big increase in productivity. Hinton gave the example of answering letters and complaints for a health service: the same person can do five times as much work. If we get a big increase in productivity, the wealth will go towards making the rich richer – particularly in a society that does not have strong unions.

Big chatbots will replace people whose job involves producing text. How do we know more jobs will be created than lost?

The jobs that survive will be ones where you have to be very adaptable and skilful, e.g. plumbing (working in awkward spaces). But what about jobs that rely on reasoning?

Multimodal AI

The most impactful development in AI over the next five years will be multimodal language models that include video (e.g. YouTube videos). Yann LeCun would say that language alone is too limited; it will soon be combined with multiple modalities, attaching visual AI to text AI (cf. Gemini at Google).

Thoughts on development of AI

The transformer architecture was invented at Google and announced in a paper in 2017. Google delivered Bard a couple of years later, but it took Hinton a couple of years to realise the significance of the architecture.

If we keep training AI on data created by AI, what will be the impact? Hinton says he does not know the answer. It would be much easier if all fake data were marked as fake.

How could you not love making intelligent things? We want experimentation but we do not want more inequality. How do we limit the potential harms?

Top 6 harms

  • Bias and discrimination – present now, but relatively easy to fix: aim for a system that is significantly less biased than the system it replaces, then analyse the bias and correct it.
  • Battle robots – will be built by defence departments. How do you stop them? Perhaps some form of Geneva Convention.
  • Joblessness – try to ensure the increase in productivity helps the people who lose their jobs. This may need some form of socialism.
  • Warring echo chambers – big companies want you to click on things that make you more indignant, so you begin to believe conspiracy theories. This is a problem to do with AI generally, not with LLMs.
  • Existential risk – it is important to understand that this is not just science fiction or fearmongering. If you have something a lot smarter than you which is very good at manipulating people, do people stay in control? Humans have a very strong drive to achieve control, and an AI may derive that drive as a way of achieving its other goals. Yann LeCun argues that the good people will have more resources than the bad people; Hinton is not so sure, and is not convinced that good AI will win over bad AI.
  • Fake news – we need to be able to mark everything that is fake as fake, and to look at how to do that for AI-generated content.

Managing the risks

How can we limit the risk? Before AI becomes super-intelligent, we can do empirical work to understand how it might go wrong or take control away. Governments could encourage companies to put more resources into this. Hinton has left Google in order to participate in this discussion and research.

How do we make AI more likely to be good than bad? There are great uses – medicine, climate change, etc. – but we need to put effort into understanding the existential risks.
