Thinking AI – why is AI so hot? (AI -1)

Artificial Intelligence is not new

So, why is AI so hot? What is driving the market? What has changed? (As an undergraduate engineer in Trinity College Dublin in the early 80s I was learning LISP and writing programmes in Pascal to do basic image recognition, limited to recognising geometric shapes.) We have had Natural Language Processing solutions and Robotics since the 1950s, Computer Vision solutions since the 1960s and Expert Systems since the 1970s.

Machine Learning through to Generative AI

Machine learning (building systems that can learn from data) initially emerged in the 1950s. Within this field there has been very significant progress over the last 60 years in neural networks (designed to mimic the neuron structures of the brain) and, more recently, in the processing power to support their deployment. We have seen large-scale deployment of neural networks within various AI solutions (including NLP, Computer Vision, Expert Systems and Robotics). In the last 18 months the excitement has centered on Generative AI solutions, creating new data – text, image, sound – based on training data sets.

AI for everyone

When I was learning LISP, artificial intelligence seemed to be something limited to programmers. Now people have ChatGPT on their phones – with a simple-to-use interface, access to limitless amounts of information and the processing power to deliver real-time answers.

I remember concerns when internet access was being rolled out in corporates – how will we prevent people spending all their time scrolling through websites? Web 2.0 brought even more concerns with social media platforms and the read/write web. As a consultant and a CIO I was often pulled into discussions about ‘shadow IT’. Now we have ‘shadow AI’ – ChatGPT and its competitors being used widely.

How do we leverage AI without throwing out the baby with the bathwater?

We are putting together a number of posts re Artificial Intelligence to provide background information, context and a framework for evaluating modern AI’s relevance and potential deployment in your organisation. Like the internet, it’s not going away. But what are the things in your business that you might do differently, better, more efficiently using some of these tools and platforms? And how will you do this without damaging your business or your team?

Other AI posts:

Human centered AI – Dr. Fei-Fei Li

Artificial General Intelligence – are we seeing it now?

Hinton on AI and the existential threat

 

Human centered AI – promoted by Dr. Fei-Fei Li

Dr. Fei-Fei Li: The Worlds I See – Curiosity, Exploration and Discovery at the Dawn of AI

 

Dr. Fei-Fei Li is one of the academics very much at the centre of developments in human centered AI over the last 15 years. She is currently a Professor of Computer Science at Stanford University. She is probably best known for her work on ImageNet (https://www.image-net.org/) while at Princeton University (she developed a large-scale, structured database used to improve object recognition algorithms – core to the development of deep learning solutions in AI).

The book neatly intertwines three themes: the immigrant story of the young Chinese girl and her parents making their way in the US, the emergence of artificial intelligence from the 1950s through to the present day and Dr. Fei-Fei Li’s own role in and contribution to human centered artificial intelligence.

 

Human centered revolution

The book opens with her arriving to testify at the House Committee on Science, Space, and Technology on the topic of artificial intelligence, June 26, 2018. Her own thoughts ahead of the Committee were: ‘I had one idea to share today, and I repeated it in my thoughts like a mantra. It matters what motivates the development of AI, in both science and industry, and I believe that motivation must explicitly center on human benefit’. And she was clear on the scale of change: ‘I believe our civilization stands on the cusp of a technological revolution with the power to reshape life as we know it… This revolution must, therefore, be unequivocally human-centered’.

Immigrant story

The immigrant story is yet another reminder of the contributions made by immigrants in all societies.  And she has a number of interesting insights. ‘What made the work draining was the uncertainty that hangs over the immigrant experience. I was surrounded by disciplined, hardworking people, all of whom had stories like mine, but none who’d escaped the cycle of scarcity and menial labor to which we seemed consigned. We’d come to this country in the hopes of pursuing opportunities that didn’t exist elsewhere, but I couldn’t see a path leading anywhere near them. As demoralizing as our circumstances could be, however, the lack of encouragement within our community was often worse’.

She recalls one case of an immigrant being assaulted and her own helplessness to assist: ‘I wanted to say something, even if it was nothing more than a single-word plea for the violence to stop, but I noticed something strange: in the confusion of the moment, I didn’t know which language to use’

The immigrant story also has so many positives – the openness of teachers, the opportunities, the huge support and encouragement of one teacher and his family, her parents getting going in work. ‘There were moments that I had to step back and simply watch. These were the people I grew up with in China: strong, resourceful, impressive. It’d been far too long since I’d seen them. I was proud to witness their return’.

Development of AI

The history of developments in artificial intelligence is well documented in many places. But Fei-Fei Li captures the momentum and the hiatuses – from Turing (‘Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?’) to McCarthy, Minsky, Rochester and Shannon (Dartmouth), Feigenbaum (knowledge engineering), Rosenblatt (perceptron), Hubel and Wiesel (visual cortex of a cat), Fukushima (multiple perceptrons), Rumelhart and Hinton (backpropagation) and many more.

Academic development

And then her own academic development. The difference between Chinese and US school styles (moving between classrooms). Her first-hand experience of discrimination against girls in education (‘I asked the girls to leave because the time has come to tell you that your performance as a group is unacceptable. As boys, you’re biologically smarter than girls’).

Fei-Fei Li’s original love was physics – but she notes from history how many great physicists became fascinated by biology.  She develops this interest in the brain and has the opportunity while an undergraduate to participate in a key research project at UC Berkeley. And eventually computers and computer science attract her attention – leading ultimately to this combination of neuroscience/ cognitive science and computer science.

Light

Chapter 5 is a great explanation of the importance of light and vision in the development of the human brain. ‘The perception of light was the first salvo in what would become an evolutionary arms race in which even the slightest advantage — a nominal boost in depth or a near-imperceptible increase in acuity — could propel its lucky possessor and its progeny to the front of the pack in an eternal hunt for food, shelter, and suitable mates’. And ‘Intrinsic to this astonishing progression, even now, is our sensory connection to the world.’

Scientist

We see the scientist at work and her original thinking. There was so much focus on the development of brilliant algorithms – but Fei-Fei Li’s contribution was to realise the importance of data – data to be used to train, test and ultimately improve these algorithms. We also see her persistence – when, having developed one dataset, she realised the requirement for a much larger data set (‘Biederman’s number — a potential blueprint for what our ambitions as researchers demanded — was big. Really big. It wasn’t 1,000, 2,000, or even 5,000. And it certainly wasn’t the 101 we spent months cataloging. It was 30,000’).

And the initial disappointment when expected improvements did not occur. But if at first you don’t succeed, try again – and she did. ‘ImageNet was more than a data set, or even a hierarchy of visual categories. It was a hypothesis — a bet — inspired by our own biological origins, that the first step toward unlocking true machine intelligence would be immersion in the fullness of the visual world’. ‘The winner was dubbed AlexNet, in homage to both the technique and the project’s lead author, University of Toronto researcher Alex Krizhevsky.’

Human dignity and human centered

And there are other very significant research projects – both at Google and Stanford. But what really captured my attention was the feedback – from her mum in hospital: ‘You know, Fei-Fei,’ she said softly, ‘being a patient… it’s just horrible… It’s not just the pain. It’s the loss of control. It was like my body, even my mind, didn’t belong to me in that room. There were all these strangers — doctors and nurses, I know, but they’re strangers to me — and that expectation to follow their every order… It just became intolerable… My dignity was gone. Gone.’ And from this her clear conclusion: ‘But the deepest lesson I’d learned was the primacy of human dignity — a variable no data set can account for and no algorithm can optimize. That old, familiar messiness, reaching out to me from behind the weathered lines and weary eyes of the person I knew best and cared for the most’.

Li is confident that we can get AI right – though not without risks. She reminds us: ‘The common denominator to all of this, whether it’s addressing the bias in our data or safeguarding patients in hospitals, is how our technology treats people. Their dignity, in particular. That’s the word I keep coming back to. How can AI, above all else, respect human dignity? So much follows from that.’

The future

She concludes on a cautious, but positive, note: ‘The future of AI remains deeply uncertain, and we have as many reasons for optimism as we do for concern. But it’s all a product of something deeper and far more consequential than mere technology: the question of what motivates us, in our hearts and our minds, as we create. I believe the answer to that question — more, perhaps, than any other — will shape our future. So much depends on who answers it. As this field slowly grows more diverse, more inclusive, and more open to expertise from other disciplines, I grow more confident in our chances of answering it right.’

AI4ALL – another element of human centered AI

In 2015 Dr. Li cofounded AI4ALL with Dr. Olga Russakovsky and Dr. Rick Sommer, now a national nonprofit with the mission to make AI more diverse and inclusive.

Thinking Digital and AI Transformation – Rewired by McKinsey

Digital and AI Transformation

‘A digital and AI transformation is the process of developing organizational and technology-based capabilities that allow a company to continuously improve its customer experience and lower its unit costs and over time sustain a competitive advantage’ – great opening definition from McKinsey in ‘Rewired’.

    Are you serious about digital and AI transformation?

    AI and Digital Transformation seem to be on every corporate agenda. AI has caught the ‘corporate imagination’ with all of the recent GPT developments. But AI is only relevant when you have digital – so it is another driver for digital transformation.

    We seem to have been talking Digital Transformation for at least a decade. And those who are succeeding have vision, commitment and dedicated resourcing. The six considerations below need to be examined before engaging in any meaningful transformation.

     

    Considerations in contemplating Digital and AI Transformation

    1. Roadmap for a digital transformation? Can you, the leaders of the business, imagine the technology-driven business – as against the current business?
    2. Do you have the in-house digital talent to drive the digital and AI transformation?
    3. Is your business operating model customer-centric and focused on speedy delivery?
    4. Are you ready to/capable of adopting the required software engineering practices to drive development speed, quality and operational performance?
    5. Will your data architecture and data governance framework enable you to embed data everywhere – to drive quality, ease of consumption, ease of reuse?
    6. Do you understand what will be required to drive adoption and scaling across the business – and are you committed to this?

    So where do we start – what should we be transforming?

    Big enough to make a difference, small enough to get it done – seems a good guide for kicking off with meaningful digital and/or AI transformation. McKinsey describe this as assessing projects in terms of value against feasibility.

    Value potential

    So, in looking at potential value of a transformation project, McKinsey suggest:

    1. Improving customer experience
    2. Financial benefits – customer growth, reduced customer churn, improved yields, reduced costs
    3. Time to realise this value e.g. 6 to 36 months
    4. Synergy – can this transformation be leveraged in other parts of the business e.g. data, technology, change management
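    The value-against-feasibility screen above can be sketched as a simple scoring exercise. This is purely an illustration, not McKinsey’s method: all project names, the 1–5 scoring scale and the value-times-feasibility ranking are my own invented assumptions.

```python
# Hypothetical sketch of a value-vs-feasibility screen for candidate
# transformation projects. All names, dimensions and scores are invented.
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    customer_experience: int  # 1-5: impact on customer experience
    financial_benefit: int    # 1-5: growth, churn, yield, cost benefits
    time_to_value: int        # 1-5: 5 = fastest to realise value
    synergy: int              # 1-5: reusable elsewhere in the business
    feasibility: int          # 1-5: can we actually get it done?

    @property
    def value(self) -> float:
        # Unweighted average of the four value dimensions listed above
        return (self.customer_experience + self.financial_benefit
                + self.time_to_value + self.synergy) / 4

def rank(initiatives):
    # "Big enough to make a difference, small enough to get it done":
    # favour projects that score well on both value and feasibility.
    return sorted(initiatives, key=lambda i: i.value * i.feasibility, reverse=True)

candidates = [
    Initiative("Self-service portal", 5, 3, 4, 3, 4),
    Initiative("Full ERP replacement", 3, 5, 1, 4, 2),
    Initiative("Churn-prediction model", 4, 4, 4, 3, 5),
]

for i in rank(candidates):
    print(f"{i.name}: value={i.value:.2f}, feasibility={i.feasibility}")
```

    In practice the weights and dimensions would come from your own strategy discussions; the point is simply to force an explicit comparison rather than chasing the biggest (or easiest) project by default.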

     The guide itself

    This is a well put together book taking you through the elements of Digital and AI Transformation – and what is required in each element to drive success. It is also backed up by contributions from a number of clients – and, finally, a number of good case studies.

    As a more technical example, Section 5 deals with the concept of Embedding Data Everywhere. Inter alia this requires a suitable Data Architecture. In Chapter 26, headed ‘Data architecture, or the system of data pipes’, they review different architecture archetypes, including: cloud-native data lake, cloud-native data warehouse, lakehouse, data mesh and data fabric.

    Section 6 addresses the key area of Adoption – and is less technical in nature, but equally if not more important.  In Chapter 28 they talk about the four elements of the influence model – in the context of successful change management:

    1. Leadership engagement
    2. A compelling change story
    3. Measurement and performance metrics
    4. Role based training

     The book offers plenty to the experienced Digital Transformation consultant. But it is also a very good read and guide for anyone in an executive management role looking to kick off a significant Digital and AI Transformation.

     

     

    Artificial General Intelligence – are we seeing it now?

    Artificial General Intelligence (‘AGI’) yet?

    Are there any signs of Artificial General Intelligence in what we are seeing now in Generative AI, LLMs etc?  Any emerging signs of the holy grail? Or do we need to rethink or confirm what we mean by AGI?

    AGI not yet – it seems

    Interesting piece in TechCentral last week (13 Oct 2023) – featuring an interview with Prof. Barry O’Sullivan, one of Ireland’s leading AI experts. He is quoted as saying: ‘These systems are still extremely limited. The primary challenges of AI still stand. While it has been a great year, it’s not a solved problem, most certainly. Far from it. These systems can’t reason, they can’t really do mathematics, and they don’t really have an understanding of the world’. And the discussion leads on to some of the irresponsible arguments about existential threats arising from a lack of understanding of the current tools and their capacity. I posted previously on the contrasting views of Hinton and LeCun on existential threats re AI.

    AGI now? – or what does it mean?

    Almost the same day I read an interesting paper published by the Berggruen Institute, authored by Blaise Aguera y Arcas of Google and Peter Norvig of Stanford University. And their headline:

    ‘Artificial General Intelligence Is Already Here – Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence’.

    They argue that the new ‘frontier’ models ‘have achieved artificial general intelligence in five important ways: topics, tasks, modalities, languages, instructability’ – and they flesh out each of these in the paper.

    They attribute the reluctance of many commentators to acknowledge this to any/all of four reasons:

    1. A healthy skepticism about metrics for AGI
    2. An ideological commitment to alternative AI theories or techniques
    3. A devotion to human (or biological) exceptionalism
    4. A concern about the economic implications of AGI

    So – what does this mean for society and/or users of these tools?

    I have a sense that we are now using tools which, even with their limitations, are offering huge opportunities for progress and change. The reality of most technological change to date will continue, I suspect: there will be winners and losers. Will we come to another hiatus, another ‘AI winter’? I don’t think so this time. And we need the likes of Prof. Barry O’Sullivan, Blaise Aguera y Arcas and Peter Norvig – and many others – looking down the road to see where we are headed.

    Hinton on AI and the existential threat

    Is AI as smart as us?

    Interesting recent interview with Geoffrey Hinton (the ‘father of AI’). ‘They still can’t match us but they are getting close… they can do little pieces of reasoning’. It’s not ‘just autocomplete or statistics’. ‘It’s inventing features and interactions between features to tell what comes next’. He reviews the dangers and, in particular, the existential threat.

    ‘We are just a machine… just a neural net… no reason artificial nets cannot do what we do’. We are much more efficient in terms of energy use. But the machines are more efficient at learning from data.

    Differing views – Hinton and Lecun

    We are entering a period of huge uncertainty. Yann LeCun has a different view to Hinton. If the machines end up smarter than us and decide they want to take control – then trouble. For LeCun, AI has been built by humans and so will have a bias towards good. Per Hinton, it depends on whether it is made by good people.

    If you can send battle robots to attack, then it becomes easier for rich countries to attack or invade poorer countries.

    Hinton the socialist?

    Large Language Models will cause a big increase in productivity. He gave the example of answering letters and complaints for a health service: one person can do five times as much work. If we get a big increase in productivity, the wealth will go to making the rich richer – particularly in a society that does not have strong unions.

    Big chatbots will replace people whose jobs involve producing text. How do we know more jobs will be produced than lost?

    Jobs to survive – where you have to be very adaptable and skillful, e.g. plumbing (working in awkward spaces). What about reasoning?

    Multimodal AI

    The most impactful developments in AI over the next five years: multimodal language models – to include video (e.g. YouTube videos). Yann LeCun would say language is so limited; soon it will be combined with multiple modalities – attaching visual AI to text AI (cf. Gemini at Google).

    Thoughts on development of AI

    The Transformer architecture was invented at Google and announced in a paper in 2017. Bard was delivered a couple of years later by Google – but it took Hinton a couple of years to realise the significance of this.

    If we keep training AI on data created by AI – what will be the impact? Hinton says he does not know the answer. It would be much easier if all fake data were marked as fake.

    How could you not love making intelligent things? We want experimentation but we do not want more inequality. How do we limit the potential harms?

    Top 6 harms

    • Bias and discrimination – present now, but relatively easy to fix (have a system which is significantly less biased than the system it is replacing; analyse the bias and correct it).
    • Battle robots – will be built by defence departments. How do you stop them? Some form of Geneva Convention?
    • Joblessness – try to ensure the increase in productivity helps the people who lose their jobs. Need some form of socialism.
    • Warring echo chambers – big companies wanting you to click on things and make you more indignant, so you begin to believe conspiracy theories. This is a problem to do with AI – not LLMs.
    • Existential risk – important to understand this is not just science fiction/fearmongering. If you have something a lot smarter than you which is very good at manipulating people – do people stay in control? We have a very strong drive to achieve control; AI may derive this as a way of achieving other goals. Yann LeCun argues that the good people will have more resources than the bad people. Hinton is not so sure – not convinced that good AI will win over bad AI.
    • Fake news – need to be able to mark everything that is fake as fake. Need to look at how to do this with AI-generated material.

    Managing the risks

    How can we limit the risk? Before the AI becomes super intelligent we can do empirical work to understand how it might go wrong or take control away. Government could encourage companies to put more resources into this. Hinton has left Google to participate in this discussion and research.

    How to make AI more likely to be good than bad? Great uses: medicine, climate change, etc. But we need to put effort into understanding the existential risks.