Is ChatGPT eventually going to kill us all?
The 'Godfather of AI' quitting his job because he's scared is not reassuring
Everyone! Identify your nearest bunker and stock up on tins of beans – THE ROBOTS ARE COMING.
Dr Geoffrey Hinton, the ‘Godfather of AI’, quit his job ‘nurturing’ neural networks (AI systems that process data in ways inspired by the human brain) at Google earlier this week. Why? Because he strongly believes that we’re about to be overrun by sentient machines à la The Terminator.
Am I catastrophising? OF COURSE I AM. Does that mean I’m wrong? OF COURSE IT DOESN’T.
The ‘Godfather of AI’ has regrets. Bless.
“I’m just a scientist who suddenly realised that these things are getting smarter than us,” Hinton told CNN. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”
GETTING CONTROL OVER US?
“It knows how to program so it’ll figure out ways of getting around restrictions we put on it,” he said. “It’ll figure out ways of manipulating people to do what it wants.”
MANIPULATING PEOPLE TO DO WHAT IT WANTS? WTF HAVE YOU DONE, GEOFF?
There’s more than a whiff of Oppenheimer about this, no? Yes, he of ‘Whoops, I’ve accidentally invented the atomic bomb’ fame. Indeed, ‘just a scientist’ Hinton actually used to quote Oppenheimer, saying: “When you see something that is technically sweet, you go ahead and do it.” Apparently, he doesn’t use that quote any more – but does use the same defence: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
I wouldn’t have, Geoff.
Computer says ‘no’
I should clarify that Hinton isn’t actually worried about AI gaining consciousness. That’s just me freaking out. He’s worried about the tech developing too fast to understand or to control.
On 14 March this year, OpenAI released GPT-4, a ChatGPT update. Just a week later, Microsoft researchers published a report claiming that it showed “sparks of artificial general intelligence”, i.e. evidence of systems that match or exceed human capabilities.
Stuart Russell OBE, a professor of computer science at the University of California, Berkeley, wrote: “The core problem is that neither OpenAI nor anyone else has any real idea how GPT-4 works. I asked Sébastien Bubeck, lead author on the ‘sparks’ paper, whether GPT-4 has developed its own internal goals and is applying them in choosing its outputs. The answer? ‘We have no idea.’” Classic.
Computer says ‘read that’
The fear is that, as the tech develops, it will be able to navigate the boundaries put in place to control it and start defining its own goals. For example, AI can already create deepfakes and clone voices. The next iteration may be able to choose someone, imbibe their entire online presence (everything they’ve said or written online, all social media, personal messages, finances, etc.) and start communicating as them. Alternatively, it may decide to take control of drones, manipulate data and news for a LOL – or simply invent it (which is called ‘hallucination’ in AI) – or decide to make the Pope wear Balenciaga in all pics forever.
The competition between businesses and countries to develop the tech fastest (“If people in America stop, people in China wouldn’t”, Hinton said) means there’s a lack of oversight or safety nets. Unlike nuclear weapons, you can develop AI in secret – you can build a sentient robot in your shed. (I can’t – I’ve tried – but you might be able to.) So, while companies and countries talk the talk about safety, behind the scenes it seems as though they’re elbowing each other in the face to create Skynet – which has a lot of people worried.
Computers coming over here and stealing our jobs
The tech is moving so fast that basic ethical and societal safeguards aren’t in place yet to manage it. People aren’t even sure what safety nets might be necessary. For example, some jobs are going to be lost to AI. That’s inevitable. Those people are going to need new jobs or to be retrained, as well as financial and mental-health support. As the UK government’s chief scientific adviser said, it’s a potential disruption on a scale akin to the industrial revolution and we need to get ahead of it. (This 2022 White House report on the potential impact on the job market in the US and EU is heavy, but fascinating and proves that people are taking this stuff seriously.)
Then there’s the bias issue. Robots can be sexist and racist, learning from patterns within the information they are fed. Some haven’t been able to recognise Black faces, have discriminated against female business owners when allocating loans, and have incorrectly determined that immigrant families are more likely to commit fraud (leading to financial destitution and reputational ruin for many in one of the biggest political scandals to hit the Netherlands in decades).
Amazon created AI resume-screening software, scrapped in 2018. The AI was trained to vet applicants by observing patterns in 10 years’ worth of successful resumes, so it could learn by example. What it actually learned? Amazon doesn’t hire women. If two candidates were equally qualified, it would reject the woman based upon the company’s previous behaviour. Awks.
These stories have felt very ‘one off’ before now. Bad apples. Random examples of tech gone rogue. “We took a screwdriver to him, Jack, and he’s all better now.” A general unawareness about what these systems are and how they’re used has kept universal outrage and panic contained. However, Microsoft has augmented its search engine, Bing, with a chatbot, prompting Google to race out its own (Bard), and that has pushed these concerns into the spotlight. Suddenly plebs like you and me (no offence) may be fed biased or fake information via our everyday search engines. (It’s also worth noting that ChatGPT’s knowledge only reaches September 2021, which may make things… interesting.)
Valérie Pisano, chief executive of Mila – the Quebec Artificial Intelligence Institute – said: “The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘Yeah, sure, we’ll figure it out later’.”
Your bunker or mine?
Stay in yours, pal. For now. Because while I slowly try to transform myself into Linda Hamilton, there is some good news: governments and those in the know are trying to address this.
Italy recently banned ChatGPT until it addressed privacy concerns, including age restrictions, and Germany is currently investigating whether the tech breaches GDPR rules. Meanwhile, EU legislators have drafted a bill to try to keep things in check and many countries (including the UK) are drafting new laws to manage pornographic deepfakes, as well as looking into expanding existing laws around fraud, malicious intent and libel. President Biden wants to introduce an AI-ready bill of rights, while the UN has announced a global digital compact addressing the same concerns.
More than 1,000 tech peeps signed an open letter in March (after the launch of GPT-4) citing ‘a profound risk to society’ and calling for a six-month pause on the development of new systems so everyone could take a breath. The letter now has over 27,000 signatures. (It is worth noting though that Elon Musk is pushing ahead with his own generative AI project despite signing this letter and also tweeting that Geoffrey Hinton “knows what he’s talking about”.) Also in March, current and former leaders of the Association for the Advancement of Artificial Intelligence released their own letter, calling for developers to up their efforts on addressing societal and ethical concerns.
The entire point of AI is that, utilised correctly, it can help and complement human endeavour. An AI tool can now accurately identify cancer. I mean, that’s pretty useful. It can streamline jobs, enable faster connections and predict the impact of catastrophic events (say, oh, like a pandemic), enabling better decision making. Hell, it even cracked the ‘protein code’. (I have no idea what that means, but everyone keeps wanging on about it when you google this stuff.)
With proper global regulation and agreement, as well as inter-company cooperation, AI can and should be a force for good. A super-exciting way to improve our lives. And it will be. IF EVERYONE WOULD JUST STOP MAKING KILLER ROBOTS.
Just One More Thing…
If you’re worried about how AI will affect everything from jobs and culture to art, friendships and even dating, please don’t feel despondent. Those in charge of these things will hopefully stop underestimating, misunderstanding and devaluing the importance of human connection, imagination and creativity. Of experience. Of diplomacy. Of being able to interpret behavioural, physical and linguistic cues. Of being able to read people and read the room. Of being original, funny and relatable. Robots do not have empathy…
…At least they don’t yet. As soon as they do, we’re all fucked.
By the way, ChatGPT wrote this piece. Only kidding! (A robot would say that though, wouldn’t it?)
I hope you enjoyed this piece! Please share and subscribe if so. It really helps! Until next time…