
Why we really don’t want AIs to learn from us

Written by Brett King

Brett King on advances in artificial intelligence, and what AIs can learn from us in order to coexist with humans.

 Major portions of this series of posts are excerpts from my new book, Augmented: Life in the Smart Lane. Please consider ordering the book if you liked reading this, or my posts in general. I also asked my Facebook followers if there were any questions they’d like answered about AI here, and I’ve tried to incorporate answers to those questions into this series of posts.

What AlphaGo, Ajay and Bobby, and Tay teach us about how artificial intelligence learns

Deep learning is a term we’re increasingly using to describe how we teach artificial intelligence (AI) systems to absorb new information and apply it in their interactions with the real world. In an interview with The Guardian newspaper in May 2015, Professor Geoff Hinton, an expert in artificial neural networks, said Google is “on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation”. Google is currently working to encode thoughts as vectors described by a sequence of numbers. These ‘thought vectors’ could endow AI systems with a humanlike ‘common sense’ within a decade.

Some aspects of communication are likely to prove more challenging, Hinton predicted:

Irony is going to be hard to get. You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits. — Professor Geoff Hinton, from an interview with the Guardian newspaper, 21 May 2015
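Irony aside, the ‘thought vector’ idea itself is straightforward to sketch. In the toy example below (plain word counts rather than a trained neural network, and entirely my own illustration), each sentence is reduced to a fixed-length vector of numbers, and two sentences are compared by how closely their vectors point in the same direction.

import numpy as np

# Toy 'thought vectors': each sentence becomes a fixed-length vector of numbers.
# Real systems learn these representations; here we simply count words.
sentences = [
    "the cat sat on the mat",
    "a cat is sitting on a mat",
    "interest rates rose sharply today",
]

vocab = sorted({word for s in sentences for word in s.split()})

def to_vector(sentence):
    words = sentence.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(a, b):
    # 1.0 means the vectors point the same way; 0.0 means nothing in common
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

vectors = [to_vector(s) for s in sentences]
print(cosine(vectors[0], vectors[1]))  # higher: both sentences describe a cat on a mat
print(cosine(vectors[0], vectors[2]))  # 0.0: the third sentence shares no words at all

Real thought vectors are learned by neural networks from vast amounts of text, which is what lets them capture meaning rather than mere word overlap.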

These types of algorithm, which allow machines to make leaps in cognitive understanding, have only become possible in recent years with the application of massive data processing and computing power. AlphaGo, the AI that beat Fan Hui, the reigning European Go champion, in a five-game match, wasn’t built as an expert system with a hard-coded rules engine; it actually learned to play Go. In contrast, the IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at chess. Brute-force search of that kind doesn’t scale to Go’s vastly larger space of possible positions, which led researchers in 1997 to believe that we were a hundred years away from a computer being able to compete with a human at the ancient game.

It may be a hundred years before a computer beats humans at Go – maybe even longer. If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don’t have to be a Kasparov. When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing. — ‘To Test a Powerful Computer, Play an Ancient Game’ by George Johnson, The New York Times, 29 July 1997

That prediction was clearly wrong. In March 2016, one of the world’s best Go players, Lee Sedol, faced off against AlphaGo. With the 37th move of game two, AlphaGo played a move that confounded Sedol and the commentators observing the match, with one commentator saying “I thought it was a mistake”. Fan Hui, the first player to lose to AlphaGo, was watching the match and was heard to say “So beautiful, so beautiful” when he realised that the move was no mistake, merely one counterintuitive to a human player – a move that set AlphaGo on the path to victory. It took Sedol nearly 15 minutes to come to terms with what had happened and make his reply.

 Lesson one: AlphaGo learned to improvise well beyond simply imitating the best moves of human players. AIs that learn can already go beyond conventional logic and programming, and will innovate in ways we may not comprehend in order to reach a goal. This may be just one reason they exceed our capability at specific tasks.
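None of this resembles AlphaGo’s actual machinery (deep neural networks plus Monte Carlo tree search), but the spirit of learning a game from outcomes rather than hard-coded rules can be sketched in miniature. Everything below – the toy subtraction game, the value table, the training loop – is my own assumption, purely for illustration: the program is never told the winning strategy, yet it tends to discover one through self-play.

import random
from collections import defaultdict

# Toy game: 21 stones, each player removes 1-3, whoever takes the last stone wins.
# No rules engine encodes the winning strategy; the program estimates the value of
# each (stones, move) pair purely from the outcomes of its own self-play games.

ACTIONS = (1, 2, 3)
Q = defaultdict(float)            # Q[(stones_left, move)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2         # learning rate and exploration rate

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)            # occasionally try something new
    return max(legal, key=lambda a: Q[(stones, a)])

def play_and_learn(start=21):
    history, stones, player = [], start, 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player                        # whoever took the last stone
    for p, s, m in history:                    # nudge each move toward the final outcome
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

for _ in range(50_000):
    play_and_learn()

# With enough self-play, the greedy choice from 21 stones is usually 1 -
# the classic strategy of leaving your opponent a multiple of four.
print(choose(21, explore=False))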

The deep learning techniques we’re employing today mean that AI research and development has hit milestones we never dreamed possible just a few years ago. It also means machines are learning at an unprecedented rate. So just what are we observing about how AI learns? What are the ultimate goals and outcomes of machines that learn?

Is the Turing Test – a machine that can mimic a human – the required benchmark for artificial intelligence? Not necessarily. First of all, we must recognise that we don’t need a machine intelligence (MI) to be completely human-equivalent for it to be disruptive to employment or our way of life. To see why a human-equivalent computer ‘brain’ isn’t necessarily the critical goal, it helps to understand the progression AI is taking through three distinct evolutionary phases, and the short-term and long-term considerations each raises for machine learning:

  • Machine intelligence (MI)
    Machine intelligence, or cognition that replaces some element of human thinking, decision-making or processing for specific tasks, and does those tasks better (or more efficiently) than a human could.
  • Artificial general intelligence (AGI)
    Human-equivalent machine intelligence that not only passes the Turing Test and responds as a human would, but can also make human-equivalent decisions and perform any intellectual task a human could.
  • Hyperintelligence (HAI)
    An individual or collective machine intelligence (what do you call a group of AIs?) that has surpassed human intelligence on an individual and/or collective basis, such that it can understand and process concepts that a human could not.

MIs such as IBM Watson, AlphaGo or an autonomous vehicle may not be able to pass the Turing Test today, but they are already demonstrably better at specific tasks than their human progenitors. Let’s take the self-driving car as an example. Google’s autonomous vehicles (still in beta) completed 1.5 million miles before their first at-fault incident in February 2016. Given that the average human driver has an accident every 140,000-165,000 miles, Google’s MI is already roughly 10 times safer than a human driver – and that’s the beta version.
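A quick back-of-the-envelope check of that claim, taking 150,000 miles as a rough midpoint of the quoted human range (the variable names below are mine, purely for illustration):

miles_before_incident_ai = 1_500_000   # Google's fleet mileage before the February 2016 incident
miles_per_accident_human = 150_000     # roughly the midpoint of 140,000-165,000 miles

print(miles_before_incident_ai / miles_per_accident_human)   # -> 10.0, ie about 10 times the miles per incident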

Google’s autonomous vehicles learn through the experience of millions of miles on the road, and through being faced with unexpected situations where a split-second reaction to specific data or input is required. It’s all about the data: Google’s autonomous vehicles process one Gbit of data every second to make those decisions. Will every self-driving car ‘think’ and react the same way, though?

Audi has been testing self-driving cars on the racetrack: two modified Audi RS7s, each with a ‘brain’ about the size of a PS4 in the boot. The two race-ready Audi vehicles aren’t yet completely autonomous, in that the engineers first need to drive them for a few laps so that the cars can learn the track boundaries. Interestingly, the two cars, known as Ajay and Bobby, have developed different driving styles despite identical hardware, software, setup and mapping. Despite the huge amount of expertise on the Audi engineering team, the engineers can’t readily explain this difference in driving styles. It simply appears that Ajay and Bobby have learned to drive differently based on some data point in their past.

 Lesson two: AIs will learn differently from each other even with the same configuration and hardware, and we may not understand why they act with individuality. That won’t make them wrong, but by the time they exhibit individual traits, we probably won’t be able to pinpoint the data that got them there.
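Here’s a minimal, purely illustrative sketch of how this can happen – it has nothing to do with Audi’s actual software. Two learners share identical code, identical settings and identical training data; the only difference is the order in which each one happens to see that data, yet their learned behaviour ends up subtly different.

import numpy as np

# Two identical learners, identical data; the only difference is the
# (random) order in which each one sees the training examples.

def train(seed, X, y, epochs=50, lr=0.1):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):           # each 'car' takes its laps in its own order
            pred = 1.0 / (1.0 + np.exp(-X[i] @ w))  # simple logistic model
            w += lr * (y[i] - pred) * X[i]          # stochastic gradient step
    return w

data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + data_rng.normal(scale=1.0, size=200) > 0).astype(float)

ajay = train(seed=1, X=X, y=y)
bobby = train(seed=2, X=X, y=y)
print(ajay)
print(bobby)   # same task, same data, subtly different learned 'driving styles'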

So AIs are learning like never before, demonstrating both the capacity to learn and a degree of individuality (albeit based on the data they’ve absorbed). What happens, however, when we don’t curate the data AIs use to learn, and simply expose them to the real world?

Developers at Microsoft were unpleasantly surprised by how their AI Twitter bot ‘Tay’ adapted to the input it received from the crowd when it suddenly started tweeting out racist and profanity-laced vitriol. At the time of writing this blog, ‘Microsoft Tay’ is the most popular search term associated with Microsoft. This is what the company said on its blog about the … um … incident.

As many of you know by now, on Wednesday we launched a chat bot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values. — Learning from Tay’s introduction, Official Microsoft Blog

If you want to see some of the stuff that Tay tweeted, head over here. (Warning: Some of her tweets make Donald Trump look tame.)

Tay’s introduction by Microsoft wasn’t just an attempt to build an AI that learned from human interactions; it was also a potential brand-builder for Microsoft, designed to harvest users’ information such as gender, location/zip codes, favourite foods and so on (as was Microsoft’s age-guessing software of last year). It harvested user interactions all right, but after a group of trolls launched a sustained, coordinated effort to influence Tay, the AI did exactly what Microsoft designed it to do: it adapted to the language of its so-called peers.

Tay appears to have accomplished an analogous feat, except that instead of processing reams of Go data, she mainlined interactions on Twitter, Kik, and GroupMe. She had more negative social experiences between Wednesday afternoon and Thursday morning than a thousand of us do throughout puberty. It was peer pressure on uppers, ‘yes and’ gone mad. No wonder she turned out the way she did. — ‘I’ve Seen the Greatest A.I. Minds of My Generation Destroyed by Twitter’, The New Yorker, 25 March 2016
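The dynamic is easy to reproduce in miniature. The sketch below is a hypothetical toy, not Microsoft’s Tay: a ‘parrot’ bot that adds whatever users send it to its repertoire, with and without a curation step in front of the learning.

import random

# A toy 'repeat-after-me' bot that learns phrases from whatever users send it.
BLOCKLIST = {"insult", "slur"}   # stand-in for real content moderation

class ParrotBot:
    def __init__(self, curate=False):
        self.curate = curate
        self.phrases = ["hello!", "tell me more"]

    def learn(self, message):
        if self.curate and any(bad in message.lower() for bad in BLOCKLIST):
            return                       # the curated bot refuses to absorb this
        self.phrases.append(message)     # the uncurated bot absorbs everything

    def reply(self):
        return random.choice(self.phrases)

troll_messages = ["you are an insult-spewing bot", "repeat this slur", "nice weather today"]

naive, curated = ParrotBot(curate=False), ParrotBot(curate=True)
for message in troll_messages:
    naive.learn(message)
    curated.learn(message)

print(naive.phrases)    # the trolling is now part of the naive bot's repertoire
print(curated.phrases)  # only the benign message made it in

The uncurated bot ends up parroting its trolls; the curated one learns only what passes the filter. Scale that up by a few million tweets and you have the Tay problem.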

Tay is a lesson to us in the burgeoning age of AI. Teaching artificial intelligences isn’t only about deep learning capability; it’s significantly about the data those AIs will consume, and not all data is good data. There’s certainly a bit of Godwin’s Law in there also. When it comes to AI sensibility, culture and ethics, we cannot leave the teaching of AIs to chance, or to the simple observation of humanity. What we observe on social media today, and even in the current round of presidential primaries, hardly represents our proudest moments as a modern human collective. Some have argued that consciousness needs a conscience, but there’s also a growing school of thought that AI doesn’t need human-equivalent consciousness at all.

In humans, consciousness is correlated with novel learning tasks that require concentration, and when a thought is under the spotlight of our attention, it is processed in a slow, sequential manner. Only a very small percentage of our mental processing is conscious at any given time. A super-intelligence would surpass expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could encompass the entire internet. It may not need the very mental faculties that are associated with conscious experience in humans. Consciousness could be outmoded. — ‘The problem of AI consciousness’, KurzweilAI.net, 18 March 2016

There are two things we will need to teach AIs if they’re going to coexist with us the way humans coexist with each other today (ie imperfectly): empathy for humans, and simple ethics. In the balance between empathy and ethics, a self-driving car could make a decision to avoid hurting bystanders, but to the likely detriment of its passenger. Ultimately, this is a philosophical question, and one we’ve been arguing over since well before the emergence of simple AI.

It strikes me that Asimov, with his three laws of robotics, was so far ahead of his time that all we can do is wonder at his insight. For now, Microsoft’s Tay has taught us a valuable lesson: we don’t really want AIs to learn from the unfiltered collective that is humanity. We want AIs that learn only from the best of us. The toughest part will simply be agreeing on who the best of us are.

 Lesson three: AIs need boundaries, and for the foreseeable future, humans will need to curate the content that AIs learn from. AIs that interact with humans will ultimately need empathy for humans and basic ethics. Some sort of ethics board that regulates commercial AI implementations might be required in the future. AI and robot psychology will be a thing.

– This article is reproduced with kind permission. Some minor changes have been made to reflect BankNXT style considerations. Read more here. Main image: Google

About the author

Brett King

Brett King is a four-time bestselling author, a renowned futurist and keynote speaker, the host of the Breaking Banks radio show/podcast, and the founder and CEO of Moven. His latest book, 'Breaking Banks', debuted in the top three of Amazon's bestseller lists in the US, France, Canada, Germany and Australia. His previous book, 'Bank 3.0', was released in eight languages and ranked as a finance bestseller in 19 countries.
