Being human in the age of Artificial Intelligence
A future where AI will be everywhere and in everything is coming sooner than we think, and we need to be prepared, as the Boy Scouts advise.
Rashmi Vasudeva
The future is now.

After a while, everything overhyped turns underwhelming. Even Artificial Intelligence has not escaped the inevitable deflation that follows such excessive hype. AI is everything and everywhere now, and most of us won't even blink if we are told AI is powering someone's toothbrush. (It probably is.)

The phrase is undoubtedly being misused, but is the technology too? One thing is certain: whether we like it or not, whether we understand it or not, for good or bad, AI is playing a huge part in our everyday lives today, a far bigger part than we imagine. AI is being employed in health, wellness and warfare; it is scrutinizing you, helping you take better photos, making music, books and even love. (No, really. The first fully robotic sex doll is being created even as you are reading this.)

However, there is a sore lack of understanding of what AI really is, how it is shaping our future and why it is likely to alter our very psyche sooner or later. There is misinformation galore, of course. Media coverage of AI is either exaggerated (as if androids will take over the world tomorrow) or too specific and technical, creating further confusion and fuelling sci-fi-inspired imaginations of computers smarter than human beings.


So what is AI? No, we are not talking dictionary definitions here — those you can Google yourself. Neither are we promising to explain everything — that will need a book. We are only hoping to give you a glimpse into the “extraordinary promise and peril of this single transformative technology” as Prof Stuart Russell, one of the world’s pre-eminent AI experts, puts it.

Prof Russell has spent decades on AI research and is the co-author (with Peter Norvig) of 'Artificial Intelligence: A Modern Approach', which is used as a textbook on AI in over 1,400 universities around the world.

Machine learning first

Other experts believe our understanding of artificial intelligence should begin with comprehending 'machine learning', nominally a sub-field of AI but one that in fact encompasses pretty much everything happening in AI at present.

In its simplest definition, machine learning means enabling machines to learn on their own. The advantages are easy to see. After a while, you need not tell the machine what to do; it becomes your workhorse. All you need to do is feed it data, and it will keep finding smarter ways of digesting that data, spotting patterns, creating opportunities; in short, doing your work better than you perhaps ever could. This is the point where you need to scratch the surface. Scratch, and you will stare into an unresolved ethical conundrum about what machines might end up learning. Because, remember, they do not (and cannot) explain their thinking process. Not yet, at least.
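
For the technically curious, the 'learning from data' idea can be seen in miniature in a few lines of code. What follows is only an illustrative sketch, not anything from Prof Russell's book, and it assumes Python with the scikit-learn library installed: the program is never told the rule, it works one out from examples.

from sklearn.tree import DecisionTreeClassifier  # assumes scikit-learn is installed

# Examples the machine learns from: numbers and their labels (0 = small, 1 = large)
examples = [[2], [4], [7], [45], [60], [88]]
labels = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(examples, labels)        # the machine infers a rule from the data on its own
print(model.predict([[5], [70]]))  # applies that rule to new numbers: prints [0 1]

Scale this up from six numbers to billions of data points and far murkier models, and the conundrum comes into focus. Precisely why the professor has a cautionary take.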

“The concept of intelligence is central to who we are. After more than 2,000 years of self-examination, we have arrived at a characterization of intelligence that can be boiled down to this: ‘Humans are intelligent to the extent that our actions can be expected to achieve our objectives’. Intelligence in machines has been defined in the same way: ‘Machines are intelligent to the extent that their actions can be expected to achieve their objectives.’”

Whose objectives?

The problem, writes the professor, is in this very definition of machine intelligence. “We say that machines are intelligent to the extent that their actions can be expected to achieve their objectives, but we have no reliable way to make sure that their objectives are the same as our objectives.” He believes what we should have done all along is tweak this definition to: ‘Machines are beneficial to the extent that their actions can be expected to achieve our objectives.’

The difficulty here is of course that our objectives are in us — all eight billion of us — and not in the machines. “Machines will be uncertain about our objectives; after all we are uncertain about them ourselves — but this is a good thing; this is a feature, not a bug. Uncertainty about objectives implies that machines will necessarily defer to humans — they will ask permission, they will accept correction and they will allow themselves to be switched off.”

Spilling out of the lab

This might mean a complete rethinking and rebuilding of the AI superstructure, something that is perhaps inevitable if we do not want this “big event in human history” to be the last, says the professor wryly. As Kai-Fu Lee, another AI researcher, said in an interview a while ago, we are at a moment when the technology is “spilling out of the lab and into the world.” Time to strap in, then!

(With inputs from ‘Human Compatible: AI and the Problem of Control’ by Stuart Russell, published by Penguin, UK. Extracted with permission.)

(Published 19 January 2020, 01:07 IST)