Well, Saturday, June 7, 2014 was a big day for the technology world. For the first time ever, a computer passed the Turing Test. Devised by Alan Turing in 1950, the test is commonly used to judge the presence of Artificial Intelligence (AI) in computing. There have been many attempts over the years, but none had been able to convince the requisite one third (33.3%) of the panel that they were human. The closest was in 2012, when a computer convinced 29% of the judges that it was human.
This event holds particular interest for me: I’m a software engineer, I work in the technology industry, and I’ve always been involved in computing. It was a day I had been looking forward to (and secretly fearing). Like Stephen Hawking, I fear the potential for opening “Pandora’s Box”: the day when we cross the singularity. This quote from him sums up my fears perfectly.
“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” – Stephen Hawking
Anyone who’s seen The Matrix can imagine the potential outcomes. Once that boundary is crossed, the evolution is limited only by the speed at which the platform operates. Such an event would likely take place on a supercomputer. As of 2013, the fastest of these is the National University of Defense Technology (NUDT) Tianhe-2 in Guangzhou, China. This system is capable of 33.86 PFLOPS, meaning it can perform roughly 33,860,000,000,000,000 (nearly 34 quadrillion) floating-point operations per second.
Let me tell you… I’d have a hard time doing one mathematical operation per second, let alone tens of quadrillions per second… every second, of every minute, of every hour, of every day… endlessly. You can see how something able to compute at that speed could get out of hand in very short order. How many seconds do you think it would take to decide humans are inferior?
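To put that speed in perspective, here’s a quick back-of-the-envelope calculation in Python. The one-operation-per-second human rate is my own (generous) assumption, matching the comparison above:

```python
# Back-of-the-envelope comparison: Tianhe-2's peak rate vs. a human
# doing one mathematical operation per second.

PFLOPS = 33.86                        # Tianhe-2 peak, in petaFLOPS (10^15 ops/s)
ops_per_second = PFLOPS * 10**15      # ~3.386e16 floating-point ops per second

human_rate = 1.0                      # assumed: one operation per second
seconds_per_year = 365.25 * 24 * 3600

# How long would a human need to match ONE second of Tianhe-2's output?
human_years = ops_per_second / (human_rate * seconds_per_year)
print(f"{ops_per_second:.3g} ops/s = about {human_years:.2e} human-years per machine-second")
```

It works out to roughly a billion years of nonstop human arithmetic to match a single second of the machine’s peak output.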
Of course, this doesn’t mean the end of humanity or anything. This achievement was not what I would consider A.I.; it is essentially just a sophisticated chat bot, not an actual sentient computer. It does raise a question, though: now that the Turing Test has been passed, how do we actually judge sentience in computing? The test had long been thought of as the “Holy Grail” of A.I. research.
With regard to A.I., though, I’m not all doom and gloom. I don’t go around worrying about the A.I. hiding under my bed at night. The technology could present some very real benefits to humanity as well (e.g. medicine, materials, aerospace, astrophysics). Imagine being able to model every possible molecular structure and test it on modeled humans without ever running a physical experiment; it would simply produce a list of theoretically safe drugs to move on to lab tests. The environment could benefit as well: what about having it devise an entirely new approach to moving humans from place to place that wouldn’t hurt the environment or cost absurd amounts of money to implement? There’s a bottomless pit of problems that need solving, and what better solver than something that never needs a break or sleep and can operate billions of times faster than we can?
Well, here’s to a new landmark! Keep in mind that the truly sentient forms of AI I’m discussing are still a long way off. Simply passing the Turing Test doesn’t mean sentience, so don’t worry. Still, it’s a big step in that direction!