At a Zeitgeist conference hosted in London on May 12th, physicist Stephen Hawking expressed his views on artificial intelligence and developments in the field.
While praising the rapid advances taking place in robotics and artificial intelligence, renowned physicist Stephen Hawking also drew the public’s attention to the potential danger these creations could pose to the future of humankind if left unchecked.
What is this fear mongering all about?
Only recently, a number of household names have, simply put, warned of the potential of artificial intelligence to overtake humanity. Stephen Hawking joins the ranks of Tesla CEO Elon Musk and Microsoft co-founder Bill Gates, among other scientists and philosophers raising questions over the limits of artificial intelligence, or “the ghost in the machine”.
Hawking predicted a gloomy future that could unfold within the next hundred years. It is understandable that humans are fascinated by the prospect of becoming creators in their own right, yet the day when they will be taken aback by their creations may be lurking. Robotics and artificial intelligence have the potential to destroy humanity if scientific advances in the related fields are not coordinated and kept in check. We should allow ourselves the time to understand the implications of creating new intelligence and how ethics could be wired into these creations.
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”, stated Hawking.
Nick Bostrom, a University of Oxford philosopher, developed the concept of existential risks and included artificial intelligence among the threats humanity is facing. How far-fetched does this seem when it sits in the same category as an all-out nuclear war or an asteroid strike?
In a Reddit Q&A session, Bill Gates echoed the idea that artificial intelligence could develop interests that may well conflict with those of the human race as we know it.
In a speech at MIT in October last year, Elon Musk stated that rivaling human intelligence, and eventually overcoming it, is the most significant threat posed by AI.
To counter this looming future, recommendations include adequate regulatory measures and increased transparency.
On the other hand, there are those who argue that such advances in artificial intelligence are still many years away and that using the word “intelligence” at all is misleading. This is the dividing line between the two camps.
What is intelligence in relation to robotics and machine learning?
While many remain skeptical about the potential development of autonomous AI machines, others remind us that artificial intelligence (a term coined in the 1950s) is as old as computers. Tasks that in the past only humans could perform were gradually handed over to software, and as those tasks grew too demanding for the human brain, “machine learning” was brought into the limelight. A more refined approach, “deep learning”, is now the seed of discontent and fear mongering.
It entails machines teaching themselves to perform certain tasks by crunching large sets of data. As a mechanical process, this makes such tasks easier for computers than for humans, who lack the capacity to process data so quickly.
From this perspective, take the case of image classification. A machine can be accurate to the point, but its results are not motivated in any way: there is no curiosity, no goal, and no consciousness in performing the task.
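To make that point concrete, here is a minimal sketch in plain Python of the kind of mechanical pattern matching a simple classifier performs. All names, labels, and pixel values below are invented for illustration; real deep-learning systems are vastly more complex, but the principle is the same: the machine returns whichever label best matches the data, with no goal or understanding behind the answer.

```python
# Toy "image classifier": nearest-centroid matching over tiny 2x2 grayscale
# images flattened into 4-pixel vectors. All data is made up for illustration.

def centroid(vectors):
    """Average a list of equal-length pixel vectors into one prototype."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, centroids):
    """Return the label whose prototype is closest to the image."""
    return min(centroids, key=lambda label: distance(image, centroids[label]))

# "Training" is pure arithmetic: one averaged prototype per labeled class.
training = {
    "dark":  [[0.1, 0.2, 0.1, 0.0], [0.0, 0.1, 0.2, 0.1]],
    "light": [[0.9, 0.8, 1.0, 0.9], [1.0, 0.9, 0.8, 0.9]],
}
centroids = {label: centroid(vecs) for label, vecs in training.items()}

print(classify([0.05, 0.1, 0.15, 0.1], centroids))  # prints "dark"
```

The program answers correctly, yet nothing in it knows what "dark" means; it only minimizes a number, which is precisely the distinction the skeptics draw.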
Admittedly, there are significant advances in “deep learning”, resulting in autonomous weapons systems and face recognition software, among others. Nonetheless, the technology is still in its incipient phases compared to the potential it could one day achieve.