
I love a good sci-fi film as much as the next guy, but what happens when the science is no longer fiction? If you’ve been following developments in computing, you may have seen recent stories about how some well-regarded individuals, namely Stephen Hawking, Bill Gates and Elon Musk, think the greatest threat to mankind may not be ISIS or terrorism generally, or nuclear proliferation, but artificial intelligence. And they are not alone.
The thinking goes something like this: Once true artificial intelligence is developed, man will no longer be the most intelligent “species” on the planet (think of the Star Trek television episode where Data wanted recognition as a sentient being), and, just as man came to dominate the world once he reached the top, artificial intelligence will likely proceed to do the same. Think Will Smith in I, Robot. Not good.
Stephen Hawking’s glass-half-empty perspective is downright troubling. In an interview with the BBC regarding AI, Hawking said, “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Superseded.
In a February 4, 2015 article published in The Guardian, Ray Kurzweil, director of engineering at Google, “estimated that robots will reach human levels of intelligence by 2029, purportedly leaving us about 14 years to reign supreme.” IBM’s Watson computer, which won Jeopardy! in 2011, has been successfully applied to medical diagnosis and can outperform doctors in some tasks. The same machine has also been transformed into an “artificial lawyer” that can search legal databases and correspondence for potentially relevant information.