The AI Takeover

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

– Stephen Hawking

I remember watching The Terminator as a 10-year-old and being absolutely terrified by the very notion of computers getting a life of their own and seeking to destroy mankind. I have always been told such a future is a long way from becoming reality. Or is it?

Artificial intelligence, in its simplest form, is the simulation of human intelligence by machines, especially computer systems. The simulation includes processes such as learning, reasoning, and self-correction. In essence, computers can think like us, act like us, and develop cognitive functions to perform computational tasks efficiently. Revolutionary technologies built with the help of AI could help us tackle major world problems.
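To make “learning” and “self-correction” a little more concrete, here is a minimal sketch in Python (the numbers are made up for illustration, not drawn from any real system) of the core loop behind much of machine learning: guess, measure the error, and adjust the guess to shrink that error.

```python
# Minimal sketch of machine "learning" and "self-correction".
# The program starts with a wrong guess for a rule (y = w * x),
# measures how wrong it is, and repeatedly nudges the guess to
# reduce the error; it is never told the rule directly.

# Hypothetical training data generated by the hidden rule y = 3 * x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0             # initial (wrong) guess for the rule's coefficient
learning_rate = 0.01

for step in range(1000):
    for x, target in data:
        prediction = w * x
        error = prediction - target      # how wrong the current guess is
        w -= learning_rate * error * x   # self-correction step

print(f"learned w = {w:.3f}")  # converges toward 3.0
```

Scaled up enormously, refinements of this same error-driven loop are what let systems like Google Assistant or a self-driving car improve from data rather than from hand-written rules.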

From Google Assistant to self-driving cars, artificial intelligence has grown by leaps and bounds over the last decade, and this rapid development has raised many questions among researchers and the general population alike. What happens when AI becomes highly competent but pursues goals misaligned with ours? What are the risks of such a scenario? Is it a possibility in the near future, or mere speculation? Scientists are still grappling with these pressing questions.

How can AI be dangerous? The main worry is the scenario in which we program AI to do something beneficial, but it devises a destructive method to achieve that goal. Program a self-driving car to get you to the airport in the fastest time possible, and you may end up wrapped around a tree, dazed, confused, and vomiting all over your broken car. The danger grows whenever the goals of an AI are misaligned with ours; if AI ever achieves super-intelligence surpassing our own, we have no surefire way of defending ourselves against the consequences. (A toy sketch below makes this misalignment problem concrete.)

The threat is not purely hypothetical. Already this year, a so-called ‘DeepFake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network. The AI synthesized the image of the American president and made it look as if he were actually giving the speech. Such attempts may well lead to political unrest, with social media aggravating the situation by providing even less filtering and enabling fake clips to spread with ease.

Credit: Derpfakes
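Here is that toy sketch in Python (the route names, times, and risk numbers are all invented for illustration). The point: an optimizer given only “fastest time” as its objective will pick the reckless option, because nothing in its stated goal says otherwise; the choice only aligns with what we meant once safety is part of the objective.

```python
# Toy illustration of goal misalignment: the optimizer does exactly
# what we asked (minimize time), not what we meant (arrive safely).

# Hypothetical routes to the airport: (name, minutes, crash risk 0..1).
routes = [
    ("highway",            25, 0.01),
    ("back streets",       40, 0.001),
    ("wrong-way shortcut", 12, 0.60),   # fastest, but likely ends in a tree
]

# The goal as literally stated: "the fastest time possible".
fastest = min(routes, key=lambda r: r[1])
print("misaligned choice:", fastest[0])   # -> wrong-way shortcut

# The goal as actually intended: fast, but heavily penalize risk.
def aligned_cost(route):
    name, minutes, risk = route
    return minutes + 1000 * risk          # safety outweighs a few minutes

safest = min(routes, key=aligned_cost)
print("aligned choice:", safest[0])       # -> highway
```

The hard part in real systems is that "1000 * risk" line: writing down everything we actually care about, completely and correctly, is far harder than stating a single narrow goal.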

Similarly, Sophia, an astonishing humanoid robot built with the help of AI, was activated on February 14, 2016. She is a social robot that can interact with a person and possesses human-like appearance and behavior like no other robot, all thanks to the wonders of AI. In the wrong hands, however, technology this powerful could be used to advance terrible agendas. This is why SpaceX CEO Elon Musk and 115 other leaders of robotics companies from 26 countries have petitioned the United Nations to ban “killer robots” (also known as lethal autonomous weapons). “These can be weapons of terror,” says the letter, “weapons that despots and terrorists use against innocent populations, weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Sophia: A humanoid robot

Musk, who also co-founded OpenAI, a research company that aims to promote and develop friendly AI, has long issued dire warnings about the dangers of artificial intelligence. “[AI] is capable of vastly more than almost anyone knows and the rate of improvement is exponential,” Musk said at a conference in Austin, Texas. “Mark my words — AI is far more dangerous than nukes,” he added.

While many AI researchers share Musk’s concern, Facebook CEO Mark Zuckerberg believes AI will make our lives better in the future. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible,” he said in a Facebook Live video.

Other big names in science and technology, such as Steve Wozniak and Bill Gates, have also raised concerns over the risks of AI, which reflects how important research into AI safety has become. Although AI achieving super-intelligence is likely decades away, it may take just as long to develop the methods needed to implement AI safely and avoid an apocalyptic takeover.
