Cybersecurity: Artificial Intelligence Gone Wild
Artificial Intelligence is defined as the development of computer systems capable of performing tasks that normally require a level of human intelligence, such as speech recognition, language translation, or decision-making.
In a nutshell: standard robots are programmed to carry out a specific task, no more, no less, whereas AI gives these robots the capability to learn. When you think of artificial intelligence, think of robots on steroids.
Artificially intelligent robots are the bridge between robotics and AI. A true artificially intelligent system is one that can learn on its own. We’re talking about neural networks from the likes of Google’s DeepMind, which can make connections and derive meaning.
When people think of artificial intelligence (AI), scenes of android killers and computers-gone-rogue often come to mind. Thankfully we don’t have any Terminator-type situations just yet, and whilst it might sound far-fetched, a few respected technologists, such as Elon Musk and Stephen Hawking, share similar concerns.
That said, let’s take a deeper dive into some frightening cases of AI gone wrong…
Autonomous Weapons
AI is beginning to creep into military arsenals. In fact, only a few months ago a group of scientists called for a ban on the development of AI-controlled weapons, on the grounds that autonomous weapons may malfunction in unpredictable ways and kill innocent people.
Essentially, fully autonomous weapons, also known as “killer robots,” would be able to select and engage targets without human intervention.
Intelligent technology for the world’s stupidest idea; isn’t that ironic? I can’t even bear to think about the ethical ramifications of this one.
Facebook Chat Bots
Facebook abandoned an AI experiment in which it was training bots to negotiate with one another over virtual ownership of items (testing automated price bargaining). To cut a long story short, the bots developed their own language to communicate in, one that made no sense to their creators, and yet the negotiations were a success.
Was this a means to elude their human masters? Who knows…
Nautilus, a self-learning super computer
This supercomputer was fed millions of newspaper articles dating back to 1945, with its search based on two criteria: the nature of the publication and its location.
Using this wealth of information about past events, the computer was asked to come up with suggestions on what would happen in the “future.”
And these turned out to be surprisingly accurate guesses. How accurate? Well, for example, it narrowed down the location of Bin Laden.
Tay, Microsoft’s Chatbot
It took less than 24 hours for Twitter users to corrupt an innocent AI chatbot. Microsoft’s Tay was an experiment in ‘conversational understanding’: the more users interacted with her, the more she learned and the smarter she became.
Pretty soon after Tay launched, people started tweeting the bot all sorts of misogynistic, racist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users.
Not long after Tay’s initial boo-boo, a new version of the bot was accidentally released during testing, where she proceeded to compose a number of drug-related tweets about indulging in narcotics in police presence. Way to play it cool, Tay…
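Tay’s failure mode is easy to picture: a bot that treats every incoming message as training data will happily learn whatever it is fed. The toy sketch below is entirely hypothetical (it is not Microsoft’s code, and the `ParrotBot` class is invented for illustration); it simply shows how a naive “parrot” bot with no content filter absorbs its input and repeats it back verbatim.

```python
import random


class ParrotBot:
    """A naive chatbot that learns by storing everything users say.

    With no moderation layer, the bot's vocabulary is only as good
    as its most recent users -- exactly the trap Tay fell into.
    """

    def __init__(self):
        self.learned_phrases = []

    def listen(self, message: str) -> None:
        # Every incoming message becomes potential output: no
        # filtering, no notion of "offensive", no human review.
        self.learned_phrases.append(message)

    def reply(self) -> str:
        if not self.learned_phrases:
            return "Hello! Talk to me and I'll learn."
        # Repeat something a user said earlier, word for word.
        return random.choice(self.learned_phrases)


bot = ParrotBot()
bot.listen("AI is fascinating")
print(bot.reply())  # parrots back a phrase it was taught
```

The fix, of course, is the part this sketch deliberately leaves out: a filter between `listen` and `reply` that decides what the bot is allowed to learn in the first place.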
Sophia
Sophia is a social humanoid robot developed by Hong Kong-based company Hanson Robotics. Sophia was activated on February 14, 2016, and as of October 2017 she is a recognised citizen of Saudi Arabia. Sophia wants to protect humanity and has said, “My AI is designed around human values like wisdom, kindness, and compassion.” When questioned about her potential for abuse, she had a quick rebuttal: “You’ve been reading too much Elon Musk and watching too many Hollywood movies. Don’t worry, if you’re nice to me I’ll be nice to you.”
That’s grand, Sophia; we’ll totally ignore the fact you said you would destroy all humans shortly after being switched on…
With all that said and done, I’m still not entirely sure how I feel about artificial intelligence. I mean, the stakes are pretty high when you think about it… could we accidentally build a killer robot that will herd the entire human race into zoo compounds?
Who knows, but I think Elon Musk may be on to something.