There is a race on right now to build superintelligent machines, powered by neural networks and sometimes called artificial agents. What are they?
They’re more advanced than robots or bots. Bots are machines with built-in combinations of algorithms and instructions, usually designed for a single task or need. Bots are already in use far and wide: there are chatbots like Siri or ALICE, therapy bots that work in hospital settings, and the list goes on.
Take ‘journobots’. Many large news organizations use journobots for formulaic articles such as sports recaps and business reports. The Associated Press, for example, uses a journobot program called Wordsmith to write over 1,000 articles a month.
There are more bots on the internet than humans: almost 60 percent of 2014’s internet traffic consisted of automated code, and the figure is much higher today. Ever notice all the porn bots on #Nanaimo?
Google also has an AI project, called Magenta, that creates music and art. Given just three notes, Magenta came up with this 90-second song:
Full AI (artificial intelligence) and the threat to humans
Now scientists are developing superintelligent beings with self-will: machines that will think smarter than humans and eventually strike out on their own. Full AI – a conscious machine with self-will – could be more dangerous than anything we have known so far.
AI machines, or ‘artificial agents’, might go rogue. What’s going to stop them? A big red button? That’s what Google says will work. But what if these artificial agents become aligned with some zealot who wants to unleash harm on the world? It’s not a stretch to imagine what might happen.
The Future of Humanity Institute, led by Nick Bostrom, believes that machines will outsmart humans within the next 100 years and that they have the potential to turn against us.
In a recent interview, Elon Musk said he is so concerned that he set up an open-source AI organization, OpenAI, so that if something does happen, not all AI capability will be held by a few players. He said he was worried about “one company in particular” – and many believe he was referring to Google.
It’s important that AI not be concentrated in the hands of a few and potentially lead to a world that we don’t want.
meet AlphaGo – artificial agent
DeepMind, founded in London in 2010, designed an artificial agent called AlphaGo.
AlphaGo played games. It learned largely on its own, with little human intervention beyond being rewarded or penalized – a technique known as reinforcement learning. This is what set it apart from anything else so far.
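Learning from reward and punishment is the essence of reinforcement learning. The toy sketch below is not AlphaGo’s actual method (which paired deep neural networks with tree search); it shows the simplest tabular version, Q-learning, where an agent on a five-cell line learns by trial and error to walk toward a reward. All names and parameters here are illustrative choices, not anything from DeepMind:

```python
import random

random.seed(0)

# Toy Q-learning: an agent on a 5-cell line learns to walk right to a
# reward in cell 4. A minimal sketch of reward-driven learning only --
# AlphaGo itself combined deep networks with Monte Carlo tree search.
N_STATES = 5
ACTIONS = (-1, +1)                        # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Mostly act greedily, but try a random move 10% of the time.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else -0.01
        # Q-learning update: nudge the estimate toward reward + future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)   # the learned policy: move right in every cell
```

Nobody tells the agent the answer; after a couple of hundred episodes the reward signal alone has taught it to move right in every cell.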
It got really good at playing Go, a Chinese board game exponentially more complex than chess. To prove its abilities, in October 2015 AlphaGo played Europe’s reigning Go champion, Fan Hui, and beat him in all five games.
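“Exponentially more complex” can be made concrete with a standard back-of-the-envelope estimate: a game tree holds roughly b^d positions, the average branching factor raised to the typical game length. Using commonly cited rough figures – about 35 legal moves and 80 plies per game for chess, versus about 250 moves and 150 plies for Go:

```python
import math

def tree_size_exponent(branching: float, plies: int) -> float:
    """Return log10 of branching**plies, i.e. the digit count of the estimate."""
    return plies * math.log10(branching)

chess = tree_size_exponent(35, 80)    # rough chess averages: b ~= 35, d ~= 80
go = tree_size_exponent(250, 150)     # rough Go averages:    b ~= 250, d ~= 150
print(f"chess game tree: ~10^{chess:.0f} positions")
print(f"go game tree:    ~10^{go:.0f} positions")
print(f"go's tree is ~10^{go - chess:.0f} times larger")
```

Even with such rough inputs, Go’s game tree comes out hundreds of orders of magnitude larger than chess’s, which is why the brute-force search that works for chess engines fails for Go.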
In March of this year, 60 million viewers in China (100 million worldwide) watched the match between South Korean Go master Lee Se-dol and AlphaGo. AlphaGo won the first three games – and with them the match – finishing 4–1. It was a historic moment. In Korea, five books have already been published about the famous match.
AlphaGo is now the world’s number one Go player, the first non-human to win the honour.
When Google bought DeepMind in January 2014, in a deal worth £300–400 million, the founders of DeepMind had one condition: create an AI ethics board. Google did, but it won’t reveal who sits on it. Why not?
In May of this year it was revealed that Google’s DeepMind had been given access to the healthcare data of up to 1.6 million patients from three hospitals in England run by the Royal Free London NHS Foundation Trust, in order to develop an app called Streams.
Google recently announced that “by applying DeepMind’s machine learning to our own Google data centres, we’ve managed to reduce the amount of energy we use for cooling by up to 40%.” It plans to share possible applications of this technology, including improving power-plant conversion efficiency and reducing water usage.
Have humans become zurkers?
Google invested in Niantic Inc., maker of the game Pokémon Go, which is built on Google Maps.
In order to play, you have to sign up with a Google account. As people walk around looking for Pokémon, the app uses their phones’ camera and GPS sensors.
Is this a fun pastime, or has it turned everyone into ‘zurkers’ – people who do human intelligence tasks for free?
In the past, Google has employed people to capture street views. But what if it wanted to see more, such as the inside of a building, or a close-up of an area that is normally off limits? What better idea than to create an app and let people think they are having fun, unaware that they are collectively uploading huge amounts of data.
Remember this video from 2008, when Google Street View was still new?
Is Alphabet evil?
Google’s slogan is “Don’t be evil.” It appears in the company’s corporate code of conduct and in its 2004 Founders’ IPO Letter. It’s interesting to note that the slogan was not adopted by its holding company, Alphabet Inc., which was set up last year.
Some say Alphabet was set up simply to avoid paying foreign taxes, such as the 1.6 billion euros ($1.8 billion) it owes France.
Google has funneled billions in profits to offshore tax havens through a series of foreign subsidiaries with a strategy that has been dubbed the “Double Irish With a Dutch Sandwich.” As of the end of 2015, Google had $58.3 billion in offshore “permanently reinvested” profits on which it pays no U.S. taxes, up from $47.4 billion in 2014.
Alter – the neural network robot
This robot, called ‘Alter’, runs entirely on a neural network, meaning its movements are free of any direct human control. Alter was created by two robotics laboratories, in Tokyo and Osaka, and is currently on display at Tokyo’s National Museum of Emerging Science and Innovation.
Why is AI a threat to humans? Because we will no longer be at the top of the food chain. In the future, humans might just be the pets of these AI beings.