It’s a topic that worries some of the world’s greatest minds right now, from Bill Gates to Elon Musk.
Elon Musk, CEO of SpaceX and Tesla, has described AI as our “greatest existential threat” and compared its development to “summoning the demon”.
He believes that super-intelligent machines could use humans as pets.
Professor Stephen Hawking said it was “almost certain” that a major technological catastrophe will threaten humanity in the next 1,000-10,000 years.
They could steal jobs
According to a 2016 YouGov poll, more than 60 percent of people fear that robots will lead to fewer jobs in the next decade.
And 27 percent predict that the number of jobs will fall “sharply”, with previous research suggesting that administrative and service workers will be hardest hit.
Other experts believe that AI not only threatens our jobs, but could also go “rogue”, becoming too complex for the scientists who built it to understand.
A quarter of those surveyed predict that robots will be part of everyday life in just 11 to 20 years, while 18 percent predict that this will happen within the next decade.
They could go ‘rogue’
Computer scientist Professor Michael Wooldridge said AI machines can get so complicated that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms work, they can’t predict when they will fail.
This means that driverless cars or intelligent robots could make unpredictable, “atypical” decisions at critical moments that could put people at risk.
For example, at the wheel of a driverless car, the AI could swerve into pedestrians or crash into barriers instead of making the sensible driving decision.
They could wipe out humanity
Some people believe that AI will wipe people out completely.
“I think there will likely be human extinction at some point, and technology will likely play a role in it,” DeepMind’s Shane Legg said in a recent interview.
He named artificial intelligence, or AI, “the number one risk for this century”.
Musk warned that AI poses a greater threat to humanity than North Korea.
“If you’re not concerned about AI security, you should be. Much more risk than North Korea,” the 46-year-old wrote on Twitter.
“Nobody likes to be regulated, but anything (cars, planes, food, drugs, etc.) that poses a threat to the public is regulated. AI should be too.”
Musk has consistently urged governments and private institutions to adopt regulations on AI technology.
He has argued that such controls are necessary to prevent machines from escaping human control.