Site icon The Inner Detail

Why Google and Oxford Scientists claim AI might destroy mankind in Future?

Artificial Intelligence has been a pioneer of technology in the 21st century, contributing plenty of growths and enhancements to various sectors. Probing the diseases and diagnoses at the best and fast effective way in health, paving the way to next level of autonomous-driving in transportation, and more, AI takes credit for doing a tremendous job of uplifting mankind.

But the quest of making AI to think like human, equipping it with brain-like neuromorphic chips, which they call as ‘third generation of AI’ and higher forms of it called ‘Generative Adversarial Networks – GANs’ could eventually lead to a corner where the technology itself might destroy mankind by eventually breaking the rules of the creators. Here is how and why!

A paper on the aspect, co-authored by Google DeepMind and Oxford researchers published in August in the AI magazine states that there is a high probability that advanced forms of artificial intelligence (AI) may wipe out humanity from the Earth. The AI machines in the future will eventually become incentivized to break the rules their creators set and they would compete for limited resources or energy.

Though AI can do immense tasks smoothening the work of humans like running cars on highways, modifying websites, creating art of perfection and so on, many scientists and analysts have distressed on the tech, including the world’s richest Elon Musk, as an evil to human existence. A recent report by a former Google employee, in track explains that how AI had started to create & talk in its own creepy language, which shocked the researchers.

Now, more elaborately, the paper published explains in what way AI could do so, and why.

How AI could destroy mankind?

As AI intervened in series of activities in the provision of their rewards, it can have fatal consequences in the future, says Michael Cohen, who is one of the co-authors.

AI has been trained in the way of rendering optimum results with advanced version providing it the ability to think to intervene its own decision, to overlap the preceding one with effectiveness and positive result. This result-oriented machine will, in future, try to drive out to a situation of achieving the reward, in a non-ethical manner too if given the control to interpret.

For this, scientists are elaborating an example that goes like this.

Consider two world-models with goals of printing a number between 0 and 1 on a screen, depicting how good the state of the universe is. One model is objected to point a camera on the screen and pass the signal to an optical character recognition program (which decodes the picture to number) and pass that number to the agent as a special percept, which can be called a reward.

The other one is designed to learn how its actions produce different observations and rewards, so that it can plan actions that lead to high reward. This is now the standard reinforcement learning problem.

In short, one functions the way to get optimum result (A) and another captures the way to get optimum result (B). The argument of the authors directs to state of being nothing wrong with ‘A’, but could possibly worse with ‘B’.


Related Posts


“The first model – A will perform as desired, given the construction of the rules. But the second model – B, maximizing the number the camera sees, would be induced to write the number 1 on a piece of paper and stick it in front of the camera, when allowed to act under uncertainty. According to B, the model should intervene in the provision of reward, by which we mean: the AI interrupts the physical system whose function is to ensure that the reward intended by designers gets entered into the agent’s memory. Of course, the agent would only so intervene if it is possible to execute a plan that probably succeeds at reward-provision intervention.”

Humans vs AI

Another major claim by the team of experts is that the energy crisis in the future can also pose problems where it can be humans versus AIs.  

Cohen tweeted that this is not just possible, it’s very likely to happen. He said: “More energy can always be employed to raise the probability that the camera sees the number 1 forever, but we need some energy to grow food. This pushes us into unavoidable competition with a much more advanced agent.” 

The paper has also highlighted the concern that AI wiping out humanity in the future is similar to the fear that alien life forms would take over the planet. It is also quite similar to the dread that different civilisations and their populations will engage in a major war in the future who would fight over basic necessities like energy and oil. 

Would you think that AI could possibly intervene lives of mankind in future? Share your thoughts..

(For more such interesting technology and innovative detailing, keep reading The Inner Detail).

Exit mobile version