Fears and concerns regarding AI

Opinions of experts in the field of artificial intelligence and technology

Artificial general intelligence could make the modern world a far more attractive place to live, researchers say. It could help cure cancer, improve healthcare around the world, and free us from the daily routine tasks that consume most of our lives. These prospects were the main topic of conversation among the engineers, investors, researchers, and politicians who gathered at the recent Joint Multi-Conference on Human-Level Artificial Intelligence.

But there were also those who see in artificial intelligence not only a benefit but a potential threat. Some voiced fears of rising unemployment, since with the advent of full-fledged AI people would lose their jobs to more flexible, tireless robots endowed with superhuman intelligence; others hinted at the possibility of a machine uprising if we let things get out of hand. But where exactly should we draw the line between groundless alarmism and genuine concern for our future?


Kenneth O. Stanley – Professor of Computer Science at the University of Central Florida, senior engineering manager and researcher at Uber AI Labs:

I think the most obvious concern is that AI will be used to harm humans, and in fact there are many spheres where this could happen. We must make every effort to ensure that this dark side does not eventually prevail. It is very difficult to work out how responsibility should be assigned for the actions an AI takes. This issue is multifaceted and cannot be considered from a scientific standpoint alone. In other words, solving it requires the participation of society as a whole, not just the scientific community.

On developing safe AI:

Any technology can be used for good or for harm, and artificial intelligence is no exception. People have always struggled to keep new technologies from falling into the wrong hands and being used for vile purposes. I believe that with AI we will be able to cope with this task. What is required is the right emphasis and a balance in how this technology is used; that alone can ward off a great many potential problems. More specific solutions I probably cannot offer. The only thing I would add is that we must understand and accept responsibility for the impact AI can have on society as a whole.


Irakli Beridze – Head of the Center for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI):

I think the most dangerous thing about AI is the pace of its development – how quickly it will be created and how quickly we can adapt to it. If that balance is broken, we may run into problems.

On terrorism, crime, and other sources of risk:

From my point of view, the main danger lies in the possibility that AI will be used by criminal structures and large terrorist organizations bent on destabilizing the world order. Cyberterrorism and drones armed with missiles and bombs are already a reality. In the future, robots equipped with AI systems may join them. That could be a serious problem.

Another big risk of the mass introduction of AI is the likely loss of jobs. If those losses are massive and we have no suitable response, it will be a very dangerous problem.

But this is only one side of the coin. I am convinced that, at its core, AI is not a weapon. It is a tool – a very powerful tool – and this powerful tool can be used for both good and bad purposes. Our task is to understand and minimize the risks associated with its use, to direct it only toward good ends, and to focus on maximizing the positive benefits of this technology.


John Langford – Chief Researcher, Microsoft Corporation:

I think the main danger will be drones. Automated drones could be a real problem. The computing power currently available to autonomous weapons is not yet sufficient for anything extraordinary. However, I can well imagine that in 5–10 years autonomous weapons will carry supercomputer-level computation on board. Drones are used in combat today, but they are still controlled by humans. Before long the human operator will no longer be needed: machines will be effective enough to carry out their tasks on their own. That is what worries me.


Hava Siegelman – Program Manager, DARPA Microsystems Technology Office:

Any technology can be used to cause harm. I think it all depends on whose hands the technology falls into. I do not think there are bad technologies; I think there are bad people. It all comes down to who has access to these technologies and how exactly they use them.


Thomas Mikolov – Research Scientist, Facebook AI Research:

Wherever there is interest and investment, there are always people willing to abuse it. It upsets me that some people try to sell "AI" and colorfully describe the problems it will supposedly solve, even though no true AI has actually been created yet.

All these fly-by-night startups promise mountains of gold and offer examples of supposedly working AI, when in fact we are shown only refined or optimized versions of today's technologies. In most cases, nobody had bothered to improve or optimize those technologies before, because doing so was useless. Take chatbots, for instance, which are passed off as artificial intelligence. Having spent tens of thousands of hours optimizing the performance of a single task, these startups come to us and claim to have achieved something no one else could. But that is ridiculous.

Frankly speaking, most of the recent supposed technological breakthroughs by such organizations, which I would rather not name, were of no interest to anyone before – not because nobody else could have built them, but simply because these technologies do not generate any financial return. They are completely useless. This is closer to quackery, especially when AI is presented as a tool for speeding up the solution of a single, narrowly focused task. It cannot be scaled to anything beyond the simplest tasks.

At the same time, anyone who so much as begins to criticize such systems immediately runs into trouble, because any criticism goes against the sugary claims of these companies.