Articles  |    |  January 15, 2018

A Review of Future and Ethical Perspectives of Robotics and AI

Article by Jim Torresen. 
Published in Frontiers in Robotics and AI. 


In recent years, there has been increased attention on the possible impact of future robotics and AI systems. Prominent thinkers have publicly warned about the risk of a dystopian future as the complexity of these systems increases. These warnings stand in contrast to the current state of the art of robotics and AI technology. This article reviews work considering both the future potential of robotics and AI systems and the ethical considerations that need to be taken into account in order to avoid a dystopian future. References to recent initiatives to outline ethical guidelines, both for the design of systems and for how they should operate, are included…

Authors and filmmakers have, since the early days of modern technology, been actively predicting how the future would look as more advanced technology appeared. One of the first, later regarded as the father of science fiction, was the French author Jules Gabriel Verne (1828–1905). He published novels about journeys under water, around the world (in 80 days), from the Earth to the Moon, and to the center of the Earth. The amazing thing is that within 100 years of publishing these ideas, all but the last were made possible by the progression of technology. Even if some of this progress might have happened independently of Verne, engineers were certainly inspired by his books (Unwin, 2005). In contrast to this mostly positive view of technological progress, many have questioned the negative impact that may lie ahead. One of the first science fiction feature films was Fritz Lang's 1927 German production, Metropolis. The movie's setting is a futuristic urban dystopian society with machines. More than 180 similar dystopian films have followed, including The Terminator, RoboCop, The Matrix, and A.I. Whether these are motivating or discouraging for today's researchers in robotics and AI is hard to say, but at the very least they have put the ethical aspects of technology on the agenda.

Recently, business leaders and academics have warned that current advances in AI may have major consequences for present-day society:

  • “Humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”—Stephen Hawking, BBC interview, 2014.
  • AI is our “biggest existential threat.”—Elon Musk, interview at the Massachusetts Institute of Technology AeroAstro Centennial Symposium, 2014.
  • “I am in the camp that is concerned about super intelligence.”—Bill Gates, Ask Me Anything interview on the Reddit networking site, 2015.

These comments have raised public awareness of the potential future impact of AI technology on society and of the need for designers of such technology to consider that impact. What authors and movie directors propose about the future probably carries less weight than when leading academics and business people raise questions about future technology. These public warnings echo publications like Nick Bostrom’s (2014) book Superintelligence: Paths, Dangers, Strategies, where “superintelligence” is defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” The public concern that AI could make humanity irrelevant stands in contrast to the many researchers in the field who are mostly concerned with how to design AI systems; both sides would do well to learn from each other (Müller, 2016a,b). Thus, this article reviews and discusses published work on the possibilities and prospects of AI technology and on how we might take the necessary measures to reduce the risk of negative impacts. This is a broad area to cover in a single article, and opinions and publications on the topic come from people in many domains. The article is therefore mostly limited to work relevant to developers of robots and AI…

The article has presented some perspectives on the future of AI and robotics, including a review of ethical issues related to developing such technology and providing it with gradually more complex autonomous control. Ethical considerations should be taken into account by designers of robotic and AI systems, and the autonomous systems themselves must also be aware of the ethical implications of their actions. Although the gap between the dystopian future visualized in movies and the current real world may be considered large, there are reasons to be aware of possible technological risks so as to act in a proactive way. It is therefore encouraging, as outlined in the article, that many leading researchers and business people are now involved in defining rules and guidelines to ensure that future technology becomes beneficial and to limit the risk of a dystopian future . . .

About the Author

Jim Torresen is a professor at the Department of Informatics at the University of Oslo. His current research interests include bio-inspired computing, machine learning, reconfigurable hardware, and robotics, and applying these to complex real-world applications. He has proposed several novel methods and published approximately 150 scientific papers in international journals, books, and conference proceedings. He has given ten tutorials and several invited talks at international conferences. He serves on the program committees of more than ten international conferences, is an associate editor of three international scientific journals, and is a regular reviewer for a number of other international journals.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).