The term roboethics was coined by roboticist Gianmarco Veruggio in 2002, who also served as chair of an Atelier funded by the European Robotics Research Network to outline areas where research may be needed. The road map effectively divided ethics of artificial intelligence into two sub-fields to accommodate researchers’ differing interests:
- Machine ethics is concerned with the behavior of artificial moral agents (AMAs)
- Roboethics is concerned with the behavior of humans, how humans design, construct, use and treat robots and other artificially intelligent beings
Robotics is rapidly becoming one of the leading fields of science and technology, and humanity will soon coexist with an entirely new class of technological artifact: the robot. This coexistence will raise rich ethical, social, and economic problems. “Roboethics is an applied ethics whose objective is to develop scientific/cultural/technical tools that can be shared by different social groups and beliefs. These tools aim to promote and encourage the development of Robotics for the advancement of human society and individuals, and to help preventing its misuse against humankind.” (Veruggio, 2002) For the first time in history, humanity is approaching the challenge of replicating an intelligent, autonomous entity. This compels the scientific community to examine closely the very concept of intelligence, in humans, animals, and machines, from a cybernetic standpoint.
In fact, complex concepts such as autonomy, learning, consciousness, evaluation, free will, decision making, freedom, and emotion must be analyzed, bearing in mind that the same concept does not have the same reality and semantic meaning in humans, animals, and machines.
From this standpoint, it is natural and necessary that robotics draws on several other disciplines: logic, linguistics, neuroscience, psychology, biology, physiology, philosophy, literature, natural history, anthropology, art, and design. Robotics de facto unifies the so-called two cultures, science and the humanities. The effort to design Roboethics should take account of this specificity: experts must view robotics as a whole, despite its current early, melting-pot stage, in order to form a vision of its future.
Main positions on roboethics
- Not interested in ethics (This is the attitude of those who consider that their actions are strictly technical, and do not think they have a social or a moral responsibility in their work)
- Interested in short-term ethical questions (This is the attitude of those who express their ethical concern in terms of “good” or “bad,” and who refer to some cultural values and social conventions)
- Interested in long-term ethical concerns (This is the attitude of those who express their ethical concern in terms of global, long-term questions)
Disciplines involved in roboethics
The design of Roboethics requires the combined commitment of experts from several disciplines who, working in transnational projects, committees, and commissions, must adapt laws and regulations to the problems arising from scientific and technological achievements in Robotics and AI.
In all likelihood, new curricula studiorum and specialties will emerge to manage so complex a subject, just as happened with forensic medicine. The main fields involved in Roboethics are robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design.
As Roboethics is a human-centered ethics, it has to comply with the principles stated in the most important and widely accepted charters of human rights:
- Human dignity and human rights.
- Equality, justice and equity.
- Benefit and harm.
- Respect for cultural diversity and pluralism.
- Non-discrimination and non-stigmatization.
- Autonomy and individual responsibility.
- Informed consent.
- Solidarity and cooperation.
- Social responsibility.
- Sharing of benefits.
- Responsibility towards the biosphere.
General ethical issues in science and technology
Roboethics shares with the other fields of science and technology most of the ethical problems derived from the Second and Third Industrial Revolutions:
- Dual-use technology.
- Environmental impact of technology.
- Effects of technology on the global distribution of wealth.
- Digital divide, socio-technological gap.
- Fair access to technological resources.
- Dehumanization of humans in their relationships with machines.
- Technology addiction.
- Anthropomorphization of machines.
Since antiquity, the treatment of non-human and even non-living things, and their potential “spirituality,” has been a subject of ethical discussion. With the development of machinery and eventually robots, this philosophy was also applied to robotics. The first publication directly addressing roboethics was Isaac Asimov’s Three Laws of Robotics, formulated in 1942 in the context of his science fiction works, although the term “roboethics” itself was coined by Gianmarco Veruggio in 2002.
Roboethics guidelines have been developed during several important robotics events and projects:
- 1942, Asimov’s short story “Runaround” explicitly states his Three Laws for the first time. These “Laws” were reused in Asimov’s later robot-related science fiction.
- 2004, First International Symposium on Roboethics, 30–31 January 2004, Villa Nobel, Sanremo, Italy, organized by School of Robotics, where the word Roboethics was officially used for the first time.
- 2004, IEEE-RAS established a Technical Committee on Roboethics.
- 2004, Fukuoka World Robot Declaration, issued on February 25, 2004 from Fukuoka, Japan.
- 2005, ICRA05 (International Conference on Robotics and Automation), Barcelona: the IEEE RAS TC on Roboethics organized a Workshop on Roboethics.
- 2005–2006, E.C. Euron Roboethics Atelier (Genoa, Italy, February/March 2006). The Euron Project, coordinated by School of Robotics, involved a large number of roboticists and scholars of the humanities, who produced the first Roboethics Roadmap.
- 2006, BioRob2006 (The first IEEE / RAS-EMBS International Conference on Biomedical Robotics and Bio-mechatronics), Pisa, Italy, February 20, 2006: Mini symposium on Roboethics.
- 2006, International Workshop “Ethics of Human Interaction with Robotic, Bionic, and AI Systems: Concepts and Policies”, Naples, 17–18 October 2006. The workshop was supported by the ETHICBOTS European Project.
- 2007, ICRA07 (International Conference on Robotics and Automation), Rome: the IEEE RAS TC on Roboethics organized a Workshop on Roboethics.
- 2007, ICAIL’07, International Conference on Artificial Intelligence and Law, Stanford University, Palo Alto, USA, 4–8 June 2007.
- 2007, International European Conference on Computing and Philosophy E-CAP ’07, University of Twente, Netherlands, 21–23 June 2007. Track “Roboethics”.
- 2007, Computer Ethics Philosophical Enquiry CEPE ’07, University of San Diego, USA, 12–14 July 2007. Topic “Roboethics”.
- 2008, International Symposium “Robotics: New Science”, 20 February 2008, Via della Lungara 10, Rome, Italy.
- 2009, ICRA09 (International Conference on Robotics and Automation), Kobe, Japan: the IEEE RAS TC on Roboethics organized a Workshop on Roboethics.
- 2013, Workshop on Robot Ethics, University of Sheffield, February 2013.
In popular culture
Roboethics as a science or philosophical topic has not made a strong cultural impact, but it is a common theme in science fiction literature and film. One of the most popular films depicting the potential misuse of robotic and AI technology is The Matrix, which portrays a future in which the absence of roboethics brought about the destruction of the human race. The Animatrix, an animated film based on The Matrix, focuses heavily on the potential ethical issues between humans and robots. Many of The Animatrix’s animated shorts are also named after Isaac Asimov’s fictional stories. The movie I, Robot (named after Isaac Asimov’s book of the same title) likewise depicts a scenario in which robots rebel against humans because of a lack of civil rights and ethical treatment.
Although not part of roboethics per se, the ethical behavior of robots themselves is a closely related theme in popular culture. The Terminator series focuses on robots run by an uncontrolled AI program with no restraint on the termination of its enemies; like The Matrix series, it depicts a future in which machines have taken control. The most famous case of a robot or computer without programmed ethics is HAL 9000 in the Space Odyssey series, where HAL (a computer with advanced AI capabilities that monitors and assists the humans on board) kills the crew to ensure the success of the assigned mission.
- Levy, David (2008). Love and Sex with Robots: The Evolution of Human-Robot Relationships. Harper Perennial.
- Laryionava, Katsiaryna; Gross, Dominik (2012). “Deus Ex Machina or E-slave? Public Perception of Healthcare Robotics in the German Print Media”. International Journal of Technology Assessment in Health Care 28(3): 265–270.
- Ethics + Emerging Sciences Group
- Roboethics Info Database
- The Roboethics official website
- Technical Committee on Roboethics
- The Markkula Center for Applied Ethics at Santa Clara University, USA
- International Society for Ethics and Information Technology
- International Center for Information Ethics
- The Pugwash Conference on Science and World Affairs
- Cultural Attitude Towards Technology and Communication Conference
- Computer Professionals for Social Responsibility
- The Centre for Responsible Nanotechnology
- Union of Concerned Scientists
- The International Institute of Humanitarian Law
- The World Transhumanist Association
- Ciao Robot: non-fiction film on the birth of roboethics
- Living Safely with Robots, Beyond Asimov’s Laws, PhysOrg.com, June 22, 2009.
- Plug & Pray, documentary film on the ethics of robotics and artificial intelligence (with Joseph Weizenbaum and Ray Kurzweil)