Roboethics

The term roboethics was coined by roboticist Gianmarco Veruggio in 2002. Veruggio also chaired an Atelier funded by the European Robotics Research Network to outline areas where research might be needed. The resulting road map effectively divided the ethics of artificial intelligence into two sub-fields to accommodate researchers' differing interests:[1]

  • Machine ethics is concerned with the behavior of artificial moral agents (AMAs)
  • Roboethics is concerned with the behavior of humans: how humans design, construct, use, and treat robots and other artificially intelligent beings

Issues

Robotics is rapidly becoming one of the leading fields of science and technology, and humanity will soon coexist with a totally new class of technological artifacts: robots. This coexistence will raise ethical, social, and economic problems. "Roboethics is an applied ethics whose objective is to develop scientific/cultural/technical tools that can be shared by different social groups and beliefs. These tools aim to promote and encourage the development of Robotics for the advancement of human society and individuals, and to help preventing its misuse against humankind." (Veruggio, 2002)[1] For the first time in history, humanity is approaching the challenge of replicating an intelligent and autonomous entity. This compels the scientific community to examine closely the very concept of intelligence, in humans, animals, and machines, from a cybernetic standpoint.

In fact, complex concepts such as autonomy, learning, consciousness, evaluation, free will, decision making, freedom, and emotion must be analyzed, bearing in mind that the same concept does not have the same reality and semantic meaning in humans, animals, and machines.

From this standpoint, it is natural and necessary that robotics draws on several other disciplines: logic, linguistics, neuroscience, psychology, biology, physiology, philosophy, literature, natural history, anthropology, art, and design. Robotics de facto unifies the so-called two cultures, science and the humanities. The effort to design roboethics should take this specificity into account: experts must view robotics as a whole, in spite of its current early stage, which resembles a melting pot, so that they can envision the future of robotics.

Main positions on roboethics

Since the First International Symposium on Roboethics (Sanremo, Italy, 2004), three main ethical positions have emerged within the robotics community (D. Cerqui, 2004):

  • Not interested in ethics (This is the attitude of those who consider their actions strictly technical, and who do not think they have a social or moral responsibility in their work)
  • Interested in short-term ethical questions (This is the attitude of those who express their ethical concern in terms of “good” or “bad,” and who refer to some cultural values and social conventions)
  • Interested in long-term ethical concerns (This is the attitude of those who express their ethical concern in terms of global, long-term questions)

Disciplines involved in roboethics

The design of roboethics requires the combined commitment of experts from several disciplines who, working in transnational projects, committees, and commissions, must adapt laws and regulations to the problems arising from scientific and technological achievements in robotics and AI.

In all likelihood, new curricula studiorum and specialties will emerge to manage so complex a subject, just as happened with forensic medicine. In particular, the main fields involved in roboethics are: robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design.

Principles

As roboethics is a human-centered ethics, it must comply with the principles stated in the most important and widely accepted charters of human rights:

  • Human dignity and human rights.
  • Equality, justice and equity.
  • Benefit and harm.
  • Respect for cultural diversity and pluralism.
  • Non-discrimination and non-stigmatization.
  • Autonomy and individual responsibility.
  • Informed consent.
  • Privacy.
  • Confidentiality.
  • Solidarity and cooperation.
  • Social responsibility.
  • Sharing of benefits.
  • Responsibility towards the biosphere.

General ethical issues in science and technology

Roboethics shares with the other fields of science and technology most of the ethical problems derived from the Second and Third Industrial Revolutions:

  • Dual-use technology.
  • Environmental impact of technology.
  • Effects of technology on the global distribution of wealth.
  • Digital divide, socio-technological gap.
  • Fair access to technological resources.
  • Dehumanization of humans in their relationships with machines.
  • Technology addiction.
  • Anthropomorphization of the machines.

History

Main article: History of robots

Since antiquity, ethics in relation to the treatment of non-human and even non-living things, and their potential "spirituality," has been discussed. With the development of machinery and eventually robots, this philosophy was also applied to robotics. The first publication directly addressing roboethics was Isaac Asimov's Three Laws of Robotics, formulated in 1942 in the context of his science fiction works, although the term "roboethics" itself was coined by Gianmarco Veruggio in 2002.

Roboethics guidelines have been developed during several important robotics events and projects.

In popular culture

Roboethics as a science or philosophical topic has not made a strong cultural impact,[citation needed] but it is a common theme in science fiction literature and film. One of the most popular films depicting the potential misuse of robotic and AI technology is The Matrix, which portrays a future where the lack of roboethics brought about the destruction of the human race. The Animatrix, an animated film based on The Matrix, focuses heavily on the potential ethical issues between humans and robots. Many of The Animatrix's animated shorts are also named after Isaac Asimov's fictional stories. The movie I, Robot (named after Isaac Asimov's book I, Robot) also depicts a scenario where robots rebel against humans due to the lack of civil rights and ethical treatment.

Although not part of roboethics per se, the ethical behavior of robots themselves has also become a recurring issue in popular-culture treatments of roboethics. The Terminator series focuses on robots run by an uncontrolled AI program with no restraint on the termination of its enemies; like The Matrix series, it depicts a future in which robots have taken control. The most famous case of a computer without programmed ethics is HAL 9000 in the Space Odyssey series: HAL, a computer with advanced AI capabilities that monitors and assists humans aboard a spacecraft, kills the crew members on board to ensure the success of the assigned mission.

Notes

  1. ^a b Veruggio, Gianmarco (2007). The Roboethics Roadmap. Scuola di Robotica. p. 2. Retrieved 28 April 2011.
