The development of autonomous artifacts enabled by artificial intelligence techniques is creating new ethical challenges. We believe the co-existence of these artifacts and humans can only succeed if the artifacts can distinguish between right and wrong. Artifacts must therefore be embedded with the moral code held by the society in which they will act.
There are two fundamental aspects that need to be taken into consideration. First, morality has a dual sense: a descriptive facet, concerning the abstract cultural and personal values that are considered right or wrong, and a normative facet, concerning the actual behaviour that is right or wrong. Second, morality has an evolutionary aspect: codes of conduct evolve within our societies.
To ensure that technology is responsible, we propose setting values and norms as the foundation of the design process. Values and norms are the rules that govern behaviour in societies. This is in line with the Asilomar AI Principle on “Value Alignment” from the Future of Life Institute, which states that “highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation”.
However, we also propose that humans remain in control of their technologies, because they want and must be the final decision point on which moral codes artifacts have to abide by and how those codes evolve over time. This is in line with the Asilomar AI Principle on “Human Control” from the Future of Life Institute, which stresses the need for humans to maintain control over the AI systems they develop.
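As a purely illustrative sketch (not part of the paper’s roadmap), the interplay of these two principles can be pictured as an artifact that checks each intended action against a community-agreed norm set and defers to a human whenever the norms give no verdict; the community can revise the norms, reflecting their evolutionary character. All names here (NormSet, MoralArtifact, human_review, the verdict strings) are hypothetical, chosen only for this example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical sketch only: names and structure are illustrative,
# not an implementation described in the paper.

Verdict = str  # "permitted", "forbidden", or "unknown"


@dataclass
class NormSet:
    """Community-agreed norms mapping action types to verdicts.

    The community can revise these norms over time, reflecting the
    evolutionary aspect of moral codes.
    """
    rules: Dict[str, Verdict] = field(default_factory=dict)

    def evaluate(self, action: str) -> Verdict:
        return self.rules.get(action, "unknown")

    def revise(self, action: str, verdict: Verdict) -> None:
        # Norm evolution: the community updates the moral code.
        self.rules[action] = verdict


@dataclass
class MoralArtifact:
    """An artifact whose behaviour is governed by community norms,
    with a human as the final decision point for unresolved cases."""
    norms: NormSet
    human_review: Callable[[str], bool]  # human-in-the-loop override

    def act(self, action: str) -> bool:
        verdict = self.norms.evaluate(action)
        if verdict == "permitted":
            return True
        if verdict == "forbidden":
            return False
        # No applicable norm: defer the decision to a human.
        return self.human_review(action)


if __name__ == "__main__":
    norms = NormSet({"share_anonymised_stats": "permitted",
                     "share_private_data": "forbidden"})
    # Conservative human stand-in for the demo: deny unresolved actions.
    artifact = MoralArtifact(norms, human_review=lambda action: False)
    print(artifact.act("share_anonymised_stats"))  # True: the norm permits it
    print(artifact.act("share_private_data"))      # False: the norm forbids it
    print(artifact.act("send_marketing_email"))    # False: no norm, human denies
```

In this toy reading, value alignment corresponds to the artifact consulting the norm set before acting, and human control corresponds to the human review path and to the community’s ability to revise the norms.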
We propose a roadmap for the design and implementation of “moral” intelligent artifacts, whose morality is dictated by the community’s agreements on their moral code.
Published on 27/12/17
Submitted on 24/10/17
Licence: CC BY-NC-SA