Thursday, August 27, 2009

Creating 'Moral Robots' One Step Closer

Primary source: AlphaGalileo; also Softpedia News and ScienceDaily

The goal of giving robots a sense of morality was recently brought one step closer by researchers in Portugal and Indonesia, who introduced a new approach to decision-making based on computational logic. Their efforts are described in the latest issue of the International Journal of Reasoning-based Intelligent Systems, AlphaGalileo reports.

Science-fiction authors have long built book and movie plots around the idea of “evil” robots. In most of these stories, the robots turn to behavior we perceive as bad, for instance by attacking their creators. What people often overlook is that such machines would first need some idea of what is moral for a human being. Making this a reality is still some time away, experts believe.

“Morality no longer belongs only to the realm of philosophers. Recently, there has been a growing interest in understanding morality from the scientific point of view,” the researchers say in their paper. It was written by Luís Moniz Pereira, from the Universidade Nova de Lisboa, in Portugal, and Ari Saptawijaya, from the Universitas Indonesia, in Depok, both of whom have developed a keen interest in computational logic and applied robotics over the years.

They have turned to a system known as prospective logic to help them begin the process of programming morality into a computer. Put simply, prospective logic can model a moral dilemma and then determine the logical outcomes of the possible decisions. The approach could herald the emergence of machine ethics.
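To make the idea concrete, here is a minimal sketch of that decision process in Python, rather than the logic-programming setting the authors actually work in: candidate decisions for a trolley-style dilemma are enumerated, the modelled consequences of each are derived, options that violate a moral constraint (here, a double-effect-style rule) are filtered out, and a preference is applied to what remains. All names, rules, and numbers below are illustrative assumptions, not the authors' model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    """One candidate decision together with its modelled consequences."""
    decision: str
    deaths: int
    harm_is_direct_means: bool  # is the harm used deliberately as a means to the goal?

# Candidate decisions and their consequences (assumed values for illustration).
OUTCOMES = [
    Outcome("do_nothing",                deaths=5, harm_is_direct_means=False),
    Outcome("divert_trolley",            deaths=1, harm_is_direct_means=False),
    Outcome("push_bystander_onto_track", deaths=1, harm_is_direct_means=True),
]

def permissible(o: Outcome) -> bool:
    """Double-effect-style constraint: harm may occur as a side effect,
    but never as the means by which the good outcome is achieved."""
    return not o.harm_is_direct_means

def prefer(options: list[Outcome]) -> Outcome:
    """A posteriori preference: among permissible options, minimise deaths."""
    allowed = [o for o in options if permissible(o)]
    return min(allowed, key=lambda o: o.deaths)

if __name__ == "__main__":
    best = prefer(OUTCOMES)
    print(f"Chosen decision: {best.decision} ({best.deaths} death(s))")
    # Prints: Chosen decision: divert_trolley (1 death(s))
```

The design choice mirrors the prospective-logic pattern described above: consequences of every possible decision are computed first, and the moral rule acts as a filter on outcomes rather than being hard-coded into any single action.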

The development of machine ethics would allow fully autonomous machines to be programmed to make judgements based on a human moral foundation. "Equipping agents with the capability to compute moral decisions is an indispensable requirement," the researchers say. "This is particularly true when the agents are operating in domains where moral dilemmas occur, e.g., in healthcare or medical fields."

Journal reference:

Luís Moniz Pereira, Ari Saptawijaya. Modelling Morality with Prospective Logic. Progress in Artificial Intelligence (EPIA 2007), LNAI 4874, pp. 99–111. DOI: 10.1007/978-3-540-77002-2_9
Adapted from materials provided by Inderscience, via AlphaGalileo.