Artificial intelligence endowed with morality and ethics

Techno, 19 October 2017




The more AI becomes part of our daily life, the more often it has to face difficult moral and ethical dilemmas that even a living person sometimes finds hard to resolve, reports Rus.Media. MIT scientists tried to solve this problem by giving the machine the ability to reason within a moral framework based on the opinion of the majority of living people.

Some experts believe that the best way to train an artificial intelligence to handle morally complex situations is to use the “experience of the crowd.” Others argue that this method is not free of bias, and that different algorithms can reach different conclusions from the same dataset. What, then, are machines to do when they clearly have to make difficult ethical decisions while working with real people?

Intelligence and morality

With the development of artificial intelligence (AI) systems, experts are increasingly trying to work out how best to give a system an ethical and moral basis for its actions. The most popular idea is for the AI to draw its conclusions by studying human decisions. To test this hypothesis, researchers from MIT created the Moral Machine. Visitors to the website were asked to choose how an autonomous car should act if it faced a very difficult choice. For example, here is the familiar dilemma of a potential accident with only two possible outcomes: the car can knock down three adults to save the lives of two children, or it can do the opposite. Which option should it choose? Can one, for example, sacrifice the life of an elderly man to save a pregnant woman?

As a result, the algorithm assembled a huge database of test results, and Ariel Procaccia from the computer science department at Carnegie Mellon decided to use it to improve machine intelligence. In the new study, he and one of the project's founders, Iyad Rahwan, loaded the full Moral Machine database into an AI system and asked it to predict how a car on autopilot would react to similar but slightly different scenarios. Procaccia wanted to demonstrate how a system based on voting results can be a path to “ethical” artificial intelligence. The author himself admits that such a system is still premature to apply in practice, but it perfectly demonstrates the concept of what is possible.
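To make the idea concrete, here is a minimal sketch of this kind of crowd-vote approach: previously judged dilemmas are encoded as simple features, and the predicted choice for a new, slightly different dilemma is taken from a vote among the most similar ones. The feature encoding, the toy data and the function names are invented for illustration; they do not reproduce the Moral Machine dataset or the model actually used by Procaccia and Rahwan.

```python
from collections import Counter

# Illustrative sketch only: predict the "majority" choice for a new moral
# dilemma from people's answers to similar, previously judged dilemmas.
# All scenarios, labels and votes below are made up for the example.

def similarity(a, b):
    """Count how many scenario features two dilemmas share."""
    return sum(1 for key in a if a[key] == b.get(key))

def predict_choice(new_scenario, labelled_scenarios, k=3):
    """Vote among the k most similar previously judged dilemmas."""
    ranked = sorted(labelled_scenarios,
                    key=lambda item: similarity(new_scenario, item["features"]),
                    reverse=True)
    votes = Counter(item["choice"] for item in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy "crowd" data: each entry is a dilemma plus the option most people chose.
labelled = [
    {"features": {"group_a": "children", "group_b": "adults"}, "choice": "spare_group_a"},
    {"features": {"group_a": "pregnant_woman", "group_b": "elderly_man"}, "choice": "spare_group_a"},
    {"features": {"group_a": "adults", "group_b": "children"}, "choice": "spare_group_b"},
]

new_dilemma = {"group_a": "children", "group_b": "elderly_man"}
print(predict_choice(new_dilemma, labelled))  # -> "spare_group_a"
```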

Cross-morality

The idea of choosing between two morally negative outcomes is not new. Ethics even has a separate term for it: the principle of double effect. But that is an area of bioethics, and such a system had never before been applied to a machine, which is why the study aroused the interest of experts around the world. OpenAI co-chair Elon Musk believes that creating “ethical” AI is a matter of developing clear guidelines or policies to govern the development of programs. Policymakers are gradually listening to him: Germany, for example, has created the world's first code of ethics for autonomous cars. Even DeepMind, the AI company owned by Google, now has a department of ethics and social morality.

Other experts, including a team of researchers from Duke University, believe the best way forward is to create a “common framework” that describes how an AI will make ethical decisions in a given situation. They believe that uniting collective moral views, as the Moral Machine does, will in the future make artificial intelligence even more moral than modern human society.

Critique of “moral machines”

Either way, the “majority opinion” principle is currently far from reliable. For example, one group of respondents may hold biases that the rest do not share. As a result, AIs given exactly the same kind of data can come to different conclusions depending on which samples of that information they draw on.
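As a rough illustration of that objection, consider two groups of respondents answering the same dilemma with different biases. The vote counts below are invented; the point is only that a system trained on one group's answers reaches the opposite conclusion from one trained on the other's.

```python
from collections import Counter

# Toy example of the sampling problem: two differently biased groups of
# respondents produce opposite "majority" answers to the same dilemma.
# The vote data are invented for illustration.

def majority(sample):
    return Counter(sample).most_common(1)[0][0]

group_a = ["spare_children"] * 6 + ["spare_adults"] * 4  # one sample of respondents
group_b = ["spare_children"] * 3 + ["spare_adults"] * 7  # another, differently biased sample

print(majority(group_a))  # -> "spare_children"
print(majority(group_b))  # -> "spare_adults"
```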

For Professor James Grimmelmann, who specializes in the dynamics between software, wealth and power, the very idea of public morality looks flawed. “It can teach an AI ethics, but only gives it a semblance of the ethical norms held by a specific part of the population,” he says. And Procaccia, as mentioned above, acknowledges that their research is nothing more than a good proof of concept. However, he believes that such an approach could in the future bring success to the campaign to create a moral AI. “Democracy undoubtedly has a number of flaws, but as a system it works, even if some people still make decisions that the majority disagrees with.”