Read the text and complete tasks 12–18. For each task, write in the answer field the digit 1, 2, 3, or 4 that corresponds to the answer option you have chosen.
The morality of AI
In the year 2090, an AI machine called Robin was purchased by the Rogers family from Bristol to help them with chores at home. The code written for Robin allows the user to customise the ethical, emotional, and political settings of their new "family member". The Rogers believed that all actions should be judged by their consequences, so they chose the appropriate settings. They also made Robin quite emotional. One weekend, the family decided to go on a trip to London and took their intelligent machine with them. After a long walk from Whitechapel all the way to the City, the emotional robot couldn't bear one thing: how come on one street homeless people are begging for food, while right around the corner there are people dressed in suits with their pockets full of money? Robin therefore decided that stealing some cash from a businessman in a fancy café and handing it out to the people inhabiting Whitechapel Road was perfectly reasonable and, of course, perfectly moral. Unfortunately, the police caught our friend red-handed. The case was taken to court.
In her article on the morality of AI, Silviya Serafimova posed this question: whose morality and which rationality? Since people often argue about moral claims, who should decide on the morality of intelligent machines? The Rogers decided for themselves and showed that even when considering only the three main types of normative ethics (consequentialism, virtue ethics, and deontological ethics), the choice is not easy. Robin is a consequentialist; the consequences of his actions are generally good, as the people in need got some money. If he had been programmed with virtue or deontological ethics, this probably wouldn't be the case.
When the Rogers saw all the homeless people in Aldgate East, they did feel sorry for them, and they knew it was good to help those in need. Yet they did not stop to give them money or even talk to them. This is called the moral lag: you know it is good to do something, but you do not do it. Well, Robin did do it, and now he is being punished for it. So how do we even tell a machine that, whilst helping others is good, sometimes you don't help because, well, you just don't?
Ethics has not been completely codified yet, and it seems impossible to present it as a decision tree. As humans, we think about things, especially moral dilemmas, not as a set of inputs leading to a set of outputs. Rather, we see a detailed picture of a situation with multiple nuances that often contradict each other, which usually leads to no exact "output" at all. There is a range of possible plans of action one could undertake, each with its own consequences, and this is the closest to an "output" we can get. Hence the concern about how to put all of this into code, starting with how to explain all of it using logic.
Classical monotonic reasoning does not seem suitable for this purpose. It seems more appropriate to make a decision after obtaining more information, which is what non-monotonic reasoning offers. However, non-monotonicity on its own does not provide enough "flexibility" to write such code. We also need semi-decidability.
A semi-decidable logic system will always correctly confirm that a formula (a plan of action) is part of a theory (a system of moral rules) whenever it really is. However, unlike a decidable system, if the formula is not part of the theory, it will either reject it or loop to analyse it again. Non-monotonic logic fails this requirement. If we categorise telling the truth as being good by default (and thus a part of the system of moral rules), and then add semi-decidability, the algorithm will loop on the very problem of the relation between telling the truth and "being good by default".
As Serafimova points out, the failure to satisfy the semi-decidability requirement makes it impossible for intelligent agents to look for "new ethical knowledge and its alternative ways of computation". This makes intuitive sense: if a plan of action is not part of the system of moral rules, it will simply be rejected rather than looped over, leaving no opportunity for self-update. And this is a rather complicated problem.
Robin only stole a bit of money, but intermingling computational errors with moral mistakes can lead to much graver consequences. That's something we should keep in mind for the future and its exciting technological advancements.
12. Why did Robin decide to steal money?
1) Robin didn't like the fact that a businessman was rich.
2) Robin thought his actions were morally right.
3) Robin had been programmed wrong.
4) Robin was asked to do this by homeless people.
[ ]
13. Which phrase is closest in meaning to the idiom "to catch somebody red-handed" ("…the police caught our friend red-handed…") in the 1st paragraph?
1) To catch someone's attention.
2) To know that someone's hands are red.
3) To do something illegal or wrong.
4) To see someone in the act of committing a crime.
[ ]
14. Consequentialism states that...
1) whether an action is good or bad depends on its outcome.
2) people should get what they deserve.
3) all actions have consequences.
4) judging what is good and what is bad should be based on moral rules.
[ ]
15. Which of the following situations can be called a moral lag?
1) A boss decides to raise salaries for all employees, but the company is bankrupt.
2) A criminal decides to rob a bank because they believe that rich people rob others.
3) A person wants to stop their friend from making a really bad decision, but they don't do it.
4) A person knows they shouldn't eat their sibling's birthday cake, but they still do it.
[ ]
16. According to the article, how do people solve moral dilemmas?
1) People use logic.
2) People analyse all possible options, whether they are morally right or not.
3) People don't solve moral dilemmas because they can't.
4) People create a picture of logical outcomes and choose one.
[ ]
17. What does the word "it" refer to (paragraph 6)?
1) non-monotonic logic.
2) a decidable system.
3) a system of moral rules.
4) a semi-decidable logic system.
[ ]
18. In the last paragraph, the author implies that…
1) moral issues should be well thought out in AI machines.
2) AI machines can be dangerous.
3) scientists are building AI machines with too many computational errors and moral mistakes.
4) new technologies are amazing.
[ ]