To Err is Human… But Who is Responsible for Machine-Made Decisions?

The second decade of the 21st century will probably be marked by future historians as the dawn of the Artificial Intelligence (AI) era. While we have yet to be hunted by legions of killer robots guided by an AI resolved to correct God’s mistake of saving Noah from the flood, we no longer use machines merely to replace or augment human physical labor; we also use them as a substitute for human discretion in decision-making.

With much media attention focused on autonomous vehicles making moral choices between the safety of passengers and that of bystanders, it is easy to forget that, whereas autonomous vehicles and self-guided drones are still under development, other AI machines are already making decisions that affect our everyday lives.

In machines we trust?

Whether they employ the latest AI technologies, such as artificial neural networks (ANNs), or a simple old-fashioned if-then flowchart algorithm, we use machines (which we call “computers”) daily to make decisions for us and to replace our human discretion. Lost on your way, or just trying to avoid traffic? Navigation software can decide on the best route for you. Too busy to sort through your e-mails? A spam filter can decide which ones to keep and which ones to discard. Trying to figure out what to watch? An app can analyze your past choices to decide which movie you may like.

Who is responsible when you are led into a traffic jam and arrive late to a meeting with your boss? Who is to blame when an important message from a potential client is overlooked because it was filed in the junk folder, or for an evening wasted on a boring or distasteful movie? Who is liable for the consequences of a wrong decision when no human is involved in the decision-making process?

The answer is easy: the person who decides to rely on a machine-made decision for his or her convenience can be made to agree to assume the risks of a wrong decision in exchange for that convenience. The answer gets more complicated when the subjects of machine-made decisions have no choice, or are unaware that machines are used in the decision-making process, and especially when the decisions may have a greater effect on their lives.

Need a loan? You can most likely get one online without any human involvement. A computer will analyze your credit history and asset value to evaluate your risk factor and, based on this factor, determine the amount you can borrow and the interest rate you will be offered. The same is true for setting insurance premiums, deciding which applicants should be admitted to a coveted school, or even determining the amount of bail a suspect must post to avoid detention. In such cases, the operator of the decision-making machine uses it to make many decisions affecting many individuals, to whom the liability for erroneous decisions cannot be transferred.

The Content ID™ algorithm used by YouTube™ to identify copyright infringement has been the focus of numerous lawsuits for falsely flagging original or public-domain creations as infringing. The Correctional Offender Management Profiling for Alternative Sanctions software (“COMPAS”), used by US courts to assess flight risk and set bail, has been criticized and challenged for being biased against certain minorities. Biometric facial recognition systems are slammed for being inaccurate, vulnerable, and too easy to deceive. False, discriminatory, unfair, inaccurate or otherwise wrong decisions, which expose those who rely or act upon them to claims, are not exclusive to humans; in fact, they are much more common in decision-making machines.

Machines make decisions, but take no responsibility.

A quotation attributed to American scientist Paul Ehrlich, “To err is human, but it takes a computer to really foul things up,” sums it up well. The advantages of having a machine make a multitude of decisions faster and cheaper than any human can easily become a disadvantage when the machine gets it wrong. In such a case, traditional defenses available to human decision-makers, such as the mistake being an isolated incident, a deviation from the organization’s policy, bias or malice on the part of the person in question, or that person exceeding his or her authority or acting alone, cannot be applied to a machine without attributing the responsibility directly to its operator. Thus, the multitude of decisions on one hand, and the operator’s direct responsibility for each of them on the other, increase the operator’s exposure to undesirable outcomes such as class actions, negative media coverage, damage to reputation, inquiries by consumer protection authorities, increased regulation, etc.

Obviously, the operator of the decision-making machine can seek indemnification from the provider or developer of the machine. However, in most cases the operator and the developer are the same entity, or the operator itself is involved in adjusting or training the machine.

In such cases, there are measures that can be taken to minimize or mitigate the exposure:

Transparency: As with any human-made decision, disclosing the decision-making process and the criteria on which the decision is based makes it less arbitrary, more predictable, and less frustrating for the person or people affected by it. Even where full transparency is not possible, for reasons such as proprietary decision-making technology, partial transparency, in the form of stating the reasons for the decision or explaining which terms or criteria the subject did or did not meet, is preferable. Transparency can demonstrate that the decision, although made by a machine, was not arbitrary, biased, discriminatory or otherwise unfair. Admittedly, some AI technologies, such as ANNs, pose challenges in implementing such transparency.

Option to Appeal: Offering the subjects of machine-made decisions the option to appeal decisions they believe to be erroneous to a human referee, even if such an appeal involves costs and bureaucratic procedure, can shift some of the responsibility for an erroneous machine-made decision from the operator of the machine to the subjects. By exercising their discretion in deciding whether or not to appeal, and on what grounds, the subjects of the decision are no longer entirely passive, and thus share, at least in part, responsibility for the final outcome. Of course, the appeal process must be reasonably accessible, and the human referee must be authorized and able to reverse or amend the decision where the appeal is justified.

Alternatives: When the subjects of decisions are given a choice as to whether the decision in their case will be made by a human or by a machine, they can also be asked to assume the risk of an erroneous machine decision in exchange for benefits such as receiving the decision faster or free of charge.

In conclusion, with emerging AI technologies making machine-made decisions more and more common, relying on such decisions may increase exposure to liability for wrong decisions when they occur. Operators of decision-making machines, and those relying on their decisions, should be made aware of this potential exposure and take measures to minimize or mitigate it.
