Navigating Legal Challenges: Causation Tests for Black Box AI

Introduction

There is much discussion about AI regulation, and many institutions and companies are working on objective ethical standards for the development of AI. The AI panel of Stanford University released a report identifying regulatory problems concerning privacy and liability. The report notes that AI may have consequences for how we look at civil and criminal liability, specifically with respect to two of the most important doctrines in law: intent and causation. Intent and causation are hard to apply to AI because of the specific algorithms used in AI models, which can be considered "black boxes".

These algorithms do not structure data neatly in human-readable blocks but distribute it over a network of nodes, making it near impossible to make sense of the algorithm's decision-making process. Some writers even suggest that these algorithms can be as difficult to understand as the human brain. Before the law can be applied to machine learning algorithms, we first have to be able to apply these two doctrines, intent and causation, to them.


In this paper, I will examine the need for opening the black box, as argued by Davide Castelvecchi, with regard to the second doctrine: causation. I will discuss the doctrine of causation: what it is, how it is applied in the law, the complications that arise when applying it to "black box" algorithms, and possibilities for overcoming those complications. I will not discuss who should be legally (and/or morally) liable if an algorithm is judged to have caused a certain event, since that is a different discussion entirely.


Types Of Black Box Algorithms

The types of "black box" algorithms can be roughly separated into two groups. The first one is deep neural networks, research to this type of network dates back to the late 1800s when two scientists independently laid the theoretical base for neural networks. Currently, neural networks can be found in a scale of applications, from search recommendation engines to breast cancer detection. The other type is support vector machines (SVM), these algorithms can find patterns in higher dimensional mathematical space, which is impossible for humans to visualize. For simplicity's sake, whenever a reference to these algorithms is made, the first type is meant. This is because these types of algorithms are far more prominent in its applications and because discussing different approaches for both types will take analysis that is outside the scope of this paper.

Problem Outline

It is easy to see why this inability to understand the "thought" process of AI creates severe complications, not just for intent and causation but also for our trust in its decisions. An example is IBM's oncology project, in which the supercomputer Watson gives recommendations for the treatment of breast cancer patients. Research showed that Watson did not always give the same recommendation as medical experts, with the experts simply stating that the machine was mistaken. In this case, who should we trust? It might very well be that Watson is better at these recommendations than humans; it would not be the first time a computer is better at classifying something (Wikipedia maintains a list of tasks on which AI performance is superhuman). Naturally, most people would side with the human experts, according to Davide Castelvecchi. He also states that before we can trust a computer, we first have to know how it came to its conclusion. In this paper, I will examine whether transparent AI is a necessity for solving the causation problem with black-box algorithms.

Causality In Philosophy

Causation (or causality; we will use these two terms interchangeably) is what connects one process, the cause, with another process or state, called the effect. The cause of an effect cannot happen after the effect has already taken place, and the cause can be completely or partially responsible for the effect. Causes can be distinguished into two types: necessary and sufficient.

If x is a necessary cause of y, then the presence of y necessarily implies the prior occurrence of x. The presence of x, however, does not imply that y will occur.

If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y. However, another cause z may have alternatively caused y. Thus the presence of y does not imply the prior occurrence of x.

A practical example: if switch S were flipped, then bulb B would light. If S is a necessary cause, then if bulb B is lit we know for certain that switch S has been flipped. Conversely, if S is a sufficient cause, then if we flip switch S we know for certain that bulb B will light, but the bulb might also be lit by another switch Z. This all seems fairly straightforward, but looking at this example, how do we reach this conclusion in the first place? How do we know that if switch S were flipped, bulb B would light?
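
The distinction can be made concrete with a small model of the switch-and-bulb example. The sketch below is my own construction, assuming a circuit in which either of two switches, S or Z, lights bulb B; it simply checks that flipping S is sufficient but not necessary for the bulb to be lit.

    # Illustrative model (my own construction, not from the text):
    # two switches S and Z wired so that either one lights bulb B.
    def bulb_lit(s_flipped, z_flipped):
        return s_flipped or z_flipped

    # S is a sufficient cause: whenever S is flipped, B is lit ...
    assert all(bulb_lit(True, z) for z in (False, True))

    # ... but not a necessary one: B can be lit without S (Z alone suffices),
    # so seeing the bulb lit does not tell us that S was flipped.
    assert bulb_lit(False, True)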

In philosophy, there is much debate regarding this question; some philosophers even say that causality is an illusion and does not actually exist. Some widely used definitions come from one of these philosophers, Hume, who in his work refers to several ways to judge whether there is a cause-and-effect relationship between two things.

Hume, however, denies that we ever directly perceive cause and effect; what we have instead is a habit of mind in which we come to associate two types of events that follow the rules of causation. This does not mean we cannot use causality in real life. Another philosopher, Kant, expands on this by treating causality as a tool: you have to think with the idea of causality, but that does not mean you have to believe it is an objective truth. Later, the researcher Patricia Cheng developed a theory suggesting that humans actually have specific beliefs regarding cause and effect. Her Power PC theory states that people filter their observations of events through the basic belief that causes have the power to generate or prevent their effects, and thereby infer cause-effect relations.
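
To make Cheng's idea slightly more concrete: in the Power PC framework, the generative power of a candidate cause c for an effect e is, under the theory's independence assumptions, estimated from the contrast between P(e given c) and P(e given not-c), rescaled by how much room the effect has left to be produced. The numbers below are made up purely for illustration.

    # Illustrative only: estimating Cheng's generative causal power from
    # two made-up conditional probabilities.
    p_e_given_c     = 0.80   # P(effect | cause present)  -- hypothetical value
    p_e_given_not_c = 0.20   # P(effect | cause absent)   -- hypothetical value

    delta_p = p_e_given_c - p_e_given_not_c       # raw contrast
    power   = delta_p / (1 - p_e_given_not_c)     # generative causal power
    print(round(power, 2))                        # 0.75 with these numbers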

In conclusion, the rules for judging whether there is a cause-effect relationship between two things are well defined, and even though they do not provide objective truth, such relationships can be, and are, applied to the real world.

Causation In Law

Now that we have established that causation does in fact serve a purpose, let us look at how it is applied in law. The word "cause" is often used in statutes, regulations, and judicial decisions, and the idea that there is a causal relationship between an agency and a harm is often implied even when the word is not used. In all of these instances, the notion of cause is central, because to attribute responsibility it has to be shown that the harm was done (caused) by an agency that the law considers a basis for liability. In legal contexts the range of agency is not limited to human conduct; it can also cover animals, natural forces, or inanimate objects, which would include opaque learning algorithms. What is interesting to note is that even though in law a cause can be a human, a natural event, or something else, legal responsibility in modern law attaches only to persons and institutions. As mentioned in the introduction, the discussion of which person or institution should be responsible if an opaque learning algorithm is deemed the cause of harm is outside the scope of this paper.

The inquiries with which the law is concerned relate to events, and for these events the question is always asked: did one event cause another? This link must be established in legal proceedings by describing the conduct of a person or agency, and that can be hard with opaque systems. Consider the following regulation: the Equal Credit Opportunity Act (ECOA) prohibits discrimination in any aspect of a credit transaction. It applies to any extension of credit, including extensions of credit to small businesses, corporations, partnerships, and trusts. Suppose a bank uses an opaque system to decide who gets a loan; after a few months of use, a man is denied a loan and sues the bank because he believes he was declined on the grounds of his skin color. How can one judge whether the computer algorithm caused the harm (denial of a loan based on skin color)? We have no idea on what grounds the algorithm made the decision, and it cannot explain its reasoning to us. When judging whether an agency has caused a harm, causation tests are used, and these tests involve two different issues: cause in fact and proximate cause. Cause in fact means that the agency factually caused the harm; for example, if a driver runs a red light and hits your car, her conduct (running the red light) was likely the cause in fact. The second issue is more complicated: it means that if the harm was foreseeable, the agency should have anticipated that its conduct could result in harm; for example, driving drunk could foreseeably result in a serious car accident.
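
One crude way to probe such a system, sketched below with hypothetical names (opaque_model, decisions_differ) that do not come from the text, is a counterfactual test: keep every input the same, change only the protected attribute, and see whether the decision flips. A flipped decision suggests, but by no means proves, that the attribute influenced the outcome, and the test presupposes that the exact model used is still available for querying.

    # Illustrative sketch (my own, not a legal test): probing an opaque loan
    # model by changing only the protected attribute and comparing decisions.
    # `opaque_model` stands in for any black-box scorer; its internals are unknown.
    def decisions_differ(opaque_model, applicant, protected_field, alternative_value):
        original = opaque_model(applicant)
        counterfactual = dict(applicant, **{protected_field: alternative_value})
        return original != opaque_model(counterfactual)

    # Hypothetical use with a made-up applicant record:
    # changed = decisions_differ(model, {"income": 52000, "race": "B"}, "race", "A")
    # A changed decision suggests (but does not prove) the attribute mattered.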

Causation Tests

Let us see how these two issues work when opaque computer systems are involved, starting with the second: proximate cause, the notion that the harm done could have been foreseen by a reasonable person. The problem with an opaque computer system is that the results of the system's conduct are not foreseeable by its creator or user. The system might reach a counter-intuitive decision or find an obscure data pattern, causing it to make unpredictable decisions. If the creator of the system cannot foresee how the system will make its decisions, what can be said about a reasonable person? A possibly even greater risk is that it is also not possible to foresee the effect of the decisions it makes, because we do not know why the system made them. The past shows that the effects of a poorly understood algorithm's conduct are very unpredictable; flash crashes on the stock market, for example, were difficult to predict because the algorithms traded at such high speed. In short, testing for proximate cause fails when opaque computer systems are involved because of their unpredictability, so another subject for the causation test has to be found.

The other issue, cause in fact, is to examine the connection between the conduct and the harm done; it requires that the harm suffered is related to the conduct or misstatement of the agency. Take the following example: an investment company uses an opaque system to decide which startups to invest in. One of the startups it invested in made misstatements about its development and is basically a sham. The investment company will now want to prove that these misstatements caused it to invest in the startup, but unless it saved a snapshot of the system at the time it was used to make the decision, it cannot run experiments on it for proof. It could be argued that the mere fact that the system took into account information based on misstatements should be enough to establish cause in fact, but it is possible that the system gave no weight to that information and the outcome would not have changed regardless of the misstatements. Another possibility is that in most cases that specific information would not be outcome-determinative for the system.
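
The experiment the company would have liked to run can be sketched as follows. This is an illustration of my own, with hypothetical names (save_snapshot, replay_with_corrections, model.decide); it assumes the model can be serialized and that both the original inputs and the corrected figures are still available.

    # Illustrative sketch only: the "snapshot and replay" experiment the text
    # says the investment firm could not run because no snapshot was kept.
    import pickle

    def save_snapshot(model, path):
        with open(path, "wb") as f:
            pickle.dump(model, f)                  # freeze the exact model used at decision time

    def replay_with_corrections(path, inputs, corrections):
        with open(path, "rb") as f:
            model = pickle.load(f)
        corrected = {**inputs, **corrections}      # replace the misstated figures
        return model.decide(inputs), model.decide(corrected)

    # If the original inputs lead to "invest" but the corrected inputs lead to
    # "pass", the misstatements plausibly were outcome-determinative; if both
    # agree, they likely were not, and cause in fact is harder to establish.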

The conclusion from analyzing these issues is that the use of opaque systems to make decisions makes it near impossible to test for causation, which in turn makes it impossible to give a sound judgment, because a cause cannot be identified. The question, now that we know opaque systems are near impossible to test for causation, is how we can overcome this. How can we make these tests reach a conclusion, and is full transparency the only solution?

Transparency As A Technical Problem

Before we look at a possible solution, it needs to be said that transparency is fundamentally a technological problem: we do not know whether current forms of AI, such as deep neural nets, will become more readable and transparent in the future. It is possible that we will develop methods to understand and read neural nets, but it is equally possible that as neural nets become more complex they will become less transparent and more difficult to analyze. If improving AI requires more complexity, and in turn even less transparency, then setting up transparency requirements will be the equivalent of forbidding improvement or, in a worse case, pushing companies to discover ways to circumvent those requirements. Another problem with transparency requirements is that a system of regulations would impose heavy costs on new entrants to AI markets. Most AI talent is in the hands of a few big companies, and these costs would make it harder for entities outside those companies to compete. For these reasons I argue against hardcoded regulations on the transparency of AI systems: they are hard to enforce and might mean the end of AI as we know it.

Now for a possible solution: instead of looking at the conduct of the AI itself, we can accept the fact that it is a black box and look at the conduct of the person or agency relying on it. An agency has to know about the unpredictability of the system it is using, and it can be found that harm was caused because the agency chose to use an AI system. In this way, using an opaque system, especially one operating autonomously, comes with a side note of "create or use at your own risk". If an agency decides to let a black-box AI make autonomous decisions, it bears the risk arising from this lack of transparency. For the causation tests, this means that the tests should examine the level of supervision or autonomy of the AI. In the case of autonomy, the predictability of harm is hard to assess, and in these cases the test should aim at the predictability of harm from using an AI system given how transparent it is. So a causation test for an AI that is autonomous and a black box should not focus on whether the specific harm caused by the AI was foreseeable, but on whether harm was a predictable consequence of deploying the AI autonomously. In this way, when designing an AI system for a problem, thought needs to be put into which laws it may come into contact with, and the design of the system can be adjusted so that it can at least answer for its decisions when it does.

In the earlier example, the investment company relied on an opaque system to make its decision, and this conduct (relying on the opaque system) caused its harm; i.e., it did this to itself. If the company had relied on a system that could explain why it invested in that particular startup, and/or on a human decision-maker, it would have been possible to examine whether the "fraudulent" data was in fact harmful. Following this line of thought, opening the black box of AI is no longer necessary to answer causation tests, but the level of transparency does take on a new role, namely in determining how far an agency can use AI before becoming legally liable for the decisions it makes.

Summarizing the solution: if an AI is supervised and transparent, we can use the causation tests as they are. If an AI is autonomous and transparent, we should look at how much care was put into monitoring, constraining, and testing it. If an AI is supervised but opaque, we should look at whether the agency was justified in using the AI as it did, with limited insight into its decisions. Lastly, if an AI is autonomous and a black box, we should simply ask whether it was a reasonable decision to deploy it at all.
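
These four cases can be restated as a simple lookup, shown below as an illustrative Python table; the labels are mine and are not established legal terms.

    # A restatement of the four cases above as a lookup table (illustrative only).
    CAUSATION_TEST_FOCUS = {
        ("supervised", "transparent"): "apply the existing causation tests as-is",
        ("autonomous", "transparent"): "examine the care put into monitoring, constraining and testing the AI",
        ("supervised", "opaque"):      "examine whether the agency was justified in relying on limited insight",
        ("autonomous", "opaque"):      "examine whether deploying the system at all was a reasonable decision",
    }

    def test_focus(autonomy, transparency):
        return CAUSATION_TEST_FOCUS[(autonomy, transparency)]

    print(test_focus("autonomous", "opaque"))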

Conclusion

Current AI systems contain machine-learning algorithms that are functionally black boxes: they give no information on how they reached their conclusions. This creates problems when trying to establish causation using the causation tests that are prominent in almost every field of law. These tests examine what is foreseeable when the system makes decisions, or what lies at the basis of those decisions, and they are ineffective when applied to black-box AI. A solution that some offer is a regulatory framework of transparency standards, but I argue against such standards, since they might hinder innovation and set up barriers to entry for small firms. A better approach is to look at the act of relying on an opaque system rather than at the actions the system itself has taken. This should ensure that agencies apply a reasonable level of supervision based on how opaque a system is, or, the other way around, maintain a reasonable level of transparency based on the amount of autonomy. Using this method we can also keep using the existing causation tests when humans supervise the AI or when the AI is transparent.

Davide Castelvecchi argued that we should open the black box of AI because otherwise we cannot trust it, but do we also have to open the black box to answer causation tests in a legal context? The answer is no: we can keep using opaque AI and still apply the tests for causation, by shifting the focus of the tests to the user of the opaque system instead of the system itself. This way of looking at causation tests for AI should promote careful consideration when developing AI systems for a specific problem. Designers will have to consider whether they are willing to let an opaque system make a certain decision and risk unpredictable results.
