With the rise of self-driving car technologies, one might ask how corporations can create software powerful enough to be capable of autonomous driving. To answer that question, we first need to understand what coding is: a decision-making process through which people communicate with their computers. In traditional coding, programmers (also known as developers) wrote explicit instructions for a computer. Recently, however, coding has reached a new level of abstraction: programmers can provide a set of inputs and outputs, and the machine writes its own instructions to fulfill the given task.
This is called machine learning. This power, however, comes with great ethical responsibility, and only around a third of one percent of humanity can code. A huge number of ethical decisions are involved in self-driving car design, and the people who do the programming will be making those decisions. Inevitably, there will be some combination of circumstances that they have not anticipated.
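The shift described above can be made concrete with a toy sketch. Instead of hand-coding the rule y = 2x + 1, we hand the machine input/output pairs and let it fit its own parameters by gradient descent. The rule, the data, and the hyperparameters here are illustrative assumptions, not part of any real driving system.

```python
# Input/output examples generated by the "unknown" rule y = 2x + 1.
inputs = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0   # parameters the machine will learn
lr = 0.02         # learning rate (chosen for this toy example)

for _ in range(5000):
    for x, y in zip(inputs, outputs):
        pred = w * x + b       # the machine's current guess
        err = pred - y         # how wrong the guess is
        w -= lr * err * x      # nudge the weight toward lower error
        b -= lr * err          # nudge the bias toward lower error

print(round(w, 2), round(b, 2))  # the learned parameters approach 2 and 1
```

No one ever typed the rule into the program; the parameters were recovered from examples alone, which is the essence of the abstraction the paragraph describes.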
One of those circumstances is the trolley problem, a thought experiment introduced by philosophers to study moral decisions in ethics.
Suppose a vehicle experiences an unexpected mechanical breakdown and the AI (artificial intelligence) must decide between outcomes: should it try to minimize the total harm, or take actions to save the passenger at all costs? Under the first choice, the car might decide to save five bystanders even though doing so results in a fatal accident for the driver. Under the second, to save its passenger, the car might hit the bystanders rather than crash into a wall, helping the driver survive.
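The two policies just described can be sketched as a minimal decision function. The option names and harm counts below are hypothetical stand-ins for what a real system would estimate from its sensors; nothing here reflects an actual manufacturer's logic.

```python
def choose(policy, options):
    """Pick a maneuver under one of the two policies described above."""
    if policy == "minimize_harm":
        # Count harm to everyone equally.
        return min(options, key=lambda o: o["bystanders_harmed"] + o["passengers_harmed"])
    # "protect_passenger": consider only harm to the car's own occupants.
    return min(options, key=lambda o: o["passengers_harmed"])

options = [
    {"name": "swerve_into_wall", "bystanders_harmed": 0, "passengers_harmed": 1},
    {"name": "stay_on_course",   "bystanders_harmed": 5, "passengers_harmed": 0},
]

print(choose("minimize_harm", options)["name"])      # → swerve_into_wall
print(choose("protect_passenger", options)["name"])  # → stay_on_course
```

The same scenario yields opposite maneuvers depending solely on which objective the programmers chose, which is exactly why these design decisions carry ethical weight.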
One solution could be something called the `ethical knob`. This device would let passengers ethically configure their vehicles, choosing among settings that correspond to different ethical approaches or rules. In that way, the AI in self-driving cars would be trusted with implementing its users' moral choices. Another ethical problem is discrimination. If the data fed to a machine-learning system has structural racism built into it, the system could discriminate against particular genders, nationalities, or ethnicities. As the majority of programmers are still white and Asian males, it is very important to expand who has access to the `design` rooms where such systems are produced. With greater diversity in these workspaces, different ideas can flourish and better questions can be asked. And through active correction, better input data can be produced that is less influenced by opinion and truly represents real-world scenarios.
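One way to picture the `ethical knob` is as a continuous dial rather than two fixed policies: a user-set weight alpha in [0, 1], where 0 weighs everyone's harm equally (altruistic) and 1 considers only the passenger (egoistic). This is a hypothetical sketch of the idea; the scoring formula, names, and numbers are illustrative assumptions.

```python
def score(option, alpha):
    # Lower is better: blend passenger-only harm with total harm.
    total = option["passengers_harmed"] + option["bystanders_harmed"]
    return alpha * option["passengers_harmed"] + (1 - alpha) * total

def decide(options, alpha):
    """Return the maneuver with the lowest blended harm score."""
    return min(options, key=lambda o: score(o, alpha))

options = [
    {"name": "swerve_into_wall", "bystanders_harmed": 0, "passengers_harmed": 1},
    {"name": "stay_on_course",   "bystanders_harmed": 5, "passengers_harmed": 0},
]

print(decide(options, 0.0)["name"])  # fully altruistic setting
print(decide(options, 1.0)["name"])  # fully egoistic setting
```

In this toy scenario the decision flips only once the knob passes alpha = 0.8, since 5(1 - alpha) < 1 exactly when alpha > 0.8 — a small illustration of how a single user-facing parameter would encode a serious moral trade-off.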
In addition to the ethical questions raised by the trolley problem, many people are also waiting for the legal side to be settled. One such question is who is to blame in an accident. Is it the driver, who is not even driving at that point? The car manufacturer, which produced the vehicle? The programmer, who wrote the first few hundred lines of code before the system evolved on its own? Or perhaps the data provider who put together the data classifications? In a perfect world, there would be no accidents. In reality, roughly 1.35 million people die in road crashes worldwide each year, and many more are injured. Furthermore, one analysis found that 94% of crashes are caused by drivers; in other words, by human error.

Introducing self-driving cars into real-life traffic will not be a quick switch but a transition from one state to another. If a crash occurs during that transition, the accident scene can be reconstructed from the cars' sensor data, helping law enforcement determine who was at fault. Regulators can also agree on common standards and require car manufacturers to follow those laws and restrictions in their self-driving technology. A recent survey showed, however, that people are not too keen on these kinds of regulations. By regulating cars to do less harm to all people (even though that may sacrifice the life of the driver in very rare cases), we might end up producing more harm: people would not want to buy this kind of technology even if it is much safer than human drivers. The other important legal riddle is whether an autonomous car should be allowed to break the law. For instance, if a car suddenly stops and the autonomous vehicle is stuck behind it, can the vehicle cross the double yellow line in order to continue its journey? The answer will depend on the state or country and on how lawmakers reform the rules in their own jurisdictions.
Although the ethical and legal sides of this rapidly growing industry are quite important, cultural and privacy questions also need to be answered. As mentioned above, some of the trade-offs humanity needs to make will vary from country to country. Cultural differences can be seen in driving styles as well as in openness to and trust in new technology. Some countries will be more open and serve as testing grounds, while others may adapt slowly and regulate more. Additionally, ninety-three percent of the world's road deaths occur in low- and middle-income countries, despite the fact that these countries have only about 60 percent of the world's vehicles. On the privacy side, since self-driving cars are equipped with many sensors, they can record not just what is happening outside the vehicle but also inside it. This gives the manufacturer a great deal of power and potential responsibility in choosing whether to cooperate with law enforcement or to report on drivers.
Whether a self-driving car is safe depends not only on the behavior of the automobile itself but also on the behavior of the people around it. It is unwise to rely exclusively on AIs to ensure safety; the car-manufacturing industry also has to think about the people outside the vehicle. Overall, this is not just a technological problem but also a matter of societal cooperation. Society needs to come together to discuss what it is willing to trade off and how to enforce those trade-offs. This technology could be great, but it will not become great on its own.