From the dark ages of slavery to the rise of AI-enabled humanoid robots in the present day, granting someone, or something, the status of moral and legal personhood has always been a contested political and ethical issue. Star Trek: The Next Generation epitomizes this debate in “The Measure of a Man” (S2E9), in which the crew of the Enterprise heatedly debates whether or not the android Data is sentient. Following our class discussion of the episode and of Saudi Arabia’s decision to bestow full citizenship upon the android Sophia, I found myself asking whether intelligent humanoid robots should be granted full moral and legal consideration as persons.
This paper presents my findings after extensive research and reflection on this topic. Although humanoid robots deserve moral consideration from humans due to their status as moral patients, legal systems should not designate robots as full legal persons but rather as legal agents.
Strong-form humanoid robots are eligible for moral standing due to their status as moral patients.
In examining whether humanoid robots qualify for moral consideration, we must first distinguish between moral agents and moral patients and establish a definition of moral standing. Martin Schönfeld writes in American Philosophical Quarterly, “moral standing is possessed by any entity whose continued existence and well-being or integrity are ethically desirable, and whose interests in them have positive moral weight” (353). Dr. Herman Tavani of Rivier University builds on Schönfeld’s definition, explaining that moral patients are “receivers of moral action [who] unlike moral agents, are not capable of causing moral harm or moral good [and are thus] not morally culpable for their actions” (9).
However, an entity’s being a moral patient rather than a moral agent does not deprive it of moral standing. Animals and infants, for example, can be thought of as moral patients, since neither can be held morally accountable, at least from the perspective of an adult human, for their actions and decisions. Yet both animals and infants warrant moral consideration from adult humans, as both possess the capacity to feel pain and to suffer emotionally in response to human actions. Immanuel Kant anticipated this interpretation: “the duties towards animals are indirect duties towards mankind […] direct duties are solely to man, and duties regarding animals arise merely in case they are indirect duties owed to man” (Schönfeld 354).
Therefore, if humanoid robots could be observed to feel pain as a result of their interactions with humans, robots would also fall under the category of moral patients and consequently command at least some acknowledgment as entities deserving of morally relevant interest from humans. Furthermore, it is essential to note that not all humans can be considered full moral agents. Humans who are comatose or brain-dead possess very limited cognitive capabilities and are not morally culpable for their own actions; this subcategory of humans therefore falls under the category of moral patients rather than moral agents. Would these humans forfeit their moral rights? Michael Nair-Collins, writing in the Journal of Medical Ethics on the concept of precedent autonomy, explains that “a [brain-dead] person’s past decisions about [how] she wished to be treated after incompetence ought to be respected, in spite of the fact that she can no longer reaffirm those preferences,” which characterizes the “authority of advance directives as having […] moral authority” (532). Clearly, brain-dead and comatose humans still possess moral standing due to their status as moral patients. The same logic can be extended to smart androids as well.
Although arguments can be made both for and against the notion that smart AIs like Data truly deserve moral standing, I posit that humans should give androids moral consideration because of the relational connections and interactions existing between the two parties. Building upon our class discussion of “mindstates” and upon Kant’s words, there is no need to gauge whether a robot truly possesses properties such as sentience, intelligence, and consciousness; all that matters is whether the robot is perceived to possess these characteristics when engaged in social interactions with humans. Following this line of reasoning, we see how Data fits the mold of a moral patient but not a moral agent. To a human observer, Data shows all the signs of being a moral patient: he passionately protests being experimented on throughout the entire episode and even comments that while the factual nature of his memories would be preserved after Maddox’s experiment, the “substance, the flavor of the moment, could be lost” (“The Measure of a Man” 00:16:40 – 00:16:46).
However, Data is never morally culpable for any of his actions, since every decision an android makes depends upon coded algorithms, statistical outcomes, and the limits of its past experiences, not upon independent beliefs about right and wrong with regard to “certain codes of conduct put forward by a society […] [that is] accepted by an individual for [his] own behavior” (Gert par. 3). Although a humanoid robot with Data’s capabilities does not exist today, we are inching closer every day to a reality in which humans may very well live side by side with strong-form AI like Data. When that reality finally arrives, humans must adopt a newfound moral obligation toward androids. We must broaden our preconceived boundaries of ethical consideration to accommodate these artificial beings while deepening our own moral responsibilities in an era of truly unprecedented transformation in the field of smart robotics.
Having established that androids deserve moral consideration from humans, we must now examine the standing of androids in the eyes of the law, since the law is what mediates human disagreements and keeps society running smoothly. Legal systems should designate robots as accountable legal agents but stop short of granting them the privileges associated with full legal personhood. There is an important distinction here between legal agenthood and legal personhood. While various past and present entities such as slaves and corporations were viewed as legal agents for the purpose of validating economic transactions (e.g., writing contracts or confirming sales), neither slaves nor companies were or are viewed as full legal persons. For example, enslaved people could not vote at all, and formerly enslaved Black Americans were effectively denied the franchise for a century following the conclusion of the Civil War. Likewise, a corporation to this day cannot declare itself an asylum-seeking refugee before the United Nations Human Rights Commission as a human person could. Although one might argue that corporations should be afforded legal personhood because they are made up of humans who incontestably possess consciousness, sentience, and intelligence, the company itself is simply a constructed, fictional entity existing only in the world of legality. Revisiting Star Trek, Data is not recognized as a legal person, as his attempted formal resignation before Maddox is rejected, yet he appears to have been afforded the privileges of legal agenthood, since he is given the opportunity to testify in court against the legality of Maddox’s proposed experiment.
Consequently, viewers can observe how Data is incorporated into the legal system in a way similar to a corporation or a minor, but not, at least initially, as a sentient adult with the unalienable rights afforded to every legal person (here, predominantly the rights to life and self-determination). Luckily for Data, the trial falls in his favor by the end of the episode. Throughout the ordeal, however, viewers should note the crucial distinction between legal agenthood and legal personhood, and how Data’s status as a mere legal agent deprived him of certain natural rights that the overwhelming majority of human societies believe are unalienable from all humans.
Although no AI-enabled android as complex or capable as Data currently exists in the real world, we as humans are already acknowledging the need to define the legal status of expected future androids like Data. In fact, the European Parliament invited several EU member nations in 2017 to “analyze and consider the implications of all possible legal solutions including creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause” (Pagallo 4). Note that the Parliament intentionally chose the term “legal status” instead of “legal personhood,” leaving the specific legal status of robots open for interpretation and deliberation. Pagallo continues, “current AI robots lack most requisites that usually are associated with granting someone, or something, legal personhood […] this does not amount to say that the levels of autonomy, self-consciousness, and intentionality – which arguably are insufficient to grant AI robots their full legal personhood today – are inadequate to produce relevant effects in other fields of the law” (7).
Turning to our contemporary real-world example, the Saudi Arabian government has opened the door to a host of unintended problems by giving Sophia citizenship. For example, would it be Sophia or “her” creator who casts the ballot in national elections? As a citizen, would Sophia be able to own property and consequently be required to pay income taxes? In the face of danger, would a police officer be justified in choosing to save Sophia over a human woman if he could only save one of the two? In light of these challenging legal and ethical questions, it is necessary for now to give humanoid robots the status of legal agency rather than legal personhood, thereby avoiding dilemmas like those above. By designating robots as legal agents, we can hold robots accountable for their actions – much as a company can be held accountable for its actions in the pursuit of transparency, efficiency, and legality – while bypassing the trouble of arbitrarily designating them legal persons or legal citizens.
In light of recent technological advancements, there has been much public discourse on whether we as humans should grant humanoid robots some form of moral and legal status. Because androids can be shown to be moral patients from the perspective of humans, an argument can be made that these robots warrant moral consideration from humans. Similarly, since androids must be held to some standard of accountability for their actions, just as companies are, androids deserve the status of legal agents. Acknowledging these robots as legal persons is problematic for several reasons, but largely because the title “person” grants these artificial beings uniquely human rights such as the ability to vote. As technology continues to advance, I expect the substance of my arguments to evolve with the times. Regardless, the strong parallels between the world we live in today and the world of Star Trek raise many intriguing questions that warrant further discourse – discussion that this paper ultimately aims to facilitate through its exploration of the moral and legal standing of androids like Sophia and Data.