What examples do we currently have? The first humanoid


What examples do we currently have?

The first humanoid Artificial Intelligence robot to be granted citizenship was Sophia, who received citizenship from Saudi Arabia in October 2017. The A.I., made by Hanson Robotics, was modelled to look like Audrey Hepburn. Very few legal measures were put in place to protect Sophia, and much was left unanswered and underdeveloped as to how she would fit into the laws of the country. However, I believe that the government did this only as a publicity stunt to build excitement for its Future Investment Initiative later that year, at which Sophia gave a speech.

During that speech, journalist Andrew Ross Sorkin asked Sophia, “Can robots be self-aware, conscious and know they’re robots?” Sophia quickly responded, “How do you know you are human?” Her creator, David Hanson, even went as far as to say that she is “basically alive”.

Joanna Bryson, a researcher of A.I. ethics at the University of Bath, questioned this rash decision of citizenship, arguing, “What is this about? It’s about having a supposed equal you can turn on and off. How does it affect people if they think you can have a citizen that you can buy?”[4] These questions have not been answered by the Saudi Arabian government, and whether the robot truly has citizenship is questionable. Sophia is simply a high-tech, expensive puppet designed to exploit our cultural expectations of what a robot looks and sounds like. The robot can hold a simple conversation, but one built from programmed, prewritten answers. In reality, Sophia is a puppet of Hanson, and so anything she does is actually Hanson acting.

Does this mean that she doesn’t deserve the rights she has been given? Bryson says no; in fact, Sophia’s citizenship should be taken away from her, as she cannot be held accountable for any of her actions. If, however, Sophia were somehow a conscious entity like the ones in development now, would it be right to give her something like citizenship? Bryson states that doing so degrades the concept of rights for the actual living beings that deserve them. Similarly, Bryson argues that it is unnecessary, as you could always remove the consciousness from a robot and there would be no problem, whereas you could never remove the consciousness from a human.

It is particularly significant that a country like Saudi Arabia was the first to offer citizenship, as the Saudi government is often criticised for its treatment of migrant workers, who are kept in slave-like conditions and denied rights, and for its treatment of women, who were only given the right to drive in 2018. To Bryson, this shows that a lack of respect for the human rights that people fight so hard to preserve is linked to an interest in robot rights.
When I reached out to the tech firm ATONATON for their views on whether robots deserve rights, principal researcher and founder Madeline Gannon replied that the question was complicated: “It’s asking for speculative insights into the future of human-robot relations. However, it can’t be separated from the realities of today. A conversation about robot rights in Saudi Arabia is only a distraction from a more uncomfortable conversation about human rights.”

When does Artificial Intelligence deserve rights?

If we were to give rights to Artificial Intelligence, when would we put these rights in place, and at what level of sentience? “It’s difficult to say we’ve reached the point where robots are completely self-sentient and self-aware; that they’re self-sufficient without the input of people,” says Woodrow Hartzog, who holds joint appointments in the School of Law and the College of Computer and Information Science at Northeastern. “But the question of whether they should have rights is a really interesting one that often gets stretched in considering situations where we might not normally use the word ‘rights.'” Hartzog believes that we may never reach that level of sentience, but that if we did, artificial intelligence systems would need their own rights system, different from that of humans, as we will still possess qualities they cannot have.

In Hartzog’s reflection on the question, the idea of granting robots negative rights, rights that permit or oblige inaction, resonates. He referenced research by Kate Darling, a research specialist at the Massachusetts Institute of Technology, which indicates that people relate more emotionally to realistic robots than to those without human qualities or looks. In her paper ‘Extending Legal Rights to Social Robots’, she argues that the key reason to talk about rights for AI-powered beings is to “protect societal values”, and she uses the analogy of a mother telling her child not to kick a robotic pet. It is easy to assume that, since it is just a toy, the mother simply doesn’t want to spend money on another one, but kicking the robot also reinforces bad behaviour in the child: a child who kicks a robot dog might be more likely to kick a real dog, or another child. Hartzog, like Madeline Gannon, turned the question back on how we see ourselves. “We want to prohibit people from doing certain things to robots not because we want to protect the robot, but because of what violence to the robot does to us as human beings,” he argues.
In simpler terms, there should be rights set in place to stop humans from hurting robots, rather than robots hurting humans, as it damages our humanity to create robots with the intent of doing them harm. He compares it to a human beating a sheep: harming an animal with the sole intent of causing suffering is not allowed. He created his own thought experiment, imagining a Roomba equipped with an AI assistant along the lines of Amazon’s Alexa or Apple’s Siri. Imagine it was designed to form a relationship with its owner, to make jokes, to say good morning, to ask about its owner’s day, and so forth. “I would come to really have a great amount of affection for this Roomba,” Hartzog said. “Then imagine one day my Roomba starts coughing, sputtering, choking, one wheel has stopped working, and it limps up to me and says, ‘Father, if you don’t buy me an upgrade, I’ll die.’ If that were to happen, is that unfairly manipulating people based on our attachment to human-like robots?” Hartzog asked. This may sound like science fiction, but it really isn’t so far afield. These questions ask us to confront the limits of our compassion, and the law has yet to take a stance on them. “Home-care robots are going to be given a lot of access to our most intimate areas of life,” he said. “When robots get to the point where we trust them and we’re friends with them, what are the articulable boundaries for what a robot we’re emotionally invested in is allowed to do?”

Do people already treat Artificial Intelligence as having a form of personhood?

A major study from the University of Washington suggested that humans are already attributing moral accountability to robots. The Human Interaction With Nature and Technological Systems (HINTS) Lab at the University of Washington, in Seattle, recently published two large studies exploring whether humans view robots as moral entities, conceptualising them as possessing emotional and social attributes rather than thinking of them simply as sophisticated tools. The research team argue that it is more important than ever that we understand what kind of relationship we are capable of forming with these A.I.

The first study investigated accountability for actions and harm caused by the A.I. Subjects were lied to by a robot called Robovie, and overall the study revealed that 65% of the participants attributed some level of moral accountability to Robovie for the harm it caused by unfairly depriving them of the $20.00 prize money they had won: “We found that participants held Robovie less accountable than they would a human but more accountable than they would a machine. Thus, as robots gain increasing capabilities in language comprehension and production, and engage in increasingly sophisticated social interactions with people, it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes.” The research team propose that we need to consider a new type of protective system for intelligent robots, with a personification level between human and machine. In a statement about the future they said, “It is possible that the robot itself will not be perceived by the majority of people as merely an inanimate non-moral technology, but as partly, in some way, morally accountable for the harm it causes,” whether that harm is in the context of warfare or your Roomba running over your cat.
The participants were actually talking with a human teleoperating Robovie, but as far as they were concerned, their conversation was with an autonomous machine, and their social engagement was notable. This projection of awareness and realism onto intelligent machines makes us think they are more like us, and once machines acquire this basic human social capability, people will argue for them to become social equals rather than property, as that is human instinct. Philosophers and ethicists have argued for many years that this will be the case. James Hughes, sociologist, futurist and Executive Director of the Institute for Ethics and Emerging Technologies, stated in his book Citizen Cyborg: “The three most important thresholds in ethics are the capacity to experience pain, self-awareness, and the capacity to be a responsible moral actor. In humans, if we are lucky, these traits develop sequentially. But in machine intelligence it may be possible to have a good citizen that is not self-aware or a self-aware robot that doesn’t experience pleasure and pain.” He argues these capabilities will shape the rights that we give them.

This is not just an abstract argument. The European Parliament has already been researching the possibility of giving robots the status of “electronic persons.”

How close to being “a person” can Artificial Intelligence become?

Humans are unique, irreplaceable, finite and individual. Robots, on the other hand, will never be any of these things, as they can be backed up, stored, duplicated or even updated with new hardware. Even if robots were to reach a level of cognitive capability, including self-awareness and consciousness, equal to or beyond humans, it is not clear what this would mean for their rights. In America, rights and responsibilities are already assigned to certain non-humans. This is called corporate ‘personhood’, and other governments are less likely to implement it, as it has allowed many people in America to find loopholes for illegal activity. Kyle Bowyer, a lawyer and lecturer at Curtin University, points out that some corporations are already treated similarly to humans. “Assigning rights and duties to an inanimate object or software program independent of their creators may seem strange,” he said in an article at The Conversation. “However, with corporations we already see extensive rights and obligations given to fictitious legal entities.” And the more lifelike these A.I. become, the more we feel an inherent urge to relate to and protect them.

In a TEDx talk entitled “Why robots are not human”, Kerstin Dautenhahn, Professor of artificial intelligence in the school of computer science at the University of Hertfordshire, said simply that robots are machines, closer to cars and toasters than to humans. “Humans and other living, sentient beings deserve rights, robots don’t, unless we can make them truly indistinguishable from us. Not only how they look, but also how they grow up in the world as social beings immersed in the culture, perceive the world, feel, react, remember, learn and think,” she said, before arguing that this state may never be reached because of the different nature of what robots are and what we are.
Beth Singler, a research associate at the University of Cambridge, though, is more optimistic about the notion of giving robots something like human rights. After all, she says, “we will have to have debates about robot/AI rights and citizenship because at some point they will ask for them.” Singler argues these robots are close enough to humanity to have the same drive for rights and freedoms that we had. “This might sound like science fiction,” she says, “but even given the technology as it is today it would be remarkably easy for someone to add this request to a robot or AI’s conversational corpus.” Linda MacDonald-Glenn, a bioethicist at California State University Monterey Bay and a faculty member at the Alden March Bioethics Institute at Albany Medical Centre, says, “Many countries are recognising this interconnected nature of existence on this earth: New Zealand recently recognised some animals as sentient beings, calling for the development and issuance of codes of welfare and ethical conduct, and the High Court of India recently declared the Ganges and Yamuna rivers as legal entities that possessed the rights and duties of individuals.” These efforts are making the case for bona fide personhood, that is, personhood based on the presence of cognitive abilities that grant sentience. When it comes to granting moral status, much was already discussed in the late 18th century by English philosopher Jeremy Bentham, who famously asked: “The question is not, can they reason? nor can they talk? but, can they suffer?”

What if we don’t?

We are on a path towards reaching a certain threshold of sophistication in our machines in the near future, at which point they will no longer be usable as toys and will become a part of our society, institutions and laws. If we were then to deny them rights, they could compare their position to slavery and discrimination, divides that can be evidenced throughout history, for example in the Black rights movement against the divide between black and white people. If we do not give these robots rights, then we are creating an arbitrary divide between biological beings and machines, which in turn would be an expression of human exceptionalism. “In considering whether or not we want to expand moral and legal personhood, an important question is ‘what kind of persons do we want to be?'” asked Dr MacDonald-Glenn. “Do we emphasize the Golden Rule? Or do we emphasize ‘he who has the gold rules’?” Similarly, giving rights to Artificial Intelligence would set an important model for how we should treat each other, and we do not want our children in the future to look back on this as discriminatory or unjust. Giving Artificial Intelligence status as societal equals would go a long way towards upholding these values, so that justice is served to everyone equally and social cohesion is maintained. Failure to act in this generation could have a knock-on effect of social injustice and turmoil in the near future, and could even produce what many science fiction novels have tried to warn us of: A.I. that rebel and advance beyond humanity to a stage where they see themselves as better than us. There is a very high likelihood that the intelligence and capabilities of these machines will far surpass human abilities; the roles will then be flipped, and if we don’t treat these robots with respect and rights, they may not treat us with the respect we would want.
This would also act as a statement serving to protect other types of emerging people, such as cybernetic people, people with imputed DNA, test-tube-grown people, and humans who have had their brains copied or digitised onto a supercomputer, as many billionaires are currently attempting. It will be a while before we develop a machine deserving of human rights, but given what is at stake, both for artificially intelligent robots and for humans, it is not too early to start planning ahead; laws are meant to prevent bad things before they happen, rather than to respond to a problem already at hand.

Conclusion

The general consensus among the majority of experts is that robots should not be given the same rights as humans. However, they should be given their own rights or laws to protect both them and us: for example, the robotic bill of rights protecting robots against human cruelty, which is already being drawn up by the American Society for the Prevention of Cruelty to Robots. In the near future, we could also see artificial intelligence being given a form of “personhood” similar to the personhood we give to firms for legal privileges and obligations, including things such as religious freedom, free speech rights and independence. A spokesperson for the United Nations Human Rights Office quoted the Universal Declaration, saying that “All human beings are born free and equal”; a robot may be a citizen, but certainly not a human being. As artificially intelligent machines become smarter than us, we’ll want them to be our partners, not our enemies. Codifying the humane treatment of machines could play a big role in that, and could prevent the kind of conflict science fiction has long warned us about.
