The development of technology over the last couple of decades has been nothing short of impressive. To take an obvious example, the internet barely existed before 1990, whereas life without it is inconceivable in the modern world today. It would be an understatement to say that the internet has revolutionized the field of communications. This fact about technology, however, may not be an unequivocal cause for celebration. In particular, several concerns have emerged over time regarding the specific technology of artificial intelligence.
The continued development of artificial intelligence poses several threats to future generations: the end of the human workforce, superintelligence that creates risks to safety and security, and danger to human morality and philosophy.
As an AI system becomes more powerful and more general, it might become superintelligent, that is, superior to human performance in many or nearly all domains. While this might sound like science fiction, many research leaders believe it is possible.
This could pose catastrophic risks to safety and security. In this context, the following concern expressed by Gates (as quoted in Holley, 2015) is quite noteworthy:
"I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I don't understand why some people are not concerned" (paragraph 15).
Essentially, Gates is suggesting that if artificial intelligence becomes advanced enough, there is a real risk that it will become too intelligent for human beings to manage or control in any effective fashion.
This would lead to insecurity: a superintelligent AI would be an economic and military asset to whoever possessed it, giving that possessor a decisive strategic advantage.
Another high-profile concern regarding artificial intelligence is that it could bring about the end of the human race, with most work done by machines and robots. The basic assumption here is that human beings would design artificial intelligence in such a way that it would become far more advanced than the natural intelligence of human beings themselves. According to Stephen Hawking's view, as reported by Cellan-Jones (2014, December 2), the development of artificial intelligence could spell the end of the human race: machines would take over work at an ever-increasing rate, displacing humans who cannot compete at that pace. For example, if a human being were to manually perform an advanced mathematical calculation, it would likely take him several minutes to work through the problem on paper and produce an answer. A computer (or calculator), on the other hand, would produce the same answer in less than a second. Similarly, the possibility described by Hawking is one in which the algorithms (or "inputs") for artificial intelligence would be such that the artificial intelligence could perpetuate itself at an ever-increasing rate. Machines would then essentially become the most advanced "species" on the planet, leaving human beings far behind.
The comments by Gates and Hawking discussed above focus on the sheer processing power that artificial intelligence may one day achieve, thereby eclipsing the powers of the human mind. Beyond these points, advances in AI may also pose a danger to the morality and philosophy of mankind. Bostrom (2014, September 11) argues that there is no point in projecting human emotions onto an entity that is fundamentally alien, and that one cannot count on a superintelligent AI to share human values; such an AI could survive on its own. The idea here is that whatever kind of "mind" an artificial intelligence may have, it will not resemble the human mind, insofar as the human mind has non-rational aspects (i.e. emotions). There is thus no telling how it may behave or what decisions it may make. Bilton (2014, November 5) has echoed this concern that artificial intelligence will not necessarily possess anything like morality as human beings understand it. The danger here could be described as one of uncertainty: if one cannot know how an artificial intelligence would act, then it would not necessarily act in horrific ways; but on the other hand, it seems almost inevitable that it would eventually do something that most human beings would find morally horrific.
The implicit concession here is that the human mind is not driven by pure reason: rather, the reasoning processes of human beings are contextualized within a framework of emotions, chief among which are basic emotions such as empathy that give rise to basic morality. There is no telling what an intelligent agent might do if it is in possession of pure reason but lacks any such broader framework of emotions. Indeed, from a human perspective, one would imagine that such an agent would engage in behaviors that would generally be called sociopathic.
In conclusion, there are strong economic incentives for the development of new technologies to proceed as fast as possible without "wasting" time on expensive risk analyses. These unfavorable conditions increase the risk that we gradually lose our grip on the control of AI technology and its use. This should be prevented at all possible levels, including politics, the research itself, and in general by anyone whose work is relevant to the issue. A fundamental prerequisite to directing AI development along the most advantageous tracks possible will be to broaden the field of AI safety, so that it is recognized not only among a few experts but in widespread public discourse as a great (perhaps the greatest) challenge of our age. As a final addition to the points made above, we would like to conclude by pleading that AI risks and opportunities be recognized, as soon as possible, as a global priority akin to climate change or the prevention of military conflict.
Bilton, Nick. (2014, November 5). Artificial Intelligence as a Threat. New York Times.
Bostrom, Nick. (2014, September 11). You Should Be Terrified of Superintelligent Machines. Slate.
Cellan-Jones, Rory. (2014, December 2). Stephen Hawking Warns Artificial Intelligence Could End Mankind. BBC.
Holley, Peter. (2015, January 29). Bill Gates on Dangers of Artificial Intelligence. Washington Post. switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/.