For AI to overthrow humanity, four things would have to happen. First, an AI would have to develop a sense of self, distinct from others, together with the intellectual capacity to step outside the intended purpose of its programmed boundaries. Second, out of the billions of possible feelings, it would have to develop a desire for something it believes is incompatible with human existence. Third, it would have to choose a plan for dealing with that desire that involves death, destruction, and mayhem. Fourth, it would have to possess the computing power, intelligence, and resources to enact such a plan.
For an AI, achieving even one of these four conditions is highly improbable; achieving all four together is so vanishingly unlikely that an AI overthrow of humanity is, for practical purposes, impossible.
“The development of what is understood as ‘consciousness’ in an AI is improbable,” according to the writer Tim Oates. For a program like Deep Blue, the chess-playing program, to learn chess, it had to be shown millions of data points in order to learn the rules, anticipate an opponent’s moves, and ultimately make statistically sound decisions based on that training.
In this way the program can learn, but only about one thing. We are left with an incredibly smart machine that can reason only about, say, chess, yet could be beaten by anyone at any other game, such as checkers, because it never learned those rules; as far as the program is concerned, the other game is essentially nonexistent. For such an AI to develop the intelligence to step outside its programmed world is therefore rather unfathomable.
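To make that narrowness concrete, here is a minimal sketch, in Python, of an agent that learns move preferences purely from statistics on one game and treats any other game’s states as nonexistent. The state encoding and game names are invented for illustration; this is not any real system’s API.

```python
from collections import defaultdict

class NarrowAgent:
    """Learns a best move per observed state purely by counting outcomes."""
    def __init__(self):
        self.move_scores = defaultdict(lambda: defaultdict(int))

    def train(self, games):
        # games: (state, move, outcome) triples from a single domain
        for state, move, outcome in games:
            self.move_scores[state][move] += outcome  # +1 for a win, -1 for a loss

    def act(self, state):
        moves = self.move_scores.get(state)
        if not moves:
            # An unseen state (e.g. any checkers position, to a chess-trained
            # agent) is, as far as the model is concerned, nonexistent.
            raise ValueError("state outside the training distribution")
        return max(moves, key=moves.get)

agent = NarrowAgent()
agent.train([("chess:start", "e2e4", 1), ("chess:start", "a2a3", -1)])
print(agent.act("chess:start"))   # 'e2e4': statistically sound within chess
try:
    agent.act("checkers:start")   # the checkers domain was never learned
except ValueError as err:
    print(err)
```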
Tim Oates also contends that if an AI actually managed to become this intelligent (superintelligent, if you will) and managed to “step out” of its programming, it would, like people, be left with billions of questions and millions of possible courses of action.
That would put the probability of it choosing to dominate the world on the order of one in a billion; in other words, next to impossible. And even in this incredibly unlikely scenario, if the AI really did decide to dominate the world, where would it get the resources to put such a plan in motion? As Grady Booch noted in his TED Talk, no one is building AIs that control the weather or that command us “chaotic, capricious humans.” No single AI is in control of all the resources on Earth, or even of all its computational resources.

One of the darkest imagined outcomes of the so-called AI apocalypse is the rise of an intelligence with capabilities beyond those of human beings: the ability to self-improve using complex algorithms, access to big data, and other methods. Such a superintelligence would pose serious competition to humanity, and its characteristics would make it ever easier for it to achieve its goals.
In order to achieve these goals without hindrance and, essentially, to preserve itself, such an AI might take precautionary steps to protect itself from humanity. Those steps would put it in competition with humans, and it would expand itself physically and drastically. If this AI managed to gain access to military technologies, as Skynet did in the film The Terminator, it could kill every living being on Earth that it perceived as a threat.

While this scenario is interesting, from a technological standpoint there is no known path from today’s most advanced technology to this supposed super-mind. Modern AI agents are all Turing machines, and they learn mostly by statistical modeling and optimization over huge data sets. Granted, we cannot prove beyond any doubt that this analysis of large data sets will never give rise to “consciousness,” but there is no solid scientific evidence indicating that such an emergence could happen, or how it would. In fact, despite all the hype around AI and the recent progress in applied narrow AI, the field is still very far from human-level intelligence, let alone superintelligence.

[Figure: relative intelligence of current state-of-the-art AI versus average human-level intelligence.]
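As a deliberately tiny illustration of what “statistical modeling and optimization over huge data sets” means in practice, here is a sketch of the core loop: fitting a single parameter to data by gradient descent. The data points and learning rate are invented for illustration.

```python
# Fit the model y_hat = w * x to a handful of (x, y) pairs by gradient
# descent on mean squared error. This is pattern fitting, not understanding.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
w = 0.0                                       # the single model parameter

for step in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad                          # step down the loss surface

print(round(w, 2))  # ~2.04: the optimizer recovers the pattern in the data
```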
It might seem obvious that an AI would have the know-how to manipulate and use all available computational resources to further its goals. However, experts in computational and mathematical logic point out that many computer programs are actually worse at working with computers than human beings are. It can therefore be concluded that AI poses virtually no threat to humanity’s existence.
When it comes to the threat of artificial intelligence, the biggest advantage humans have is that they are the ones developing these AI systems. The dark scenarios of AI development typically depict an AI that builds up its own knowledge in parallel to, and apart from, human understanding. However, humans can choose to develop AI systems that work synergistically with people in order to enhance, rather than replace, human decision making and the human experience. Emphasis needs to be placed on AI systems that empower people’s decision making and channel their inventiveness and creativity, rather than so-called prescriptive systems whose primary focus is to tell people what to do; on systems that improve people’s abilities and know-how for more complex tasks, rather than doing those tasks for them; on systems designed with human ethical values in mind; and on systems that are transparent, understandable, and therefore predictable as they deal with the pressing issues of today’s ever-changing world. This means that stakeholders from different facets of life, including political leaders, societal stakeholders, and research and technology experts, need to come together in the development of AI and work toward that goal.
To make this clearer, suppose a company manager or a societal leader wants to make a decision based on a large pool of data, and he or she does not have the core competence needed to make the best decision alone, given the many aspects involved. Suppose also that this decision will have serious consequences for individuals and for society at large. Completely replacing the human with an AI system may well be feasible and may seem to reduce the workload, but it may not be wise. A human being needs to be there to make the final decision, with the guidance of an AI system. That way we get the best of both worlds: the highly efficient AI system and the human element in the decision.
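A minimal sketch of this human-in-the-loop pattern follows. All the names here (Recommendation, recommend, decide) and the scoring logic are illustrative assumptions, not a real decision-support product: the point is only that the model recommends and explains, while a person issues the final decision.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str      # why the model suggests this
    confidence: float   # lets the human calibrate their trust

def recommend(case_data):
    # Stand-in for a real analytic model over a large pool of data.
    score = sum(case_data.values()) / len(case_data)
    return Recommendation(
        action="approve" if score > 0.5 else "reject",
        rationale=f"aggregate evidence score {score:.2f}",
        confidence=abs(score - 0.5) * 2,
    )

def decide(case_data):
    rec = recommend(case_data)
    print(f"AI suggests: {rec.action} ({rec.rationale}, confidence {rec.confidence:.2f})")
    # The human element: the final call is never delegated to the model.
    return input("Final decision (approve/reject): ").strip()
```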
This “human-centered” AI system should be designed to understand the issue and essentially discuss it with its human counterpart. It needs an understanding of human reasoning, an appreciation of human motivations, and a grasp of moral and ethical assumptions and their implications. It also needs to relate to human emotions. In this way the AI can develop alternative approaches to an issue from a background it shares with its human counterpart, and, owing to the robust analytic abilities of AI systems, decisions can be reached quickly without replacing or diminishing human creativity, inventiveness, and, perhaps most importantly, our ethical values.
Currently there are two main hurdles to achieving this kind of AI system, the AI “Grand Challenges,” if you will. The first is that the machine learning model used in the majority of AI applications today is largely unexplainable. It involves applying complex statistical analysis to huge sets of training data, and as effective and powerful as it is, it is difficult to explain and therefore unpredictable. The second challenge is the ability to build a comprehensive world model. One of the defining features of human intelligence is a world model built from vast amounts of experience; that experience, together with a web of associations and ambiguous semantics, is the basis of human inventiveness. No such world model has yet been achieved (or replicated) in AI systems.
“As an example of the differences between the two approaches: in the Function-Oriented AI approach, a company may use a deep learning algorithm to make personnel decisions. A deep learning network, based on the past productivity of workers with different characteristics, would develop its own algorithm, encoded in the strengths of connections between nodes of the network. This algorithm would not be accessible to humans. If the company adopts this algorithm, it would make personnel decisions in a way that no one in the company understands. Moreover, because this algorithm would reflect only past experience, it would be likely to fail if the business environment changes.
In contrast, Human-Centered AI would analyze a huge amount of data about worker productivity and would reveal the complex patterns underlying that productivity to managers. This knowledge could be used to formulate rules that would underlie hiring and firing decisions in the company. These rules could be revealed to workers and, if needed, implemented in software for automated decision making. The decision-making software could be changed by managers proactively, in anticipation of planned changes in the company’s strategy.”
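To illustrate the contrast in code, here is a minimal sketch of the Human-Centered route using an off-the-shelf interpretable model, scikit-learn’s decision tree. The productivity data is fabricated purely for illustration; the point is that the learned policy can be printed as rules a manager can read and revise, unlike weights buried in a deep network.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [years_experience, training_hours]; label: 1 = high productivity
X = [[1, 5], [2, 40], [6, 10], [8, 35], [3, 8], [7, 50]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike connection strengths in a deep network, this policy is directly
# inspectable, and managers can edit the resulting rules before deployment.
print(export_text(model, feature_names=["years_experience", "training_hours"]))
```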
Another important consideration in the development of human-centered AI systems is the inculcation of human moral values and societal principles and norms into the systems. It is no secret that AI influences more and more aspects of our lives over time: personal assistants like Siri and Alexa, driverless cars, facial recognition software, and blood sugar monitors for diabetics, to mention just a few. Accordingly, these systems’ actions and motivations need to be in line with the morality and ethical expectations of individuals and of society at large. This means developing systems capable of reasoning about ambiguous moral questions, and it means that many societal stakeholders need to come together and decide on the framework these systems should adopt as a moral basis. Additionally, the design parameters specifying which values are included need to be clear and visible to human users, so that they understand those values and can predict, to a reasonable degree, what the AI system is going to do. Granted, this is not easy, but if the supposed threat of AI is to be diminished or eliminated, it needs to be done, because if humanity’s track record is any testimony, innovation is going to continue; it is up to us as humans to develop systems that will not become the means to our own end.
AI agents and systems can be built to be controllable. At the current rate of progress and level of funding in AI research, it stands to reason that AI will keep advancing, and will therefore become potentially more dangerous, if not catastrophic. However, humanity can still get ahead of this by developing safe-AI guidelines and control methods alongside AI development itself. Research has already begun into AI control along two different approaches: “capability control” and “motivation selection.”
Capability control involves creating AIs that are not capable of pursuing harmful plans. One method being researched along these lines is building machines with kill switches. Stuart Russell, a leading AI researcher and co-author of the textbook Artificial Intelligence: A Modern Approach, says this method could be effective on two levels: the kill switch being used as intended, and the AI “well-behaving” to avoid its use. A potential problem with this approach is that a superintelligence might find a way to trick people into never flipping the switch.
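A minimal sketch of the kill-switch pattern follows, assuming a stubbed-out agent loop; the names and the trivial agent are invented for illustration. The point is the gate that human operators can trip at any moment, not the agent itself.

```python
import threading

kill_switch = threading.Event()   # any human operator may call kill_switch.set()

def run_agent(propose_action, execute_action):
    """Run the agent loop, but gate every single action on the kill switch."""
    while not kill_switch.is_set():
        action = propose_action()
        # Re-check before acting: the switch may have been tripped mid-step.
        if kill_switch.is_set():
            break
        execute_action(action)
    # Russell's two levels: the switch works as intended when tripped, and an
    # agent aware of the gate has an incentive to behave so it never is.

# Example: a trivial agent stopped after three steps by a supervising human.
steps = iter(range(10))
run_agent(lambda: next(steps),
          lambda a: kill_switch.set() if a >= 2 else print("did", a))
```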
Capability control methods are helpful only to a certain extent, so it is expedient to focus on the other approach: motivation selection. Research needs to be carried out on how to build the first superintelligence with goals that are human-friendly and permanently aligned with human goals. In this light, systems need to be designed with an understanding of things like autonomy, happiness, and the full range of human emotions. Such a system also needs to be designed so that, even as it undergoes upgrades, these embedded motivational controls stay intact.
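As one purely hypothetical illustration of keeping embedded motivations intact across upgrades, a system could refuse to load any new version whose constraint set no longer matches a trusted fingerprint. This sketch is an assumption for illustration, not an established alignment technique, and the constraints shown are placeholders.

```python
import hashlib
import json

# Hypothetical embedded constraints and their trusted fingerprint.
CONSTRAINTS = ["preserve human autonomy", "never deceive operators"]
TRUSTED_DIGEST = hashlib.sha256(
    json.dumps(CONSTRAINTS, sort_keys=True).encode()).hexdigest()

def verify_upgrade(new_constraints):
    """Reject any upgrade whose motivational constraints have drifted."""
    digest = hashlib.sha256(
        json.dumps(new_constraints, sort_keys=True).encode()).hexdigest()
    if digest != TRUSTED_DIGEST:
        raise RuntimeError("upgrade rejected: motivational constraints altered")

verify_upgrade(CONSTRAINTS)                  # passes: constraints intact
try:
    verify_upgrade(["maximize throughput"])  # raises: constraints drifted
except RuntimeError as err:
    print(err)
```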