During the depths of World War II, a machine of revolutionary complexity known as "Project PX" was being developed clandestinely in a secret U.S. laboratory. A beast of a machine in the most literal sense, it weighed over 30 tons, filled an entire room, and consumed over 150 kilowatts of electricity. When its completion was announced in 1946, it graced newspaper headlines as the "Giant Brain." Commonly known as ENIAC, it is hailed in history books as the first electronic computer. Yet even though ENIAC was created at a time when typewriters were widespread and the Internet did not exist, its creation paradoxically stirred up little public fervor.
Sixty-five years later, in 2010, the computer system Watson was developed. Watson, a computer designed to answer questions posed in natural human language, shocked the world when it soundly defeated Jeopardy! champions Ken Jennings and Brad Rutter on national television. This feat led some to regard Watson as a turning point in computer intelligence, a signal of the post-human era to come.
Most authors we read in class would assert that the difference in reaction to ENIAC versus Watson is attributable solely to technological change. I contend, however, that changes within humanity are more responsible for this remarkable difference in reception, for four reasons: computers' inherent dependence on humans for development, the role of humans in determining a computer's intelligence, humanity's increasing dependence on computers, and the assimilation of the computer into popular culture.
Beneath the surface, computers have undoubtedly evolved over time from ENIAC to Watson. Although technical advancement is responsible for their exponential increase in computational power, most users are concerned only with effects above the surface, such as gains in speed and stability. Most computer users have no interest in the wafer-fabrication process behind semiconductors, let alone in the chip architecture inside their processors. This is because computers have adapted to the needs of the general public. Since early computers like ENIAC were developed in scientific laboratories, using them required sophisticated knowledge of binary operations and of their inner workings. For computers to appeal to a larger audience, their design gradually conformed to natural human thinking. According to researchers at IBM, "Computers have evolved quite differently (compared to historical tools such as the axe). Originally designed for specialized use by highly trained users, they have only recently begun to be adapted to everyday life" (Benedek). Human influence on the overall appearance and function of computers is visible in the evolution of their aesthetics and interfaces: from Spartan beige boxes displaying unappealing lines of green text to visually pleasing designs with polished, colorful graphical user interfaces (GUIs).
According to Human Factors and HCI, "Unless human characteristics are considered when designing or implementing technologies, the consequences can be errors and a lack of human productivity" (Human Factors and HCI). Poor design of computer systems can have disastrous effects: the 1989 Kegworth air disaster is commonly cited as a tragic consequence of poor technological system design, an accident that resulted in the deaths of 47 people. "No matter how sophisticated and refined…applications become in the future, there is going to be a fundamental need to ensure that the human element is not lost in the equation…Likewise, when humans take to the skies, the cockpit in which they operate must be designed with their unique needs in mind, rather than what a computer model suggests is the most efficient solution" (Curtney, p. 107). Human influence on the design of technology bears directly on the personification of technology: as computer systems adapt ever more closely to serve human needs, computers are perceived less as tools and more as human assistants.
In his article "Is Google Making Us Stupid?", Nicholas Carr asserts that the Internet, a form of technology, has the potential to adversely affect our ability to interpret information. Although this may be true, it is important to note that the Internet was created not by computers but by humans. In fact, all the programming behind computer applications and websites was created and designed by humans; computers merely serve as the medium through which this coded information is translated into the visual and textual forms we can understand. In a sense, whenever we use a program or visit a webpage, we are confined to what the programmer has defined as possible within that particular 'realm,' and can view only what the programmer has made available to his or her audience. Douglas Rushkoff, author of Program or Be Programmed, targets this issue. He states that the difference between a computer user and a computer programmer is that "The computer programmer creates the environment and terms through which the computer user functions. In some cases, the computer user is aware that his or her actions have been completely circumscribed by a programmer (who may in turn be working for some other person or corporation to achieve a particular purpose). But in many — even most — cases these days, users are unaware of the programmer and the agendas underlying the functionality that has been afforded him. He thinks this is simply what the computer can do. So the real difference is the programmer understands that the machine can model almost anything. The user only knows how to behave inside that model. So it's like the difference between a playwright and a character — or, at best, an actor who knows he is reading a script" (Rushkoff). Thus, every example of technological influence on humans has its origin in humans. Consider the printing press, an invention praised for making information widely available.
By itself, however, a printing press is merely a block of metal. Likewise, the Internet cannot influence us without the "invisible hand" of a human guiding it. Without human knowledge of how to use technology, technology becomes meaningless. To illustrate: if, in a post-apocalyptic world, all DVD readers were destroyed and all knowledge of how to build one was lost, then all the information on DVDs would be lost, even though the DVDs themselves still existed. Technology inherently depends on the existence of humans for its significance and meaning.
This concept of an unbreakable symbiosis between man and computer is similar to the ideas Jaron Lanier expresses in You Are Not a Gadget. Lanier contends that thinking of computers and humans separately is seriously flawed, arguing, "The antihuman approach to computation is one of the most baseless ideas in human history. A computer isn't even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don't mean anything without a cultured person to interpret them" (Lanier, p. 26). The perception of a computer's intelligence depends on the background and experience of its user. According to Machines and Mindlessness: Social Responses to Computers, "Orientations to computers are derived from a host of individual, situational, and technological variables" (Nass and Moon, p. 82). For instance, a native tribesman never exposed to modern society would probably be bewildered by the capabilities of the modern computer, and would fear it; an experienced computer programmer, on the other hand, would have an educated understanding of the mechanics beneath the surface that power today's computing. The human element is integral to our perception of the intelligence of computers, and how we change has more bearing on how we perceive technology than how technology itself changes. Lanier contends that it is humans, not computers, who are responsible for the perception of computer intelligence: "You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart…every instance of intelligence in a machine is ambiguous…the exercise of treating machine intelligence as real requires people to reduce their mooring to reality" (Lanier, p. 32). Lanier's assertion is supported by how this perception differs between a person of the 1940s and a person of today.
For example, while newspapers in the 1940s heralded ENIAC as the "Giant Brain," by today's standards ENIAC belongs in a computer history museum.
The growth in the prevalence of computers is truly remarkable. According to The Evolution of Computers and Computing, "in the mid-1950's there were fewer than 1,000 computers in the United States." Today there are over one billion installed computers, with two billion expected by 2014 (Gartner). These astounding figures quantify how rapidly computers are being adopted into daily life around the globe. One of the main reasons behind this growth is that computers have adapted their form and function to appeal to changing human needs. When computers first appeared, they served the needs of researchers, as their primary function was to crunch complex mathematical calculations. Now computers have become our sources of information, our hubs of communication, and our personal assistants. The more we depend on computers in our daily lives, the more we personify them, because without them our lives would be drastically different. This dependence leads to a greater emotional connection with computers; it is not uncommon to see enraged users yell and scream at their machines, hoping the uncooperative devices will somehow hear their frustration and work again. According to a California police report from 2002, "Shortly after midnight, a resident…called the police to report hearing a man inside a house nearby screaming 'I'm going to kill you!' The officers found no victim inside the house. The man had been yelling at his computer" (Fogg). It is within human nature to personify inanimate objects on which we place value. Just as sailors refer to their vessels as "she," we are gradually elevating the computer beyond a mere machine. It has become commonplace for people to talk to technology without being perceived as insane: it is completely reasonable to ask questions of Apple's Siri, but seriously asking questions of a calculator would be considered bizarre.
Thus, the more the gap between computers and humans narrows, the more we perceive computers as equals.
Popular culture is another avenue by which humans have shaped society's notion of computers. The presence of technology in culture blossomed in the 1960s under the shadow of the Cold War, as Sputnik and computerized nuclear missile systems set off a frenzy of underground bunker building in the backyards of Americans. This has caused us to regard computers as potential competition not only to our intelligence but to our very existence. According to Hayles, we view the coming of the technological singularity as the end of humanity: "The posthuman is likely to be seen as antihuman because it envisions the conscious mind as a small subsystem running its program of self-construction and self-assurance while remaining ignorant of the actual dynamics of complex systems." Rushkoff would likely agree with Hayles, for he contends that it is within human nature to attach negative connotations to things of great influence: "The overculture will always try to devalue anything truly threatening. If you gain access to the dashboard of civilization, then you will be called a geek. They have to keep us away from anything truly empowering. So they make the cool stuff seem uncool, and the stupid stuff seem cool."
Hollywood producers caught on to this fear and made movies targeting it, such as The Terminator and The Matrix. The creation of these films is directly tied to humans' fearful perception of computers. Personification of computers has occurred as well, with the animated film WALL-E inducing compassion and emotion toward its lifelike robot. In his article "From Impact to Social Process," Professor Paul N. Edwards asserts, "Computers rarely 'cause' social change…but they often create pressures and possibilities to which social systems respond. Computers affect society through an interactive process of social construction" (Edwards, p. 32). According to Rushkoff, the presence of computers in society depends on society's perception and its connotations: "We put stigmas on them (computer programmers) for different reasons now. Before, it was because computers were associated with mathematics or staying indoors and not knowing sports. Then there was a brief moment when it was cool. The Hackers moment. Kind of between the movies War Games and The Matrix" (Rushkoff). Popular perception dramatically shapes the way we think of computers. As knowledge and use of computers became commonplace, they became relevant to the general populace. This matters because it directly affects the impact of a technological breakthrough such as ENIAC or Watson. When ENIAC came out, even though it was arguably more groundbreaking than Watson, its creation held little significance for the public, because computers had no impact on their lives or on society at that point. For Watson, 65 years of computer influence had made society more fearful of computers, resulting in a far more significant reaction.
Throughout this essay, I have sought to show how important humans are in determining not only the intelligence of computers but also the extent to which we perceive technology. The importance of the human element can also be illustrated by how we perceive automobiles today. When automobiles were first introduced in the early 1900s, they had a large impact on society, perhaps comparable to that of computers, as they revolutionized human mobility. Similarly, computers were regarded as a novelty when they first appeared. However, as their usefulness became well known, they were widely accepted, and people threw away their typewriters for computers and their horse-and-buggies for cars. As the adoption of a technology increases, a groundbreaking event involving it has greater impact. For instance, if a breakthrough allowed automobiles to run perpetually at no cost, it would dramatically affect how we think of automobiles; had automobiles never been adopted by society, such a breakthrough would have little effect. This example shows that the phenomenon holds not just for computers but for all inventions, because humans are responsible both for their creation and for their influence on society. Technological change undeniably occurred between ENIAC and Watson, but we can eliminate that variable by recognizing how our perception of ENIAC itself has changed, even though its technology has remained the same from 1946 to now. People in the 1940s thought ENIAC was the "Giant Brain"; we think ENIAC is an archaic machine.