Artificial intelligence (AI) and grammar induction (GI) belong to the same family. While AI technology today is significant, what it can do at present matters less than what it may become; because of this potential for future development, its future seems almost unlimited. Of all the new technologies appearing on an almost daily basis, AI is by far the most important, precisely because of this potential. Smart machines should be able to design smarter machines, "particularly because AI is concerned with replicating and enhancing intelligence …" (Jenkins 2003, p. 779). Still, Jenkins (2003) states that there is uncertainty over the development of such technology and "real concern over not being able to control a new and separate train of evolution that we may be setting in motion" (ibid.). While citizens of the 21st century may wish to lay claim to a society richer in technology than any before it, the roots of AI reach far back into ancient history, "and the concept of intelligent machines may be found in Greek mythology" (Buchanan 2008, p. 1).
From that time onward there have been inventions, some real and some that were simply tricks and sleight of hand, that claimed to do everything from adding numbers to playing chess against human opponents. Most of these accomplishments built on past inventions and improved them. In each case, however, it was human intelligence that made the improvements, not the machine itself. There is little doubt that machines able to improve themselves could be useful to humanity, and there is a high probability that such machines will be built at some point.
Like all great accomplishments of mankind, however, such machines carry potential for evil as well as good, and standards must be put in place early on for harnessing such power. It was in 1956 that "John McCarthy coined the term 'artificial intelligence' as the topic of the Dartmouth Conference, the first conference devoted to the subject" (ibid.). At the heart of the discussions about AI is the question of how to deal with a machine that has become self-aware. Such topics are no longer just science fiction; they are now actually being debated.
Moy (2002) reported that, "If you take all of today's computers and sum them together you will end up with the equivalent intellectual power of 1 × 10^17 flops/sec, which is what one human brain is capable of processing" (1). However, "with computer power increasing exponentially and doubling every 18 months or so computers are catching up quickly. At the current rate, it will be approximately 2021 when computers will have the equivalent processing power of all humans on this planet combined" (ibid.).
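The growth claim above is ordinary repeated doubling, and the arithmetic can be sketched in a few lines. The 18-month doubling period is Moy's figure; the growth factor used in the example is purely illustrative and is not an attempt to reproduce Moy's 2021 estimate.

```python
import math

def years_to_grow(factor, doubling_period_years=1.5):
    """Years needed for computing power to grow by `factor`,
    assuming it doubles every `doubling_period_years` (18 months)."""
    return doubling_period_years * math.log2(factor)

# Doubling every 18 months, a billion-fold increase takes about 45 years:
print(round(years_to_grow(1e9), 1))  # 44.8
```

Because growth is exponential, each additional doubling contributes as much capacity as all previous doublings combined, which is why such projections move so quickly once the curve steepens.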
Grammatical inference, also known as grammatical induction, is a form of AI that is essentially syntactic pattern recognition. It has applications in many fields but is used primarily in the area of language processing and the analysis of morphemes, the smallest meaningful units of language. Natural language processing is the method by which computer information is rendered as language that humans without computer skills can read in written form. The term natural language refers to the spoken languages of humans, as opposed to computer languages.
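As a concrete illustration of syntactic pattern induction, the sketch below builds a prefix-tree acceptor from positive example strings, a common first step in grammatical-inference algorithms. The function names and sample strings are illustrative, not drawn from any of the cited sources.

```python
def build_pta(samples):
    """Build a prefix-tree acceptor (a trie with accepting markers)
    from positive example strings."""
    root = {}
    for s in samples:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})  # extend the tree along this string
        node["$"] = True  # mark an accepting state
    return root

def accepts(pta, s):
    """Return True if the acceptor recognizes string s."""
    node = pta
    for ch in s:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

pta = build_pta(["ab", "abb", "abbb"])
print(accepts(pta, "abb"))  # True
print(accepts(pta, "ba"))   # False
```

The acceptor recognizes exactly the training strings; real grammatical-inference algorithms then generalize by merging states of this tree to produce a compact grammar covering unseen strings as well.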
There are many different methodologies for approaching syntactic pattern recognition. The two most often discussed are grammatical inference by trial and error and grammatical inference by genetic algorithms. According to Duda et al. (2001), "Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification. … it can be viewed as simply being a formalization of common sense procedures. … that all relevant probabilities are known" (20). Yet all methodologies seek to ease the difficulties between man and machine in the delivery of information.
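The premise Duda et al. describe, that all relevant probabilities are known, can be illustrated with a minimal Bayes decision rule: choose the class that maximizes the product of likelihood and prior. The word-class labels and probability values below are hypothetical, chosen only to make the rule concrete.

```python
# Toy two-class example with known probabilities (the Bayesian premise).
priors = {"noun": 0.6, "verb": 0.4}
# Hypothetical likelihoods of observing a suffix under each class:
likelihood = {
    "noun": {"ing": 0.1, "tion": 0.5},
    "verb": {"ing": 0.7, "tion": 0.05},
}

def bayes_decide(x, priors, likelihood):
    """Bayes decision rule: pick the class c maximizing P(x|c) * P(c)."""
    return max(priors, key=lambda c: likelihood[c].get(x, 0.0) * priors[c])

print(bayes_decide("ing", priors, likelihood))   # verb  (0.7*0.4 > 0.1*0.6)
print(bayes_decide("tion", priors, likelihood))  # noun  (0.5*0.6 > 0.05*0.4)
```

The "common sense" character of the rule is visible here: it simply weighs how typical the observation is for each class against how common the class is overall.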
Jenkins (2003) points out that there are two basic branches of AI, distinguished by their goals. One branch is devoted to the idea of creating machines with real intelligence; the other devotes its time and resources to comprehending human cognition. The ultimate achievements of these branches would differ. For the first, it would be super-intelligent machines with no self-awareness, not alive in any sense. From the second, however, might come machines with consciousness, and these would be a new force in the world.
Still, this may end up being purely science fiction; at this point no one is admitting to any attempt at building a cognizant artificial life form (779). While there is a need for all forms of AI, one important use is not always associated with what is thought of as actual intelligence: the brains of everyday household appliances. The toaster that senses the color of the toast and the coffee pot that turns on at 6 a.m. to have the coffee ready for its owner both use forms of AI.
Robots that vacuum home carpeting, as well as the smart homes seen today, utilize AI technology. Stocks can be chosen by AI programs, and machines that understand the human voice are being created. Humans may quickly become obsolete in space programs, with AI robots exploring in place of people. Yet should AI yield machines that become cognizant, there must be ethics and rules that protect them from man and mankind from them.
References

Buchanan, B. (2008). Timeline: A brief history of artificial intelligence. Retrieved 1-02-09 from http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory

Duda, R., Hart, P., & Stork, D. (2001). Pattern classification. New York: John Wiley and Sons.

Jenkins, A. (2003). Artificial intelligence and the real world. Futures, 35(7), 779.

Moy, C. (2002). The future of artificial intelligence. Retrieved 1-2-09 from http://www.sffworld.com/authors/m/moy_chris/articles/futureofai1.html