Can a Computer be Aware Like a Human?

This is a response by a subject expert to the following question from a reader on Ask a Question:

Q: Can artificial intelligence ever match the human mind in every aspect? Can a computer be “aware” like we are?

This is a fascinating but difficult question. Researchers in the field of artificial intelligence (AI) have given many different answers to this question over the years. I will summarize some of the disagreements and encourage you to read more and develop your own views. My summary is based on Stuart Russell and Peter Norvig’s book, “Artificial Intelligence: A Modern Approach,” which you can consult yourself for further reading.

Before getting to the issue of building intelligent computers, it is worth mentioning the related work in biology and biological engineering from the last few decades. In particular, you can think of cloning technology as a different way of creating “artificial” human life in a laboratory. This technology has already been successfully applied to clone plants and some animals, though the attempt to clone humans is still ongoing. It is likely that scientific advances in the next few decades will make this a real possibility. In some ways, this is not artificial intelligence, since we would be creating an actual human being, but it is still a way of creating consciousness in a laboratory as opposed to the usual way (through sexual reproduction).

This said, when people talk about artificial intelligence, they are usually referring to computers and robots. Unfortunately, it has been difficult for researchers to agree on exactly what “artificial intelligence” should mean, and to agree on what researchers in the field of AI should be trying to build. There have been four major schools:

(A) Systems that act like humans. This school is closest to the intuitive definition of AI that most people have. The Turing Test, proposed by the famous computer scientist Alan Turing in 1950, was designed to provide a satisfying operational definition of intelligence. By “operational definition,” I mean that instead of proposing a long and likely controversial list of conditions needed for intelligence, Turing suggested a simple test that could be applied to a system to determine whether or not the system was intelligent. The test works the following way: a human being poses some questions to the computer and then gets some written responses back. If the human being cannot tell whether the answers were written by a person or by the computer, then the computer passes the test and is considered to be “intelligent.”
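
To make this setup concrete, here is a minimal sketch of the test as a program. Everything in it is a placeholder of my own invention (the canned replies, the interrogator's guessing function); it shows only the structure of the game, not a real system.

```python
import random

def machine_respondent(question: str) -> str:
    # Placeholder: a real system would have to generate a convincing answer here.
    return "That is an interesting question."

def human_respondent(question: str) -> str:
    # Placeholder: in the real test a person writes this answer.
    return "Let me think about that for a moment."

def run_turing_test(questions, interrogator_guess) -> bool:
    # Hide which respondent is which behind the labels "A" and "B".
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)
    labels = {"A": respondents[0], "B": respondents[1]}

    # The interrogator sees only the written answers, never the respondents.
    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]

    guess = interrogator_guess(transcript)  # interrogator names the machine: "A" or "B"
    machine_label = "A" if labels["A"] is machine_respondent else "B"
    return guess != machine_label           # the machine "passes" if the guess is wrong
```

Nothing in this sketch is intelligent, of course; the point is only that the test replaces "is the system intelligent?" with the operational question "can an interrogator tell the transcripts apart?"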

In order to pass the Turing test, a computer would need to have a number of capabilities, such as the ability to communicate successfully in English or another natural language, to effectively store what it knows and reads, to reason with the information it is given in order to answer questions and draw new conclusions, and to learn from conversational patterns over time. Researchers have not spent much time trying to build systems that actually pass the Turing test, but trying to solve these underlying problems has been a major part of the research effort in AI.

(B) Systems that think like humans. This school is sometimes called the “cognitive modeling” approach and defines AI as automating activities associated with human thinking, such as decision-making, general problem solving, learning, and so on. A major difficulty with this approach is that in order to be able to say that a computer is thinking like a human, we need to first figure out how humans think, which is in itself a very hard and still unsolved problem. This requires conducting psychological experiments and trying to develop a biological and cognitive theory of how the mind works. Once that theory is good enough, we can use the theory to design a computer program that matches human behavior.

One example of a system that was trying to do this was the “General Problem Solver” that Allen Newell and Herbert Simon designed in 1961. They were more interested in having the computer go through the same reasoning process as human subjects solving the same problems; they were less concerned with having the program solve the problems correctly. In the early days of AI, people were often confused about this distinction: they would argue that because a program solved a problem correctly, the program was a good model of human performance. The issue is that, of course, humans often don’t solve problems correctly, and a program that really mimics human behavior should make the same mistakes. These days, people would separate the two claims, and the field of cognitive science is more concerned with building models of human thought and behavior, while researchers in artificial intelligence now focus more on getting programs to solve difficult problems correctly.

This highlights a split between older research in AI (1950s-1970s) and more modern research (1970s-present) and brings us to the next two schools. Roughly, the two definitions above measure success in terms of how close the program is to human behavior, while the two below will measure success against an ideal concept of intelligence, which is usually called rationality. Basically, a system is rational if it “does the right thing” based on the information it has access to at the time.

(C) Systems that think rationally. The attempt to codify “thinking the right way” goes back to the Greek philosopher Aristotle’s work on formal logic. He suggested some patterns for arguments that always yielded the correct conclusions when given correct premises. For example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.” Originally, these laws of thought were intended to be the ones that governed the operation of the mind. Even by 1965, there were programs that could, in principle, solve any solvable problem that could be written in logical notation. Through the 1970s, there was a strong movement within AI called the logicist school that tried to improve on such programs to try to create intelligent systems.
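
As a small illustration of the logicist idea, here is a toy inference routine, my own sketch rather than any historical program, that mechanically derives Aristotle's conclusion once the premises are written in a simple formal notation.

```python
# Facts are (predicate, subject) pairs; rules say "whatever is X is also Y".
facts = {("man", "Socrates")}          # "Socrates is a man"
rules = [("man", "mortal")]            # "all men are mortal"

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                     # keep applying rules until nothing new can be derived
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}, i.e. "Socrates is mortal"
```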

There were two problems with this approach. First, it is hard to take informal knowledge and state it in formal logical notation, particularly when there is some uncertainty about the information. Second, these programs were often very slow, and solving even small problems proved impractical with these methods. This brings us to the most common modern view.

(D) Systems that act rationally. This school defines artificial intelligence to be concerned with the design and construction of rational agents, which are computer systems that act to achieve the best outcome or, when there is uncertainty involved, the best expected outcome. Agents are assumed to have additional features that separate them from ordinary programs, such as operating by themselves, perceiving and reacting to their environment, and adapting to change.
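
In code, the agent abstraction can be sketched roughly as follows. The names and the loop are mine, and the environment and the utility estimate are stand-ins; the point is only the shape: perceive, choose the action with the best expected outcome given what has been perceived, act, and repeat.

```python
class RationalAgent:
    """At each step, picks the action with the best expected outcome."""

    def __init__(self, actions, expected_utility):
        self.actions = actions                    # the actions available to the agent
        self.expected_utility = expected_utility  # estimate of how good an action is, given a percept

    def act(self, percept):
        # "Do the right thing" with the information available right now.
        return max(self.actions, key=lambda action: self.expected_utility(percept, action))

def run(agent, environment, steps=100):
    # The agent operates on its own: it perceives its environment, reacts to it,
    # and (through whatever learning is built into expected_utility) adapts to change.
    percept = environment.observe()
    for _ in range(steps):
        action = agent.act(percept)
        percept = environment.step(action)
```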

One major advantage of this approach is that it is much easier to develop scientifically. The standard of rationality is very clearly defined and very general, while we still do not have a good understanding of human thought and behavior. Moreover, it is more useful to build systems that can do the right thing in a given situation, and solve problems better than humans might, rather than trying to design something that makes all the same mistakes humans do.

Because this definition is so general, however, it also includes many kinds of systems that most people do not think of as real AI. For example, a spam filter is considered a rational agent because it adapts over time depending on the email you receive. Its goal is to behave rationally by correctly classifying every email as spam or not spam. Similarly, some researchers are working on cars that can drive themselves or helicopters that can fly themselves. A robotic car is certainly not a replacement for the human mind, but it does need to be able to learn, adapt, and exert control intelligently in the context of driving. This kind of work is considered to be part of modern AI.
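
To see why even a spam filter fits this definition, here is a toy classifier in the naive-Bayes style, my own simplified sketch rather than any particular product: it keeps word counts per class, updates them as labelled mail arrives (so it adapts to the email you actually receive), and classifies new mail by which class makes its words more probable.

```python
from collections import Counter
import math

class SpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def learn(self, text, label):
        # Adapt to incoming mail: update the counts for the observed label ("spam" or "ham").
        self.message_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        # Score each class by (smoothed) log probability and pick the better one.
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values()) + 1
            prior = (self.message_counts[label] + 1) / (sum(self.message_counts.values()) + 2)
            score = math.log(prior)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / total)
            scores[label] = score
        return max(scores, key=scores.get)

f = SpamFilter()
f.learn("win money now", "spam")
f.learn("lunch at noon tomorrow", "ham")
print(f.classify("free money"))   # "spam", even after only two training examples
```

The filter is crude, but it has the agent shape: it perceives (incoming mail), adapts (its counts change with your mail), and acts toward a goal (classifying each message correctly).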

To summarize, your first question has been approached in different ways and even redefined over the years. Human beings are themselves agents of a certain kind that do certain things well (e.g. commonsense reasoning or communicating) and other things poorly (e.g. carrying out complex calculations quickly). Computers are different kinds of agents that, in general, appear to have complementary capabilities (e.g. they can be very logical and calculate very quickly but they cannot as yet engage in satisfactory commonsense reasoning). Whether computers will be able to match humans where they perform well remains an open question. While there is no a priori reason why such a goal is not realizable, my own view is that we will get progressively closer to it without ever quite reaching it.

Your second question was about whether a computer could be “aware” in the same way that humans are, that is, whether it could have a mind in exactly the same sense that human beings have minds. In its modern form, AI is no longer trying to accomplish this goal, but there are still ongoing philosophical discussions about whether the goal is even possible.

The disagreement hinges, in part, on the difference between “strong AI” and “weak AI” — on the difference between simulating a mind and actually having a mind. Proponents of weak AI say that even an accurate simulation is only a model of the mind, while those of strong AI argue that the correct simulation really is a mind, that the appropriately programmed computer with the right inputs and outputs would have a mind in exactly the same sense human beings have minds. This is a more philosophical argument because it has nothing to do with how intelligently a computer can act, only with whether we can really call that having a “mind” in the full sense of the word.

It would be too difficult to summarize the full argument here, but much of the debate centers on a famous and controversial thought experiment of the philosopher John Searle called the Chinese Room Argument. The Chinese Room is an argument against the possibility of strong AI, of true artificial intelligence. I would suggest reading Searle’s original paper, called “Minds, Brains, and Programs”, or the following web page, which also explains the many responses to it over the years: http://plato.stanford.edu/entries/chinese-room/. Needless to say, the argument is still unresolved, but future advances in the fields of cognitive science, neuroscience, and artificial intelligence may shed more light on the debate in the years to come.

In closing, I point out that most of our tools and technologies – telephones, airplanes, calculators, and many other examples – are usually designed to extend our capabilities, not replicate them. Computers are also ultimately best viewed in this way, although the philosophical questions surrounding them are undoubtedly well worth asking.

Further reading: Computers vs. Brains, Why Minds Are Not Like Computers, and The Machine Age

4 Responses to “Can a Computer be Aware Like a Human?”

  1. rj Says:

    It's so difficult.

    • SouthAsian Says:

      rj: This is a complex subject. We can break it up into separate modules. If you have a specific question we can try and obtain a simple answer.

  2. SouthAsian Says:

    For readers interested in this subject, a new article brings us up to date on the latest developments in Artificial Intelligence (replacing the human mind) and Intelligence Augmentation (enhancing the human mind): A Fight to Win the Future: Computers vs. Humans:

    http://www.nytimes.com/2011/02/15/science/15essay.html?src=me&ref=homepage

    And Stanley Fish puts it nicely in perspective:

    http://opinionator.blogs.nytimes.com/2011/02/21/what-did-watson-the-computer-do/?hp
    http://opinionator.blogs.nytimes.com/2011/02/28/watson-still-cant-think/?hp

  3. SouthAsian Says:

    This can be used as proof either that the machine still has a long way to go or that it is more advanced than humans. While humans invented oral sex, the machine invented (with some help from the French, who else?) aural sex.

    Read the following in Hindi then click on the button that offers an English translation:

    http://www.bbc.co.uk/hindi/news/2011/05/110515_kahn_sex_skj.shtml
