Friday, 23 September 2011

Future of artificial intelligence

Artificial Intelligence (AI) is a perfect example of how science sometimes moves more slowly than we would have predicted. In the first flush of enthusiasm at the invention of computers, it was believed that we finally had the tools with which to crack the problem of the mind, and that within years we would see a new race of intelligent machines. We are older and wiser now. The first rush of enthusiasm is gone, the computers that impressed us so much back then do not impress us now, and we are soberly settling down to understand how hard the problems of AI really are.

What is AI? In some sense it is engineering inspired by biology. We look at animals, we look at humans, and we want to be able to build machines that do what they do. We want machines to be able to learn in the way that they learn, to speak, to reason and eventually to have consciousness. AI is engineering but, at this stage, is it also science? Is it, for example, modeling in cognitive science? We would like to think that it is both engineering and science, but the contributions that it has made to cognitive science so far are perhaps weaker than the contributions that biology has given to the engineering.

The confused history of AI

Looking back at the history of AI, we can see that perhaps it began at the wrong end of the spectrum. If AI had been tackled logically, it would perhaps have begun as an artificial biology, looking at living things and saying "Can we model these with machines?". The working hypothesis would have been that living things are physical systems so let's try and see where the modeling takes us and where it breaks down. Artificial biology would look at the evolution of physical systems in general, development from infant to adult, self-organization, complexity and so on. Then, as a subfield of that, a sort of artificial zoology that looks at sensorimotor behavior, vision and navigation, recognizing, avoiding and manipulating objects, basic, pre-linguistic learning and planning, and the simplest forms of internal representations of external objects. And finally, as a further subfield of this, an artificial psychology that looks at human behavior where we deal with abstract reasoning, language, speech and social culture, and all those philosophical conundrums like consciousness, free will and so forth.

That would have been a logical progression and is what should have happened. But what did happen was that what people thought of as intelligence was the stuff that impresses us. Our peers are impressed by things like doing complex mathematics and playing a good chess game. The ability to walk, in contrast, doesn't impress anyone. You can't say to your friends, "Look, I can walk", because your friends can walk too.

So all those problems that toddlers grapple with every day were seen as unglamorous, boring, and probably pretty easy anyway. The really hard problems, clearly, were things demanding abstract thought, like chess and mathematical theorem proving. Everyone ignored the animal and went straight to the human, and the adult human too, not even the child human. And this is what `AI' has come to mean - artificial adult human intelligence. But what has happened over the last 40-50 years - to the disappointment of all those who made breathless predictions about where AI would go - is that things such as playing chess have turned out to be incredibly easy for computers, whereas learning to walk and learning to get around in the world without falling over has proved to be unbelievably difficult.

And it is not as if we can ignore the latter skills and just carry on with human-level AI. It has proved very difficult to endow machines with `common sense', emotions and those other intangibles which seem to drive much intelligent human behavior, and it does seem that these may come more from our long history of interactions with the world and other humans than from any abstract reasoning and logical deduction. That is, the animal and child levels may be the key to making really convincing, well-rounded forms of intelligence, rather than the intelligence of chess-playing machines like Deep Blue, which are too easy to dismiss as `mindless'.

In retrospect, the new view makes sense. It took 3 billion years of evolution to produce apes, and then only another 2 million years or so for languages and all the things that we are impressed by to appear. That's perhaps an indication that once you've got the mobile, tactile monkey, once you've got the Homo erectus, those human skills can evolve fairly quickly. It may be a fairly trivial matter for language and reasoning to evolve in a creature which can already find its way around the world.

The new AI, and the new optimism

That's certainly what the history of AI has served to bear out. As a result, there has been a revolution in the field which goes by names such as Artificial Life (AL) and Adaptive Behavior, trying to re-situate AI within the context of an artificial biology and zoology (respectively). The basic philosophy is that we need much more understanding of the animal substrates of human behavior before we can fulfil the dreams of AI in replicating convincing well-rounded intelligence.

(Incidentally, the reader should note that the terminology is in chaos, as fields re-group and re-define themselves. For example, I work on artificial zoology but describe myself casually as doing AI. This chaos can, however, be seen as a healthy sign of a field which has not yet stabilized. Any young scientist with imagination should realize that these are the kind of fields to get into. Who wants to be in a field where everything was solved long ago?)

So AI is not dead, but re-grouping, and is still being driven, as always, by testable scientific models. Discussions on philosophical questions, such as `What is life?' or `What is intelligence?', change little over the years. There have been numerous attempts, from Roger Penrose to Gerald Edelman, to disprove AI (show that it is impossible) but none of these attempted revolutions has yet gathered much momentum. This is not just because of lack of agreement with their philosophical analysis (although there is plenty of that), but also perhaps because they fail to provide an alternative paradigm in which we can do science. Progress, as is normal in science, comes from building things and running experiments, and the flow of new and strange machines from AI laboratories is not remotely exhausted. On the contrary, it has been recently invigorated by the new biological approach.

In fact, the old optimism has even been resurrected. Professor Kevin Warwick of the University of Reading has recently predicted that the new approach will lead to human-level AI in our lifetimes. But I think we have learned our lesson on that one. I, and many like me in new AI, imagine that this is still Physics before Newton, that the field might have a good one or two hundred years left to run. The reason is that there is no obvious way of getting from here to there - to human-level intelligence from the rather useless robots and brittle software programs that we have nowadays. A long series of conceptual breakthroughs is needed, and this kind of thinking is very difficult to timetable. What we are trying to do in the next generation is essentially to find out what the right questions to ask are.

It may never happen (but not for the reasons you think)

I think that people who are worried about robots taking over the world should go to a robotics conference and watch these things try to walk. They fall over, bump into walls and end up with their legs thrashing or wheels spinning in the air. I'm told that in this summer's Robotic Football competition, the losing player scored all five goals - two against the opposing robot and three against itself. The winner presumably just fell over.

Robots are more helpless than threatening. They are really quite sweet. I was in the MIT robotics laboratory once, looking at Cog, Rodney Brooks' latest robot. Poor Cog has no legs. He is a sort of humanoid, a torso stuck on a stand with arms, grippers, binocular vision and so on. I saw Cog on a Sunday afternoon in a darkened laboratory when everyone had gone home, and I felt sorry for him, which I know is mad. But it was Sunday afternoon and no one was going to come and play with him. If you consider the gulf between that and what most animals experience in their lives - surrounded by a tribe of fellow infants and adults, growing up with parents who are constantly with them and constantly stimulating them - then you understand the incredibly limited kind of life that artificial systems have.

The argument I am developing is that there may be limits to AI, not because the hypothesis of `strong AI' is false, but for more mundane reasons. The argument, which I develop further on my website, is that you can't expect to build single isolated AIs, alone in laboratories, and get anywhere. Unless the creatures can have the space in which to evolve a rich culture, with repeated social interaction with things that are like them, you can't really expect to get beyond a certain stage. If we work up from insects to dogs to Homo erectus to humans, the AI project will, I claim, fall apart somewhere around the Homo erectus stage because of our inability to provide them with a real cultural environment. We cannot make millions of these things and give them the living space in which to develop their own primitive societies, languages and cultures. We can't because the planet is already full. That's the main argument, and the reason for the title of this section.

So what will happen?

What will happen over the next thirty years is that we will see new types of animal-inspired machines that are more `messy' and unpredictable than any we have seen before. These machines will change over time as a result of their interactions with us and with the world. These silent, pre-linguistic, animal-like machines will be nothing like humans, but they will gradually come to seem like a strange sort of animal. Machines that learn, familiar to researchers in labs for many years, will finally become mainstream and enter the public consciousness.

What category of problems could animal-like machines address? The kind of problems we are going to see this approach tackle will be problems that are reasonably tolerant of noise and error and that do not demand abstract reasoning. A special focus will be behavior that is easier to learn than to articulate - most of us know how to walk, but we couldn't possibly tell anyone how we do it. Similarly with grasping objects and other such skills. These things involve building neural networks, filling in state-spaces and so on, and cannot be captured as a set of rules that we speak in language. You must experience the dynamics of your own body in infancy and thrash about until the changing internal numbers and weights start to converge on the correct behavior. Different bodies mean different dynamics. And robots that can learn to walk can learn other sensorimotor skills that we can neither articulate nor perform ourselves.
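To make that "thrash about until the weights converge" idea concrete, here is a minimal sketch - purely illustrative, with an invented reward function standing in for a real robot's body - of a controller whose weights are randomly perturbed and where only the perturbations that improve the measured behavior are kept.

```python
import random

# A minimal sketch of learning by "thrashing about": random hill-climbing on
# the weights of a tiny controller. The reward function is an invented stand-in
# for "how well did the body walk with these weights?"; a real robot would
# measure something like distance travelled before falling over.

def reward(weights):
    # Hypothetical smooth reward with a single peak, for illustration only.
    target = [0.3, -0.7, 0.5]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def learn(steps=5000, noise=0.1):
    weights = [random.uniform(-1, 1) for _ in range(3)]   # start with random numbers
    best = reward(weights)
    for _ in range(steps):
        trial = [w + random.gauss(0, noise) for w in weights]   # thrash about
        r = reward(trial)
        if r > best:               # keep only changes that improve the behavior
            weights, best = trial, r
    return weights, best

if __name__ == "__main__":
    w, r = learn()
    print("converged weights:", [round(x, 2) for x in w], "reward:", round(r, 3))
```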

What are examples of this type of problem? Well, for example, there are already autonomous lawnmowers that will wander around gardens all afternoon. The next step might be autonomous vacuum cleaners inside the house (though clutter and stairs present immediate problems for wheeled robots). There are all sorts of other uses for artificial animals in areas where people find jobs dangerous or tedious - land-mine clearance, toxic waste clearance, farming, mining, demolition, finding objects and robotic exploration, for example. Any job done currently or traditionally by animals would be a focus. We are already familiar, from the Mars Pathfinder and other examples, with the idea that we can send autonomous robots not only to inhospitable places, but also send them there on cheap one-way `suicide' missions. (Of course, no machine ever `dies', since we can restore its mind in a new body on earth after the mission.)

Whether these types of machines have a future in the home is an interesting question. If it ever happens, I think it will be because the robot is treated as a kind of pet, so that a machine roaming the house is regarded as cute rather than creepy. Machines that learn tend to develop an individual, unrepeatable character which humans can find quite attractive. There are already a few software creatures - such as the Windows-based game Creatures, and the little Tamagotchi toys - whose personalities people can get very attached to. A major part of the appeal is the unique, fragile and unrepeatable nature of the software beings you interact with. If your Creature dies, you may never be able to raise another one like it again. Machines in the future will be similar, and the family robot will after a few years be, like a pet, literally irreplaceable.

What will hold things up? There are many things that could hold up progress, but hardware is the one that is staring us in the face at the moment. Nobody is going to buy a robotic vacuum cleaner that costs £5000, no matter how many big cute eyes are painted on it or even if it has a voice that says, "I love you". Many conceptual breakthroughs will be needed to create artificial animals. The major theoretical issue to be solved is probably representation: what is language, and how do we classify the world? We say `That's a table' and so on for different objects, but what does an insect do? What is going on in an insect's head when it distinguishes objects in the world? What information is being passed around inside? What kind of data structures is it using? Each robot will have to learn an internal language customized for its sensorimotor system and the particular environmental niche in which it finds itself. It will have to learn this internal language on its own, since any representations we attempt to impose on it, coming from a different sensorimotor world, will probably not work.

Predictions

Finally, what will be the impact on society of animal-like machines? Let's make a few predictions that I will later look back and laugh at.

First, family robots may be permanently connected to wireless family intranets, sharing information with the people you want to know where you are. You may never need to worry whether your loved ones are all right when they are late or far away, because you will be permanently connected to them. Crime may get difficult if all family homes are full of half-aware, loyal family machines. In the future, we may never be entirely alone, and if the controls are in the hands of our loved ones rather than the state, that may not be such a bad thing.

Slightly further ahead, if some of the intelligence of the horse can be put back into the automobile, thousands of lives could be saved, as cars become nervous of their drunk owners, and refuse to get into positions where they would crash at high speed. We may look back in amazement at the carnage tolerated in this age, when every western country had road deaths equivalent to a long, slow-burning war. In the future, drunks will be able to use cars, which will take them home like loyal horses. And not just drunks, but children, the old and infirm, the blind, all will be empowered.

Eventually, if cars were all (wireless) networked, and humans stopped driving altogether, we might scrap the vast amount of clutter all over our road system - signposts, markings, traffic lights, roundabouts, central reservations - and return our roads to a soft, sparse, eighteenth-century look. All the information - negotiation with other cars, traffic and route updates - would come over the network invisibly. And our towns and countryside would look so much sparser and more peaceful.

Conclusion

I've been trying to give an idea of how artificial animals could be useful, but the reason that I'm interested in them is the hope that artificial animals will provide the route to artificial humans. But the latter is not going to happen in our lifetimes (and indeed may never happen, at least not in any straightforward way).

In the coming decades, we shouldn't expect that the human race will become extinct and be replaced by robots. We can expect that classical AI will go on producing more and more sophisticated applications in restricted domains - expert systems, chess programs, Internet agents - but any time we expect common sense, we will continue to be disappointed, as we have been in the past. At vulnerable points these systems will continue to be exposed as `blind automata'. Animal-based AI or AL, on the other hand, will go on producing stranger and stranger machines, less rationally intelligent but more rounded and whole, in which we will start to feel that there is somebody at home, in a strange animal kind of way. In conclusion, we won't see full AI in our lives, but we should live to get a good feel for whether or not it is possible, and how it could be achieved by our descendants.

Applications of AI


game playing

You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation--looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
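To give a rough feel for what that brute-force computation looks like, here is a minimal minimax sketch. It is purely illustrative: the move generator and evaluation function below are invented placeholders, and real chess programs add alpha-beta pruning and carefully tuned evaluation on top of this basic scheme.

```python
# Toy minimax search: examine every position reachable within a fixed depth
# and back the scores up the tree. `moves` and `evaluate` are hypothetical
# stand-ins for a real game's move generator and heuristic evaluation.

def minimax(pos, depth, maximizing, moves, evaluate):
    children = moves(pos)
    if depth == 0 or not children:
        return evaluate(pos)
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# A tiny made-up game: a position is a number, each move adds 1 or 2, and the
# score is the position modulo 3. It means nothing; it just makes the search run.
if __name__ == "__main__":
    score = minimax(0, depth=6, maximizing=True,
                    moves=lambda p: [p + 1, p + 2],
                    evaluate=lambda p: p % 3)
    print("backed-up score:", score)
```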

speech recognition

In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

understanding natural language

Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

computer vision

The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

expert systems

A ``knowledge engineer'' interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.
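The underlying mechanism is easy to sketch. The rules below are invented placeholders (nothing like MYCIN's actual medical knowledge); the point is only to show how if-then rules elicited by a knowledge engineer are chained forward from observed facts, and how anything outside the rules' ontology simply does not exist for the program.

```python
# Minimal forward-chaining rule engine: facts in, conclusions out.
# The rules are invented for illustration; a real expert system such as MYCIN
# had hundreds of carefully elicited rules (plus certainty factors).

RULES = [
    ({"fever", "gram_negative"}, "suspect_bacteremia"),
    ({"suspect_bacteremia", "penicillin_allergy"}, "recommend_alternative_antibiotic"),
    ({"suspect_bacteremia"}, "recommend_blood_culture"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule, add its conclusion
                changed = True
    return facts

if __name__ == "__main__":
    print(forward_chain({"fever", "gram_negative", "penicillin_allergy"}))
```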

heuristic classification

One of the most feasible kinds of expert system given the present knowledge of AI is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
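A minimal sketch of such a classifier might look like the following; the weights, thresholds and category names are all invented for illustration, but the structure - several independent sources of information combined into one of a fixed set of categories - is the essential point.

```python
# Heuristic classification sketch: combine several information sources into
# one of a fixed set of categories. All weights and thresholds are invented.

def classify_purchase(amount, months_since_late_payment, fraud_reports_at_merchant):
    score = 0
    score += -2 if amount > 1000 else 1                     # size of the purchase
    score += 2 if months_since_late_payment > 12 else -1    # cardholder's record
    score += -3 if fraud_reports_at_merchant > 0 else 1     # merchant's history
    if score >= 3:
        return "accept"
    if score >= 0:
        return "refer to a human"
    return "reject"

if __name__ == "__main__":
    print(classify_purchase(amount=250, months_since_late_payment=24,
                            fraud_reports_at_merchant=0))   # -> accept
    print(classify_purchase(amount=4000, months_since_late_payment=2,
                            fraud_reports_at_merchant=3))   # -> reject
```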

Branches of AI


Here's a list, but some branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.

logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96b] lists some of the concepts involved in logical AI. [Sha97] is an important text.
search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem-proving program. Discoveries are continually made about how to do this more efficiently in various domains. (A small sketch of search guided by a heuristic function appears after this list.)
pattern recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
inference
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning. (A trivial sketch of default reasoning appears after this list.)
common sense knowledge and reasoning
This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.
learning from experience
Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.
ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s.
heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful.
genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
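As mentioned under the search entry, here is a small sketch of search guided by a heuristic function: an A*-style best-first search on a toy grid, with Manhattan distance as the heuristic. The grid and costs are invented; the point is only how the heuristic steers which possibilities get examined first.

```python
import heapq

# Heuristic search sketch (see the "search" and "heuristics" entries above):
# A*-style best-first search on a small invented grid ('#' marks a blocked cell).
# The heuristic estimates how far a node seems to be from the goal.

GRID = ["....#",
        ".##.#",
        "....."]

def neighbours(cell):
    r, c = cell
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def heuristic(cell, goal):
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])   # Manhattan distance

def best_first_search(start, goal):
    # Priority queue ordered by cost-so-far plus heuristic estimate.
    frontier = [(heuristic(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        for nxt in neighbours(cell):
            heapq.heappush(frontier, (cost + 1 + heuristic(nxt, goal),
                                      cost + 1, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    print(best_first_search((0, 0), (2, 4)))
```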
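And here is the bird/penguin example from the inference entry reduced to a trivial default-reasoning sketch: the "can fly" conclusion is drawn by default and withdrawn once the contrary fact arrives. A real non-monotonic logic is far more general; this only illustrates the withdrawal behavior.

```python
# Default (non-monotonic) reasoning sketch: conclude "can fly" by default,
# withdraw the conclusion when contrary evidence (being a penguin) is added.

def can_fly(facts):
    if "penguin" in facts:     # contrary evidence defeats the default
        return False
    if "bird" in facts:        # default rule: birds fly
        return True
    return False

if __name__ == "__main__":
    facts = {"bird"}
    print(can_fly(facts))      # True  - inferred by default
    facts.add("penguin")
    print(can_fly(facts))      # False - the earlier conclusion is withdrawn
```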

What is artificial intelligence?


Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
Q. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
Q. Is intelligence a single thing so that one can ask a yes or no question ``Is this machine intelligent or not?''?
A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered ``somewhat intelligent''.
Q. Isn't AI about simulating human intelligence?
A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
Q. What about IQ? Do computer programs have IQs?

A. No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child's age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, ``digit span'' is trivial for even extremely limited computers.

However, some of the problems on IQ tests are useful challenges for AI.
Q. What about other comparisons between human and computer intelligence?
Arthur R. Jensen, a leading researcher in human intelligence, suggests ``as a heuristic hypothesis'' that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to ``quantitative biochemical and physiological conditions''. I see these as speed, short-term memory, and the ability to form accurate and retrievable long-term memories.
Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse.
Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don't develop till they are teenagers may be in, and some abilities possessed by two year olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.
Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently.
Q. When did AI research start?
A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
Q. Does AI aim to put the human mind into the computer?
A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I'm not sure anyone is serious about imitating all of them.

History of AI

The Beginnings of AI:



Although the computer provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback theory is the thermostat: it controls the temperature of an environment by measuring the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms - mechanisms that could possibly be simulated by machines. This idea influenced much of the early development of AI.
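The thermostat loop is simple enough to write down directly. The sketch below uses invented numbers and a crude model of the house; the only point is the feedback structure - measure, compare with the desired value, respond.

```python
# Thermostat feedback loop sketch: measure, compare with the desired value,
# respond. All numbers are invented; only the feedback structure matters.

def thermostat_step(actual, desired, heater_on):
    if actual < desired - 0.5:     # too cold: turn the heat up
        return True
    if actual > desired + 0.5:     # too warm: turn the heat down
        return False
    return heater_on               # close enough: leave things as they are

def simulate(hours=8, desired=20.0):
    actual, heater_on = 15.0, False
    for hour in range(hours):
        heater_on = thermostat_step(actual, desired, heater_on)
        actual += 1.5 if heater_on else -0.7   # crude model of the house warming/cooling
        print(f"hour {hour}: {actual:.1f} C, heater {'on' if heater_on else 'off'}")

if __name__ == "__main__":
    simulate()
```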

In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch that would most likely result in the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.
In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to Dartmouth College in New Hampshire for "The Dartmouth Summer Research Project on Artificial Intelligence." From that point on, because of McCarthy, the field would be known as Artificial Intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI, and served to lay the groundwork for the future of AI research.