AI = Contextual Reasoning + Learning

Artificial Intelligence (AI) has four seasons: hype, disappointment, funding drought, and renewed interest. I’ve been involved in AI research for quite some time—I became a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 1993—and I’ve weathered several seasonal cycles. What I’m seeing now, however, is the most puzzling cycle yet; either I’m getting old and addled, or the current cycle is unique in its magnitude.
In these Big Data days, the big talk about AI’s potential reminds me of what happened at the peak of earlier cycles (see, for example, a recent Wall Street Journal article). Once again, the focus is on a single technical component—deep learning—and hopes seem to be building that it can solve many very hard problems easily and more or less magically. Almost all discussion suggests that deep learning is all we need because enough data is available, and deep learning can identify models in that data. What we do with the models—and how we solve real problems—is ignored.
Intelligence and AI
By nature, AI evokes a strong emotional reaction in most people. We humans like to consider intelligence our unique strength, the one that has made us the most powerful animal in nature. Now, we want to build machines that will also be intelligent, an attractive quest that motivates many young researchers to enter the AI field, even as it alienates those who fear the consequences.
Typically, intelligence is viewed as the ability to perceive the environment and take actions using reasoning. Taking these actions requires various capabilities, including logic, problem-solving, self-awareness, planning, and spatial reasoning.
To solve real-world tasks or function in the real world in general, an intelligent agent must possess both perception and cognition. Perception allows agents to detect objects and events in the environment using the available sensory data, while cognition lets them use reasoning to build self-awareness and solve problems. If an agent possesses only one aspect—say, logic (cognition)—its applicability will be limited.

AI Seasons: Roots and Current Reality

In earlier days, computing’s processing power and memory were too small to deal with perceptual issues. As we progressed in knowledge representation, rule-based systems, and so on, however, the AI seasons emerged. Good progress in one area created unrealistic expectations of an agent’s ability to solve problems that required as-yet undeveloped capabilities; this fueled disappointment, leading to funding drought. The most recent cycle was fueled by rule-based systems (commonly known as expert systems).
Currently, we are in the hype season, courtesy of Big Data. We now have powerful processing capabilities, which let us manage large data volumes and more accurately compute probability distributions for pattern classification. Similarly, with very powerful computing, we can build multilevel (deep) learning techniques without having to hand-specify features, which are based more on human experience than on detailed data analysis. This results in significantly better pattern-classification techniques, which researchers have successfully demonstrated recently, particularly in the area of concept detectors in computer vision. This, in turn, can inspire exaggerated expectations or hype, exemplified in a recent article I read.
Pattern classification techniques are required in other related areas, including the classification of consumers into specific demographics or interest groups—a powerful application that’s attractive to almost all major businesses.
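To make the probability-distribution view of pattern classification concrete, here is a minimal sketch (the features, labels, and data are hypothetical, chosen only for illustration): estimate per-class feature distributions from labeled examples, then classify a new observation by picking the class whose estimated distribution best explains it.

```python
from collections import Counter, defaultdict

def train(samples):
    """Estimate P(feature | label) from a list of (feature, label) pairs."""
    counts = defaultdict(Counter)
    for feature, label in samples:
        counts[label][feature] += 1
    model = {}
    for label, feature_counts in counts.items():
        total = sum(feature_counts.values())
        model[label] = {f: n / total for f, n in feature_counts.items()}
    return model

def classify(model, feature):
    """Return the label whose estimated distribution best explains the feature."""
    return max(model, key=lambda label: model[label].get(feature, 0.0))

# Toy labeled data: shapes observed for two object classes.
data = [("round", "ball"), ("round", "ball"), ("square", "box"),
        ("square", "box"), ("round", "box")]
model = train(data)
```

With more data, the estimated distributions sharpen—this is the sense in which Big Data improves pattern classification: the probabilities are computed more accurately, not that the classifier itself becomes more intelligent.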

Context in Learning

Improved pattern classification has made deep learning very hot. Deep learning, a popular machine learning technique, is a component of AI perception that helps build pattern recognition models.
In machine learning, the quality of training data determines the quality of the resulting model. To obtain quality training data, the machine’s designer first determines the context in which the machine will be working and then defines the machine’s application. For example, a recent deep learning approach separates a singer’s voice from the noise at a cocktail party. Here, the machine’s scope is the cocktail party; it must then be trained using spectrograms of the singer’s voice as training data.
Researchers thus carefully design the context for use, and each machine works effectively only in that context. During the machine’s training phase, the context for its successful operation is carefully coded in the learning algorithm’s function. Thus, where content was once king, a new leader is beginning to emerge.
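The point that a trained machine works effectively only in its designed context can be illustrated with a deliberately tiny sketch (the data, feature values, and labels are all hypothetical): a 1-nearest-neighbor model trained on party-context examples carries that context’s decision boundary with it everywhere, even into settings where the boundary no longer makes sense.

```python
def nearest_label(training, x):
    """Classify a 1-D feature value by its single nearest training example."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

# Training data gathered only in the "cocktail party" context:
# low energy values are background chatter, high values the singer's voice.
party_training = [(0.1, "noise"), (0.2, "noise"),
                  (0.8, "voice"), (0.9, "voice")]

in_context = nearest_label(party_training, 0.85)   # correctly "voice"
# In a quiet studio, 0.3 might already be the singer, but the model
# still applies its party-derived boundary and calls it "noise".
out_of_context = nearest_label(party_training, 0.3)
```

The failure is not in the learning algorithm; it is that the context was silently encoded in the training data, which is exactly why the designer must specify the context of use up front.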

Context is (Becoming) King

As sensors become more sophisticated, context is beginning to dominate all applications. The availability of many types of sensors has resulted in powerful smartphones and the Internet of Things, which are both making enormous volumes of contextual data available. In their book, Age of Context, Robert Scoble and Shel Israel note that “the changes ushered in by the Age of Context will be more significant and fundamental than what has occurred in the previous era, and they are likely to occur faster.”
With the emergence of smartphones and wearable sensors that keep getting better, smaller, and more capable of measuring almost anything, context is becoming more powerful and important. And it’s increasingly determining the relevance and role of the former king, content.
This has serious implications for AI. Most intelligent humans are self-aware and understand their context through various sensors and reasoning processes. That is, humans decide which trick to use in which context; as a result, they operate in their environments intelligently (see “The Perception of Apparent Motion” by Vilayanur Ramachandran and Stuart Anstis).
Thus, to build on the progress Big Data has inspired, we must first train systems for specific contexts and then use contextual reasoning and specific learning techniques to efficiently and effectively solve problems.

Intelligent Problem Solving

Problem solving usually requires multiple steps: we derive models from the given data, use appropriate techniques to solve specific problems, and finally achieve the goal. Models derived from data are important, as is the application of appropriate techniques. Such techniques depend on the goal as well as on the context at different problem-solving stages. Thus, contextual reasoning is as important as learning for deriving models.
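The multi-step view above—derive a model from data, then let the context select the technique applied to it—can be sketched as follows (the contexts, data, and technique choices are hypothetical stand-ins, not a real system):

```python
def learned_model(data):
    """Stand-in for a model derived from data: here, simply the mean."""
    return sum(data) / len(data)

def solve(data, context):
    """Contextual reasoning selects the technique; learning supplies the model."""
    if context == "noisy":
        # In a noisy context, a robust technique (the median) is preferable,
        # because a learned mean is easily skewed by outliers.
        ordered = sorted(data)
        return ordered[len(ordered) // 2]
    # In a clean context, the model learned from the data suffices.
    return learned_model(data)

clean_data = [1.0, 2.0, 3.0]
noisy_data = [1.0, 2.0, 100.0]
```

The learned component is identical in both branches; what changes with context is how it is used—which is the sense in which contextual reasoning is as important as learning.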
To delay—or, perhaps, avoid altogether—the next AI funding drought, we must find a way to balance contextual reasoning and deep learning. Luckily for AI researchers and enthusiasts, we now have Big Data to train our models and myriad contextual sources that can help create a holistic situation for training and using those models. The key is to maintain a focus on problem solving using all our resources, even as the season of hype over a single technique rages on.