What makes a system cognitive? Conclusion to the Whatson iteration.

In March I asked the question, Can I build Watson Jr in my basement? I performed two iterations of a basement build that I dubbed "What, son?" or "Whatson" for short. In the first iteration, I recreated the Question-Answer system outlined in Taming Text by Ingersoll, Morton and Farris. In the second iteration, I did a deep dive into a first build of my own, writing code samples for essential parts and charting out architectural decisions. Of course there is plenty more to be done, but I consider the second iteration complete. I have to put the next "Wilson" iteration on hold for a bit as my brain is required elsewhere. I would like to conclude this iteration with a final post that covers what I believe to be the most important question in this emerging field … What makes a system cognitive?

Here are some key features of a cognitive system:

Big Data. Cognitive systems can process large amounts of data from multiple sources in different formats. They are not limited to a well-defined domain of enterprise data but can also access data across domains and integrate it into analytics. One might call this feature "big open data" to reflect its oceanic size and readiness for adventure. You would expect this feature from an intelligent system, just as humans process large amounts of experience outside their comfort zone.

Unstructured Data. Structured data is handled nicely by relational database management systems. A cognitive system extracts patterns from unstructured data, just as human intelligence finds meaning in unstructured experience.

Natural Language Processing (NLP). A true artificial intelligence should be able to process raw sensory experience, and smart people are working on that. An entry-level cognitive system should at least be able to perform NLP on text. Language is a model of human intelligence, and the system should be able to understand Parts of Speech and grammar. The deeper the NLP processing, the smarter the system.

Pattern-Based Entity Recognition. Traditional database systems and even the modern linked data approach rely heavily on arbitrary unique identifiers, e.g., GUID, URI. A cognitive system strives to uniquely identify entities based on meaningful patterns, e.g., language features.

Analytic. Meaning is a two-step between context and focus, sometimes called figure and ground. Interpretation and analytics are cognitive acts, using contextual information to understand the meaning of the focus of attention.

Game Knowledge. Game knowledge is a higher-order understanding of context. A cognitive system does not simply spit out results, but understands the user and the stakes surrounding the question.

Summative. A traditional search system spills out a list of results, leaving the user to sort through them for relevance. A cognitive system reduces the results to the smallest number that satisfies the question, and presents them in summary format.

Adaptive. A cognitive system needs to be able to learn. This is expressed in trained models, and also in the ability to accept feedback. A cognitive system uses rules, but these rules are learned "bottom-up" from data rather than hard-wired "top-down" by a programmer. This approach is probabilistic and carries a margin of error. To err is human, and that margin of error is what allows the system to learn from new experience.
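To make the "bottom-up" idea concrete, here is a minimal sketch in Java (all names and numbers are hypothetical, not from the Whatson code) of a rule that is learned from feedback rather than hard-wired: a per-source reliability estimate that shifts as users confirm or reject answers.

```java
// Hypothetical sketch: a per-source reliability score learned "bottom-up" from
// user feedback instead of being hard-wired by a programmer.
import java.util.HashMap;
import java.util.Map;

public class SourceReliability {
    // For each source: {number of answers judged correct, total answers}.
    private final Map<String, double[]> counts = new HashMap<>();

    // Record feedback on whether an answer drawn from this source was correct.
    public void recordFeedback(String source, boolean correct) {
        double[] c = counts.computeIfAbsent(source, k -> new double[2]);
        if (correct) c[0]++;
        c[1]++;
    }

    // Learned reliability with a simple prior; it always carries a margin of error.
    public double reliability(String source) {
        double[] c = counts.getOrDefault(source, new double[2]);
        return (c[0] + 1.0) / (c[1] + 2.0);
    }

    public static void main(String[] args) {
        SourceReliability r = new SourceReliability();
        r.recordFeedback("wikipedia", true);
        r.recordFeedback("wikipedia", false);
        r.recordFeedback("wikipedia", true);
        System.out.println(r.reliability("wikipedia")); // 0.6, learned from data
    }
}
```

The point is not the arithmetic but the direction of flow: the rule improves as new experience arrives, without anyone reprogramming it.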

I believe the second Whatson iteration demonstrates these features.

QA Architecture III: Enrichment and Answer. Playing the game with confidence.

1-3 QA Enrich Answer

The Question and Answer Architecture of Whatson can be divided into three major processes. Previous posts covered I – Initialization and II – Natural Language Processing and Queries. This post describes the third and final process, III – Enrichment and Answer, as shown in the chart to the right.

  1. Confidence. At this point, candidate results have been obtained from data sources and analyzed for answers. The work has involved a number of Natural Language Processing (NLP) steps, each associated with a probability. Probabilities at different steps are combined to calculate an aggregate confidence, giving one final confidence value for each result obtained from each data source. The system must decide if it has the confidence to risk an answer (a minimal sketch of this decision follows the list). The risk depends on Game Rules. In Jeopardy, IBM’s Watson was penalized for a wrong answer.
  2. Spell Correction. If the confidence is low, the system can check the original question text for probable spelling mistakes. A corrected query can be resubmitted to Process 2 to obtain new search results, hopefully with higher confidence. Depending on the Game being played, a system might suggest spell correction before the first search is submitted, i.e., Did You Mean … ?
  3. Synonyms. If the confidence is still low, the system can expand the original question text with synonyms, e.g., ‘writer’ = ‘author’. The expanded query is resubmitted, with the intent of obtaining higher confidence in the results.
  4. Clue Enrichment Automatic. The system is built to understand unstructured text and respond with answers. This build can be used to enrich a question with additional clues. Suppose a person asked for the author of a particular quote. The quote might be cited by several blog authors, but the system could deduce that the question refers to the primary or original author.
  5. Clue Enrichment Dialog. If all else fails the system will admit it does not know the answer. Depending on the Game, the system could ask the user to restate the question with more clues.
  6. Answer. Once the confidence level is high enough, the system will present the Answer. In a Game like Jeopardy only one answer is allowed. Providing only one answer is also a design goal, i.e., the system should be smart enough to know the answer, and not return pages of search results. In some cases, a smart system should return more than one answer, e.g., if there are two different but equally probable answers. The format of the answer will depend on the Game. It makes sense to utilize templates to format the answer in natural language. Slapping on text-to-speech will be easy at this point.
  7. Evidence. Traditional search engines typically highlight keywords embedded in text snippets. The user can read the full document and try to evaluate why a particular result was selected. In a cognitive system, a single answer is returned based on a confidence value. The system can demonstrate why the answer was selected. A user might click on an “Evidence” link to see detailed information about the decision process and supporting documents.
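To illustrate steps 1 and 6 above, here is a minimal sketch in Java. The probabilities, rewards, and penalties are made-up numbers, and the method names are hypothetical; this is not the Whatson code, just the shape of the decision.

```java
// Hypothetical sketch: combine per-step probabilities into one confidence and
// let the Game Rules decide whether it is worth the risk to answer.
public class ConfidenceGate {
    // Aggregate confidence as the product of the probabilities attached to each
    // NLP step, e.g., entity recognition, focus classification, result matching.
    static double aggregate(double... stepProbabilities) {
        double confidence = 1.0;
        for (double p : stepProbabilities) confidence *= p;
        return confidence;
    }

    // Answer only when the expected value of answering is positive. In a
    // Jeopardy-style game a wrong answer costs as much as a right one earns.
    static boolean shouldAnswer(double confidence, double reward, double penalty) {
        return confidence * reward - (1.0 - confidence) * penalty > 0;
    }

    public static void main(String[] args) {
        double c = aggregate(0.95, 0.90, 0.80);        // roughly 0.68
        System.out.println(shouldAnswer(c, 1.0, 1.0)); // true: risk the answer
        System.out.println(shouldAnswer(c, 1.0, 5.0)); // false: try spell
                                                       // correction or synonyms first
    }
}
```

When the gate says no, the system drops into the spell correction, synonym, and clue enrichment loops described in steps 2 through 5.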

This post concludes the description of the three processes in The Question and Answer Architecture of Whatson.

QA Architecture II: Natural Language Processing and Queries. Context-Focus pairing of the question and results.

The Question and Answer Architecture of Whatson can be divided into three major processes: I – Initialization, II – Natural Language Processing and Queries, and III – Enrichment and Answer. This post describes the second process, as shown in the chart:

1-2-QA-NLP

  1. Context for the Question. There are two pairs of green Context and Focus boxes. The first pair is about Natural Language Processing (NLP) for the Question Text. Context refers to all the meaningful clues that can be extracted from the question text. The Initialization process determined that the domain of interest is English Literature. In this step, custom NLP models will be used to recognize domain entities: book titles, authors, characters, settings, quotes, and so on.
  2. Focus for the Question. The Context provides known facts from the question and helps determine what is not known, i.e., the focus. The Focus is classified as a type, e.g., a question about an author, a question about a setting.
  3. Data Source Identification. Once the question has been analyzed into entities, the appropriate data sources can be selected for queries. The Data Source Catalog associates sources with domain entities (see the sketch after this list). More information about the Catalog can be found under the discussion of the Tank-less architecture.
  4. Queries. Once the data sources have been identified, queries can be constructed using the Context and Focus entities as parameters. Results are obtained from each source.
  5. Parts of Speech. Basic parts of speech (POS) analysis is performed on the results, just like in the Initialization process.
  6. Context for the Results. The second pair of green Context and Focus boxes is for the Results text. Domain entities are extracted from the results. Now the question and answer can be lined up to find relevant results.
  7. Focus for the Results. The final step is to resolve the focus, asked by the question and hopefully answered by the result. The basic work is matching up entities in the Question and Results. Additional cognitive analysis may be applied here.
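To make steps 3, 4, and 7 concrete, here is a minimal sketch in Java. The catalog entries, source names, and entity type labels are hypothetical; a real Data Source Catalog would carry richer descriptions and search interfaces.

```java
// Hypothetical sketch: pick data sources from the Question's Context entities,
// then check whether a Result contains an entity of the Focus type.
import java.util.*;

public class SourceSelection {
    // The Data Source Catalog associates each source with the domain entity
    // types it can answer about.
    static final Map<String, Set<String>> CATALOG = Map.of(
        "library-api",   Set.of("BOOK_TITLE", "AUTHOR"),
        "quotes-api",    Set.of("QUOTE", "AUTHOR"),
        "geography-api", Set.of("LOCATION"));

    // Select every source that covers at least one entity type from the question.
    static List<String> selectSources(Set<String> questionEntityTypes) {
        List<String> selected = new ArrayList<>();
        for (var entry : CATALOG.entrySet())
            if (!Collections.disjoint(entry.getValue(), questionEntityTypes))
                selected.add(entry.getKey());
        return selected;
    }

    // A result is relevant when its extracted entities include the Focus type.
    static boolean answersFocus(String focusType, Set<String> resultEntityTypes) {
        return resultEntityTypes.contains(focusType);
    }

    public static void main(String[] args) {
        Set<String> context = Set.of("BOOK_TITLE"); // "The Call of the Wild"
        String focus = "AUTHOR";                    // what the question asks for
        System.out.println(selectSources(context)); // [library-api]
        System.out.println(answersFocus(focus, Set.of("AUTHOR", "BOOK_TITLE"))); // true
    }
}
```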

The third and final post will describe how the system evaluates results before offering an answer.

QA Architecture: Initialization. The solution is in the question.

1-1 QA Detection

The Question and Answer Architecture of Whatson can be divided into three major processes. The first process may be called Initialization, and is shown in the chart to the left. It involves the following steps:

  1. Accept the Question. A user asks a question, “Who is the author of The Call of the Wild?” Everything flows from the user question. One might say, the solution is in the question. It is assumed that the question is in text format, e.g., from an HTML form. A fancier system might use voice recognition. The user can enter any text. It is assumed at the beginning that the literal text is entered correctly, i.e., no typos, and that there are sufficient clues in the question to find the answer. If these conditions prove wrong, a later step will be used to correct and/or enrich the original question text.
  2. Language Detection. The question text is used to detect the user’s language. The cognitive work performed by the system is derived from its knowledge of a particular language. Dictionaries, grammar, and models are all configured for individual languages. The language to be used for analysis must be selected right at the start.
  3. Parts of Speech. Once we know the language of the question, the right language dictionary and models can be applied to obtain the Parts of Speech that will be used to do Natural Language Processing (NLP).
  4. Domain Detection. A typical NLP application will use English language models to perform tasks such as Named Entity Recognition, the identification of common entities such as People, Locations, Organizations, etc. This common level of analysis is fine for many types of questions, but there are limitations. How can a Person detector know the difference between an Author and a Character? I have shown how to build a custom model for Book Title identification. My intent is to build custom models for all elements of the subject domain. The current domain of interest is English literature, but a system should use the question text to identify other domains too (a naive sketch of language and domain detection follows the list).
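As a toy illustration of steps 2 and 4, here is a minimal sketch in Java. The hint-word lists are hypothetical stand-ins for real dictionaries and trained models, which is all the sketch is meant to suggest.

```java
// Hypothetical sketch: naive language and domain detection by counting hint
// words in the question text, standing in for real trained models.
import java.util.*;

public class Initialization {
    static final Map<String, Set<String>> LANGUAGE_HINTS = Map.of(
        "en", Set.of("who", "the", "of", "is"),
        "fr", Set.of("qui", "le", "de", "est"));

    static final Map<String, Set<String>> DOMAIN_HINTS = Map.of(
        "english-literature", Set.of("author", "novel", "character", "quote"),
        "geography",          Set.of("capital", "river", "border"));

    // Score each candidate by how many of its hint words appear in the question.
    static String detect(Map<String, Set<String>> hints, String question) {
        List<String> words = Arrays.asList(question.toLowerCase().split("\\W+"));
        String best = "unknown";
        long bestScore = 0;
        for (var entry : hints.entrySet()) {
            long score = words.stream().filter(entry.getValue()::contains).count();
            if (score > bestScore) { bestScore = score; best = entry.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        String q = "Who is the author of The Call of the Wild?";
        System.out.println(detect(LANGUAGE_HINTS, q)); // en
        System.out.println(detect(DOMAIN_HINTS, q));   // english-literature
    }
}
```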

The next post will describe how to use the inputs for NLP.

Hammering out the Question and Answer Architecture. The big picture.

I settled on the Tankless option for the overall architecture — see diagram and discussion. In that architecture, the Question and Answer piece was one major component. I need to hammer out the details of that component because it has the most complexity, naturally. The following is the complete picture of the Question and Answer Architecture. On the left is the flow from the original question text, to the natural language processing and querying steps in the middle, to the clue enrichment and final answer on the right. All of these pieces need explanation. I will be presenting and discussing the pieces in three posts. Stay tuned.

1 Question and Answer Architecture

“… until the fingers let go of their numbers …”

I wish to grow dumber,
to slip deep into woods that grow blinder
with each step I take,
until the fingers let go of their numbers
and the hands are finally ignorant as paws.
Unable to count the petals, I will not know who loves me,
who loves me not.
Nothing to remember, nothing to forgive,
I will stumble into the juice of the berry, the shag of bark,
I will be dense and happy as fur.

Woods. A poem by Noelle Oxenhandler.

(thanks MLvS)

The “Tank-less” architecture is better suited to dynamic adaptation, to learning

Quick recap. The Hot Water Tank architecture for a Question-Answer system saves external content in a massive internal index along with Natural Language Processing (NLP) tags. The alternative Tank-less architecture does not have an index. It looks up external content on the fly, using a lightweight data source catalog. NLP is also performed on the fly. Here’s a comparison of the two architectures, with the preferred option for each attribute marked in green.

Search Speed. Tank: fetching local network content is faster (green). Tank-less: fetching live content across networks will usually be slower. Re: cognitive systems: speed is important for the perception of intelligence.

Availability. Tank: fetching local network content is more reliable (green). Tank-less: fetching live content across networks will be interrupted more often. Re: cognitive systems: if a system fails to respond, it will not seem very smart.

Currency. Tank: background crawling and indexing is a scheduled operation, which causes stale data. Tank-less: always fetches the most current data (green). Re: cognitive systems: currency is an important attribute of intelligence, and it is vital when using social media; Twitter, for example, is valued for the speed at which it can break news.

Disk Space. Tank: terabytes of content. Tank-less: lightweight (green). Re: cognitive systems: a Tank system could be a cloud service accessed by a smart-phone, but a Tank-less system can be installed locally on a smart-phone.

Adaptivity. Tank: the index is populated with NLP tags; change the tags and the index must be entirely rebuilt. Tank-less: no index, so NLP model updates can be applied immediately (green). Re: cognitive systems: fast model updates mean fast learning.

Open Standards. Tank: the index has a schema, with tight coupling between content and queries, like a traditional RDBMS. Tank-less: the catalog concept fits with open metadata and open data, and consumes loosely coupled web services (green). Re: cognitive systems: open standards fit better with the semantic web and other machine intelligence technologies.

If I were to just count up the number of green boxes, it appears that I should favour the Tank-less architecture. Not so fast. The first two attributes — speed and availability — are very important. The Tank delivers fast, reliable answers. It is a proven approach. Frankly, if I were building a QA system for a customer today, I would recommend the Hot Water Tank architecture.

Whatson is a basement build, an experiment for my own research. I have a strong hunch that I can build a smarter cognitive system with the Tank-less architecture. Some of the features in green that win out for the Tank-less approach are the ones that really matter for a cognitive system. Present-day Question-Answer systems are not adaptive. They do not learn unless someone goes in and programs new rules. The Tank-less approach is better suited to dynamic adaptation. Also, the embrace of open standards suggests the possibility that a system could actively crawl the web to learn from it. Sound like science fiction? We’ll see.

Special Topic: Autonomy vs. Distributed Intelligence

A Tank architecture might be preferred for a cognitive system for another reason — autonomy. Since everything is stored locally, a Tank system does not depend on external resources. This attribute has already been evaluated with regard to speed and availability, but some think that autonomy is itself an attribute of intelligence. Is it? In the movie Transcendence, the artificial intelligence was called PINN, an acronym for Physically Independent Neural Network.

Personally, I do not think of human intelligence as autonomous. Consider transactive memory, the concept that memory is a social structure. We could spend a lot of time memorizing entries from Wikipedia. Certainly, if the Internet were to go down our knowledge would be safe in our heads. It is likely more efficient to store a pointer in our head to interesting entries in Wikipedia. If the time comes that we need details, we can look up an entry, updated by others with the latest information. In Natural Born Cyborgs, Clark argues persuasively that human intelligence is a function of our brains plus technologies operating outside of us, e.g., calculators, computers. For these reasons, I view intelligence as distributed in nature. The Tank-less architecture fits with that view.

“Tank-less” architecture of a Cognitive system. Search and analyze on demand.

As I move to the end of this iteration of Whatson, I am stalling on the final step involving Answer Type computation. In the Hot Water Tank architecture of a Question-Answer system, all the source data gets crawled and indexed internally. During indexing, Natural Language Processing (NLP) is applied to the data, tagging Person entities, Location entities, and others. This tagging is key to answering user questions correctly. You see, questions get analyzed using similar NLP. An Answer Type is computed to figure out what kind of question is being asked and what kind of answer will do. Is it a Person question? A Location question? Correct answers can be obtained by matching the Answer Type to the content tagged during indexing.

tankless architecture 2

The Hot Water Tank architecture is solid, but I can’t help feeling it is too rigid, like Relational Database design with all its unique identifiers and keys. Relational Database design is also very effective at returning accurate results, as long as you are only asking canned questions about a known range of content. In this post, I describe an alternative “Tank-less” architecture.

Obviously, I am playing on the Tank/Tank-less options for heating water in one’s home. The Tank option heats a large quantity of water in advance, waiting for you to turn the tap on. Similarly, in the Tank architecture, a massive first crawl of all external data source content is performed, followed by a massive first indexing of that content, just like filling a hot water tank. Updates to the index can be performed on just the delta, but changes to the NLP model may require complete re-indexing.

The Tank-less option heats water on demand. In the Tank-less architecture, the massive index is replaced with a lightweight Data Source Catalog, containing structured descriptions about data sources and programmatic interfaces to search the sources. When a user submits a question, the text is still analyzed using NLP and an Answer Type is still computed. What is new is an extra step in which the system uses the NLP and the Data Source Catalog to choose the appropriate sources for answers. A small amount of content is retrieved using the search interfaces in the Catalog. NLP tagging is then applied to just that small amount of content. Correct answers can be obtained by matching the Answer Type to the content just tagged.
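To make the on-demand flow concrete, here is a minimal sketch in Java. The CatalogEntry record, the source name, and the stubbed search call are hypothetical; the point is that the catalog stays lightweight and NLP tagging is applied only to the few passages fetched live.

```java
// Hypothetical sketch: a lightweight Data Source Catalog entry whose search
// interface is invoked on demand, instead of keeping a massive pre-built index.
import java.util.*;
import java.util.function.Function;

public class TanklessLookup {
    // Each catalog entry describes a source and knows how to search it live.
    record CatalogEntry(String name, Set<String> coveredTypes,
                        Function<String, List<String>> search) {}

    public static void main(String[] args) {
        // A stub search interface standing in for a real web-service call.
        CatalogEntry library = new CatalogEntry(
            "library-api", Set.of("AUTHOR", "BOOK_TITLE"),
            query -> List.of("The Call of the Wild was written by Jack London."));

        String answerType = "AUTHOR"; // computed from the question by NLP

        // Choose the source, fetch a small amount of content on the fly, and
        // apply NLP tagging (stubbed here) to just that content. Matching the
        // Answer Type to the tagged entities yields the answer.
        if (library.coveredTypes().contains(answerType)) {
            for (String passage : library.search().apply("The Call of the Wild author")) {
                System.out.println(passage);
            }
        }
    }
}
```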

Replacing the massive index with a lightweight catalog is an experimental approach. The catalog must contain sufficient description of the data sources to make accurate choices about where to search for answers. In addition, the description must be structured to allow for automated choices of the sources. I’m not sure yet if this will work best in the long run, but I have a good feeling about it. In my next post, I will weigh the advantages and disadvantages of both architectures. It will be no surprise that I favour the Tank-less architecture for a cognitive system.