
AI Era Requires a Probabilistic Mindset

You may have heard that AI (or cognitive) era applications are “probabilistic”. What does that really mean?

Let me illustrate with two hypothetical applications.

First Application

You hire me to write a billing application for your consulting firm.  You give me the list of all your employees and their hourly billing rates.  You also give me last year’s data, including the number of hours each consultant worked for each client account and the bills you generated.
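
The core of that first program is nothing but deterministic arithmetic.  Here is a minimal sketch (the consultants, rates, and hours are invented for illustration):

    # Minimal sketch of the deterministic billing computation described above.
    # The consultants, rates, and hours are invented placeholders.

    hourly_rates = {"Alice": 150.0, "Bob": 125.0}   # dollars per hour

    hours_worked = [                                 # (consultant, client, hours)
        ("Alice", "Acme Corp", 40),
        ("Bob",   "Acme Corp", 25),
        ("Alice", "Widget Co", 10),
    ]

    def compute_bill(client):
        """A bill is just the sum of rate * hours for everyone who worked for the client."""
        return sum(hourly_rates[consultant] * hours
                   for consultant, c, hours in hours_worked
                   if c == client)

    print(compute_bill("Acme Corp"))   # 150*40 + 125*25 = 9125.0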

I do my development and come back to you later and say “I’ve finished the application and testing shows it computes the correct amount for 90% of the bills”.  What do you say?  You say,

  • You’re not done yet
  • You’re not getting paid yet
  • Get back to work

This is a classic “procedural era” application.  We all expect — and demand — 100% correct answers for procedural era applications.

Second Application

Now let’s change the requirements a little bit. Sometimes you give discounts and sometimes you give freebies.  You do this because it makes clients happy — and happy clients are important to your business.  Sometimes you do this for your largest clients, because they bring you so much business.  Sometimes you do this for clients that are considering big orders, because you want to show how much you value them.  Sometimes you do this for “friends & family” clients, because increasing their happiness increases your own happiness.

Again, you give me the bills for the last year.  And again, I do my development and come back to you and say “I’ve finished the application and testing shows it computes the correct amount for 90% of the bills”.  What do you say this time?  You say,

  • This is fabulous
  • Here’s a bonus for such a high accuracy rate
  • I’ve got this other program I’d like you to write for me.

Why such a different response between these two similar applications?

This is a classic “cognitive era” application.  There’s no obvious formula for how to make clients happy.  An expectation of 100% correct answers is completely unrealistic.  For some medical diagnoses, even expert physicians only agree with each other about 85% of the time, so how can we expect a computer program to be correct 100% of the time?

The example illustrates that we must change our mindset as we move into the cognitive era.  While a few applications achieve NEAR (but not exactly) 100% accuracy (e.g., handwritten-digit classification), many successful cognitive era applications achieve well below 90% accuracy.  Often, 70% or 80% accuracy is the best we’ve been able to achieve.

Perfection is simply not a realistic goal in the cognitive era.

That’s why we say cognitive applications are PROBABILISTIC.
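
In practical terms, “correct” for this second application has to be measured statistically rather than demanded bill by bill.  Here is a minimal sketch of that kind of evaluation (the bills and the 1% “close enough” tolerance are invented for illustration):

    # Minimal sketch: measuring how often a learned billing model is "close enough".
    # The predicted bills, actual bills, and 1% tolerance are hypothetical.

    def accuracy(predicted_bills, actual_bills, tolerance=0.01):
        """Fraction of bills where the prediction is within `tolerance` of the actual amount."""
        hits = sum(
            abs(p - a) <= tolerance * a
            for p, a in zip(predicted_bills, actual_bills)
        )
        return hits / len(actual_bills)

    predicted = [9125.0, 4800.0, 7200.0, 3100.0, 5550.0]
    actual    = [9125.0, 5000.0, 7150.0, 3100.0, 6000.0]
    print(accuracy(predicted, actual))   # 0.6 -- three of the five bills are within 1%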


Scott N. Gerard

What is Cognitive Computing?

Cognitive computing is all the rage these days.  But what is it, really?  I’ve been thinking about it quite a bit lately, and I believe I have come to a few novel conclusions.

Wikipedia has a nice long article about Cognition.  It expansively covers a great many things that I would agree are “cognitive”, but not (yet) “cognitive computing”.  I’m interested in writing cognitive software, not in constructing a full, artificially intelligent “faux human”.  So I’ll focus on just cognitive computing.

Rob High, CTO of IBM’s Watson Group, defines cognitive computing as four “E”s.

  • Cognitive systems are able to learn their behavior through education;
  • they support forms of expression that are more natural for human interaction;
  • their primary value is their expertise;
  • they continue to evolve as they experience new information, new scenarios, and new responses;
  • and they do all of this at enormous scale.

Adaptive

I agree with education and evolve, although I see these two as similar concepts.  To make them fit my somewhat artificial classification system below, I rename this combined idea adaptive.

Ambiguity

However, I disagree with limiting the definition to human expression.  There are many processes that I believe require cognitive skills that are not naturally interpreted by humans.  Dolphin and bat echo-location are good examples; they are a kind of “seeing” but humans can’t do it.   Any application that can monitor the network communication into and out of an organization and correctly identify data leakage gets my vote for “cognitive”, even though humans can’t do it.

Ambiguity is a better criterion than human expression.

Many human expressions are difficult to interpret because they are ambiguous.  I offer the following two examples.

  • Natural language is very ambiguous.  The classic sentence “Time flies like an arrow; fruit flies like a banana” has many different possible interpretations.  Sentences can have ambiguous parses (is “time” a noun, or is it an adjective modifying “flies”? is “flies” a verb or a noun? etc.).  Individual words can be ambiguous too; choosing the intended meaning is commonly called word sense disambiguation (WSD).  Is “bass” a fish or a kind of musical instrument?
  • Human emotions are ambiguous.  They require interpreting facial expressions, body language, sarcasm, etc.  And people often disagree on the proper interpretation of a person’s emotion:  “Is Bob angry at me?”  “No, you know how he is.  He was just making a joke.”

To be more precise, an input is ambiguous when there are multiple output interpretations consistent with that input.  The goal is to determine which output interpretation(s) are, in some sense, most appropriate. Many elements (surrounding context, background knowledge, common sense, etc.) help decide which interpretations are most appropriate.

Pushing this idea further, we should stop framing discussions as structured data vs. unstructured data and start framing them as unambiguous data vs. ambiguous data.

From:  structured data vs. unstructured data

To:     unambiguous data vs. ambiguous data

There are many cases where structured vs. unstructured misses the point.  A row of structured data is easy to process not because it is physically separated into fields.  It is easy to process because there is only one way to interpret that row.  Structured data can even be ambiguous, in which case we need to “clean the data” (remove the ambiguity).  Java source code is just as “unstructured” as natural language text, but compilers are not “cognitive” because the Java programming language is unambiguous.

The fundamental problem is to accept an ambiguous input plus its available context, and search through the space of all possible interpretations for the most appropriate output(s).  That is, a cognitive process is a search process.
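
Here is a toy sketch of that search, using the “bass” example from above: enumerate the candidate senses and score each one against the surrounding context (the senses and cue words are invented):

    # Toy sketch: disambiguating "bass" by searching over candidate senses and
    # scoring each one against the surrounding context. Senses and cue words are invented.

    SENSES = {
        "bass/fish":       {"river", "caught", "fishing", "lake"},
        "bass/instrument": {"guitar", "played", "band", "amplifier"},
    }

    def best_sense(word_senses, context_words):
        """Return the candidate interpretation with the most overlap with the context."""
        scored = {
            sense: len(cues & context_words)
            for sense, cues in word_senses.items()
        }
        return max(scored, key=scored.get), scored

    context = {"he", "played", "the", "bass", "in", "a", "band"}
    print(best_sense(SENSES, context))
    # ('bass/instrument', {'bass/fish': 0, 'bass/instrument': 2})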

In the Programmable Era, programmers were able to resolve low levels of ambiguity by the seat of their pants, either because there were few possible interpretations or because interpretation resolution could be “factored” into a sequence of more or less independent resolution steps.  But as the amount of ambiguity increases, programmers are unable to satisfactorily resolve it by the seat of their pants.  In the Cognitive Era, programmers need Ambiguity Resolution Frameworks (ARFs) to help them process large amounts of ambiguity.  Machine learning is one kind of ARF: it takes as input multiple features (each of which can be understood by the programmer) and combines all the features together to narrow the field down to a few interpretations (note that I’m not requiring ARFs to perfectly resolve all ambiguity to a single interpretation).  The Cognitive Era is largely populated by cases where imperfect resolution of large interpretation spaces is an unavoidable consequence of the input’s irreducible ambiguity.
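
To make the ARF idea a little more concrete, here is a miniature sketch in the machine-learning style: several features, each one understandable by the programmer, combined into a single score per interpretation (the features and weights are invented, not learned from real data):

    import math

    # Sketch of an Ambiguity Resolution Framework in miniature: combine several
    # programmer-understandable features into one score per interpretation.
    # Feature values and weights are hypothetical, not learned from real data.

    def score(features, weights, bias=0.0):
        """Logistic combination of feature values -> probability-like score in (0, 1)."""
        z = bias + sum(weights[name] * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    weights = {"context_overlap": 1.5, "sense_frequency": 0.8, "topic_match": 1.2}

    interpretations = {
        "bass/fish":       {"context_overlap": 0.0, "sense_frequency": 0.6, "topic_match": 0.1},
        "bass/instrument": {"context_overlap": 1.0, "sense_frequency": 0.4, "topic_match": 0.9},
    }

    scores = {name: score(f, weights) for name, f in interpretations.items()}
    print(max(scores, key=scores.get), scores)
    # bass/instrument wins with a score of ~0.95 vs ~0.65 for bass/fish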

Action

I also disagree that expertise is a defining criterion for cognitive computing.  A better, more inclusive, criterion is action.

Only humans can accept and interpret expertise.  Requiring a cognitive system to output expertise necessarily forces a human “into the loop”.  While appropriate in some cases, it is wrong to require a human in the loop of every cognitive system.  Rather, we should encourage the development of autonomous systems that are able to act on their own.  The distinction between expertise and action is not completely black and white: Watson’s Jeopardy! system did both by ringing a buzzer (action) and providing a response (expertise).

Many years ago, IBM defined the “autonomic MAPE loop” consisting of four steps: M: monitor (sense), A: analyze, P: plan, and E: execute (act via effectors).  Not all cognitive systems must contain a MAPE loop, but I see it as more inclusive than the 4 E’s above.  Expertise is best characterized as the output of the Analyze step, which leaves a human to perform the Plan step.  The Observe-Interpret-Evaluate-Decide loop is similar to the MAPE loop, with Observe=M, Interpret & Evaluate=A, and Decide=P & E.  Both end with an action.
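
Here is a skeletal MAPE loop, just to make the four steps concrete (the sensor readings, the threshold, and the actions are placeholders):

    # Skeleton of an autonomic MAPE loop: Monitor, Analyze, Plan, Execute.
    # The sensor readings, threshold, and actions are hypothetical placeholders.

    readings = [{"queue_depth": 12}, {"queue_depth": 42}, {"queue_depth": 55}]

    def monitor(source):
        """M: sense the environment (stubbed here as canned readings)."""
        yield from source

    def analyze(observation):
        """A: interpret the observation -- roughly where 'expertise' lives."""
        return "overloaded" if observation["queue_depth"] > 30 else "healthy"

    def plan(assessment):
        """P: decide what to do about the assessment."""
        return "add_worker" if assessment == "overloaded" else "no_op"

    def execute(action):
        """E: act on the environment rather than handing a report to a human."""
        print("executing:", action)

    for observation in monitor(readings):
        execute(plan(analyze(observation)))
    # executing: no_op / add_worker / add_worker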

So instead of the 4 E’s, I suggest we define cognitive computing by the 3 A’s:  Adaptive, Ambiguous, and Action.

Humans Need Not Apply

Under the category of “technology is neither good nor bad; nor is it neutral”, I just watched a very interesting and well-done video about the impact of intelligent machine technology on our jobs.

In part, it compares horses and people.  When the automobile started entering our economy, horses might have said, “This will make horse-life easier and we horses can move to more interesting and easier jobs”.  But that didn’t happen; the horse population peaked in 1915 and has been declining ever since.  I’m sure we all agree that intelligent and cognitive applications will certainly replace some jobs.  The question is: will there be enough new jobs to keep humans fully employed?  Might unemployment rise to 45%, as the video suggests?  How many future job descriptions will contain the phrase “Humans Need Not Apply”?

What the video fails to discuss is how massive unemployment might be averted.  I’d like to see even some proposals or suggestions.  Do you have any ideas?

I would also like to think that I—a high-tech, machine-learning, cognitive-app AI technologist—would be immune to these kinds of changes.  But I’m less certain after watching this video.  You should definitely check it out.

2015 is the Mole-of-Bits Year

Take a look at this graphic from IDC.  It estimates the total number of bits in the world over time.  There are many things going on in this graphic.  It shows that enterprise data is certainly growing, but does not comprise the majority of data; sensor data and social media data far outstrip it.

It also shows that a huge fraction of all data contains uncertainty.  This has dramatic implications for old-school programmers.  Programming absolutely must continue to adopt new approaches to handle uncertain input data, particularly for emerging cognitive applications.  The traditional excuse of classifying ANY input errors as “garbage in” just won’t cut it any more.

But my favorite part of this graphic is the axes; forget the curves (how often does that happen?).  The x-axis shows time, with 2015 on the far right.  The y-axis shows the number of bits in the world.  For the chemists among you, 10 to the 23rd is essentially Avogadro’s number (6.02E23), which is the number of molecules in a “mole”.  What does this data mean?  Imagine you’re holding a tablespoon filled with water.  You’re holding roughly a mole of water molecules.  The chart above implies that this year, 2015, there will be one bit of data for EVERY molecule of H2O in that tablespoon.  To me, that is nothing short of INCREDIBLE and AWESOME.  When I was growing up, I remember trying to imagine how we would ever have such a gi-nor-mous number of macroscopic things.  Well, here we are, and in my lifetime.  I’m moved.
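
Here is the back-of-the-envelope arithmetic, for anyone who wants to check it (a US tablespoon is about 14.8 mL of water, and I am reading the chart’s 2015 value as roughly 10^23 bits):

    # Back-of-the-envelope check of the "mole of bits" claim.
    # A US tablespoon is ~14.8 mL; water is ~1 g/mL; molar mass of water is ~18.02 g/mol.

    AVOGADRO = 6.022e23          # molecules per mole
    tablespoon_grams = 14.8      # ~14.8 mL of water at ~1 g/mL
    molar_mass_water = 18.02     # grams per mole

    moles = tablespoon_grams / molar_mass_water   # ~0.82 mol
    molecules = moles * AVOGADRO                  # ~4.9e23 molecules

    bits_in_2015 = 1e23          # rough reading of the chart's y-axis for 2015
    print(f"{molecules:.2e} water molecules vs ~{bits_in_2015:.0e} bits")
    # Same order of magnitude: roughly one bit per water molecule in a tablespoon.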

So I hereby officially declare

2015 as the “mole of bits” year

Will Superintelligent AIs Be Our Doom?

I am quite focused on advancing computer science so it becomes more capable and able to solve more of our problems.  Over the last many decades, procedural programming enabled us to solve many broad classes of problems, but there are still many problems outside its grasp.  Artificial Intelligence (AI), aka cognitive computing, is one good way to approach many of the remaining problems.  So I spend a lot of time trying to advance these new technologies.

However, one of my favorite phrases is from Melvin Kranzberg: “Technology is neither good nor bad; nor is it neutral.”  So we (both ME and YOU) must always carefully consider the implications of our technologies.

Along those lines, I just read this excerpt titled Will Superintelligent AIs Be Our Doom?  I don’t believe we should refuse to explore a technology just because it might cause harm.  If that were the rule, we would never have developed most of the technologies that make up modern life.  I do believe there is a possibility AI could get away from us.  The take-away for me is: we need to consider both the wildly good and the wildly bad possibilities.  That at least helps us understand — as best we can — what might actually happen.