
On Intelligence

Last week I went to Singapore for business and endured some dang long flights with a lot of time on my hands.  I tried watching the “Anchorman 2” movie, but it was just too off the wall (unintelligent?) for me, so I gave up on it.  Some of the other movies, like “Jack Ryan: Shadow Recruit”, were better.  But I still had a lot of time, so I turned to reading “On Intelligence” by Jeff Hawkins.

In the early chapters, where he was poking holes in a number of established approaches to intelligence, I was a bit skeptical.  But then, as he settled into his memory-prediction framework, he started to win me over with his different view.

Hawkins talks in depth about the biological processes in the human neocortex, which was interesting.  But the most interesting idea to me was his description of a “memory-prediction framework”.  Basically, this framework includes the obvious case of signals flowing from sensors up to higher cognitive levels, plus the less obvious case of signals flowing down from higher to lower cognitive levels.  Each cognitive level remembers what has previously occurred and predicts what is likely to occur next.  These cognitive levels detect and predict “sequences of sequences” and “structures of structures”.  Predictions allow for missing or garbled sensor data to be filled in automatically. There is also an exception case, where the prediction from above is at odds with the sensor data from below.  Exceptions also flow up the cognitive hierarchy until they are handled at some level.  If they flow high enough, we become aware of them as “something’s not right”.
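To make the idea concrete, here is a toy sketch of my own (not Hawkins’s model, and nothing like a real cortical column): each level remembers which token followed which, predicts the next one, and passes a surprise upward when its prediction is violated.

```python
from collections import defaultdict

class Level:
    """One cognitive level: remembers transitions and predicts the next token."""
    def __init__(self):
        self.memory = defaultdict(lambda: defaultdict(int))  # prev -> {next: count}
        self.prev = None

    def predict(self):
        # Most frequently remembered successor of the last token seen, or None.
        if self.prev is None or not self.memory[self.prev]:
            return None
        nexts = self.memory[self.prev]
        return max(nexts, key=nexts.get)

    def observe(self, token):
        # Compare the token to our prediction, learn the transition, report surprise.
        expected = self.predict()
        if self.prev is not None:
            self.memory[self.prev][token] += 1
        self.prev = token
        return expected is not None and expected != token

class Hierarchy:
    """Exceptions flow upward until some level is not surprised by them."""
    def __init__(self, depth=3):
        self.levels = [Level() for _ in range(depth)]

    def observe(self, token):
        for i, level in enumerate(self.levels):
            if not level.observe(token):
                return i          # handled quietly at level i
        return len(self.levels)   # escaped the top: "something's not right"

h = Hierarchy(depth=2)
for t in "abcabcabcabc":
    h.observe(t)                  # the familiar pattern never escalates
```

After training on the repeating pattern, an unexpected token (`h.observe("x")`) escalates past level 0.  A real implementation would compress familiar sequences into higher-level “names” rather than pass raw tokens upward, but the escalation mechanic is the point here.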

What I find most intriguing is how this memory-prediction framework might be implemented artificially.  While Hawkins addresses this only briefly, layered Hidden Markov Models (HMMs) seem like a useful direction.  Jim Spohrer tells me that Kurzweil’s book “How to Create a Mind” suggests exactly this, so I’m adding that to my reading list.
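As a tiny taste of that direction, here is the standard HMM forward algorithm in plain Python, computing how likely an observation sequence is under a single model.  The two-state weather/umbrella model and all its probabilities are invented for illustration; a layered architecture would stack many such models.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: likelihood of an observation sequence under an HMM."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# A made-up two-state model: hidden weather, observed umbrella use.
states  = ("rain", "sun")
start_p = {"rain": 0.5, "sun": 0.5}
trans_p = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit_p  = {"rain": {"umbrella": 0.9, "none": 0.1},
           "sun":  {"umbrella": 0.2, "none": 0.8}}

likelihood = forward(("umbrella", "umbrella"), states, start_p, trans_p, emit_p)  # 0.3515
```

Summed over every possible observation sequence of a given length, these likelihoods total 1, which is a handy sanity check when experimenting.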

I wonder how much training data it would take to train such a model.  I can’t help but think of a baby randomly jerking and flexing its arms and legs; boys endlessly throwing and catching balls; and kids riding their bikes for hours. All these activities would generate a lot of training data.

I also pondered the implications for service science.  Do service systems have a hierarchy of concepts similar to lower and higher cognitive functions?  What kind of “memories” and “predictions” do service systems have?  Service systems always have documents and policies, but that is not the kind of “active memory” Hawkins thinks is important.  Service employees clearly have internal memories, but are there active memories within small groups of employees?  Do departments or entire organizations have memories?  What are the important “invariant representations” of different service systems?  Should we focus on the differences between “Person arrives at front desk” vs. “Guest checks in” vs. “Guest is on a week-long vacation” vs. “Guest is satisfied with service”?  What are the common sequences (or even sequences of sequences) in an evolving customer encounter?  If we knew them, could we predict next events?  “Be prepared” seems like a more modest and achievable goal for a service system than the kind of moment-by-moment prediction Hawkins envisions.

If you’re particularly interested in bio-inspired intelligence, there is a lot of meat in this book to keep you busy and fascinated.  If you’re more interested in the artificial mechanisms for intelligence, like I am, focus on the memory-prediction framework.  Either way, I recommend this book.

Complexity in Service Science

Yesterday, I gave a short talk to the International Society of Service Innovation Specialists (ISSIP) about some of my Musings on Metrics for Service Systems.  

The basic approach in Eric J. Chaisson’s book Cosmic Evolution is grounded in entropy.  His fundamental metric is “free energy rate density”.   Systems “feed” off a flow of energy.  Chaisson computes his metric for a wide range of systems, from galaxies and stars to human bodies and societies, and he shows that larger values of the metric are correlated with our general notion of greater complexity.  If you’re a physicist, you’ll probably enjoy his development through a sequence of formulas; if you’re focused less on physics formulas and more on service science, and don’t want to read the whole book, then I recommend:

  • Chapter 3 is the real heart of the book
  • Chapter 4, in the Discussion section:  Evolution: A cultural perspective (p193)
  • Section: Are Complexity and Energy Rate Density the Right Terms? (p215)

My musings were how these ideas could or should be applied in a service science context.  Chaisson’s metric is based on free energy (measured in ergs) flow (per second) density (per gram).  All four of these words have meaning.  I generalize his metric to “usable consumable rate density”.

  • Consumable: Chaisson focuses specifically on energy as the thing systems consume. Below I suggest some other consumables that might be more appropriate for service systems.
  • Usable:  Not all energy is usable.  Total energy is conserved, but only free energy is available to do work.
  • Rate:  Systems feed off the flow of the consumable.
  • Density: To compare systems as vastly different as stars and human bodies, Chaisson divides the value from the three other words by the size of the system. This normalizes the metric to free energy rate per gram. 
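The metric itself is just a double normalization, and Chaisson’s striking result (a human body is thousands of times “denser” in energy flow than a star) falls out of simple unit arithmetic.  A quick sketch; the unit conversions are standard, but the power and mass figures are my own rough inputs, not Chaisson’s exact tables:

```python
def free_energy_rate_density(power_watts, mass_kg):
    """Chaisson's metric in erg/s/g (1 W = 1e7 erg/s, 1 kg = 1e3 g)."""
    return (power_watts * 1e7) / (mass_kg * 1e3)

# Rough inputs: the Sun's luminosity and mass, and a resting human body.
sun   = free_energy_rate_density(3.8e26, 2.0e30)  # ~2 erg/s/g
human = free_energy_rate_density(100.0, 70.0)     # ~14,000 erg/s/g
```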

I wondered about the following metrics:

Area          Usable Consumable               Rate                  Density
Physics       free energy                     per second            per gram
Information   meaningful information          per hour of service   per size of data stores
Value         value                           —                     —
Experience    memorable/valuable experience   per hour of service   —
Financial     after-tax revenue               per quarter or year   per liquidation value

The Physics area is exactly Chaisson’s metric.  Since all service systems exist in the physical world, we can directly use this metric for service systems.

However, some services are virtual, so a metric focusing on information as the consumable makes a lot of sense.  Chaisson discusses this possibility and rejects it because the concepts are slippery.  However, I think it is an area worth further consideration.  Information in a data stream can be partitioned into a part that can be described by some (small) process plus a part that is random.  The “usable information” is only the part described by the process.  This direction needs a lot more fleshing out.
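One crude way to play with this partition: treat the compressible share of a byte stream as the part a small process can describe, and the incompressible remainder as the random part.  This zlib-based proxy is my own toy, not a rigorous measure (and the sample streams are invented):

```python
import random
import zlib

def usable_fraction(data: bytes) -> float:
    """Proxy for 'usable information': the share of the stream that a small
    process (here, the compressor's model) can describe; the incompressible
    remainder is treated as random."""
    compressed = len(zlib.compress(data, 9))
    return max(0.0, 1.0 - compressed / len(data))

structured = b"guest checks in; guest checks out; " * 200   # a tiny process describes it
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(len(structured)))  # no small process does
```

On these samples the structured stream comes out almost entirely “usable” and the random stream almost entirely not, which at least matches the intuition.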

One of the common tag lines for Service Science is “co-creation of value”.  So what is “value” exactly?  We can define it financially (see below), but that ignores some types of value.  Does utility sufficiently capture the idea of “value”?

During the call, Haluk suggested “units of experience” as an interesting base for the consumable.  As not all experiences are “usable”, we might focus on memorable experiences or the user’s utility of the experience (as measured by surveys?).

The measures above have many slippery terms in them.  A financial approach eliminates a lot of slipperiness (though some might argue it introduces sliminess 🙂 ).  Not all revenue is usable, so “after-tax revenue per service hour per company liquidation value” might be an interesting service metric.  Hunter suggested alternatives for each term: for the consumable, profit or EBITDA; for the rate, per quarter, per year, or per reporting period; for the density, per employee, per customer, per capitalized dollar, or per balance sheet dollar.
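Spelled out as arithmetic (with invented numbers for a hypothetical small service firm), the financial version of the metric is the same double normalization as Chaisson’s:

```python
def financial_ucrd(after_tax_revenue, service_hours, liquidation_value):
    """'After-tax revenue per service hour per liquidation dollar':
    usable consumable (after-tax revenue), rate (per service hour),
    density (per dollar of liquidation value)."""
    return after_tax_revenue / service_hours / liquidation_value

# Invented numbers: $2M after-tax revenue, 50,000 service hours, $10M liquidation value.
metric = financial_ucrd(2_000_000, 50_000, 10_000_000)  # 4e-06
```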