Sync

More travel for me and another great book.  This one is Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life by Steven Strogatz.

The simplest example, echoed on the book cover, is how millions of fireflies synchronize their blinking.  There is no global “blink now” clock.  There is no master firefly orchestrating the performance.  So how do the fireflies end up synchronized?  It’s a simple idea that requires a lot of thought to truly understand.  The problem seems so simple when you first hear it.  I would probably have waved my hands at it, mumbled that the solution probably looked like XYZ, and then moved on.  Only when you really try to solve it do you begin to see the depth and subtlety of this particular rat hole.
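Much of the mathematics Strogatz describes grows out of the Kuramoto model of coupled oscillators, which captures the firefly puzzle surprisingly well.  As a toy illustration (my own sketch, not from the book — the parameter values and the Gaussian spread of natural blink rates are my assumptions), here each “firefly” is a phase oscillator gently nudged toward the rest of the swarm:

```python
import cmath
import math
import random

def order_parameter(phases):
    """|mean of e^(i*theta)|: near 0 when phases are scattered,
    near 1 when the oscillators are synchronized."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

def simulate(n=50, coupling=2.0, dt=0.05, steps=1000, seed=1):
    """Kuramoto dynamics: d(theta_i)/dt = w_i + (K/n) * sum_j sin(theta_j - theta_i).
    No oscillator has global knowledge; each just feels a pull toward the others."""
    rng = random.Random(seed)
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]          # natural blink rates
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # random start
    for _ in range(steps):
        new_phases = []
        for w, p in zip(freqs, phases):
            pull = sum(math.sin(q - p) for q in phases) / n   # mean coupling term
            new_phases.append(p + dt * (w + coupling * pull))
        phases = new_phases
    return phases
```

With the coupling strength well above the critical threshold, the order parameter climbs toward 1: the simulated swarm ends up blinking essentially in unison, even though there is no master firefly in the code either.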

Strogatz also discusses many other sync problems:  pairs of electrons in superconductors, pacemaker cells in the heart, hints at neurons in the brain, and many others.  I was particularly intrigued by his discussion of the propagation of chemical activation waves in 2 and 3 dimensions.

There are no equations in the book, but Strogatz’s prose is sufficient to give a good taste of the novel mathematics he and others have used to address these problems.  It was a quick read.

As with most books I read, I begin to ponder how I might apply some of its ideas to multiagent systems.  For one, there are multiple ties to social networking (the A-is-connected-to-B kind, not the Twitter kind).  I also try to re-imagine his waves of chemical activation around a Petri dish as waves of protocol interactions around a social network.

Most of Strogatz’s problems appear to require continuous-space and continuous-time.  Most of my multiagent system problems are simpler, requiring only discrete-space and discrete-time. I’ve developed some “half-vast” (say it out loud) ideas about using a model checker to approach protocol problems with sync-like elements.  I’d introduce some social operators into the model and an expanded CTL-like expression language.  Models would only need to be expanded enough to check the specific properties under consideration.  I’d also need to introduce agent variables that range over a group of agents to express the kinds of properties I have in mind.  The classic CTL temporal operators are strictly time-like, whereas the social operators would be strictly space-like.  Unfortunately, my current ideas could easily cause a massive state space explosion, so I still have work to do.
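To make the “half-vast” idea slightly more concrete, here is a brute-force sketch of the kind of check I have in mind.  Everything in it is an illustrative assumption rather than a worked-out design: agents are simple phase clocks, a “blink” nudges neighbors forward one tick, and an asynchronous scheduler supplies the nondeterminism.  The check itself is an explicit-state BFS for the CTL-style reachability property EF “all agents are in phase”:

```python
from collections import deque

def step(state, agent, ticks, neighbors):
    """One asynchronous move: `agent` advances its phase clock one tick.
    Wrapping to 0 is a "blink", which nudges each neighbor forward a tick."""
    s = list(state)
    s[agent] = (s[agent] + 1) % ticks
    if s[agent] == 0:  # blink
        for j in neighbors[agent]:
            s[j] = (s[j] + 1) % ticks
    return tuple(s)

def check_EF(initial, goal, ticks, neighbors):
    """Explicit-state BFS: does SOME interleaving reach a state satisfying
    `goal`?  That is a hand-rolled check of the CTL property EF goal.
    Returns (found, number of states explored)."""
    n = len(initial)
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if goal(s):
            return True, len(seen)
        for a in range(n):
            t = step(s, a, ticks, neighbors)
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False, len(seen)
```

For three fully connected agents with four-tick clocks, the property EF “all phases equal” holds, but the full product space is already ticks^n states — which is exactly the explosion worry: the agent variables and space-like social operators I want would multiply that further.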

I so love reading books like Sync:  books that open intellectual doors and expand conceptual horizons.  I expect I’ll be thinking about these ideas for quite some time.

On Intelligence

Last week I went to Singapore for business and endured some dang long flights with a lot of time on my hands.  I tried watching the “Anchorman 2” movie, but it was just too off the wall (unintelligent?) for me, so I gave up on it.  Some of the other movies, like “Jack Ryan: Shadow Recruit”, were better.  But I still had a lot of time, so I turned to reading “On Intelligence” by Jeff Hawkins.

In the early chapters, where he was poking holes in a number of established approaches to intelligence, I was a bit skeptical.  But then, as he settled into his memory-prediction framework, he started to win me over with his different view.

Hawkins talks in depth about the biological processes in the human neocortex, which was interesting.  But the most interesting idea to me was his description of a “memory-prediction framework”.  Basically, this framework includes the obvious case of signals flowing from sensors up to higher cognitive levels, plus the less obvious case of signals flowing down from higher to lower cognitive levels.  Each cognitive level remembers what has previously occurred and predicts what is likely to occur next.  These cognitive levels detect and predict “sequences of sequences” and “structures of structures”.  Predictions allow for missing or garbled sensor data to be filled in automatically. There is also an exception case, where the prediction from above is at odds with the sensor data from below.  Exceptions also flow up the cognitive hierarchy until they are handled at some level.  If they flow high enough, we become aware of them as “something’s not right”.
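Out of curiosity, I tried to boil that up-and-down flow into the smallest toy I could write.  To be clear, this is my own cartoon of the framework, not Hawkins’ model — real cortical levels see sequences of sequences, while each toy level here remembers only one-step transitions.  Input a level can predict is absorbed there; a surprise climbs the hierarchy as an exception:

```python
class Level:
    """One cognitive level: remembers which token followed which,
    and predicts the next token from the current one."""
    def __init__(self, name):
        self.name = name
        self.memory = {}   # token -> remembered next token
        self.prev = None

    def observe(self, token):
        """True if the token matched this level's prediction (handled here),
        False if it was a surprise (an exception to pass upward)."""
        matched = self.memory.get(self.prev) == token
        if self.prev is not None:
            self.memory[self.prev] = token   # update memory either way
        self.prev = token
        return matched

def run_hierarchy(levels, stream):
    """Feed a token stream in at the bottom; a token climbs the hierarchy
    only while each successive level is surprised by it."""
    surprises = []
    for token in stream:
        for level in levels:
            if level.observe(token):
                break                    # prediction confirmed: absorbed here
        else:
            surprises.append(token)      # reached the top unhandled
    return surprises
```

Feeding it a repetitive stream, the first few tokens climb all the way up (“something’s not right”); once the pattern is memorized, the bottom level quietly absorbs everything until a genuinely novel token appears and escalates again.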

What I find most intriguing is how this memory-prediction framework might be implemented artificially.  While Hawkins does address this, layered Hidden Markov Models (HMMs) would seem to be a useful direction.  Jim Spohrer tells me that Kurzweil’s book “How to Create a Mind” suggests exactly this, so I’m adding that to my reading list.

I wonder how much training data it would take to train such a model.  I can’t help but think of a baby randomly jerking and flexing its arms and legs; boys endlessly throwing and catching balls; and kids riding their bikes for hours. All these activities would generate a lot of training data.

I also pondered the implications for service science.  Do service systems have a hierarchy of concepts similar to lower and higher cognitive functions?  What kind of “memories” and “predictions” do service systems have?  Service systems always have documents and policies, but that is not the kind of “active memory” Hawkins thinks is important.  Service employees clearly have internal memories, but are there active memories between small groups of employees?  Do departments or entire organizations have memories?  What are the important “invariant representations” of different service systems?  Should we focus on the differences between Person arrives at front desk vs. Guest checks in vs. Guest is on a week-long vacation vs. Guest is satisfied with service?  What are the common sequences (or even sequences of sequences) in an evolving customer encounter?  If we knew them, could we predict the next events?  “Be prepared” seems like a more modest and achievable goal for a service system than the kind of moment-by-moment prediction Hawkins envisions.

If you’re particularly interested in bio-inspired intelligence, there is a lot of meat in this book to keep you busy and fascinated.  If you’re more interested in the artificial mechanisms for intelligence, like I am, focus on the memory-prediction framework.  Either way, I recommend this book.