Tag Archives: complexity


Sync

More travel for me and another great book.  This one is Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life by Steven Strogatz.

The simplest example, echoed on the book cover, is how millions of fireflies synchronize their blinking.  There is no global “blink now” clock.  There is no master firefly orchestrating the performance.  So how do the fireflies end up synchronized?  It’s a simple idea that requires a lot of thought to truly understand.  The problem seems so simple when you first hear it.  I would probably have waved my hands at it, mumbled that the solution probably looked like XYZ, and then moved on.  Only when you really try to solve it do you begin to see the depth and subtlety of this particular rat hole.
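The standard mathematical handle on this question is the Kuramoto model of coupled oscillators, an area Strogatz himself has worked in.  Here is a minimal simulation sketch; the population size, coupling strength, and frequency spread are toy choices of my own, not anything from the book:

```python
import math
import random

def kuramoto_step(phases, omegas, k, dt):
    """One Euler step of the Kuramoto model: each oscillator is nudged
    toward the rest of the population through a mean sine coupling."""
    n = len(phases)
    return [
        (phases[i] + dt * (omegas[i]
            + k * sum(math.sin(pj - phases[i]) for pj in phases) / n))
        % (2 * math.pi)
        for i in range(n)
    ]

def order_parameter(phases):
    """r in [0, 1]: 0 means incoherent blinking, 1 means perfect sync."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(42)
n = 100
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
omegas = [random.gauss(1.0, 0.05) for _ in range(n)]  # similar natural rates

r_start = order_parameter(phases)
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, k=1.0, dt=0.05)
r_end = order_parameter(phases)
print(f"coherence before: {r_start:.2f}, after: {r_end:.2f}")
```

With coupling well above the critical threshold, the order parameter climbs from near zero toward 1 as the population locks together, which is the firefly story in miniature.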

Strogatz also discusses many other sync problems:  pairs of electrons in superconductors, pacemaker cells in the heart, hints at neurons in the brain, and many others.  I was particularly intrigued by his discussion of the propagation of chemical activation waves in 2 and 3 dimensions.

There are no equations in the book, but Strogatz’s prose is sufficient to give a good taste of the novel mathematics he and others have used to address these problems.  It was a quick read.

As with most books I read, I began to ponder how I might apply some of its ideas to multiagent systems.  For one, there are multiple ties to social networking (the A-is-connected-to-B kind, not the Twitter kind).  I also tried to re-imagine his waves of chemical activation around a Petri dish as waves of protocol interactions around a social network.

Most of Strogatz’s problems appear to require continuous space and continuous time.  Most of my multiagent system problems are simpler, requiring only discrete space and discrete time.  I’ve developed some “half-vast” (say it out loud) ideas about using a model checker to approach protocol problems with sync-like elements.  I’d introduce some social operators into the model and an expanded CTL-like expression language.  Models would only need to be expanded enough to check the specific properties under consideration.  I’d also need to introduce agent variables that range over a group of agents to express the kinds of properties I have in mind.  The classic CTL temporal operators are strictly time-like, whereas the social operators would be strictly space-like.  Unfortunately, my current ideas could easily cause a massive state-space explosion, so I still have work to do.
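The discrete flavor is easy to illustrate with a toy of my own devising (not the model-checking scheme above): a deterministic pulse-coupled system whose state space is small enough to check an EF-style “eventually synchronized” property over every initial state by brute force.

```python
from itertools import product

N_PHASES = 4  # each agent's clock cycles through phases 0..3

def step(state):
    """Deterministic tick: everyone advances one phase; any flash (an
    agent wrapping past the last phase) resets all agents to phase 0."""
    fired = any(p == N_PHASES - 1 for p in state)
    return tuple(0 if fired else p + 1 for p in state)

def ef_synced(state):
    """Check the EF-style property 'eventually all phases are equal'.
    With a deterministic step this is just trajectory-following; a
    nondeterministic model would need BFS over the full state graph."""
    seen = set()
    while state not in seen:
        if len(set(state)) == 1:
            return True
        seen.add(state)
        state = step(state)
    return False

states = list(product(range(N_PHASES), repeat=3))
print(sum(ef_synced(s) for s in states), "of", len(states), "states sync")
```

A real version would add the space-like social operators and agent variables, which is exactly where the state-space explosion lurks.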

I so love reading books like Sync:  books that open intellectual doors and expand conceptual horizons.  I expect I’ll be thinking about these ideas for quite some time.

Complex Adaptive Systems

On some recent plane trips, I have been reading Complex Adaptive Systems by John H. Miller and Scott E. Page.

The authors discuss a number of topics we all presume we know but that are seldom explicitly discussed.  For example, they have really nice descriptions of modeling and emergence.  I certainly can’t recall any other sources treating this material so explicitly.

They talk about “computation as theory”.  Formal mathematical methods are certainly a valuable means to understand some theories.  But some social systems must have a certain minimal level of complexity before important behavioral phenomena appear, and models of those systems are just too complex for formal methods.  Computation is the only way forward.  This book is all about how to approach building and understanding computational (agent-based) models of social systems.

This book recaps and integrates some material from Wolfram’s A New Kind of Science.  That was also a good read.  I clearly remember reading that book a decade ago on my backyard patio, bathed in the physical light of the sun and the intellectual light of intriguing ideas.
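Wolfram’s elementary cellular automata are compact enough to reconstruct from memory.  A minimal sketch of rule 30, whose trivially simple local rule produces famously complex behavior (the width and step count are arbitrary choices of mine):

```python
def eca_step(cells, rule):
    """One step of an elementary cellular automaton (Wolfram numbering).
    Each cell's next value is the rule bit indexed by its 3-cell
    neighborhood, read as a 3-bit number."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 31
cells = [0] * width
cells[width // 2] = 1  # a single "on" cell in the middle

for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = eca_step(cells, rule=30)
```

Running it prints the familiar chaotic triangle growing out of a single seed cell.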

I also thoroughly enjoyed Scott Page’s (same author) Understanding Complexity lectures which address the same topic, in an abbreviated fashion, in a video format.

The book is a particularly easy read because the examples are simple and clear, and because they use such interesting expressions and metaphors.  (For example, when talking about pixelated pictures, they have this footnote:  “For the romantic among you, assume a stained glass window”.  I loved that).  I also hope to learn a few things about better technical writing by examining the writing style of this book.


Complexity in Service Science

Yesterday, I gave a short talk to the International Society of Service Innovation Specialists (ISSIP) about some of my Musings on Metrics for Service Systems.  

The basic approach in Eric J. Chaisson’s book Cosmic Evolution is thermodynamic: systems “feed” off a flow of energy.  His fundamental metric is “free energy rate density”.  Chaisson computes his metric for a wide range of systems, from galaxies and stars to human bodies and societies, and he shows that larger values of the metric are correlated with our general notion of greater complexity.  If you’re a physicist, you’ll probably enjoy his development through a sequence of formulas; if you’re focused less on physics formulas and more on service science, and don’t want to read the whole book, then I recommend:

  • Chapter 3, the real heart of the book
  • Chapter 4, in the Discussion section: Evolution: A cultural perspective (p193)
  • The section: Are Complexity and Energy Rate Density the Right Terms? (p215)

My musings were about how these ideas could or should be applied in a service science context.  Chaisson’s metric is based on free energy (measured in ergs) flow (per second) density (per gram).  All four of these words carry meaning.  I generalize his metric to “usable consumable rate density”.

  • Consumable: Chaisson focuses specifically on energy as the thing systems consume. Below I suggest some other consumables that might be more appropriate for service systems.
  • Usable:  Not all energy is usable.  Total energy is conserved, but only free energy is available to do work.
  • Rate:  Systems feed off the flow of the consumable.
  • Density: To compare systems as vastly different as stars and human bodies, Chaisson divides the quantity given by the other three words by the size of the system.  This normalizes the metric to free energy rate per gram.
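As a sanity check on how the four words compose, here is a back-of-envelope version of the calculation for a human body.  The input figures (about 2800 kcal metabolized per day, 70 kg of mass) are my own rough assumptions, but the result lands near Chaisson’s reported value of roughly 20,000 erg/s/g.

```python
# Back-of-envelope "free energy rate density" for a human body.
# Rough assumed figures: ~2800 kcal/day metabolized, ~70 kg of mass.
KCAL_TO_ERG = 4.184e10    # 1 kcal = 4184 J; 1 J = 1e7 erg
SECONDS_PER_DAY = 86_400
BODY_MASS_G = 70_000

rate = 2800 * KCAL_TO_ERG / SECONDS_PER_DAY  # "rate": erg per second
density = rate / BODY_MASS_G                 # "density": per gram
print(f"~{density:,.0f} erg/s/g")            # near Chaisson's ~20,000
```

The same two divisions (per second, per gram) are what let Chaisson put stars and societies on one chart.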

I wondered about the following metrics:

Area        | Usable             | Consumable  | Rate                | Density
Physics     | free               | energy      | per second          | per gram
Information | meaningful         | information | hours of service op | size of data stores
Value       |                    | value       |                     |
Experience  | memorable/valuable | experience  | hour of service     |
Financial   | after tax          | revenue     | quarter, year       | liquidation value

The Physics area is exactly Chaisson’s metric.  Since all service systems exist in the physical world, we can directly use this metric for service systems.

However, some services are virtual, so a metric focusing on information as the consumable makes a lot of sense.  Chaisson discusses this possibility and rejects it because the concepts are slippery.  However, I think it is an area worth further consideration.  Information in a data stream can be partitioned into a part that can be described by some (small) process plus a part that is random.  The “usable information” is only the part described by the process.  This direction needs a lot more fleshing out.
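One crude way to operationalize that partition, offered only as a sketch, is compressibility: whatever a general-purpose compressor can squeeze out is (approximately) the part described by a small process, and the incompressible residue is the random part.

```python
import random
import zlib

def compressed_fraction(data: bytes) -> float:
    """Rough proxy for 'process-describable' content: how small can a
    general-purpose compressor make the stream?  Near 0 means the data
    is generated by a small process; near 1 means it is mostly random."""
    return len(zlib.compress(data, level=9)) / len(data)

random.seed(0)
structured = bytes(i % 16 for i in range(10_000))  # tiny generating process
noisy = bytes(random.randrange(256) for _ in range(10_000))  # random residue

print(f"structured: {compressed_fraction(structured):.3f}")
print(f"random:     {compressed_fraction(noisy):.3f}")
```

This is only a stand-in for the (uncomputable) Kolmogorov notion, which is exactly why Chaisson calls these concepts slippery.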

One of the common tag lines for Service Science is “co-creation of value”.  So what is “value” exactly?  We can define it financially (see below), but that ignores some types of value.  Does utility sufficiently capture the idea of “value”?

During the call, Haluk suggested “units of experience” as an interesting base for the consumable.  As not all experiences are “usable”, we might focus on memorable experiences or the user’s utility of the experience (as measured by surveys?).

The measures above have many slippery terms in them.  A financial approach eliminates a lot of slipperiness (though some might argue it introduces sliminess 🙂 ).  Not all revenue is usable, so “after-tax revenue per service hour per company liquidation value” might be an interesting service metric.  Hunter suggested some other ideas.  Consumable: profit or EBITDA; rate: per quarter, per year, or per reporting period; density: per employee, per customer, per capitalized dollar, or per balance sheet dollar.
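To make the composition of the two denominators concrete, here is a worked example with entirely made-up numbers:

```python
# Hypothetical figures for a small service firm (illustration only).
after_tax_revenue = 2_000_000    # usable consumable: dollars per year
service_hours = 50_000           # rate: service hours delivered that year
liquidation_value = 10_000_000   # density: system "size", in dollars

metric = after_tax_revenue / service_hours / liquidation_value
print(f"{metric:.1e} after-tax $ per service hour per liquidation $")
```

As with Chaisson’s per-gram normalization, dividing by liquidation value is what would let us compare service systems of very different sizes.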


Evolutionary Game Theory and the Role of Third-Party Punishment in Human Cultures

Yesterday I went to a very interesting lecture at Duke University by Professor Dana Nau on Evolutionary Game Theory and the Role of Third-Party Punishment in Human Cultures.

Nau and his colleagues use evolutionary game theory to help explain the phenomenon of responsible third-party punishment (3PP).  Assume agent A harms agent B.  Then agent C is a responsible third-party punisher (of A) if C spends some of its own resources to punish A while receiving no benefit itself.  Using simulation, and a lot of details I won’t consider here, they show that high strength-of-ties and low mobility foster the evolution of responsible 3PP.
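The definition is easy to pin down with a toy payoff table.  The numbers below are my own illustration, not values from Nau’s model:

```python
# Toy payoff illustration of responsible third-party punishment (3PP).
HARM_GAIN, HARM_LOSS = 3, -3  # A's gain and B's loss when A harms B
PUNISH_COST, FINE = -1, -4    # C's cost to punish, and A's fine

def payoffs(a_harms: bool, c_punishes: bool):
    """Return the (A, B, C) payoffs for one interaction."""
    a = HARM_GAIN if a_harms else 0
    b = HARM_LOSS if a_harms else 0
    c = 0
    if a_harms and c_punishes:
        a += FINE         # punishment makes harming unprofitable for A...
        c += PUNISH_COST  # ...but C pays for it and gains nothing
    return a, b, c

print(payoffs(True, False))  # (3, -3, 0): harming pays if unpunished
print(payoffs(True, True))   # (-1, -3, -1): punished harm doesn't pay
```

The puzzle, of course, is why evolution would ever preserve C’s self-sacrificing behavior, and that is what the strength-of-ties and mobility results address.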

While this work considers human cultures, it likely has application to systems of autonomous agents, and, in particular, to how punishment could or should work when agents violate their commitments (commitments are a major element of my own research, which I’ll explain in upcoming posts).

In addition to the meat of this paper, I was particularly interested in the social graph that defines each agent’s neighborhood.  As their approach is evolutionary, they “mutate” (among other things) the social graph during the simulation.  They randomly pick two agents and swap them (I’ll call this a “stranger swap”).  This is like ripping these two agents out of their current groups and jamming them into completely different groups.  Each agent adopts the other’s friendships.  This is like Star Trek’s teleportation, or perhaps The Prince and the Pauper.  This is radical: agents can move arbitrarily far, but they can be thrust into neighborhoods of the social graph that have little or nothing in common with their old neighborhoods.

A less radical mutation would be a “friend swap”, which can swap two agents only if they are currently friends (socially adjacent).  This kind of swap moves an agent to a similar neighborhood.  This is more like slow animal migration than teleportation.  Of course, since agents can only migrate one step per mutation, it will take longer for them to move as far as with “stranger swapping”, but it does a better job of incrementally changing agents’ neighborhoods.  A “friend of a friend swap” would be a middle ground between “stranger swap” and “friend swap”.

All three of these social graph mutations leave the structure of the graph absolutely fixed.  I also wondered about a different kind of mutation that seems closer to what happens in real life: the social mutations of “drop friend”, where an agent randomly deletes the relationship (edge) with one of its current friends, and “add friend”, where an agent randomly adds a relationship with some new agent.  This means the agent maintains most of its existing relationships, but the structure of the graph changes.  We’d have to carefully consider the pool from which an agent randomly chooses a new friend.  If the pool includes all agents in the graph, I’d expect the graph to collapse in on itself (constantly shrinking graph diameter) over time.  If the pool includes just friends of friends, the graph would still shrink, but more slowly.  Is there a way to mutate so that various properties of the graph (like graph diameter) are maintained?
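The mutations are easy to sketch on a plain adjacency-set representation of the social graph.  The function names, the friend-of-a-friend candidate pool, and the demo graph below are all my own choices, not anything from the paper:

```python
import random

def swap_agents(adj, a, b):
    """Exchange agents a and b: each adopts the other's neighborhood.
    The shape of the graph is untouched; only the labels move."""
    relabel = {a: b, b: a}
    new = {relabel.get(u, u): {relabel.get(v, v) for v in nbrs}
           for u, nbrs in adj.items()}
    adj.clear()
    adj.update(new)

def stranger_swap(adj, rng):
    """Swap two random agents anywhere in the graph ('teleportation')."""
    a, b = rng.sample(sorted(adj), 2)
    swap_agents(adj, a, b)

def friend_swap(adj, rng):
    """Swap an agent with one of its current friends ('migration')."""
    a = rng.choice(sorted(adj))
    if adj[a]:
        swap_agents(adj, a, rng.choice(sorted(adj[a])))

def drop_friend(adj, rng):
    """Delete one randomly chosen existing friendship edge."""
    a = rng.choice([u for u in sorted(adj) if adj[u]])
    b = rng.choice(sorted(adj[a]))
    adj[a].discard(b)
    adj[b].discard(a)

def add_friend_of_friend(adj, rng):
    """Add an edge to a friend-of-a-friend: the graph's structure
    changes, but the new tie stays socially nearby."""
    a = rng.choice(sorted(adj))
    candidates = {w for v in adj[a] for w in adj[v]} - adj[a] - {a}
    if candidates:
        b = rng.choice(sorted(candidates))
        adj[a].add(b)
        adj[b].add(a)

def n_edges(adj):
    return sum(len(nbrs) for nbrs in adj.values()) // 2

# Demo on a ring of 8 agents: swaps preserve structure, edge edits don't.
rng = random.Random(7)
ring = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
before = n_edges(ring)
stranger_swap(ring, rng)
friend_swap(ring, rng)
print(n_edges(ring) == before)   # True: label swaps preserve the shape
add_friend_of_friend(ring, rng)
print(n_edges(ring) == before)   # False: one new edge has appeared
```

A natural experiment would be to run each mutation repeatedly and track graph diameter, to see which pools cause the collapse I suspect.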

I think social graph mutations are interesting.  Do you agree?