# Using Commitments: The Workhorse

I Promise

I am going to post some examples of how you can use commitments to describe various real-world situations.  Remember that a commitment is a promise by a set of debtor agents to a set of creditor agents.  By explicitly modeling the commitments throughout an interaction, the agents involved can better understand, at each step, what they have done in the past and what they still need to do in the future.

Today’s post is about a protocol called OrderPayShip.  I use it so often that I call it The Workhorse.   This example shows a “classic” interaction pattern for incrementally creating and satisfying commitments in a business transaction. It’s an interaction—or protocol (more about protocols in upcoming posts)—between a Buyer (she) and a Seller (he).  The protocol incrementally creates two commitments, and then incrementally satisfies those two commitments.

Let’s look at a common sequence of messages exchanged between these two agents using the OrderPayShip protocol.  I’m only going to talk about the “happy path” of this protocol for now—the one that both agents hope will happen—and skip any error conditions.  Five messages are exchanged:

1. Buyer requests a price quote on some good.  Suppose Buyer wants a pizza, walks up to Seller’s counter and she asks how much a pizza would cost.
2. Seller returns his price quote.  That is, Seller tells Buyer that a pizza costs $10. This message also means Seller commits to Buyer that he will give the pizza to Buyer, if Buyer pays the quoted price: C(Seller, Buyer, $10, pizza).  Typically this information is implied by the interaction, but we track commitments explicitly.
3. Buyer likes the idea of a $10 pizza, and she places an order. Placing the order also means Buyer commits to Seller that she will pay, if Seller gives her a pizza: C(Buyer, Seller, pizza, $10).
At this point, there are two reciprocal commitments: one from Seller to Buyer and one from Buyer to Seller, and both commitments are conditional.  The protocol could stall at this point, because neither agent is required to do anything more.  But since both agents are interested in completing the order, one of them will typically make a move.
4. Here, I assume Buyer sends her payment first.  This satisfies Buyer’s commitment and also makes Seller’s commitment unconditional.
5. Seller then gives Buyer her pizza, which satisfies his now-unconditional commitment.

At this point both commitments are satisfied and the protocol is complete.
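The five messages above can be sketched in code.  Here is a minimal Python sketch of the happy path; the `Commitment` class and its state names are my own illustration of the ideas, not a formal implementation.

```python
# A minimal sketch of the OrderPayShip happy path.

class Commitment:
    """C(debtor, creditor, antecedent, consequent) with a simple lifecycle."""

    def __init__(self, debtor, creditor, antecedent, consequent):
        self.debtor, self.creditor = debtor, creditor
        self.antecedent, self.consequent = antecedent, consequent
        self.state = "conditional"          # antecedent not yet true

    def detach(self):                       # antecedent became true
        self.state = "detached"             # debtor now unconditionally committed

    def discharge(self):                    # consequent became true
        self.state = "discharged"

# 1. Buyer requests a quote (no commitment yet).
# 2. Seller quotes $10, creating C(Seller, Buyer, $10, pizza).
seller_c = Commitment("Seller", "Buyer", "pay $10", "deliver pizza")
# 3. Buyer orders, creating the reciprocal C(Buyer, Seller, pizza, $10).
buyer_c = Commitment("Buyer", "Seller", "deliver pizza", "pay $10")
# 4. Buyer pays first: her own commitment is discharged (debtors may act
#    early), and Seller's commitment detaches (its antecedent is now true).
buyer_c.discharge()
seller_c.detach()
# 5. Seller delivers the pizza, discharging his commitment.
seller_c.discharge()

assert seller_c.state == buyer_c.state == "discharged"
```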

# Commitments between Social Agents

A key element of my research, and that of my research group, is commitments.  Commitments are an explicit, externally visible representation of a social contract between autonomous agents.

In systems built out of components or objects (which are not autonomous), the global designer might program a component with an externally visible rule “IF receive event A, THEN send event B”.   But, autonomous agents can’t be built that way.  First, rules violate an agent’s autonomy.  Second, there is often no single designer who can program (require) all the agents.  Societies of autonomous agents need a more flexible and natural approach.  They can’t be built from such rules, but they can be built from social commitments.

A social commitment is a directed, social contract between autonomous agents.  A commitment is made by a set of debtor agents to a set of creditor agents: if an antecedent Boolean expression (or event) becomes true, the debtors commit to making the consequent Boolean expression (or event) true.  There are no restrictions on the relative order of the antecedent and the consequent, so debtors can act early if they choose.  A commitment is written

C(debtors, creditors, antecedent, consequent)

For example, I could commit to delivering a pizza to you if you pay me $10: C(Me, You, pay $10, deliver pizza)

A common commitment pattern is where two agents make reciprocal commitments to each other.  So you might make the reciprocal commitment to pay me $10 if I deliver the pizza to you: C(You, Me, deliver pizza, pay $10)

Commitments can be combined in other ways too.  We model business contracts as sets of commitments. We can also reason over commitments, but that is too deep for this post.

Commitments must be created by the debtors.  Creating a commitment is like signing the social contract.  After creating the first pizza commitment above, the debtors are conditionally committed to the creditors because the antecedent has not yet occurred. Typically, the next step is that some agent (often a creditor) acts to make the antecedent true.  The commitment is now detached, and the debtors are unconditionally committed to the creditors—the debtors should make the consequent true at some point in the future.  When the debtors make the consequent true, the commitment becomes discharged (or satisfied).

Or, debtors can transfer the commitment to another set of debtors.  Or, the creditors can release the debtors from their commitment without penalty.  These are perfectly acceptable outcomes.  Because agents are autonomous, they cannot be forced to perform the consequent.  In real systems, agents may be unwilling or unable to satisfy a commitment.  When this happens, the debtors can cancel the commitment, with a penalty, and the commitment becomes violated.  The only thing we do require is that debtors cannot sit on an unconditional commitment forever.  They must eventually satisfy, transfer, be released from, or cancel all of their unconditional commitments.
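The lifecycle just described (create, detach, discharge, transfer, release, cancel) can be sketched as a small state machine.  The state and operation names follow the text above; the encoding itself is my own rough illustration.

```python
# A sketch of the commitment lifecycle as a small state machine.

LIFECYCLE = {
    # (current state, operation) -> next state
    ("conditional", "detach"):    "detached",    # antecedent becomes true
    ("conditional", "discharge"): "discharged",  # debtors act early
    ("detached",    "discharge"): "discharged",  # debtors make consequent true
    ("detached",    "transfer"):  "detached",    # new debtors, same obligation
    ("detached",    "release"):   "released",    # creditors waive, no penalty
    ("detached",    "cancel"):    "violated",    # debtors back out, with penalty
}

def step(state, operation):
    try:
        return LIFECYCLE[(state, operation)]
    except KeyError:
        raise ValueError(f"cannot {operation} a {state} commitment")

# Happy path: create (conditional) -> detach -> discharge.
s = "conditional"
s = step(s, "detach")
s = step(s, "discharge")
assert s == "discharged"

# Debtors cannot sit on a detached commitment forever; eventually one of
# discharge / transfer / release / cancel must apply.
assert step("detached", "cancel") == "violated"
```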

Multiagent systems are social systems of agents.  So we model many of their relationships as externally visible, social commitments.  Commitments give autonomous agents the flexibility they require to make, satisfy and cancel their commitments to other agents.

# Complexity in Service Science

Yesterday, I gave a short talk to the International Society of Service Innovation Specialists (ISSIP) about some of my Musings on Metrics for Service Systems.

The basic approach in Eric J. Chaisson’s book Cosmic Evolution is thermodynamic: complexity is framed in terms of entropy and energy flow.  His fundamental metric is “free energy rate density”: systems “feed” off a flow of energy.  Chaisson computes his metric for a wide range of systems, from galaxies and stars to human bodies and societies, and he shows that larger values of the metric are correlated with our general notion of greater complexity.  If you’re a physicist, you’ll probably enjoy his development through a sequence of formulas.  If you’re focused less on physics formulas and more on service science, and don’t want to read the whole book, then I recommend:

• Chapter 3, the real heart of the book
• Chapter 4, in the Discussion section: “Evolution: A cultural perspective” (p. 193)
• The section “Are Complexity and Energy Rate Density the Right Terms?” (p. 215)

My musings were about how these ideas could or should be applied in a service science context.  Chaisson’s metric is based on free energy (measured in ergs) flow (per second) density (per gram).  All four of these words have meaning.  I generalize his metric to “usable consumable rate density”.

• Consumable: Chaisson focuses specifically on energy as the thing systems consume. Below I suggest some other consumables that might be more appropriate for service systems.
• Usable:  Not all energy is usable.  Total energy is conserved, but only free energy is available to do work.
• Rate:  Systems feed off the flow of the consumable.
• Density: To compare systems as vastly different as stars and human bodies, Chaisson divides the quantity given by the other three words by the size of the system. This normalizes the metric to free energy rate per gram.
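As a sanity check of how the four words combine, here is a back-of-the-envelope calculation of the metric for a human body.  The dietary intake (about 2,800 kcal/day) and body mass (70 kg) are my own round-number assumptions, not figures from the book.

```python
# A back-of-the-envelope computation of Chaisson's "free energy rate density".

KCAL_TO_ERG = 4.184e10      # 1 kcal = 4184 J, and 1 J = 1e7 erg
SECONDS_PER_DAY = 86_400

def free_energy_rate_density(kcal_per_day, mass_grams):
    """Usable (free) energy flow per second, per gram of system."""
    erg_per_second = kcal_per_day * KCAL_TO_ERG / SECONDS_PER_DAY
    return erg_per_second / mass_grams

# Assumed: a 70 kg human consuming ~2,800 kcal/day.
human = free_energy_rate_density(kcal_per_day=2_800, mass_grams=70_000)
print(f"{human:.0f} erg/s/g")   # about 1.9e4 erg/s/g
```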

I wondered about the following metrics:

| Area | Usable consumable | Rate | Density |
|---|---|---|---|
| Physics | free energy | per second | per gram |
| Information | meaningful information | per hour of service, per op | per size of data stores |
| Value | value | | |
| Experience | memorable/valuable experience | per hour of service | |
| Financial | after-tax revenue | per quarter, per year | per liquidation value |

The Physics area is exactly Chaisson’s metric.  Since all service systems exist in the physical world, we can directly use this metric for service systems.

However, some services are virtual, so a metric focusing on information as the consumable makes a lot of sense.  Chaisson discusses this possibility and rejects it because the concepts are slippery.  However, I think it is an area worth further consideration.  Information in a data stream can be partitioned into a part that can be described by some (small) process plus a part that is random.  The “usable information” is only the part described by the process.  This direction needs a lot more fleshing out.
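One crude way to make “usable information” concrete is compressibility: the part of a stream that a small process can describe is roughly the part a compressor can squeeze out.  This is only a proxy (true Kolmogorov complexity is uncomputable), and the example data below is my own.

```python
# Estimate the "usable" (process-describable) fraction of a data stream
# by how much of it a general-purpose compressor can squeeze out.

import os
import zlib

def structured_fraction(data: bytes) -> float:
    """Rough fraction of the stream describable by a (small) process."""
    compressed = len(zlib.compress(data, 9))
    return max(0.0, 1.0 - compressed / len(data))

patterned = b"pizza costs $10. " * 1000   # highly structured stream
random_ish = os.urandom(len(patterned))   # mostly incompressible noise

assert structured_fraction(patterned) > 0.9   # nearly all "usable"
assert structured_fraction(random_ish) < 0.1  # nearly all random
```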

One of the common tag lines for Service Science is “co-creation of value”.  So what is “value” exactly?  We can define it financially (see below), but that ignores some types of value.  Does utility sufficiently capture the idea of “value”?

During the call, Haluk suggested “units of experience” as an interesting base for the consumable.  As not all experiences are “usable”, we might focus on memorable experiences or the user’s utility of the experience (as measured by surveys?).

The measures above have many slippery terms in them.  A financial approach eliminates a lot of slipperiness (though some might argue it introduces sliminess 🙂 ).  Not all revenue is usable, so “after-tax revenue per service hour per company liquidation value” might be an interesting service metric. Hunter suggested some other ideas. Consumable: profit or EBITDA; rate: per quarter, per year, or per reporting period; density: per employee, per customer, per capitalized dollar, or per balance sheet dollar.

# Evolutionary Game Theory and the Role of Third-Party Punishment in Human Cultures

Yesterday I went to a very interesting lecture at Duke University by Professor Dana Nau on Evolutionary Game Theory and the Role of Third-Party Punishment in Human Cultures.

Nau and his colleagues use evolutionary game theory to help explain the phenomenon of responsible third-party punishment (3PP).  Assume agent A harms agent B.  Then agent C is a responsible third-party punisher (of A) if C spends some of its own resources to punish A, receiving no benefit itself.  Using simulation and a lot of details I don’t consider here, they show that high strength-of-ties and low mobility foster the evolution of responsible 3PP.

While this work considers human cultures, it likely has applications to systems of autonomous agents, and, in particular, to how punishment could or should work when agents violate their commitments (commitments are a major element of my own research, which I’ll explain in upcoming posts).

In addition to the meat of this paper, I was particularly interested in the social graph that defines each agent’s neighborhood.  As their approach is evolutionary, they “mutate” (among other things) the social graph during the simulation.  They randomly pick two agents and swap them (I’ll call this “stranger swap”).  This is like ripping these two agents out of their current groups and jamming them into completely different groups.  Each agent adopts the other’s friendships.  This is like Star Trek’s teleportation, or perhaps The Prince and the Pauper.  This is radical: agents can move arbitrarily far, but they can be thrust into neighborhoods of the social graph which have little or nothing in common with their old neighborhoods.

A less radical mutation would be “friend swap”, which can swap two agents only if they are currently friends (socially adjacent).  This kind of swap moves an agent to a similar neighborhood.  This is more like slow animal migration than teleportation.  Of course, since agents can only migrate one step per mutation, it will take longer for them to move as far as with “stranger swap”, but it does a better job at incrementally changing agents’ neighborhoods.  A “friend of a friend swap” would be a middle ground between “stranger swap” and “friend swap”.

All three of these social graph mutations leave the structure of the graph absolutely fixed.  I also wondered about a different kind of mutation which seems closer to what happens in real life.  What about the social mutations of “drop friend” where an agent randomly deletes the relationship (edge) with one of its current friends, and “add friend” where an agent randomly adds a relationship with some new agent.  This means the agent maintains most of its existing relationships, but the structure of the graph changes.  We’d have to carefully consider the pool from which an agent randomly chooses a new friend. If the pool includes all agents in the graph, I’d expect the graph to collapse in on itself (constantly shrinking graph diameter) over time.  If the pool includes just friends of friends, the graph would still shrink, but more slowly.  Is there a way to mutate so that various properties of the graph (like graph diameter) are maintained?
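For the curious, here is a toy implementation of these mutation operators over an undirected social graph stored as an adjacency dict.  The operator names follow the discussion above; the implementation details are my own.

```python
# Toy mutation operators on an undirected graph {agent: set_of_friends}.

import random

def swap(graph, a, b):
    """Exchange the positions of agents a and b (each adopts the other's
    friendships). Works for stranger swap, friend swap, or friend-of-a-friend
    swap; only how a and b are chosen differs. Structure is unchanged."""
    relabel = {a: b, b: a}
    new = {relabel.get(n, n): {relabel.get(f, f) for f in friends}
           for n, friends in graph.items()}
    graph.clear()
    graph.update(new)

def drop_friend(graph, rng):
    """Delete a random existing friendship (changes the structure)."""
    a = rng.choice([n for n in sorted(graph) if graph[n]])
    b = rng.choice(sorted(graph[a]))
    graph[a].discard(b)
    graph[b].discard(a)

def add_friend(graph, rng):
    """Add a friendship between a random agent and any non-friend.
    Restricting candidates to friends of friends would shrink the graph
    diameter more slowly, as discussed above."""
    a = rng.choice(sorted(graph))
    candidates = sorted(set(graph) - graph[a] - {a})
    if candidates:
        b = rng.choice(candidates)
        graph[a].add(b)
        graph[b].add(a)

rng = random.Random(0)
g = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
degrees = sorted(len(f) for f in g.values())
swap(g, 1, 3)                                          # a "stranger swap"
assert sorted(len(f) for f in g.values()) == degrees   # structure preserved
drop_friend(g, rng)                   # these two do change the structure
add_friend(g, rng)
assert all(n in g[f] for n, fs in g.items() for f in fs)  # still undirected
```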

I think social graph mutations are interesting.  Do you agree?

# Welcome, Agents

I hope launching this blog on April 1st lends it a bit of whimsy, rather than being some kind of bad omen.

My plan is to blog about a range of topics, as suits my fancy.  I am very interested in multiagent systems (MAS) technology and applications, and I believe they can and should make major positive contributions to many facets of our modern world. I expect MAS to be the topic of most of my posts.  The semantic web and knowledge graph technologies are a way to efficiently represent and reason about complex data.  As a member of IBM’s Watson project (see disclaimer below), I spend quite a bit of time thinking about natural language processing (NLP).  Mathematics has always fascinated me, and I have “mental back-burners” that are constantly churning on such things.  Java and various web technologies could easily find their way into some of my posts.  Rest assured, I have no plans to post vacation or baby pictures, or meaningless updates on where I’m currently eating, drinking, or stuck in traffic.

I encourage comments of both serious and less-serious natures.

Scott Gerard (gerard at gerard.guru)

Standard disclaimer:  I am personally responsible for all content.  I am not speaking on behalf of my employer.