On some recent plane trips, I have been reading Complex Adaptive Systems by John H. Miller and Scott E. Page.

The authors discuss a number of topics we all presume we know, but that are seldom explicitly discussed.  For example, they have really nice descriptions of modeling and emergence.  I certainly can’t recall any specific sources for this material.

They talk about “computation as theory”.  Formal mathematical methods  are certainly a valuable means to understand some theories.  But some social systems must have a certain minimal level of complexity before important behavioral phenomena appear.  Those models are just too complex for formal methods.  Computation is the only way forward. This book is all about how to approach building and understanding computational (agent-based) models for social systems.

This book recaps and integrates some material from Wolfram’s A New Kind of Science.  That was also a good read.  I clearly remember reading that book a decade ago on my backyard patio, bathed in the physical light of the sun and the intellectual light of intriguing ideas.

I also thoroughly enjoyed Scott Page’s (same author) Understanding Complexity lectures which address the same topic, in an abbreviated fashion, in a video format.

The book is a particularly easy read because the examples are simple and clear, and because they use such interesting expressions and metaphors.  (For example, when talking about pixelated pictures, they have this footnote:  “For the romantic among you, assume a stained glass window”.  I loved that).  I also hope to learn a few things about better technical writing by examining the writing style of this book.

# Commitments between Social Agents

A key element of my research, and that of my research group, is commitments.  Commitments are an explicit, externally visible representation of a social contract between autonomous agents.

In systems built out of components or objects (which are not autonomous), the global designer might program a component with an externally visible rule “IF receive event A, THEN send event B”.   But, autonomous agents can’t be built that way.  First, rules violate an agent’s autonomy.  Second, there is often no single designer who can program (require) all the agents.  Societies of autonomous agents need a more flexible and natural approach.  They can’t be built from such rules, but they can be built from social commitments.

A social commitment is a directed, social contract between autonomous agents.  A commitment from a set of debtor agents to a set of creditor agents states that if an antecedent Boolean expression (or event) becomes true, the debtors commit to making the consequent Boolean expression (or event) true.   There are no restrictions on the order of the antecedent event and the consequent event, so debtors can act early if they choose.  A commitment is written

C(debtors, creditors, antecedent, consequent)

For example, I could commit to delivering a pizza to you, if you pay me $10.

C(Me, You, pay $10, deliver pizza)

A common commitment pattern is where two agents make reciprocal commitments to each other.  So you might make the reciprocal commitment to pay me $10 if I deliver the pizza to you.

C(You, Me, deliver pizza, pay $10)
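The C(debtors, creditors, antecedent, consequent) notation translates naturally into a small data structure.  Here is a minimal sketch in Python; the class and field names are my own choices for illustration, not part of any standard commitment library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    """A social commitment C(debtors, creditors, antecedent, consequent)."""
    debtors: frozenset      # agents who sign (create) the commitment
    creditors: frozenset    # agents the commitment is directed toward
    antecedent: str         # condition that, once true, detaches the commitment
    consequent: str         # condition the debtors commit to bringing about

# The pizza example and its reciprocal commitment:
pizza = Commitment(frozenset({"Me"}), frozenset({"You"}),
                   "pay $10", "deliver pizza")
reciprocal = Commitment(frozenset({"You"}), frozenset({"Me"}),
                        "deliver pizza", "pay $10")
```

Note how the reciprocal pattern simply swaps the debtor/creditor roles and exchanges the antecedent with the consequent.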

Commitments can be combined in other ways too.  We model business contracts as sets of commitments. We can also reason over commitments, but that is too deep for this post.

Commitments must be created by the debtors.  Creating a commitment is like signing the social contract.  After creating the first pizza commitment above, the debtors are conditionally committed to the creditors because the antecedent has not yet occurred. Typically, the next step is some agent (often a creditor) acts to make the antecedent true.  The commitment is now detached, and the debtors are now unconditionally committed to the creditors—the debtors should make the consequent true at some point in the future.  When the debtors make the consequent true, the commitment becomes discharged (or satisfied).

Or, debtors can transfer the commitment to another set of debtors.  Or, the creditors can release the debtors from their commitment without penalty.  These are perfectly acceptable outcomes.  Because agents are autonomous, they cannot be forced to perform the consequent.  In real systems, agents may be unwilling or unable to satisfy a commitment.  When this happens, the debtors can cancel the commitment, with a penalty, and the commitment becomes violated.  The only thing we do require is that debtors cannot sit on an unconditional commitment forever.  They must eventually satisfy, transfer, be released from, or cancel all of their unconditional commitments.
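The life cycle described above can be sketched as a small state machine.  The state and event names below follow the post (conditional, detached, discharged, released, violated); the transition table itself is an illustrative simplification, not a complete formal semantics:

```python
# (state, event) -> next state, following the commitment life cycle:
# a created commitment is conditional; the antecedent detaches it;
# the consequent discharges it; creditors may release, debtors may cancel.
LIFECYCLE = {
    ("conditional", "detach"):    "detached",    # antecedent becomes true
    ("conditional", "discharge"): "discharged",  # debtors may act early
    ("conditional", "release"):   "released",    # creditors release debtors
    ("detached",    "discharge"): "discharged",  # debtors satisfy consequent
    ("detached",    "release"):   "released",
    ("detached",    "cancel"):    "violated",    # cancellation with penalty
}

def step(state: str, event: str) -> str:
    """Advance a commitment's state; raises KeyError on an illegal move."""
    return LIFECYCLE[(state, event)]

# The happy path for the pizza commitment:
state = "conditional"        # after the debtor creates the commitment
state = step(state, "detach")     # you pay the $10
state = step(state, "discharge")  # I deliver the pizza
```

Transfer is omitted here because it changes the debtor set rather than the state; modeling it would mean producing a new commitment with the same antecedent and consequent.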

Multiagent systems are social systems of agents.  So we model many of their relationships as externally visible, social commitments.  Commitments give autonomous agents the flexibility they require to make, satisfy and cancel their commitments to other agents.