
multiagent systems

Encouraging Industry Adoption of Multiagent Systems

I recently attended the AAMAS 2015 conference in Istanbul, Turkey.  One of the key topics I wanted to explore (again) is how to encourage adoption of autonomous agent and multiagent technology by industry practitioners.  Many others were equally interested in this topic.

Milind Tambe chaired a very interesting discussion panel on that very topic the first afternoon.  This is not a case of theory vs. practice.  There was wide agreement that AAMAS is, and should be, focused on both the theoretical and practical aspects of these technologies.

Michael Wooldridge said we shouldn’t kid ourselves.  There are built-in incentives throughout the academic system that naturally encourage more theoretical work.  And AAMAS members are unlikely to significantly modify those incentives.

There was also general agreement that AAMAS should encourage more industrial participation.  But Paul Scerri, who has done quite a bit of practical application work, noted that most of the application-oriented papers he has submitted in the past were rejected.  His rejection comments included:

  • Need more detail on <blank>:  8 pages is not enough room to cover even a second level of detail on a running system.
  • Not novel:  practical instantiations of technology seldom include novel elements.  They primarily try to combine multiple elements from others into a working whole.
  • No statistical significance:  the primary test for most practical applications (particularly first-generation ones) is whether they work or not.  Statistical significance is not a critical aspect.
  • Why didn’t you use <blank>?:  again, the goal of most practical applications is to get them working reasonably well.  Detailed architectural or technology trade-off discussions are secondary.

In short, Paul weighed the time and effort he invested in his submitted papers against their expected payoff and succinctly concluded:

EU(not submit) > EU(submit)

Sarit Kraus talked about the time she gave a radio or TV (?) interview about one of her applications.  Afterwards, multiple people came up to her saying “I have a similar problem“.  Some of the problems were similar; some were not that similar, but it at least got a problem-owner talking with a potential problem-solver.  Promoting a variety of MAS applications to the public, in the hopes of eliciting a “similar problem” response, seems like a very effective way to communicate capabilities and encourage more agent adoption.  Therefore, we should spend more time describing example applications, both at AAMAS and through other forums.

I have often been correctly described as more “solution driven” (a solution seeking a problem) than “problem driven” (a problem seeking a solution).  But in technologies that require deep skills, like ours, “solution driven” approaches are likely to be more effective than “problem driven” ones.

It seems to me that a separate conference or workshop, focused specifically on industrial applications, would be beneficial.  The paper reviews should focus on practical aspects, not theoretical ones.  It should be held concurrently with AAMAS to encourage cross-pollination between theory and practice.  I know this has been tried at AAMAS before.  We have to figure out how to encourage practitioners to attend.  So … what do practitioners want from AAMAS?  I assert many want tools, techniques, and information that they can apply fairly quickly and easily.  Sure, practitioners are probably also on the lookout for a few longer-term “big ideas”.  But most of the material needs to be as readily accessible and as ready to apply as possible.  For example, downloadable toolkits and sample test datasets.

Just my two cents.  What do you think?


Evolutionary Game Theory and the Role of Third-Party Punishment in Human Cultures

Yesterday I went to a very interesting lecture at Duke University by Professor Dana Nau on Evolutionary Game Theory and the Role of Third-Party Punishment in Human Cultures.

They use evolutionary game theory to help explain the phenomenon of responsible third-party punishment (3PP).  Assume agent A harms agent B.  Then agent C is a responsible third-party punisher (of A) if C spends some of its own resources to punish A, receiving no benefit itself.  Using simulation and a lot of details I won’t consider here, they show that high strength-of-ties and low mobility foster the evolution of responsible 3PP.
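The 3PP definition can be boiled down to a tiny payoff sketch.  Everything here is my own illustration — the function names and all numeric values are assumptions, not the parameters of the actual simulations:

```python
# Minimal payoff sketch of responsible third-party punishment (3PP).
# All names and numbers are illustrative assumptions, not taken from
# the paper's simulations.

def harm(payoffs, a, b, gain=2.0, loss=3.0):
    """Agent a harms agent b: a benefits at b's expense."""
    payoffs[a] += gain
    payoffs[b] -= loss

def punish(payoffs, c, a, cost=1.0, fine=2.0):
    """Responsible 3PP: c pays a cost to fine a, gaining nothing itself."""
    payoffs[c] -= cost   # the punisher bears the cost
    payoffs[a] -= fine   # the wrongdoer is penalized

payoffs = {"A": 0.0, "B": 0.0, "C": 0.0}
harm(payoffs, "A", "B")     # A harms B
punish(payoffs, "C", "A")   # C punishes A at its own expense
```

With these illustrative numbers, punishment wipes out A’s gain from harming B, while C is strictly worse off for having acted — which is exactly why the persistence of 3PP needs an evolutionary explanation.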

While this work considers human cultures, it likely has application to systems of autonomous agents.  And, in particular, to how punishment could or should work when agents violate their commitments (commitments are a major element of my own research, which I’ll explain in upcoming posts).

In addition to the meat of this paper, I was particularly interested in the social graph that defines each agent’s neighborhood.  As their approach is evolutionary, they “mutate” (among other things) the social graph during the simulation.  They randomly pick two agents and swap them (I’ll call this a “stranger swap”).  This is like ripping the two agents out of their current groups and jamming them into completely different groups: each agent adopts the other’s friendships.  It resembles Star Trek’s teleportation, or perhaps The Prince and the Pauper.  This is radical.  Agents can move arbitrarily far, but they can be thrust into neighborhoods of the social graph that have little or nothing in common with their old ones.
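A “stranger swap” can be sketched in a few lines.  The adjacency-dict encoding (agent → set of friends) is my own illustrative choice, not the paper’s implementation:

```python
import random

# Sketch of a "stranger swap": pick two agents uniformly at random and
# exchange their positions (their entire neighborhoods) in the social
# graph, in place.  The graph is an adjacency dict: agent -> set of
# friends, assumed symmetric (an undirected graph).

def stranger_swap(graph, rng=random):
    a, b = rng.sample(sorted(graph), 2)
    # Relabel a <-> b inside every neighbor set ...
    for agent in graph:
        graph[agent] = {b if f == a else a if f == b else f
                        for f in graph[agent]}
    # ... then exchange the two neighbor sets themselves.
    graph[a], graph[b] = graph[b], graph[a]
```

The relabel-then-swap is just applying the transposition (a b) to the whole graph, so the result stays symmetric and every structural property (degrees, diameter, etc.) is preserved — only which agent sits where changes.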

A less radical mutation would be a “friend swap”, which can swap two agents only if they are currently friends (socially adjacent).  This kind of swap moves an agent to a similar neighborhood.  It is more like slow animal migration than teleportation.  Of course, since agents can only migrate one step per mutation, it takes longer for them to move as far as with “stranger swapping”, but it does a better job of incrementally changing agents’ neighborhoods.  A “friend of a friend swap” would be a middle ground between “stranger swap” and “friend swap”.
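Both restricted swaps differ from a stranger swap only in how the second agent is chosen; the relabel-and-swap itself is identical.  Again a sketch under my own adjacency-dict encoding:

```python
import random

# "Friend swap" and "friend of a friend swap": same relabel-and-swap
# as a stranger swap, but the partner is drawn from a restricted pool.
# Graph encoding (agent -> set of friends, symmetric) is illustrative.

def _swap(graph, a, b):
    for agent in graph:  # relabel a <-> b in every neighbor set
        graph[agent] = {b if f == a else a if f == b else f
                        for f in graph[agent]}
    graph[a], graph[b] = graph[b], graph[a]

def friend_swap(graph, rng=random):
    """Swap an agent with one of its current friends."""
    a = rng.choice(sorted(graph))
    if not graph[a]:
        return  # isolated agent: no friend to swap with
    _swap(graph, a, rng.choice(sorted(graph[a])))

def friend_of_friend_swap(graph, rng=random):
    """Middle ground: swap an agent with a friend of a friend."""
    a = rng.choice(sorted(graph))
    candidates = {f2 for f in graph[a] for f2 in graph[f]} - graph[a] - {a}
    if not candidates:
        return
    _swap(graph, a, rng.choice(sorted(candidates)))
```

Like the stranger swap, both moves permute agents over a fixed graph, so the structure itself never changes — only who occupies which position.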

All three of these social graph mutations leave the structure of the graph absolutely fixed.  I also wondered about a different kind of mutation that seems closer to what happens in real life.  What about the social mutations of “drop friend”, where an agent randomly deletes the relationship (edge) with one of its current friends, and “add friend”, where an agent randomly adds a relationship with some new agent?  Here the agent maintains most of its existing relationships, but the structure of the graph changes.  We’d have to carefully consider the pool from which an agent randomly chooses a new friend.  If the pool includes all agents in the graph, I’d expect the graph to collapse in on itself (constantly shrinking graph diameter) over time.  If the pool includes just friends of friends, the graph would still shrink, but more slowly.  Is there a way to mutate so that various properties of the graph (like graph diameter) are maintained?
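These structure-changing mutations are also easy to sketch.  The pool names and the adjacency-dict encoding are my own assumptions for illustration:

```python
import random

# Sketch of structure-changing mutations: "drop friend" removes an
# existing edge; "add friend" creates a new one, drawn from a candidate
# pool.  Graph encoding (agent -> set of friends, symmetric) and the
# pool names are illustrative assumptions.

def drop_friend(graph, rng=random):
    """An agent randomly severs one of its existing friendships."""
    a = rng.choice(sorted(graph))
    if graph[a]:
        b = rng.choice(sorted(graph[a]))
        graph[a].discard(b)
        graph[b].discard(a)

def add_friend(graph, rng=random, pool="friends_of_friends"):
    """An agent adds a new friendship, chosen from a candidate pool."""
    a = rng.choice(sorted(graph))
    if pool == "friends_of_friends":
        candidates = ({f2 for f in graph[a] for f2 in graph[f]}
                      - graph[a] - {a})
    else:  # "anyone": the whole graph
        candidates = set(graph) - graph[a] - {a}
    if candidates:
        b = rng.choice(sorted(candidates))
        graph[a].add(b)
        graph[b].add(a)
```

Unlike the swaps, these two moves change degrees and can change diameter, which is what makes the choice of candidate pool matter.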

I think social graph mutations are interesting.  Do you agree?