Stephen Cranefield: A Bayesian approach to learning norms from observation in multi-agent systems
From Tatyana Sarayeva
Abstract: The field of multi-agent systems (MAS) addresses theories, tools and techniques for constructing systems of “software agents” that interact to achieve individual and/or collective design goals. Generally, the agents are assumed to be independent and autonomous and to exist in an open and dynamic system. To facilitate orderly and efficient functioning of such systems, researchers have adopted concepts from human society, such as social norms and trust, and developed computational counterparts.
In this talk, I will discuss the problem of learning norms from observations of other agents' interactions. This is useful when the membership of a multi-agent system is dynamic and there are no centrally imposed norms, or when the norms are implicitly given by human participation in the system. The talk presents two projects that applied a Bayesian statistical approach to norm learning. The first project learned symbolic norms from a prespecified language by considering two sources of evidence for and against candidate norms: the observation of sanctioning actions, and reasoning about observed agents' goals and the plans they had available to generate their actions. An evaluation using simulations showed that the approach allows agents to generate norm-compliant behaviour between 70% and 99% of the time, without prior knowledge of the norms. The second project adapted this work to infer norms that might underlie the behaviour observed in bilateral sequences of political events extracted from the GDELT Event Database. I will also discuss current and future work in this area.
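To give a flavour of the Bayesian approach described above, the following is a minimal illustrative sketch (not the actual model from the talk): an observer maintains a posterior distribution over hypothesised norms and updates it each time it sees an action that is, or is not, sanctioned. The candidate norms, sanction probabilities, and observations are all assumptions made for the example.

```python
# Illustrative sketch of Bayesian norm identification from observed
# sanctions. All norms, probabilities, and observations are invented
# for the example; the talk's actual model also reasons about agents'
# goals and available plans.

candidate_norms = ["prohibit(overtake_left)", "prohibit(speeding)", "no_norm"]

# Start from a uniform prior over the candidate norms.
posterior = {n: 1.0 / len(candidate_norms) for n in candidate_norms}

def likelihood(norm, action, sanctioned):
    """P(observation | norm): a sanction is likely if and only if the
    action violates the hypothesised norm (illustrative numbers)."""
    violates = norm == f"prohibit({action})"
    p_sanction = 0.9 if violates else 0.05
    return p_sanction if sanctioned else 1.0 - p_sanction

def update(posterior, action, sanctioned):
    """One Bayesian update: posterior is proportional to prior x likelihood."""
    unnorm = {n: p * likelihood(n, action, sanctioned)
              for n, p in posterior.items()}
    total = sum(unnorm.values())
    return {n: p / total for n, p in unnorm.items()}

# Hypothetical observations: overtaking on the left is repeatedly
# sanctioned, while speeding goes unsanctioned.
for action, sanctioned in [("overtake_left", True),
                           ("speeding", False),
                           ("overtake_left", True)]:
    posterior = update(posterior, action, sanctioned)

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

After a few consistent observations the posterior concentrates on the norm that best explains which actions attract sanctions, which is the intuition behind using sanctioning events as evidence for candidate norms.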