Mechanism Design: (Ir)Rationality and Obvious Strategyproofness

Ferraioli D.; Ventre C.
2023-01-01

Abstract

Multi-agent systems (MAS) are composed of autonomous agents, each with a potentially specific goal that may differ from the objective of the system designer. MAS are the natural setting for Algorithmic Mechanism Design (AMD), which seeks to design incentive-compatible mechanisms: the core idea is that agents maximise their utility by behaving honestly, thus preventing misbehaviour and allowing the designer to optimise her goal. AMD typically assumes fully rational agents who know their complete preferences (however complex they are) and strategise optimally to steer the mechanism towards the outcomes they prefer. In real MAS, however, this assumption is too strong. Humans may interact with software agents and irrationally choose suboptimal strategies due to cognitive biases and/or limitations [1]. Software agents themselves can be “irrational” when they are badly programmed, either because the programmer misunderstood the incentive structure in place or because of computational barriers [2]. Much work in recent years has relaxed full rationality and set an agenda for designing AMD mechanisms for real MAS, where the aim is to incentivise honest behaviour when agents have some form of imperfect rationality. This paper surveys some recent work on mechanism design when agents have imperfect rationality.
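
To ground the notion of incentive compatibility mentioned in the abstract, the following minimal Python sketch (illustrative only, not taken from the paper) checks the textbook example of a strategyproof mechanism: in a second-price (Vickrey) sealed-bid auction, truthful bidding is never worse for an agent than any tested misreport, whatever the other bids are. All function names and parameters below are our own.

import random

def second_price_auction(bids):
    """Highest bidder wins and pays the second-highest bid; ties broken by index."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, price = order[0], bids[order[1]]
    return winner, price

def utility(valuation, bids, agent):
    """Quasi-linear utility of `agent`: valuation minus price if they win, else 0."""
    winner, price = second_price_auction(bids)
    return valuation - price if winner == agent else 0.0

random.seed(0)
for _ in range(1000):
    value = random.uniform(0, 100)                      # agent 0's true valuation
    others = [random.uniform(0, 100) for _ in range(3)] # fixed bids of the opponents
    truthful = utility(value, [value] + others, agent=0)
    for misreport in (0.0, value / 2, value * 2, 100.0):
        deviated = utility(value, [misreport] + others, agent=0)
        # Truth-telling is a weakly dominant strategy: no misreport does better.
        assert truthful >= deviated - 1e-9

print("Truthful bidding was never worse than any tested misreport.")

The check mirrors the dominant-strategy argument: the price an agent pays depends only on the other bids, so misreporting can change the outcome only in ways that weakly lower the agent's utility.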

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4852653