Fundamental Social Learning Scaling Law for Tracking Hidden Markov Models
Matta V.
2025
Abstract
This paper studies the problem of interconnected agents collaborating to track a dynamic state from partially informative observations, where the state evolves according to a slowly varying finite-state Markov chain. Although the centralized version of this problem has been extensively studied in the literature, the decentralized setting, particularly in the context of social learning, remains largely underexplored. The main result of this work establishes that adaptive social learning (ASL), a recent social learning strategy suited to non-stationary environments, achieves the same error-probability scaling law as the centralized solution in the rare-transitions regime. The theoretical findings are supported by simulations, offering valuable insights into social learning under Markovian state transitions.
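For intuition about the setting described in the abstract, the following is a minimal, self-contained simulation sketch; it is not the paper's code. It implements the standard ASL recursion in log-belief-ratio form (adaptive Bayesian update with step size delta, followed by geometric averaging over neighbors) for a network of agents tracking a two-state Markov chain with a small transition probability. The network size, step size delta, transition probability eps, and Gaussian likelihoods are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup (illustrative, not from the paper) ---
K = 10          # number of agents
delta = 0.1     # ASL adaptation step size
eps = 1e-3      # small transition probability (rare-transitions regime)
means = np.array([0.0, 1.0])  # state-dependent observation means
sigma = 2.0     # noise std dev (partially informative observations)
T = 5000        # time horizon

# Doubly stochastic combination matrix: self weight 0.5, rest uniform
A = np.full((K, K), 0.5 / (K - 1))
np.fill_diagonal(A, 0.5)

# log-belief ratios lambda_k = log( mu_k(0) / mu_k(1) ), one per agent
lam = np.zeros(K)

state = 0
errors = 0.0
for t in range(T):
    # slowly varying two-state Markov chain
    if rng.random() < eps:
        state = 1 - state
    # each agent observes x_k ~ N(means[state], sigma^2)
    x = means[state] + sigma * rng.standard_normal(K)
    # local log-likelihood ratio log L(x|0) - log L(x|1) for Gaussians
    llr = ((x - means[1]) ** 2 - (x - means[0]) ** 2) / (2 * sigma**2)
    # ASL step: adaptive update then geometric-average combination,
    # written compactly in log-ratio form
    lam = A.T @ ((1 - delta) * lam + delta * llr)
    # each agent decides via the sign of its log-belief ratio
    decisions = np.where(lam > 0, 0, 1)
    errors += np.mean(decisions != state)

print(f"empirical time-averaged error probability: {errors / T:.4f}")
```

With these illustrative values, shrinking delta lowers the steady-state error between transitions but slows recovery after a transition; the rare-transitions regime studied in the paper is precisely where this trade-off is analyzed.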


