Adaptive Social Learning for Tracking Rare Transition Markov Chains
Matta V.
2024
Abstract
Adaptive Social Learning (ASL) enables consistent truth learning in nonstationary environments. In this framework, agents linked by a graph exchange their local beliefs with neighbors to track some underlying state of interest, which can drift over time. Previous works have examined the adaptation and learning properties of ASL without relating them to the speed of the drift. This study assesses the performance of ASL by modeling the true state as a Markov chain. We derive an asymptotic characterization of the ASL tracking performance, revealing the fundamental scaling laws that govern the rare-transition regime. We demonstrate that ASL achieves a vanishing probability of error when the average drift time of the Markov chain is larger than the adaptation time of the ASL algorithm. Simulations illustrate our theoretical findings, providing insight into ASL performance in dynamic settings.
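To make the setting concrete, below is a minimal simulation sketch of the standard ASL recursion (an adaptive Bayesian update with step size delta, followed by geometric averaging of beliefs over neighbors) tracking a two-state Markov chain with rare transitions. The ring network, Gaussian likelihood model, and all parameter values are illustrative assumptions, not the experimental setup of the paper.

```python
# Minimal sketch: ASL agents tracking a rarely switching Markov chain.
# The recursion is the standard ASL update (discount past beliefs by 1-delta,
# weight fresh evidence by delta, then geometrically average over neighbors).
# Topology, likelihoods, and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 10                # number of agents (assumed)
H = 2                 # number of hypotheses / Markov states (assumed)
delta = 0.1           # ASL adaptation step size (assumed)
p_switch = 1e-3       # rare-transition probability per step (assumed)
T = 20000             # simulation horizon
means = np.array([0.0, 1.0])   # Gaussian likelihood means per hypothesis

# Doubly stochastic combination matrix for a ring network (assumed topology).
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k - 1) % K] = 0.25
    A[k, (k + 1) % K] = 0.25

def log_likelihoods(x):
    """Per-agent log-likelihoods of observations x under each hypothesis."""
    return -0.5 * (x[:, None] - means[None, :]) ** 2   # unit-variance Gaussians

# Uniform initial beliefs, stored as log-beliefs for numerical stability.
log_mu = np.full((K, H), -np.log(H))
state = 0
errors = 0.0

for t in range(T):
    # Rare-transition Markov chain driving the true state.
    if rng.random() < p_switch:
        state = 1 - state

    # Local observations drawn under the current true hypothesis.
    x = means[state] + rng.standard_normal(K)

    # ASL adaptation step: discount past beliefs, weight fresh evidence.
    log_psi = (1 - delta) * log_mu + delta * log_likelihoods(x)

    # Combination step: geometric averaging of neighbors' intermediate beliefs.
    log_mu = A @ log_psi
    log_mu -= np.log(np.exp(log_mu).sum(axis=1, keepdims=True))  # normalize

    # Accumulate the empirical per-agent error of the max-belief decision.
    decisions = np.argmax(log_mu, axis=1)
    errors += np.mean(decisions != state)

print(f"empirical per-agent error rate: {errors / T:.4f}")
```

Under these assumptions, the error rate stays small as long as the average residence time of the chain (about 1/p_switch steps) exceeds the ASL adaptation time (on the order of 1/delta steps); pushing p_switch toward delta should visibly degrade tracking, in line with the comparison between drift time and adaptation time discussed in the abstract.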


