Adaptation in online social learning
Matta V.;
2021-01-01
Abstract
This work studies social learning under non-stationary conditions. Although designed for online inference, traditional social learning algorithms perform poorly under drifting conditions. To mitigate this drawback, we propose the Adaptive Social Learning (ASL) strategy. This strategy leverages an adaptive Bayesian update, where the degree of adaptation can be modulated by tuning a suitable step-size parameter. The learning performance of the ASL algorithm is examined by means of a steady-state analysis. It is shown that, under the regime of small step-sizes: i) consistent learning is possible; ii) an accurate prediction of the performance can be obtained in terms of a Gaussian approximation.
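The exact ASL recursion is given in the paper; as a rough illustration of the idea described in the abstract, the following minimal Python sketch assumes the commonly cited adaptive form, in which each agent discounts its previous belief by (1 - delta) and weights the fresh likelihood by delta before a geometric-average combination with its neighbors. The function names, the finite hypothesis set, and the argument layout are illustrative choices, not the paper's notation.

```python
import numpy as np

def adaptive_bayesian_update(belief, likelihoods, delta):
    """One adaptive Bayesian update for a single agent (illustrative sketch).

    belief      -- current belief vector over a finite hypothesis set
    likelihoods -- likelihood of the fresh observation under each hypothesis
    delta       -- step-size in (0, 1]; larger values react faster to drift
    """
    # Discount the old belief by (1 - delta) and the new evidence by delta
    # in the log domain; delta = 1 recovers the standard Bayesian update.
    log_psi = (1.0 - delta) * np.log(belief) + delta * np.log(likelihoods)
    psi = np.exp(log_psi - log_psi.max())  # subtract max for numerical stability
    return psi / psi.sum()

def combine(neighbor_beliefs, weights):
    """Geometric-average combination of neighbors' intermediate beliefs.

    neighbor_beliefs -- array of shape (num_neighbors, num_hypotheses)
    weights          -- nonnegative combination weights summing to one
    """
    log_mu = weights @ np.log(neighbor_beliefs)
    mu = np.exp(log_mu - log_mu.max())
    return mu / mu.sum()
```

In this sketch the step-size delta plays the role of the tuning parameter mentioned in the abstract: small values average the data stream heavily (low steady-state error, slow reaction to drift), while larger values track non-stationary conditions more quickly at the cost of noisier beliefs.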