
Distributed Adaptive Learning Under Communication Constraints

Carpentiero M.; Matta V.
2024-01-01

Abstract

We consider a network of agents that must solve an online optimization problem from continual observation of streaming data. To this end, the agents implement a distributed cooperative strategy in which each agent is allowed to perform local exchanges of information with its neighbors. To cope with communication constraints, the exchanged information must be compressed to reduce the communication load. We propose a distributed diffusion strategy nicknamed ACTC (Adapt-Compress-Then-Combine), which implements the following three operations: adaptation, where each agent performs an individual stochastic-gradient update; compression, which leverages a recently introduced class of stochastic compression operators; and combination, where each agent combines the compressed updates received from its neighbors. The main elements of novelty of this work are as follows: i) adaptive strategies, where constant (as opposed to diminishing) step-sizes are critical to give the agents the ability to respond in real time to nonstationary variations in the observed model; ii) directed, i.e., non-symmetric, combination policies, which allow us to enhance the role played by the network topology in the learning performance; iii) global strong convexity, a condition under which the individual agents may even have non-convex cost functions. Under this demanding setting, we establish that the iterates of the ACTC strategy fluctuate around the exact global optimizer with a mean-square deviation on the order of the step-size, achieving remarkable savings in communication resources. Comparison against state-of-the-art learning strategies with compressed data highlights the benefits of the proposed solution.
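To make the three ACTC operations concrete, the following is a minimal Python sketch of one possible instantiation. Everything in it is a hypothetical stand-in rather than the paper's actual setup: the quadratic local costs, the randomly generated directed network, the `rand_sparsify` compression operator, and all parameter values are illustrative choices, with the differential-compression step (transmitting the innovation `psi - q`) following the general pattern of the stochastic compression operators mentioned in the abstract.

import numpy as np

rng = np.random.default_rng(0)

def rand_sparsify(x, keep_frac=0.25):
    """Unbiased randomized sparsifier (illustrative compression operator):
    keep each entry with probability keep_frac, rescale kept entries by
    1/keep_frac so that the compressed vector equals x in expectation."""
    mask = rng.random(x.shape) < keep_frac
    return np.where(mask, x / keep_frac, 0.0)

# Hypothetical directed network of N agents. A[l, k] is the weight agent k
# assigns to neighbor l; columns sum to 1 (left-stochastic, non-symmetric).
N, d = 10, 5
A = rng.random((N, N)) * (rng.random((N, N)) < 0.4)
np.fill_diagonal(A, 1.0)             # self-loops guarantee nonzero columns
A /= A.sum(axis=0, keepdims=True)

# Toy strongly convex local costs 0.5 w'H_k w - b_k'w, all sharing the
# global minimizer w_star (so the global cost is strongly convex too).
H = [np.diag(1.0 + rng.random(d)) for _ in range(N)]
w_star = rng.standard_normal(d)
b = [H[k] @ w_star for k in range(N)]

mu, sigma = 0.05, 0.1                # constant step-size, gradient-noise level
w = np.zeros((N, d))                 # local iterates
q = np.zeros((N, d))                 # compressed states known to the neighbors

for i in range(2000):
    # Adapt: individual stochastic-gradient step at each agent.
    grad = np.stack([H[k] @ w[k] - b[k] for k in range(N)])
    grad += sigma * rng.standard_normal(grad.shape)
    psi = w - mu * grad
    # Compress: each agent transmits a compressed innovation psi - q,
    # i.e., only the update relative to the last shared state.
    q = q + np.stack([rand_sparsify(psi[k] - q[k]) for k in range(N)])
    # Combine: each agent mixes its neighbors' compressed states.
    w = A.T @ q

print("mean-square deviation:", np.mean(np.sum((w - w_star) ** 2, axis=1)))

In this sketch the communication saving comes from sending only the sparse correction `psi - q` to a state the neighbors already hold, and the steady-state mean-square deviation printed at the end shrinks with the step-size mu, consistent with the O(step-size) fluctuation the abstract claims.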

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4859511
