
Adaptive Multi-Round Influence Maximization with Limited Information

Auletta V.; Carbone F.; Ferraioli D.; Vinci C.
2025

Abstract

The Influence Maximization problem is a classic and well-studied problem in Social Network Analysis. Given a social network, an information diffusion model, and a budget B, the goal is to select a set of at most B nodes (seeds) to activate so that the resulting information diffusion campaign reaches the largest expected number of nodes in the network. Recently, to better model viral marketing scenarios in which advertisers run multiple rounds of viral marketing to promote a single product, attention has turned to the adaptive and multi-round versions of the problem. Here the campaign unfolds over a horizon of T rounds, and at the beginning of each round a different set of seeds is activated, which can be selected adaptively based on the outcomes of the previous rounds. In this paper we generalize this setting to the case in which the diffusion probabilities of the links in the network are not known in advance and must be learned while the campaign is running. We study the problem through the lens of online bandit algorithms, and we propose an online learning algorithm that achieves a constant approximation of the optimal solution with only constant regret with respect to T. We also propose an alternative approach and give preliminary experimental evidence that it outperforms our online learning algorithm in terms of computational complexity while keeping the regret sublinear.
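As a point of reference for the classic, non-adaptive problem described in the abstract, the following is a minimal sketch of greedy seed selection under the Independent Cascade diffusion model, with expected spread estimated by Monte Carlo simulation. It is illustrative only: it is not the paper's algorithm, and the uniform edge probability `p`, the graph representation, and the function names are assumptions made here.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Independent Cascade simulation; returns the number of activated nodes.
    graph: dict mapping each node to its list of out-neighbours.
    Every edge is assumed to share the same activation probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to activate each neighbour.
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, budget, p=0.1, trials=200):
    """Greedily pick up to `budget` seeds, each time adding the node whose
    inclusion maximises the Monte Carlo estimate of the expected spread."""
    seeds = []
    for _ in range(budget):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds + [v], p)
                         for _ in range(trials)) / trials
            if spread > best_gain:
                best, best_gain = v, spread
        seeds.append(best)
    return seeds
```

In the adaptive multi-round setting studied in the paper, a routine like `greedy_seeds` would be re-invoked at each of the T rounds on the not-yet-activated part of the network; the paper's contribution additionally handles edge probabilities that are unknown and must be estimated online.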
ISBN: 9798400714269
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4926416
Warning: the data displayed here have not been validated by the university.
