NP-Hardness and Approximation Algorithms for Constrained Entropy Maximization

Bruno R.; Vaccaro U.
2026

Abstract

Given a random variable X that takes values in a finite set X = {x1,..., xn}, and a positive number R, we consider the problem of finding a deterministic function f: X → Y that maximizes the entropy H(f(X)), while satisfying the constraint H(f(X)) ≤ R. This problem occurs in several contexts, including lossy compression with logarithmic loss and the well-known Deterministic Information Bottleneck problem. We prove the NP-hardness of finding the optimal solution to the above maximization problem. On the positive side, we design approximation algorithms that achieve improved approximation factors compared to existing methods in the literature.
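To make the problem concrete: a deterministic map f induces a partition of the support of X, and the distribution of f(X) is obtained by summing the probabilities inside each block. A minimal brute-force sketch (not the paper's approximation algorithm — the paper proves that exact optimization is NP-hard, and exhaustive search over all partitions takes Bell-number time) is:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, ignoring zero entries."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def partitions(items):
    """Yield every set partition of a list (Bell-number many blocks lists)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing block in turn...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ...or into a new block of its own
        yield [[first]] + part

def best_deterministic_map(p, R):
    """Exhaustive search: among all deterministic f (i.e. all partitions of the
    support of X with pmf p), return the partition maximizing H(f(X)) subject
    to H(f(X)) <= R, together with the achieved entropy."""
    best, best_h = None, -1.0
    for part in partitions(list(range(len(p)))):
        h = entropy([sum(p[i] for i in block) for block in part])
        if h <= R and h > best_h:
            best, best_h = part, h
    return best, best_h

# Example (hypothetical pmf): with p = (0.4, 0.3, 0.2, 0.1) and R = 1.5,
# merging the symbols of mass 0.4 and 0.1 gives the distribution
# (0.5, 0.3, 0.2) with H ≈ 1.485 bits, the best value not exceeding R.
best, h = best_deterministic_map([0.4, 0.3, 0.2, 0.1], 1.5)
```

The exponential blow-up of this search is exactly what motivates the approximation algorithms the abstract refers to.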
Files attached to this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4933015
Citations
  • Scopus: 0