NP-Hardness and Approximation Algorithms for Constrained Entropy Maximization
Bruno R.; Vaccaro U.
2026
Abstract
Given a random variable X taking values in a finite set 𝒳 = {x1, ..., xn}, and a positive number R, we consider the problem of finding a deterministic function f: 𝒳 → 𝒴 that maximizes the entropy H(f(X)) subject to the constraint H(f(X)) ≤ R. This problem arises in several contexts, including lossy compression under logarithmic loss and the well-known Deterministic Information Bottleneck problem. We prove that finding an optimal solution to this maximization problem is NP-hard. On the positive side, we design approximation algorithms that achieve better approximation factors than existing methods in the literature.
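To make the problem statement concrete, the following is a minimal brute-force sketch (not the paper's algorithm, whose point is precisely to avoid exhaustive search): every deterministic map f induces a partition of the support of X, and we look for the partition whose induced entropy H(f(X)) is as large as possible without exceeding R. All function and variable names here are illustrative assumptions.

```python
# Brute-force illustration of the constrained entropy-maximization problem:
# over all deterministic maps f (equivalently, partitions of the support of X),
# maximize H(f(X)) subject to H(f(X)) <= R. Exponential in n; for intuition only.
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a probability vector."""
    return -sum(p * log2(p) for p in dist if p > 0)

def best_constrained_entropy(p, R):
    """Search all labelings f: {0..n-1} -> {0..n-1}; return the maximum
    H(f(X)) not exceeding R, together with one labeling achieving it."""
    n = len(p)
    best_h, best_f = 0.0, tuple([0] * n)
    for f in product(range(n), repeat=n):   # every deterministic map f
        merged = {}
        for i, y in enumerate(f):           # mass of x_i goes to cell f(x_i)
            merged[y] = merged.get(y, 0.0) + p[i]
        h = entropy(merged.values())
        if h <= R and h > best_h:
            best_h, best_f = h, f
    return best_h, best_f

# Example: with p = (0.4, 0.3, 0.2, 0.1) and R = 1, merging {x1, x4} and
# {x2, x3} yields the two-cell distribution (0.5, 0.5), meeting R exactly.
h, f = best_constrained_entropy([0.4, 0.3, 0.2, 0.1], R=1.0)
```

The hardness result in the paper concerns exactly this search space: the number of partitions grows super-exponentially in n, and no polynomial-time algorithm can find the optimum unless P = NP.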


