Enhancing feature compression and reconstruction in time-series and image domains with pseudo-overlap and pseudo-grouping pooling functions
Carollo M. (Conceptualization); Bardozzo F. (Investigation); Tagliaferri R. (Supervision)
2026
Abstract
Deep learning techniques are widely used for compressing and reconstructing images and time-series features, thanks to their ability to learn efficient latent representations. In this work, we introduce two families of pooling functions, pseudo-overlap and pseudo-grouping, and integrate them within an autoencoder to improve feature extraction and latent-space organization. Unlike conventional pooling methods, these functions adaptively modulate feature aggregation, better preserving structural information during compression. We evaluate the proposed pooling mechanisms on both image datasets (MNIST, FashionMNIST, CIFAR10, SVHN, CIFAR100, ImageNet) and time-series datasets (Vasicek, J-Vasicek, GMB, J-GMB, MRD, J-MRD), demonstrating enhanced reconstruction accuracy. Comparative experiments show that pseudo-overlap and pseudo-grouping outperform traditional pooling layers, especially on datasets with complex structure, highlighting the importance of carefully designing pooling operations for optimal performance.
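The abstract does not spell out how the pseudo-overlap functions are defined, but they generalize overlapping pooling, where the stride is smaller than the window so adjacent windows share elements and less structural detail is discarded. A minimal stdlib-only sketch of that baseline idea on a 1-D signal (the function name and parameters here are illustrative, not taken from the paper):

```python
def avg_pool_1d(x, window, stride):
    """Average pooling over a 1-D sequence.

    With stride == window the windows tile the input (conventional
    pooling); with stride < window the windows overlap, producing a
    denser, more structure-preserving summary of the signal.
    """
    n_out = (len(x) - window) // stride + 1
    return [sum(x[i * stride : i * stride + window]) / window
            for i in range(n_out)]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]  # toy "time series"

plain = avg_pool_1d(x, window=2, stride=2)    # non-overlapping: 4 outputs
overlap = avg_pool_1d(x, window=2, stride=1)  # overlapping: 7 outputs

print(plain)    # [0.5, 2.5, 4.5, 6.5]
print(overlap)  # [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5]
```

The overlapping variant keeps nearly twice as many aggregated values for the same window size, which is the kind of trade-off between compression and structural fidelity the proposed pooling families are designed to control adaptively.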


