
Scalable Multiple Sequence Alignment via Genetic Algorithms and localized Deep Reinforcement Learning Agents

Rocco Zaccagnino; Gerardo Benevento; Delfina Malandrino; Alessia Ture; Gianluca Zaccagnino
2025

Abstract

Multiple Sequence Alignment (MSA) is a fundamental NP-hard problem in bioinformatics, central to numerous sequence analysis tasks. Despite extensive research, existing approaches still struggle to achieve optimal alignment accuracy. Recently, Deep Reinforcement Learning (DRL) approaches have shown promise in addressing these limitations; however, their scalability remains limited by the high computational cost and training time required for large-scale MSA tasks. The proposed MSA approach integrates bio-inspired optimization with adaptive learning. A Genetic Algorithm (GA) serves as a high-level “orchestrator”, reformulating alignment as an evolution-driven optimization process. At each evolutionary step, multiple localized reinforcement learning agents generate high-fidelity sub-alignments that are then merged into a globally consistent solution. This hybridization of stochastic evolutionary search and policy-driven learning eliminates the need to retrain DRL models on extensive datasets, while simultaneously enhancing alignment precision and computational scalability. Preliminary experimental results confirm the effectiveness of the proposed approach, which achieves higher sum-of-pairs scores across multiple benchmark datasets, highlighting its robustness and competitive advantage in sequence alignment.
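The abstract evaluates alignments by their sum-of-pairs (SP) score, the standard MSA quality metric: the alignment score summed over all pairs of aligned sequences. A minimal sketch of that metric is below; the match/mismatch/gap values are illustrative assumptions, not the parameters used in the paper, which works with substitution-matrix scores in practice.

```python
# Minimal sketch of the sum-of-pairs (SP) score for an MSA.
# Assumption: unit match/mismatch/gap scores, not the paper's actual scheme.
from itertools import combinations

def sp_score(alignment, match=1, mismatch=-1, gap=-2):
    """Sum the pairwise column scores over every pair of aligned sequences."""
    score = 0
    for s1, s2 in combinations(alignment, 2):
        for a, b in zip(s1, s2):
            if a == '-' and b == '-':
                continue          # gap-gap column pairs are conventionally ignored
            elif a == '-' or b == '-':
                score += gap      # residue aligned against a gap
            elif a == b:
                score += match
            else:
                score += mismatch
    return score

# Toy example: three sequences of equal (padded) length.
print(sp_score(["AC-GT", "ACGGT", "A--GT"]))  # → 2
```

Under this scheme, a GA fitness function for the approach described above could simply rank candidate alignments by their SP score after the localized agents' sub-alignments are merged.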
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4920230
