Machine Learning (ML) is a widely discussed research topic because of its benefits across many research fields. The most important issues in the ML training process are accuracy and speed: a suitable mathematical model is critical, and fast data processing is mandatory. Fractional Calculus is involved in a large number of important applications and, recently, many ML algorithms have based their decision and control steps on time-fractional models, in order to improve the accuracy of results when training on optimization problems and to better understand complex systems. However, the high computational cost that characterizes the numerical solution of this approach can be a problem for large-scale Machine Learning systems. High Performance Computing (HPC) is the way to address the need for real-time computation: through tensor-based parallel strategies designed for modern parallel architectures, Fractional Calculus tools become very helpful in the ML training step. In this context, we consider a time-fractional diffusion system and, after introducing a suitable modification of a numerical model that solves it, we propose a related and novel parallel implementation on GPUs (Graphics Processing Units). Experiments show the performance gain, in terms of execution time and accuracy, of our parallel implementation.
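The abstract does not specify the discretization used, but the computational cost it mentions can be illustrated with a standard scheme: the explicit L1 approximation of the Caputo time-fractional diffusion equation D_t^alpha u = K u_xx, whose history sum over all previous time levels makes each step O(n) in both memory and work. The function below is a minimal NumPy sketch of that generic scheme, not the authors' model or implementation; all names and parameters are illustrative assumptions.

```python
import numpy as np
from math import gamma

def l1_fractional_diffusion(u0, alpha, K, dx, dt, n_steps):
    """Explicit L1 scheme for D_t^alpha u = K u_xx (0 < alpha < 1),
    homogeneous Dirichlet boundaries.  Illustrative sketch only:
    the per-step history sum over all stored time levels is the
    cost that motivates tensor-based GPU strategies."""
    sigma = dt**alpha * gamma(2.0 - alpha)
    j = np.arange(n_steps + 1, dtype=float)
    # L1 weights: b_j = (j+1)^{1-alpha} - j^{1-alpha}, with b_0 = 1
    b = (j + 1.0)**(1.0 - alpha) - j**(1.0 - alpha)
    U = [u0.astype(float)]                      # full history is stored
    for n in range(1, n_steps + 1):
        u_prev = U[-1]
        lap = np.zeros_like(u_prev)
        lap[1:-1] = (u_prev[2:] - 2.0*u_prev[1:-1] + u_prev[:-2]) / dx**2
        # history term: sum_{j=1}^{n-1} b_j (u^{n-j} - u^{n-j-1})
        hist = np.zeros_like(u_prev)
        for jj in range(1, n):
            hist += b[jj] * (U[n - jj] - U[n - jj - 1])
        u_new = u_prev + sigma * K * lap - hist
        u_new[0] = u_new[-1] = 0.0              # Dirichlet boundaries
        U.append(u_new)
    return U[-1]
```

The inner loop over `jj` is a dense weighted reduction over the stored history, which maps naturally onto a tensor contraction; this is the kind of structure that GPU-oriented, tensor-based formulations exploit.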