Including Measurement Uncertainty to Improve the Reliability of Classification ANN
Carratù M.; Gallo V.; Laino V.; Liguori C.; Pietrosanto A.
2024-01-01
Abstract
The widespread application of Artificial Neural Networks in fields that can be potentially life-threatening has raised questions about the quality and reliability of their results. Consequently, it has become increasingly important to investigate the uncertainty contribution affecting the outputs of such systems. Their use for classification problems typically lacks a thorough evaluation of the quality of the results, revealing significant shortcomings in how these methodologies handle uncertainty. These limitations highlight the need for more comprehensive and accurate methods of estimating uncertainty in classifiers, and for a robust, accurate, and repeatable methodology. Despite these obstacles, some existing techniques provide valuable methodological insights, enabling a more comprehensive and general treatment of uncertainty in neural networks for recognition and classification applications. Adhering to the principles outlined in the ISO GUM standard and applying an appropriate linearization of the model for direct processing, the authors propose to evaluate uncertainty analytically. The proposed approach therefore allows the network output to be assessed by quantifying its reliability.
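To make the underlying idea concrete, the sketch below illustrates a generic GUM-style, first-order propagation of input uncertainty through a small classifier: the model is linearized via its Jacobian at the measured input, and the input covariance is mapped to an output covariance (U_y = J U_x J^T). This is an assumption-laden illustration of the general technique, not the authors' specific methodology; the toy classifier, the numerical Jacobian, and all numbers are hypothetical.

```python
# Illustrative sketch (NOT the paper's exact method): GUM-style first-order
# propagation of input uncertainty through a toy classifier.
# Output covariance: U_y = J @ U_x @ J.T, with J the Jacobian of the model
# output with respect to the input, evaluated at the measured input x.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classifier(x, W, b):
    """Toy single-layer classifier: softmax(W x + b)."""
    return softmax(W @ x + b)

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x; stands in for the analytical
    linearization that a GUM-compliant treatment would use."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

# Hypothetical inputs, weights, and uncertainties, for illustration only.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                    # 4 input features, 3 classes
b = rng.normal(size=3)
x = rng.normal(size=4)                         # measured input vector
U_x = np.diag([0.01, 0.02, 0.01, 0.03]) ** 2   # input covariance (squared std. uncertainties)

f = lambda v: classifier(v, W, b)
J = numerical_jacobian(f, x)
U_y = J @ U_x @ J.T                            # propagated covariance of the class scores
scores = f(x)
u_scores = np.sqrt(np.diag(U_y))               # standard uncertainty of each class score

for k, (s, u) in enumerate(zip(scores, u_scores)):
    print(f"class {k}: score = {s:.3f} +/- {u:.3f}")
```

In this reading, the per-class standard uncertainties provide the kind of reliability indicator the abstract refers to: a predicted class whose score interval overlaps those of competing classes can be flagged as a low-confidence decision.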