
Fusion of Domain-Specific and Trainable Features for Gender Recognition from Face Images

Azzopardi, George; Greco, Antonio; Saggese, Alessia; Vento, Mario
2018

Abstract

The popularity and appeal of systems that automatically determine gender from face images are growing rapidly. This interest arises from the wide variety of applications, especially in retail and video surveillance. In recent years there have been several attempts to address this challenge, but a definitive solution has not yet been found. In this paper, we propose a novel approach that fuses domain-specific and trainable features to recognize gender from face images. In particular, we use SURF descriptors extracted from 51 facial landmarks related to the eyes, nose, and mouth as domain-specific features, and COSFIRE filters as trainable features. The proposed approach proves very robust to the well-known face variations, including differences in pose, expression, and illumination. It achieves state-of-the-art recognition rates on the GENDER-FERET (94.7%) and Labeled Faces in the Wild (99.4%) data sets, two of the most popular benchmarks for gender recognition. We further evaluate the method on a new data set acquired in real scenarios, UNISA-Public, which was recently made publicly available. It consists of 206 training (144 male, 62 female) and 200 test (139 male, 61 female) images acquired with a real-time indoor camera capturing people in regular walking motion. This experiment assesses the ability of the algorithm to handle face images extracted from videos, which are considerably more challenging than the still images in standard data sets. On this data set too, we achieve a high recognition rate of 91.5%, confirming the generalization capability of the proposed approach. Of the two types of features, the trainable COSFIRE filters are the more effective and, given their trainable character, they can be applied to any visual pattern recognition problem.
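The fusion idea described in the abstract — concatenating per-landmark local descriptors (the domain-specific part) with a separately computed trainable-feature vector (the COSFIRE-like part) — can be sketched minimally as follows. This is not the authors' implementation: the `local_descriptor` function below is a crude, hypothetical stand-in for a SURF descriptor (SURF itself is patent-encumbered and only available in non-free OpenCV builds), and the landmark coordinates and feature sizes are illustrative assumptions.

```python
import numpy as np

def local_descriptor(image, x, y, size=8):
    """Crude stand-in for a SURF descriptor (illustrative only):
    gradient magnitudes of a (2*size x 2*size) patch centred on a
    landmark, flattened and L2-normalised."""
    patch = image[y - size:y + size, x - size:x + size].astype(float)
    gy, gx = np.gradient(patch)          # row- and column-wise gradients
    mag = np.sqrt(gx ** 2 + gy ** 2).ravel()
    norm = np.linalg.norm(mag)
    return mag / norm if norm > 0 else mag

def fuse_features(image, landmarks, trainable_features):
    """Fuse domain-specific and trainable features by concatenation:
    one local descriptor per facial landmark, followed by a precomputed
    trainable-feature vector (COSFIRE responses in the paper)."""
    local = np.concatenate([local_descriptor(image, x, y)
                            for (x, y) in landmarks])
    return np.concatenate([local, trainable_features])

# Toy usage: a random "face", three assumed landmark positions, and a
# placeholder 16-dimensional trainable-feature vector.
rng = np.random.default_rng(0)
face = rng.random((64, 64))
landmarks = [(20, 20), (44, 20), (32, 44)]   # hypothetical eye/eye/mouth
cosfire_like = np.zeros(16)
fused = fuse_features(face, landmarks, cosfire_like)
```

The fused vector (here 3 landmarks x 256 values + 16 = 784 dimensions) would then be fed to a classifier such as an SVM; in the paper, 51 landmarks around the eyes, nose, and mouth are used instead of the three shown here.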

Use this identifier to cite or link to this document: http://hdl.handle.net/11386/4714199

Citations
  • Scopus: 28
  • Web of Science: 19