
Gender recognition from face images using trainable shape and color features

Azzopardi G.; Foggia P.; Greco A.; Saggese A.; Vento M.
2018

Abstract

Gender recognition from face images is an important application and remains an open computer vision problem, even though it is trivial for the human visual system. Variations in pose, lighting, and expression are a few of the factors that make this task challenging for a computer system. Neurophysiological studies demonstrate that the human brain is able to distinguish men from women even in the absence of external cues, by analyzing the shape of specific parts of the face. In this paper, we describe an automatic procedure that combines trainable shape and color features for gender classification. In particular, the proposed method fuses edge-based and color-blob-based features by means of trainable COSFIRE filters. The former type of feature extracts information about the shape of a face, whereas the latter extracts information about shades of color in different parts of the face. We use these two sets of features to create a stacked SVM classification model and demonstrate its effectiveness on the GENDER-COLOR-FERET dataset, where we achieve an accuracy of 96.4%.
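The fusion scheme described in the abstract — two feature families feeding base classifiers whose outputs are combined by a meta-classifier — can be sketched as a stacked SVM in scikit-learn. This is a minimal illustration only: COSFIRE filter responses are not computed here (random vectors stand in for the edge-based and color-blob-based features), the feature dimensions are arbitrary, and the actual pipeline of the paper may differ in how the base outputs are combined.

```python
# Hedged sketch of a stacked-SVM fusion of two feature sets.
# Assumption: COSFIRE feature extraction is done elsewhere; random
# matrices stand in for shape (edge-based) and color-blob responses.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d_shape, d_color = 200, 50, 30          # arbitrary sizes for the sketch
X_shape = rng.normal(size=(n, d_shape))    # stand-in edge-based features
X_color = rng.normal(size=(n, d_color))    # stand-in color-blob features
y = rng.integers(0, 2, size=n)             # 0/1 gender labels (synthetic)

# Concatenate the two blocks; each base SVM sees only its own block.
X = np.hstack([X_shape, X_color])
shape_svm = make_pipeline(
    FunctionTransformer(lambda Z: Z[:, :d_shape]), SVC())
color_svm = make_pipeline(
    FunctionTransformer(lambda Z: Z[:, d_shape:]), SVC())

# Meta-SVM stacks the base classifiers' decision scores.
stack = StackingClassifier(
    estimators=[("shape", shape_svm), ("color", color_svm)],
    final_estimator=SVC(),
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print(pred.shape)
```

On random features the accuracy is of course meaningless; the point is only the structure: per-feature-family base SVMs whose outputs a final SVM combines.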
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11386/4726244
Note: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science: n/a