Revista Facultad de Ingeniería Universidad de Antioquia
print version ISSN 0120-6230
Abstract
ZAPATA-ZAPATA, Gabriel Jaime; ARIAS-LONDONO, Julián David; VARGAS-BONILLA, Jesús Francisco and OROZCO-ARROYAVE, Juan Rafael. On-line signature verification using Gaussian Mixture Models and small-sample learning strategies. Rev.fac.ing.univ. Antioquia [online]. 2016, n.79, pp.86-97. ISSN 0120-6230. https://doi.org/10.17533/udea.redin.n79a09.
This paper addresses the problem of training on-line signature verification systems when the number of training samples is small, i.e., the real-world scenario in which the number of available signatures per user is limited. The paper evaluates nine classification strategies based on Gaussian Mixture Models (GMM) and the Universal Background Model (UBM) approach, which are designed to work under small-sample conditions. The GMM learning strategies include the conventional Expectation-Maximisation algorithm as well as a Bayesian approach based on variational learning. The signatures are characterised mainly in terms of the velocities and accelerations of the users' handwriting patterns. The results show that, for a genuine vs. impostor test, the GMM-UBM method keeps the accuracy above 93% even when only 20% of the samples (5 signatures) are used for training. Moreover, the combination of a fully Bayesian UBM and a Support Vector Machine (SVM), known as GMM-Supervector, achieves 99% accuracy when the number of training samples exceeds 20. On the other hand, when simulating a real environment in which no impostor signatures are available, the combination of a fully Bayesian UBM and an SVM again achieves more than 77% accuracy and a false acceptance rate below 3%, using only 20% of the samples for training.
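To make the verification scheme concrete, the following is a minimal sketch of UBM-based scoring, not the paper's actual pipeline: a GMM is fit on pooled background data (the UBM), a per-user model is fit with scikit-learn's variational `BayesianGaussianMixture` (echoing the paper's variational learning, though the paper adapts the UBM rather than fitting independently), and a test signature is scored by the average log-likelihood ratio between the two models. All data here are synthetic stand-ins for the velocity/acceleration features; means, dimensions, and component counts are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-point handwriting features
# (e.g. velocities and accelerations of the pen trajectory).
background = rng.normal(0.0, 1.0, size=(2000, 4))  # many writers -> UBM
genuine = rng.normal(1.0, 1.0, size=(50, 4))       # few samples of one user
claimed = rng.normal(1.0, 1.0, size=(10, 4))       # test signature (genuine)
forgery = rng.normal(0.0, 1.0, size=(10, 4))       # test signature (impostor)

# Universal Background Model: a GMM over the pooled background data.
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(background)

# User model: variational Bayesian GMM, which is more robust than plain EM
# when the user's training set is small (the small-sample setting studied
# in the paper; the paper additionally adapts the UBM to the user).
user = BayesianGaussianMixture(n_components=2, covariance_type="diag",
                               random_state=0)
user.fit(genuine)

def llr(signature: np.ndarray) -> float:
    """Average per-point log-likelihood ratio: user model vs. UBM.
    Accept the claimed identity when this exceeds a tuned threshold."""
    return user.score(signature) - ubm.score(signature)

# A genuine test signature should score higher than an impostor's.
decision_gap = llr(claimed) - llr(forgery)
```

In a deployed verifier the acceptance threshold on `llr` would be tuned on held-out data to trade off false acceptance against false rejection, which is how figures such as the paper's sub-3% false acceptance rate are obtained.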
Keywords: On-line signature verification; Gaussian Mixture Models; Universal Background Model; Variational GMM-Supervector; Bayesian learning.