Title: Multimodal unconstrained people recognition with face and ear images using deep learning
OECD field: http://purl.org/pe-repo/ocde/ford#1.02.01
Publication date: 2023
Institution: Universidad Católica San Pablo
Abstract: Multibiometric systems combine multiple biometric methods into a single process in order to obtain a more reliable and accurate system. Combining two different biometric traits, such as the face and the ear, is advantageous and complementary when working with 2D images taken under uncontrolled conditions.
In this work, we investigate several approaches to fusing information from face and ear images in order to recognize people more accurately than with either modality alone. We leverage the maturity of the face recognition field to build, first, a truly multimodal database of ear and face images, the VGGFace-Ear dataset; second, a model that describes ear images with high generalization, the VGGEar model; and, finally, to explore fusion strategies at two levels of a common recognition pipeline: the feature level and the score level.
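As a rough, illustrative sketch of what feature-level and score-level fusion mean in such a pipeline (the variable names, embedding sizes, and the 0.5 weight below are assumptions, not the thesis implementation):

```python
import numpy as np

def l2_normalize(v):
    # Scale a descriptor to unit length so modalities are comparable.
    return v / (np.linalg.norm(v) + 1e-12)

def cosine(a, b):
    # Cosine similarity between two descriptors.
    return float(np.dot(l2_normalize(a), l2_normalize(b)))

# Hypothetical 512-D embeddings from a face model and an ear model.
face_probe, ear_probe = np.random.rand(512), np.random.rand(512)
face_gallery, ear_gallery = np.random.rand(512), np.random.rand(512)

# Feature-level fusion: concatenate the normalized descriptors and
# match the fused vector directly.
fused_probe = np.concatenate([l2_normalize(face_probe), l2_normalize(ear_probe)])
fused_gallery = np.concatenate([l2_normalize(face_gallery), l2_normalize(ear_gallery)])
feature_level_score = cosine(fused_probe, fused_gallery)

# Score-level fusion: compute one similarity per modality and combine
# the scores, here with an equal-weight sum (weight is an assumption).
w = 0.5
score_level_score = w * cosine(face_probe, face_gallery) + (1 - w) * cosine(ear_probe, ear_gallery)
```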
Experiments on the UERC dataset show, first, an improvement of around 7% over state-of-the-art methods in the ear recognition field. Second, fusing information from face and ear images increases the Rank-1 recognition rate from 79% for unimodal face recognition and 82% for unimodal ear recognition to 94%.
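For reference, Rank-1 recognition counts a probe as correct when its single most similar gallery entry has the same identity. A minimal sketch of the computation, assuming a precomputed probe-vs-gallery similarity matrix (the toy data below is illustrative only):

```python
import numpy as np

def rank1_accuracy(similarity, probe_labels, gallery_labels):
    # Fraction of probes whose top-scoring gallery entry shares their label.
    best = np.argmax(similarity, axis=1)
    return float(np.mean(gallery_labels[best] == probe_labels))

rng = np.random.default_rng(0)
sim = rng.random((5, 10))                 # 5 probes scored against 10 gallery entries
probes = np.array([0, 1, 2, 3, 4])        # hypothetical identity labels
gallery = np.arange(10)
print(rank1_accuracy(sim, probes, gallery))
```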
Academic/professional discipline: Computer Science
Degree-granting institution: Universidad Católica San Pablo. Departamento de Ciencia de la Computación
Degree or title: Master in Computer Science
Committee: Ochoa Luna, José Eduardo; Mora Colque, Rensso Victor Hugo; Cayllahua Cahuina, Edward Jorge Yuri; Menotti, David
Registration date: 15-Nov-2023