Full metadata record
Olivares Poggi, Cesar Augusto
Huiza Pereyra, Eric Raphael
2020-09-01T00:12:05Z
2020
2020-08-31
http://hdl.handle.net/20.500.12404/16906
People with deafness or hearing disabilities who aim to use computer-based systems rely on state-of-the-art video classification and human action recognition techniques that combine traditional movement pattern recognition with deep learning. In this work we present a pipeline for semi-automatic video annotation applied to a non-annotated Peruvian Sign Language (PSL) corpus, along with a novel method for progressive detection of PSL elements (nSDm). We produced a set of video annotations indicating sign appearances for a small set of nouns and numbers, together with a labeled PSL dataset. A model obtained by ensembling a 2D CNN, trained on movement patterns extracted from the PSL dataset with Lucas-Kanade optical flow, and an RNN with LSTM cells, trained on raw RGB frames from the same dataset, reports state-of-the-art results on sign classification over the PSL dataset in terms of AUC, precision, and recall.
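The abstract names the two streams of the ensemble: Lucas-Kanade optical flow features feeding a 2D CNN, and raw RGB frames feeding an LSTM. Below is a minimal sketch of the motion-pattern extraction step and a late-fusion rule, assuming OpenCV and NumPy; the function names, parameters, and 0.5 fusion weight are illustrative assumptions, not the thesis implementation.

```python
# A minimal sketch of Lucas-Kanade motion-pattern extraction and late fusion.
# Assumes OpenCV and NumPy; names and parameters are illustrative, not the
# authors' implementation.
import cv2
import numpy as np

def extract_lk_motion_patterns(video_path, max_corners=100):
    """Track Shi-Tomasi corners across frames with pyramidal Lucas-Kanade
    optical flow and return per-frame displacement vectors."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corners seed the tracker.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7)
    displacements = []
    while prev_pts is not None and len(prev_pts) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Estimate where each tracked point moved between consecutive frames.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       prev_pts, None)
        good = status.ravel() == 1
        if not good.any():
            break
        displacements.append((next_pts[good] - prev_pts[good]).reshape(-1, 2))
        prev_gray = gray
        prev_pts = next_pts[good].reshape(-1, 1, 2)
    cap.release()
    return displacements  # one (n_points, 2) array of (dx, dy) per frame pair

def fuse_predictions(p_cnn, p_lstm, w=0.5):
    """Hypothetical late fusion: weighted average of the class-probability
    vectors produced by the optical-flow CNN and the RGB LSTM."""
    return w * np.asarray(p_cnn) + (1.0 - w) * np.asarray(p_lstm)
```

In this reading, the CNN stream would consume the displacement patterns rendered as motion images while the LSTM stream consumes the raw frame sequence; equal fusion weights are a placeholder, and in practice the weight would be tuned on a validation split.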
Research work
eng
Pontificia Universidad Católica del Perú
info:eu-repo/semantics/openAccess
http://creativecommons.org/licenses/by/2.5/pe/
Neural networks (Computer science)
Computer algorithms
Optical pattern recognition
Talking with signs: a simple method to detect nouns and numbers in a non annotated signs language corpus
info:eu-repo/semantics/masterThesis
Pontificia Universidad Católica del Perú. Escuela de Posgrado
Informatics
Master's degree
Master in Informatics
PE
https://purl.org/pe-repo/ocde/ford#1.02.00
https://purl.org/pe-repo/renati/level#maestro
09342040
611077
http://purl.org/pe-repo/renati/type#trabajoDeInvestigacion
Private associative
This item is licensed under a Creative Commons License.