Bibliographic citations
This is an automatically generated citation. Modify it if you see fit.
Cachay, A. (2022). Agrupamiento de textos basado en la generación de Embeddings [Pontificia Universidad Católica del Perú]. http://hdl.handle.net/20.500.12404/23159
Cachay, A. Agrupamiento de textos basado en la generación de Embeddings []. PE: Pontificia Universidad Católica del Perú; 2022. http://hdl.handle.net/20.500.12404/23159
@mastersthesis{renati/527657,
  title  = "Agrupamiento de textos basado en la generación de Embeddings",
  author = "Cachay Guivin, Anthony Wainer",
  school = "Pontificia Universidad Católica del Perú",
  year   = "2022",
  url    = "http://hdl.handle.net/20.500.12404/23159"
}
Title: Agrupamiento de textos basado en la generación de Embeddings
Author(s): Cachay Guivin, Anthony Wainer
Advisor(s): Beltrán Castañón, César Armando
Keywords: Natural language processing (Computer science); Artificial intelligence; Embedded systems (Computers)
OCDE field: https://purl.org/pe-repo/ocde/ford#1.02.00
Issue Date: 19-Aug-2022
Institution: Pontificia Universidad Católica del Perú
Abstract: Nowadays, thanks to technological advances, mainly in information technology, a large amount of information is available, most of it a composition of computationally encoded signs that form a unit of meaning: texts. The variability and sheer volume of information available on the Internet make grouping reliable information a difficult task, and computational natural language processing keeps advancing to address these problems. This research studies how texts can be clustered through the generation of embeddings. In particular, it focuses on applying different supervised and unsupervised models so that efficient results can be obtained in automatic clustering tasks. Five datasets were used. The implementation of the supervised models showed that the best embedding is FastText, built with Gensim and fed to boosting-based models; for the unsupervised models, the best embedding is GloVe, used in neural network models with an autoencoder and a K-means layer.
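For readers who want a concrete picture of the two pipelines the abstract describes, the following minimal Python sketch (not the thesis code; the toy corpus, labels, and hyperparameters are illustrative assumptions) builds document vectors from Gensim FastText embeddings and feeds them both to a boosting classifier (the supervised route) and to K-means clustering (the unsupervised route; the thesis's best unsupervised setup additionally uses GloVe vectors and an autoencoder before the K-means layer, which is omitted here for brevity).

# Minimal sketch of the pipelines described in the abstract (illustrative only).
import numpy as np
from gensim.models import FastText
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.cluster import KMeans

# Toy corpus: each document is a list of tokens (real work would tokenize properly).
docs = [
    "el modelo agrupa textos similares".split(),
    "los embeddings representan palabras como vectores".split(),
    "la red neuronal aprende representaciones".split(),
    "el clasificador separa las categorias".split(),
]
labels = np.array([0, 1, 1, 0])  # illustrative labels for the supervised case

# 1) Train FastText word embeddings with Gensim (subword-aware vectors).
ft = FastText(sentences=docs, vector_size=50, window=3, min_count=1, epochs=20)

def doc_vector(tokens):
    # Average the word vectors of a document to get a fixed-size representation.
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = np.vstack([doc_vector(d) for d in docs])

# 2) Supervised route: boosting model on top of the embedding features.
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))

# 3) Unsupervised route: K-means on the same document vectors. (The thesis's best
# unsupervised results use GloVe plus an autoencoder before the K-means layer;
# that dimensionality-reduction step is not shown here.)
km = KMeans(n_clusters=2, n_init=10, random_state=0)
print("cluster assignments:", km.fit_predict(X))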
Link to repository: http://hdl.handle.net/20.500.12404/23159
Discipline: Informática con mención en Ciencias de la Computación
Grade or title grantor: Pontificia Universidad Católica del Perú. Escuela de Posgrado.
Grade or title: Maestro en Informática con mención en Ciencias de la Computación
Juror: Pineda Ancco, Ferdinand Edgardo; Beltran Castañon, Cesar Armando; Gomez Montoya, Hector Erasmo
Register date: 19-Aug-2022
This item is licensed under a Creative Commons License