Bibliographic citations
Morales, J. (2024). Generación de imágenes de acciones específicas de una persona utilizando aprendizaje profundo [Master's thesis, Pontificia Universidad Católica del Perú]. http://hdl.handle.net/20.500.12404/27570
Morales, J. Generación de imágenes de acciones específicas de una persona utilizando aprendizaje profundo [master's thesis]. PE: Pontificia Universidad Católica del Perú; 2024. http://hdl.handle.net/20.500.12404/27570
@mastersthesis{renati/528018,
  title  = "Generación de imágenes de acciones específicas de una persona utilizando aprendizaje profundo",
  author = "Morales Pariona, Jose Ulises",
  school = "Pontificia Universidad Católica del Perú",
  year   = "2024",
  url    = "http://hdl.handle.net/20.500.12404/27570"
}
Since the introduction of generative adversarial networks (GANs), research has explored image generation in many settings, such as image synthesis, image-to-image translation, video synthesis, text-to-image synthesis, and video frame prediction, focusing mostly on producing higher-resolution images and on reconstructing or predicting data. The purpose of this work is to apply GANs to another area: generating images of a person performing a specific action. Three actions were considered, corresponding to glute, abdominal, and cardio exercises. First, sequences of images of each action were downloaded from YouTube and processed, then separated into two groups: images of a single person and images of different people performing the actions. Second, the InfoGAN model was selected for image generation, with the Inception Score (IS) as the performance metric. The first group reached a maximum score of 1.28 and the second group a maximum of 1.3. In conclusion, although the maximum possible score of 3 was not reached, owing to the quantity and quality of the images, the model is able to differentiate the three types of exercise, even though in some cases the legs, arms, and head are rendered incorrectly.
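For context on the reported ceiling of 3: the Inception Score is IS = exp(E_x[KL(p(y|x) || p(y))]), and its maximum equals the number of classes, reached when every generated image is classified confidently into one class and the classes are used uniformly; with the 3 exercise classes here, that maximum is 3. The sketch below is a minimal, generic implementation of this formula; it assumes you already have an (N, 3) array of softmax probabilities p(y|x) from some action classifier (the thesis's actual classifier and pipeline are not described here, so the input is hypothetical).

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from class probabilities p(y|x).

    probs: (N, C) array, each row the softmax output of a classifier
    for one generated image. Computes exp(E_x[KL(p(y|x) || p(y))]),
    whose maximum is C (here C = 3 exercise classes).
    """
    probs = np.asarray(probs, dtype=np.float64)
    p_y = probs.mean(axis=0, keepdims=True)                  # marginal p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))   # per-element KL terms
    return float(np.exp(kl.sum(axis=1).mean()))              # exp of mean per-sample KL

# Toy check: confident, uniformly distributed predictions over 3 classes
# give the maximum score of 3, matching the ceiling cited in the abstract.
perfect = np.tile(np.eye(3), (100, 1))   # 300 one-hot rows, 100 per class
print(round(inception_score(perfect), 3))  # -> 3.0
```

Scores of 1.28 and 1.3 against this ceiling indicate the generated images are only weakly class-separable, which is consistent with the abstract's caveat about the quantity and quality of the training images.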
This item is subject to a Creative Commons license.