Bibliographic citations
This is an automatically generated citation. Modify it if you see fit.
Flores, V. (2023). Priority sampling and visual attention for self-driving car [Master's thesis, Universidad Católica San Pablo]. https://hdl.handle.net/20.500.12590/17744
Flores V. Priority sampling and visual attention for self-driving car [Master's thesis]. PE: Universidad Católica San Pablo; 2023. https://hdl.handle.net/20.500.12590/17744
@mastersthesis{renati/781969,
  title  = "Priority sampling and visual attention for self-driving car",
  author = "Flores Benites, Victor",
  school = "Universidad Católica San Pablo",
  year   = "2023"
}
Full metadata record
Advisor: Mora Colque, Rensso Victor Hugo
Author: Flores Benites, Victor
Date accessioned: 2023-09-27T13:57:42Z
Date available: 2023-09-27T13:57:42Z
Date issued: 2023
1079965
Handle: https://hdl.handle.net/20.500.12590/17744
Abstract: End-to-end methods facilitate the development of self-driving models by employing a single network that learns the human driving style from examples. However, these models face problems such as distributional shift, causal confusion, and high variance. To address these problems we propose two techniques. First, we propose the priority sampling algorithm, which biases training-set sampling toward observations that are unknown to the model. Priority sampling employs a trade-off strategy that incentivizes the training algorithm to explore the whole dataset. Our results show a reduction of the error in the control signals across all the models studied. Moreover, we show evidence that our algorithm limits overtraining on noisy training samples. As a second approach, we propose a model based on the theory of visual attention (Bundesen, 1990) that selects relevant visual information to build an optimal representation of the environment. Our model employs two visual information selection mechanisms: spatial and feature-based attention. Spatial attention selects regions whose visual encoding is similar to the contextual encoding, while feature-based attention selects disentangled features that carry information useful for routine driving. Furthermore, we encourage the model to recognize new sources of visual information by adding a bottom-up input. Results on the CoRL-2017 benchmark (Dosovitskiy et al., 2017) show that our spatial attention mechanism recognizes regions relevant to the driving task. Our model builds disentangled features with low cosine similarity but high representation similarity. Finally, we report performance improvements over traditional end-to-end models.
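As an illustration of the idea the abstract describes, a minimal sketch of error-driven priority sampling might look like the code below: batch selection is biased toward samples the model currently handles poorly, while a probability floor keeps the whole dataset reachable. This is an assumption in the spirit of prioritized sampling, not the thesis's actual algorithm; all names (PrioritySampler, alpha, eps) are hypothetical.

import numpy as np

class PrioritySampler:
    """Sketch of error-driven priority sampling (hypothetical; not the thesis's exact algorithm)."""

    def __init__(self, n_samples, alpha=0.6, eps=1e-3):
        self.alpha = alpha                  # strength of the bias toward high-error samples
        self.eps = eps                      # floor so every sample keeps a nonzero probability
        self.errors = np.ones(n_samples)    # start uniform: every observation is "unknown"

    def sample(self, batch_size, rng=None):
        # Draw indices with probability proportional to (error + eps)^alpha.
        rng = rng or np.random.default_rng()
        priorities = (self.errors + self.eps) ** self.alpha
        probs = priorities / priorities.sum()
        return rng.choice(len(self.errors), size=batch_size, p=probs)

    def update(self, indices, new_errors):
        # After a training step, record fresh per-sample losses so well-learned
        # samples are drawn less often and unfamiliar ones more often.
        self.errors[indices] = new_errors

In use, one would sample a batch of indices, compute per-sample losses on that batch, and feed them back via update(), so the sampling distribution tracks what the model has not yet learned while eps preserves exploration of the full dataset.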
Master's thesis
application/pdf
eng
Universidad Católica San Pablo
info:eu-repo/semantics/openAccess
https://creativecommons.org/licenses/by-nc/4.0/
Visual attention (es_PE)
Self-driving (es_PE)
Non-identically distributed data (es_PE)
End-to-end methods (es_PE)
Priority sampling and visual attention for self-driving car (es_PE)
info:eu-repo/semantics/masterThesis
Universidad Católica San Pablo. Departamento de Ciencia de la Computación
Computer Science
Master's degree
Master in Computer Science
PE
http://purl.org/pe-repo/ocde/ford#1.02.01
Escuela Profesional Ciencia de la Computación (es_PE)
https://purl.org/pe-repo/renati/level#maestro
42846291
https://orcid.org/0000-0003-4734-8752
71962886
611017
Ochoa Luna, Jose Eduardo
Camara Chavez, Guillermo
Chancán, Marvin
https://purl.org/pe-repo/renati/type#tesis
Private associative
info:eu-repo/semantics/publishedVersion
This item is licensed under a Creative Commons License