Bibliographic citations
This is an automatically generated citation. Modify it as needed.
Flores, V. (2023). Priority sampling and visual attention for self-driving car [Master's thesis, Universidad Católica San Pablo]. https://hdl.handle.net/20.500.12590/17744
Flores V. Priority sampling and visual attention for self-driving car [Master's thesis]. PE: Universidad Católica San Pablo; 2023. https://hdl.handle.net/20.500.12590/17744
@mastersthesis{renati/781969,
  title  = "Priority sampling and visual attention for self-driving car",
  author = "Flores Benites, Victor",
  school = "Universidad Católica San Pablo",
  year   = "2023"
}
Title: Priority sampling and visual attention for self-driving car
Author(s): Flores Benites, Victor
Advisor(s): Mora Colque, Rensso Victor Hugo
Keywords: Visual attention; Self-driving; Non-identically distributed data distribution; End-to-end methods
OCDE field: http://purl.org/pe-repo/ocde/ford#1.02.01
Issue Date: 2023
Institution: Universidad Católica San Pablo
Abstract: End-to-end methods facilitate the development of self-driving models by employing a single network that learns the human driving style from examples. However, these models face problems such as distributional shift, causal confusion, and high variance. To address these problems, we propose two techniques. First, we propose the priority sampling algorithm, which biases training sampling toward observations that are unknown to the model. Priority sampling employs a trade-off strategy that incentivizes the training algorithm to explore the whole dataset. Our results show a reduction in control-signal error across all the models studied. Moreover, we show evidence that our algorithm limits overfitting to noisy training samples. As a second approach, we propose a model based on the theory of visual attention (Bundesen, 1990) that selects relevant visual information to build an optimal representation of the environment. Our model employs two visual information selection mechanisms: spatial and feature-based attention. Spatial attention selects regions whose visual encoding is similar to the contextual encoding, while feature-based attention selects disentangled features that carry information useful for routine driving. Furthermore, we encourage the model to recognize new sources of visual information by adding a bottom-up input. Results on the CoRL-2017 dataset (Dosovitskiy et al., 2017) show that our spatial attention mechanism recognizes regions relevant to the driving task. Our model builds disentangled features with low cosine similarity but high representational similarity. Finally, we report performance improvements over traditional end-to-end models.
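Note: the following is a minimal sketch of what a loss-based priority sampler could look like, for orientation only. This record does not spell out the thesis's algorithm, so the scoring rule (most recent per-sample loss as a proxy for "unknown" observations), the epsilon-style uniform mix standing in for the trade-off strategy, and all names (PrioritySampler, epsilon) are illustrative assumptions rather than the actual method.

import numpy as np

class PrioritySampler:
    def __init__(self, n_samples, epsilon=0.1, seed=0):
        # Start uniform: before training, every observation is equally "unknown".
        self.priorities = np.ones(n_samples)
        self.epsilon = epsilon                 # fraction of uniform (exploratory) mass
        self.rng = np.random.default_rng(seed)

    def sample(self, batch_size):
        # Trade-off: mix priority-proportional draws with uniform draws so that
        # low-priority samples are still revisited (whole-dataset exploration).
        probs = self.priorities / self.priorities.sum()
        n = len(self.priorities)
        mixed = (1 - self.epsilon) * probs + self.epsilon / n
        return self.rng.choice(n, size=batch_size, replace=False, p=mixed)

    def update(self, indices, losses):
        # Higher recent loss -> sample is more "unknown" -> higher priority.
        self.priorities[indices] = np.abs(losses) + 1e-6

# Usage: draw a batch, train on it, then feed the per-sample losses back.
sampler = PrioritySampler(n_samples=10_000)
idx = sampler.sample(batch_size=64)
losses = np.random.rand(64)  # stand-in for per-sample training losses
sampler.update(idx, losses)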
Link to repository: https://hdl.handle.net/20.500.12590/17744
Discipline: Computer Science
Grade or title grantor: Universidad Católica San Pablo. Departamento de Ciencia de la Computación
Grade or title: Maestro en Ciencia de la Computación (Master's in Computer Science)
Jury: Ochoa Luna, Jose Eduardo; Camara Chavez, Guillermo; Chancán, Marvin
Register date: 27-Sep-2023
This item is licensed under a Creative Commons License