Title: Computationally inexpensive parallel parking supervisor based on video processing
Advisor(s): Rodríguez Valderrama, Paúl Antonio
OCDE field: https://purl.org/pe-repo/ocde/ford#2.02.05
Issue Date: 5-Dec-2013
Institution: Pontificia Universidad Católica del Perú
Abstract: Parallel parking is, in general, a maneuver of moderate difficulty. For inexperienced drivers it can be a stressful situation that leads to errors such as stopping too far from the sidewalk or damaging another vehicle, resulting in traffic tickets that range from a simple parking violation to crash-related violations. In this work, we propose a computationally efficient approach to perform a collision-free parallel parking maneuver. The method calculates the minimum parking space needed and then an efficient path for the parallel parking. This method is computationally inexpensive in comparison with the current state of the art. Moreover, it can be used with any car, since the parameters needed to perform all computations are taken from the specifications of real cars. Preliminary results of this work were summarized in [1], presented at the 15th International IEEE Conference on Intelligent Transportation Systems. The simulation and experimental data show the effectiveness of the method, which is assessed by comparing the path followed by the driver with the path calculated by the method. Image capture of the vehicle is used to obtain the path made by the driver during the parallel parking maneuver. Furthermore, road surface marks were laid out (in a parking lot) as a visual aid for drivers performing the parallel parking maneuver. Analysis of the paths shows that the vehicles that properly followed the marks parked correctly.
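
For readers interested in the geometry behind such planners, the sketch below is not the method developed in this thesis; it is a minimal, self-contained illustration of a two-arc reverse parallel-parking path using a standard kinematic bicycle model. All parameters (wheelbase, steering limit, speed, arc duration) are arbitrary assumptions chosen for the example, not values taken from the thesis.

import math

# Illustrative parameters only; not the thesis's vehicle specifications.
WHEELBASE = 2.6                  # metres (assumed)
MAX_STEER = math.radians(35.0)   # maximum steering angle (assumed)
SPEED = -1.0                     # m/s, negative = reversing
DT = 0.01                        # integration step, seconds

def simulate_two_arc_path(arc_time):
    """Integrate the kinematic bicycle model while reversing:
    full lock one way for arc_time seconds, then full lock the
    other way for the same duration, so the heading returns to
    its initial value and the car ends up parallel again."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for steer in (+MAX_STEER, -MAX_STEER):
        t = 0.0
        while t < arc_time:
            x += SPEED * math.cos(heading) * DT
            y += SPEED * math.sin(heading) * DT
            heading += SPEED / WHEELBASE * math.tan(steer) * DT
            t += DT
            path.append((x, y))
    return path

if __name__ == "__main__":
    path = simulate_two_arc_path(arc_time=2.5)
    x_end, y_end = path[-1]
    # Longitudinal distance consumed and lateral offset gained by the
    # maneuver; the former is a rough proxy for the space length needed.
    print(f"longitudinal displacement: {abs(x_end):.2f} m")
    print(f"lateral displacement:      {abs(y_end):.2f} m")

Varying arc_time (or the steering limit) and checking the swept corners of the vehicle against obstacle boundaries is one simple way such a model can be used to estimate the minimum gap required for a one-shot maneuver.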
Discipline: Procesamiento de señales e imágenes digitales
Grade or title grantor: Pontificia Universidad Católica del Perú. Escuela de Posgrado
Grade or title: Maestro en Procesamiento de señales e imágenes digitales
Register date: 5-Dec-2013



This item is licensed under a Creative Commons License.