
Sensor fusion for localization of automated vehicles

dc.contributor.author: Merfels, Christian
dc.date.accessioned: 2020-08-31T13:51:39Z
dc.date.available: 2020-08-31T13:51:39Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/20.500.11811/8563
dc.description.abstract: Automated vehicles need to know precisely where they are at all times in order to make informed driving decisions. Multiple localization systems are therefore typically installed on such vehicles to provide redundant position estimates based on different sensors. An important task is thus the fusion of these position estimates into a single estimate. The goal of this thesis is to develop a new approach that solves this sensor fusion problem in a generic way, achieving high modularity, interchangeability, and extensibility while at the same time assuring high precision, robustness, and availability.
Generic approaches to sensor fusion for localization systems face the difficulty that only general assumptions can be made about their input data. These general assumptions make it hard to model the error of each input source individually.
We approach this challenge by presenting a novel layer architecture that can be adapted modularly. The core of our generic fusion approach is an optimization method that combines all available position and odometry measurements. We formulate a sliding-window pose graph over these measurements to estimate the most probable trajectory of the vehicle. In a preprocessing sublayer, the measurements are adjusted so that common error characteristics are either reduced or can be taken into account in the estimation process. These include systematic, autocorrelated, and cross-correlated errors as well as outliers. We derive a dedicated preprocessing module for each of these error types.
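To make the pose-graph formulation concrete, the following is a minimal sketch of sliding-window fusion over position and odometry measurements. It is an illustration under simplifying assumptions (1D poses, purely linear Gaussian constraints, invented measurement values), not the implementation developed in the thesis.

import numpy as np

def solve_window(odometry, fixes, n_poses):
    """Estimate n_poses 1D positions from relative odometry
    (displacement from pose i to pose i+1, with standard deviation)
    and absolute fixes given as (pose_index, measurement, sigma)."""
    rows, rhs, weights = [], [], []
    # Odometry constraint: x[i+1] - x[i] = displacement
    for i, (disp, sigma) in enumerate(odometry):
        row = np.zeros(n_poses)
        row[i] = -1.0
        row[i + 1] = 1.0
        rows.append(row)
        rhs.append(disp)
        weights.append(1.0 / sigma)
    # Absolute position constraint: x[i] = z
    for i, z, sigma in fixes:
        row = np.zeros(n_poses)
        row[i] = 1.0
        rows.append(row)
        rhs.append(z)
        weights.append(1.0 / sigma)
    # Whiten each constraint by its inverse standard deviation and
    # solve the weighted least-squares problem in one shot.
    w = np.array(weights)
    A = np.array(rows) * w[:, None]
    b = np.array(rhs) * w
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical data: four poses, noisy unit steps, two GNSS-like fixes.
odometry = [(1.0, 0.1), (1.1, 0.1), (0.9, 0.1)]
fixes = [(0, 0.0, 0.5), (3, 3.2, 0.5)]
print(solve_window(odometry, fixes, n_poses=4))

Because every constraint in this toy problem is linear, the maximum-likelihood trajectory drops out of a single least-squares solve; a real pose graph over 2D or 3D poses would instead relinearize and iterate (e.g., Gauss-Newton).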
In this thesis, we extend the pose graph model to represent the effects of autocorrelated errors and of marginalization. We implement our approach and evaluate it on simulated data as well as data gathered with real prototype vehicles. In experiments, we show that the estimation method scales from a filtering-based to a batch solution depending on the available computational resources. In addition, we demonstrate that our preprocessing modules reduce the effects of the described error characteristics. Overall, we develop a generic fusion of position estimates, a key building block of automated vehicles.
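As a sketch of the filter-to-batch scaling mentioned above: in a sliding-window formulation, old poses are typically removed by marginalization, which converts their constraints into a Gaussian prior on the poses remaining in the window. The function below shows this generic, textbook construction in information form via the Schur complement; the names H and b are assumed notation, and this is not code from the thesis.

import numpy as np

def marginalize_oldest(H, b, k=1):
    """Marginalize the first k state variables out of a Gaussian in
    information form (H x = b) using the Schur complement. The result
    is a prior on the remaining variables, so the window can keep a
    bounded size: a window of one pose behaves like a filter, while an
    unbounded window recovers the full batch solution."""
    H_aa = H[:k, :k]
    H_ab = H[:k, k:]
    H_bb = H[k:, k:]
    b_a, b_b = b[:k], b[k:]
    H_aa_inv = np.linalg.inv(H_aa)
    # Schur complement: fold the dropped pose's information into a
    # prior on the remaining poses.
    H_prior = H_bb - H_ab.T @ H_aa_inv @ H_ab
    b_prior = b_b - H_ab.T @ H_aa_inv @ b_a
    return H_prior, b_prior

Given the whitened system A, b from the previous sketch, H = A.T @ A and the information vector A.T @ b form the input to which this applies.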
dc.format.extent: 265
dc.language.iso: eng
dc.relation.ispartofseries: Schriftenreihe / Institut für Geodäsie und Geoinformation ; 63
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Sensor fusion
dc.subject: automated vehicles
dc.subject: localization
dc.subject.ddc: 526.982 Photogrammetry
dc.title: Sensor fusion for localization of automated vehicles
dc.type: Dissertation or Habilitation
dc.publisher.name: Rheinische Friedrich-Wilhelms-Universität Bonn, Landwirtschaftliche Fakultät, IGG - Institut für Geodäsie und Geoinformation
dc.publisher.location: Bonn
dc.rights.accessRights: openAccess
dc.relation.eissn: 2699-6685
dc.relation.urn: https://nbn-resolving.org/nbn:de:hbz:5n-52761
ulbbn.pubtype: Zweitveröffentlichung (secondary publication)

