
Towards LiDAR-based Spatio-temporal Scene Understanding for Autonomous Vehicles

dc.contributor.advisor  Stachniss, Cyrill
dc.contributor.author  Behley, Jens
dc.date.accessioned  2026-05-15T11:04:30Z
dc.date.available  2026-05-15T11:04:30Z
dc.date.issued  15.05.2026
dc.identifier.uri  https://hdl.handle.net/20.500.11811/14155
dc.description.abstract  Self-driving cars are expected to reduce the number of casualties caused by traffic accidents, since a machine is always attentive, can exploit various input modalities, and always obeys the traffic rules. Liberating people from driving will also allow them to pursue more pleasant activities while getting from one place to another. A fleet of self-driving cars could furthermore reduce the number of parked cars in cities, since cars could be shared efficiently and made available on demand. These prospects have led to increasing activity in this area of research, and many large automotive companies have invested substantially in the research and development of self-driving cars.
A central aspect of self-driving cars is perception, i.e., making sense of the available sensory inputs. Most self-driving car prototypes rely on a combination of different sensors, such as cameras and 3D LiDAR sensors. In particular, 3D LiDAR sensors provide accurate and dense depth measurements of the environment. Since the advent of fast 3D LiDAR sensors that produce millions of measurements covering a 360° field of view, research on 3D LiDAR-based perception has attracted increasing attention in recent years.
In this habilitation thesis, we present our contributions in the area of 3D LiDAR-based perception. We cover our work on 3D LiDAR-based spatial perception that enables an autonomous system to localize itself in the environment. We present our approaches for Simultaneous Localization and Mapping (SLAM) to build maps on the fly, localization using existing maps, mapping to generate detailed maps, and map compression to transfer mapping data efficiently.
Furthermore, we cover our approaches for the semantic interpretation of a single 3D LiDAR scan. All the presented work on semantic perception builds on our dataset, SemanticKITTI, which provides the data needed to train machine learning approaches for semantic interpretation. We present our work on semantic segmentation and panoptic segmentation, as well as our approach to reducing the need for labeled data.
Lastly, we cover our work on unifying spatial and semantic interpretation into spatio-temporal interpretation. In this part, we present our approach for moving object segmentation using a sequence of 3D LiDAR scans, our approach for semantic SLAM that uses semantic information to improve pose estimation, and our work on panoptic segmentation of a sequence of 3D LiDAR scans, which provides a spatio-temporal interpretation.
dc.language.iso  eng
dc.rights  In Copyright
dc.rights.uri  http://rightsstatements.org/vocab/InC/1.0/
dc.subject.ddc  620 Engineering and mechanical engineering
dc.title  Towards LiDAR-based Spatio-temporal Scene Understanding for Autonomous Vehicles
dc.type  Dissertation or Habilitation
dc.publisher.name  Universitäts- und Landesbibliothek Bonn
dc.publisher.location  Bonn
dc.rights.accessRights  openAccess
dc.identifier.urn  https://nbn-resolving.org/urn:nbn:de:hbz:5-90100
dc.relation.doi  https://doi.org/10.1109/LRA.2022.3140439
dc.relation.doi  https://doi.org/10.1109/LRA.2022.3142440
dc.relation.doi  https://doi.org/10.1177/02783649211006735
dc.relation.doi  https://doi.org/10.1109/LRA.2021.3061331
dc.relation.doi  https://doi.org/10.1007/s10514-021-09999-0
dc.relation.doi  https://doi.org/10.1109/LRA.2021.3093567
dc.relation.doi  https://doi.org/10.1109/LRA.2021.3059633
dc.relation.doi  https://doi.org/10.1109/CVPR46437.2021.00548
dc.relation.doi  https://doi.org/10.1109/ICRA48506.2021.9561476
dc.relation.doi  https://doi.org/10.1109/ICRA48506.2021.9561335
dc.relation.doi  https://doi.org/10.1109/ICRA48506.2021.9562069
dc.relation.doi  https://doi.org/10.15607/RSS.2020.XVI.009
dc.relation.doi  https://doi.org/10.1109/IROS45743.2020.9340769
dc.relation.doi  https://doi.org/10.1109/IROS45743.2020.9340837
dc.relation.doi  https://doi.org/10.1109/ICCV.2019.00939
dc.relation.doi  https://doi.org/10.1109/IROS40897.2019.8967704
dc.relation.doi  https://doi.org/10.1109/IROS40897.2019.8967762
dc.relation.doi  https://doi.org/10.15607/RSS.2018.XIV.016
ulbbn.pubtype  Erstveröffentlichung (first publication)
ulbbnediss.affiliation.name  Rheinische Friedrich-Wilhelms-Universität Bonn
ulbbnediss.affiliation.location  Bonn
ulbbnediss.thesis.level  Habilitation
ulbbnediss.dissID  9010
ulbbnediss.date.accepted  07.06.2023
ulbbnediss.dissNotes.extern  In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of the University of Bonn's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
ulbbnediss.institute  Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät : Institut für Geodäsie und Geoinformation (IGG)
ulbbnediss.fakultaet  Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät
dc.contributor.coReferee  McCool, Chris
ulbbnediss.contributor.orcid  https://orcid.org/0000-0001-6483-0319

