Towards LiDAR-based Spatio-temporal Scene Understanding for Autonomous Vehicles

| dc.contributor.advisor | Stachniss, Cyrill | |
| dc.contributor.author | Behley, Jens | |
| dc.date.accessioned | 2026-05-15T11:04:30Z | |
| dc.date.available | 2026-05-15T11:04:30Z | |
| dc.date.issued | 15.05.2026 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.11811/14155 | |
| dc.description.abstract | Self-driving cars are expected to reduce the number of casualties caused by traffic accidents, since a machine is always attentive, can exploit various input modalities, and always obeys the traffic rules. Liberating people from driving a vehicle will also enable them to pursue more pleasant activities while getting from one place to another. A fleet of self-driving cars could also lead to fewer parked cars in cities, as cars could be shared efficiently and be available on demand. All these prospects of self-driving cars have led to increasing activity in this area of research, and many large automotive companies have invested substantially in the research and development of self-driving cars. A central aspect of self-driving cars is perception, which makes sense of the different sensory inputs available. Most self-driving car prototypes rely on a combination of different sensors, such as cameras and 3D LiDAR sensors. In particular, 3D LiDAR sensors provide accurate and dense depth measurements of the environment. Since the advent of fast 3D LiDAR sensors that can produce millions of measurements covering a 360° field of view, research on 3D LiDAR-based perception has attracted increasing attention in recent years. In this habilitation thesis, we present our contributions in the area of 3D LiDAR-based perception. We cover our work on 3D LiDAR-based spatial perception to enable an autonomous system to localize itself in the environment. We present our approaches for Simultaneous Localization and Mapping (SLAM) for building maps on-the-fly, localization using existing maps, mapping to generate detailed maps, and map compression to efficiently transfer mapping data. Furthermore, we cover our approaches for semantic interpretation of a single 3D LiDAR scan. All the presented work in semantic perception is based on our dataset, SemanticKITTI, which provides the data needed to train machine learning approaches for semantic interpretation. Furthermore, we present our work on semantic segmentation and panoptic segmentation. Additionally, we present our approach to reduce the need for labeled data. Lastly, we cover our work on unifying spatial and semantic interpretation in the area of spatio-temporal interpretation. In this part, we present our approach for moving object segmentation using a sequence of 3D LiDAR scans. We present our approach for semantic SLAM that uses semantic information to improve pose estimation. Lastly, we present our work on panoptic segmentation on a sequence of 3D LiDAR scans that provides spatio-temporal interpretation. | en |
| dc.language.iso | eng | |
| dc.rights | In Copyright | |
| dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
| dc.subject.ddc | 620 Ingenieurwissenschaften und Maschinenbau | |
| dc.title | Towards LiDAR-based Spatio-temporal Scene Understanding for Autonomous Vehicles | |
| dc.type | Dissertation oder Habilitation | |
| dc.publisher.name | Universitäts- und Landesbibliothek Bonn | |
| dc.publisher.location | Bonn | |
| dc.rights.accessRights | openAccess | |
| dc.identifier.urn | https://nbn-resolving.org/urn:nbn:de:hbz:5-90100 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2022.3140439 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2022.3142440 | |
| dc.relation.doi | https://doi.org/10.1177/02783649211006735 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2021.3061331 | |
| dc.relation.doi | https://doi.org/10.1007/s10514-021-09999-0 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2021.3093567 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2021.3059633 | |
| dc.relation.doi | https://doi.org/10.1109/CVPR46437.2021.00548 | |
| dc.relation.doi | https://doi.org/10.1109/ICRA48506.2021.9561476 | |
| dc.relation.doi | https://doi.org/10.1109/ICRA48506.2021.9561335 | |
| dc.relation.doi | https://doi.org/10.1109/ICRA48506.2021.9562069 | |
| dc.relation.doi | https://doi.org/10.15607/RSS.2020.XVI.009 | |
| dc.relation.doi | https://doi.org/10.1109/IROS45743.2020.9340769 | |
| dc.relation.doi | https://doi.org/10.1109/IROS45743.2020.9340837 | |
| dc.relation.doi | https://doi.org/10.1109/ICCV.2019.00939 | |
| dc.relation.doi | https://doi.org/10.1109/IROS40897.2019.8967704 | |
| dc.relation.doi | https://doi.org/10.1109/IROS40897.2019.8967762 | |
| dc.relation.doi | https://doi.org/10.15607/RSS.2018.XIV.016 | |
| ulbbn.pubtype | Erstveröffentlichung | |
| ulbbnediss.affiliation.name | Rheinische Friedrich-Wilhelms-Universität Bonn | |
| ulbbnediss.affiliation.location | Bonn | |
| ulbbnediss.thesis.level | Habilitation | |
| ulbbnediss.dissID | 9010 | |
| ulbbnediss.date.accepted | 07.06.2023 | |
| ulbbnediss.dissNotes.extern | In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of the University of Bonn's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink. | |
| ulbbnediss.institute | Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät : Institut für Geodäsie und Geoinformation (IGG) | |
| ulbbnediss.fakultaet | Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät | |
| dc.contributor.coReferee | McCool, Chris | |
| ulbbnediss.contributor.orcid | https://orcid.org/0000-0001-6483-0319 |