Learning Discriminative Representations and Generative Approaches for Outdoor 3D LiDAR Data

| dc.contributor.advisor | Stachniss, Cyrill | |
| dc.contributor.author | Nunes, Lucas Franco | |
| dc.date.accessioned | 2026-02-18T09:21:55Z | |
| dc.date.available | 2026-02-18T09:21:55Z | |
| dc.date.issued | 18.02.2026 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.11811/13901 | |
| dc.description.abstract | Mobile robots have become part of our everyday lives, automating tasks that are monotonous, repetitive, or dangerous to humans, such as autonomous driving. Automating the driving task is difficult because traffic scenes are highly dynamic and constantly changing. The robot's perception system must therefore be robust enough to handle the different agents in the scene, such as pedestrians, cyclists, and other vehicles, that constantly interact, while wrong decisions can have catastrophic consequences. A key obstacle to achieving autonomous driving thus lies in the limitations of current perception systems in interpreting raw sensor data, such as data from 3D LiDAR sensors. Training these perception systems requires annotated datasets that are large in both quantity and variability, which are challenging to acquire for 3D LiDAR data. This thesis addresses three main challenges in improving such perception systems. The first challenge concerns the amount of labeled data available to train perception models, in terms of both quantity and variability. The second challenge concerns the classes defined in the annotated data. In unstructured environments, it is impossible to define and label all potentially occurring classes of objects, and the behavior of the perception system towards such unusual objects becomes unpredictable, potentially leading to dangerous situations. The third challenge concerns the domain gap between data collected with different sensors. Different LiDARs often have different laser beam patterns or sensor resolutions, leading to drastic changes in the appearance of the collected scans and limiting the reuse of annotated data across sensors. To address the first challenge, this thesis proposes self-supervised learning strategies that optimize a network to learn a latent representation able to distinguish coarse object segments in the scene without labels. For the second challenge, we propose a new class-agnostic instance segmentation method and a benchmark for evaluating instance segmentation methods in this setting. To address the third challenge, this thesis proposes a method to predict a dense and complete point cloud from a single LiDAR scan and an approach to generate novel dense point clouds with semantic annotations to be used as training data. In sum, this thesis tackles these three challenges to improve perception systems and reduce their dependency on manually annotated datasets. | en |
| dc.language.iso | eng | |
| dc.rights | In Copyright | |
| dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
| dc.subject.ddc | 004 Informatik | |
| dc.title | Learning Discriminative Representations and Generative Approaches for Outdoor 3D LiDAR Data | |
| dc.type | Dissertation or habilitation | |
| dc.publisher.name | Universitäts- und Landesbibliothek Bonn | |
| dc.publisher.location | Bonn | |
| dc.rights.accessRights | openAccess | |
| dc.identifier.urn | https://nbn-resolving.org/urn:nbn:de:hbz:5-88005 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2022.3142440 | |
| dc.relation.doi | https://doi.org/10.1109/LRA.2022.3187872 | |
| dc.relation.doi | https://doi.org/10.1109/CVPR52729.2023.00505 | |
| dc.relation.doi | https://doi.org/10.1109/CVPR52733.2024.01399 | |
| dc.relation.doi | https://doi.org/10.48550/arXiv.2503.21449 | |
| ulbbn.pubtype | First publication | |
| ulbbnediss.affiliation.name | Rheinische Friedrich-Wilhelms-Universität Bonn | |
| ulbbnediss.affiliation.location | Bonn | |
| ulbbnediss.thesis.level | Dissertation | |
| ulbbnediss.dissID | 8800 | |
| ulbbnediss.date.accepted | 10.02.2026 | |
| ulbbnediss.institute | Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät : Institut für Geodäsie und Geoinformation (IGG) | |
| ulbbnediss.fakultaet | Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät | |
| dc.contributor.coReferee | Valada, Abhinav |