Robot Mapping with 3D LiDARs

dc.contributor.advisor: Stachniss, Cyrill
dc.contributor.author: Vizzo, Ignacio Martin
dc.date.accessioned: 2024-05-15T10:26:23Z
dc.date.available: 2024-05-15T10:26:23Z
dc.date.issued: 15.05.2024
dc.identifier.uri: https://hdl.handle.net/20.500.11811/11536
dc.description.abstract: Robots can assist humans in many ways. They can take over tedious chores that humans prefer not to do, such as vacuuming daily to keep a house clean. They can tackle tasks where human error can be fatal, such as driving a car. They can also perform tasks we already do, but with greater efficiency and accuracy; for example, a robot that continuously scans large warehouses can provide insights for optimizing logistics. Robots can even be deployed to other planets such as Mars, where rovers traverse the terrain, collect data, and send it back to Earth, informing us about the potential viability of human habitation there.
Addressing these tasks effectively is a significant challenge due to the complexity of each component of a robotic system. A robot without prior knowledge of its environment must simultaneously build a map, determine its location within that map, analyze its surroundings, and devise an efficient route to explore the unknown space. The map often serves as the robot's foundational understanding of its surroundings: it provides a spatial representation of the area and identifies obstacles, paths, and other significant features. This knowledge is essential for the robot to navigate effectively, avoid collisions, and perform the aforementioned tasks strategically and safely.
In addition, robots exist and navigate in a three-dimensional world. Exploiting modern sensors such as 3D LiDARs therefore becomes essential for tackling real-world robot applications. By relying on 3D data, we can expand mobile robots' capabilities and potential applications, pushing the boundaries of what they can accomplish.
The central question of this thesis is: "Can we estimate what the world looks like based on sensor data from 3D LiDARs?" To answer it, we develop a comprehensive 3D mapping pipeline. We first propose a reliable mechanism to collect data from the real world. Second, we introduce a method to estimate the motion of the sensor through the world. Finally, we investigate diverse world representations for different downstream robotic tasks such as navigation, localization, and scene understanding. The ideas presented in this thesis enable mobile robots to create 3D maps on their own, allowing them to understand and navigate the world more effectively.
The work described in this thesis makes several significant contributions to robot mapping with 3D LiDARs, advancing the state of the art in both robustness and efficiency. All contributions have been validated on real-world datasets and published in peer-reviewed conference papers, workshop papers, and journal articles. Furthermore, they have been released as open-source software to promote transparency and facilitate further research.
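As a concrete illustration of the motion-estimation (registration) step mentioned in the abstract, the snippet below is a minimal sketch of rigid point cloud alignment using the classic SVD-based (Kabsch) solution, assuming NumPy and known point-to-point correspondences. It is a generic textbook building block, not the specific registration pipeline contributed in this thesis.

import numpy as np

def align(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_mean, tgt_mean = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_mean).T @ (target - tgt_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_mean - R @ src_mean
    return R, t

# Synthetic check: recover a known rigid transform from noiseless data.
rng = np.random.default_rng(0)
points = rng.standard_normal((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R_est, t_est = align(points, points @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)

In a real LiDAR odometry system the correspondences are unknown, so ICP-style methods alternate between matching nearest points and re-solving this alignment until convergence.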
dc.language.iso: eng
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Robot Mapping
dc.subject: 3D Mapping
dc.subject: Point cloud registration
dc.subject: Mobile Robots
dc.subject: 3D SLAM
dc.subject.ddc: 004 Computer science
dc.subject.ddc: 526.1 Geodesy
dc.title: Robot Mapping with 3D LiDARs
dc.type: Dissertation or Habilitation
dc.publisher.name: Universitäts- und Landesbibliothek Bonn
dc.publisher.location: Bonn
dc.rights.accessRights: openAccess
dc.identifier.urn: https://nbn-resolving.org/urn:nbn:de:hbz:5-76041
dc.relation.doi: https://doi.org/10.1109/ICRA48506.2021.9562069
dc.relation.doi: https://doi.org/10.1109/ICRA48506.2021.9561335
dc.relation.doi: https://doi.org/10.1109/LRA.2022.3187255
dc.relation.doi: https://doi.org/10.3390/s22031296
dc.relation.doi: https://doi.org/10.1109/LRA.2023.3236571
ulbbn.pubtype: Erstveröffentlichung (first publication)
ulbbnediss.affiliation.name: Rheinische Friedrich-Wilhelms-Universität Bonn
ulbbnediss.affiliation.location: Bonn
ulbbnediss.thesis.level: Dissertation
ulbbnediss.dissID: 7604
ulbbnediss.date.accepted: 01.12.2023
ulbbnediss.institute: Landwirtschaftliche Fakultät : Institut für Geodäsie und Geoinformation (IGG)
ulbbnediss.fakultaet: Landwirtschaftliche Fakultät
dc.contributor.coReferee: Grisetti, Giorgio
ulbbnediss.contributor.orcid: https://orcid.org/0000-0001-5140-6359

