Vizzo, Ignacio Martin: Robot Mapping with 3D LiDARs. - Bonn, 2024. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online edition in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-76041
@phdthesis{handle:20.500.11811/11536,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-76041},
author = {Vizzo, Ignacio Martin},
title = {Robot Mapping with 3D LiDARs},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2024,
month = may,

note = {Robots can assist humans in a multitude of ways. For example, robots can handle tedious tasks that humans prefer not to do, such as vacuum cleaning daily to keep a house clean. They can tackle challenging problems that, when attempted by humans, might result in fatal errors, such as driving a car. Furthermore, robots can perform tasks we already do but with greater efficiency and accuracy. An example of this could be a robot that constantly scans large warehouses, providing insights on optimizing logistics worldwide. Additionally, robots can be deployed to foreign planets like Mars, where rovers can traverse the terrain, collect data, and send it back to Earth, giving us insights into the potential viability of human habitation there.
Addressing these tasks effectively is a significant challenge due to the complex nature of each component that constitutes a robotics system. A robot without prior knowledge about its environment must simultaneously create a map, determine its location within that map, analyze its surroundings, and devise an efficient route to explore an unfamiliar environment. Often, a map serves as the robot's foundational understanding of its surroundings and provides a spatial representation of the area, identifying obstacles, paths, and other significant features. This knowledge is essential for the robot to effectively navigate, avoid collisions, and perform the aforementioned tasks strategically and safely.
In addition to these challenges, robots exist and navigate within a three-dimensional world. Consequently, exploiting modern sensors, such as 3D LiDARs, becomes essential for tackling real-world robot applications. By relying on 3D data, we can expand mobile robots' capabilities and potential applications, pushing the boundaries of what they can accomplish.
The central question of this thesis is: "Can we estimate what the world looks like based on sensor data from 3D LiDARs?" To answer this question, we develop a comprehensive 3D mapping pipeline. We first propose a reliable mechanism to collect data from the real world. Second, we introduce a method to estimate the spatial movement of the sensor within the world. Finally, we investigate diverse world representations for different downstream robotic tasks, such as navigation, localization, and scene understanding. The ideas presented in this thesis empower mobile robots to create 3D maps on their own, allowing them to understand and navigate the world more effectively.
The work described in this thesis makes several significant contributions to robot mapping with 3D LiDARs, advancing the state of the art in terms of robustness and efficiency. All contributions have been validated on real-world datasets, have undergone rigorous peer review, and have been published in conference papers, workshop papers, and journal articles. Furthermore, these contributions have been made publicly available as open-source software to promote transparency and facilitate further research.},

url = {https://hdl.handle.net/20.500.11811/11536}
}

License: InCopyright