Show simple item record

Active Perception for Learning-Based Robot Mapping

dc.contributor.advisor: Popović, Marija
dc.contributor.author: Jin, Liren
dc.date.accessioned: 2026-01-21T12:33:14Z
dc.date.available: 2026-01-21T12:33:14Z
dc.date.issued: 21.01.2026
dc.identifier.uri: https://hdl.handle.net/20.500.11811/13836
dc.description.abstract: Autonomous robots need to perceive and understand their environment in order to plan and carry out tasks. A fundamental aspect of this perception capability is the active control of onboard sensor viewpoints to explore the surrounding environment and acquire measurements relevant to the task at hand. Unlike passive perception, which follows predefined path patterns or fixed heuristics for exploration, and externally supervised perception, which requires labor-intensive human guidance, active perception involves autonomous decision-making: the robot determines the most valuable viewpoints for collecting measurements based on its current knowledge of the environment. The key step in this process is view planning, which enables the robot to select viewpoints that maximize the expected usefulness of the acquired measurements. This capability is particularly relevant in unknown environments, where no prior knowledge is available to inform view planning and where online adaptation can enhance performance for tasks such as localization, object detection, and mapping.
In this thesis, we focus on the task of robot mapping, using robots equipped with onboard sensors to construct spatial representations of their environments. Specifically, we investigate autonomous mapping in unknown environments by integrating active perception strategies. Our goal is to enable robots to actively build accurate spatial representations using sensor measurements. While previous work has studied active perception for robot mapping, many existing approaches do not focus on preserving fine-grained details of the environment, which are crucial for tasks requiring high-fidelity environmental models, including infrastructure inspection and digital twin generation. This largely stems from the use of conventional, discrete map representations, which lead to information loss during the mapping process.
We address this challenge by leveraging learning-based mapping techniques capable of representing the environment in a continuous manner. The main contribution of this thesis is the development of active perception strategies built on such mapping techniques. We explore Gaussian processes, image-based neural rendering, semantic neural radiance fields, and Gaussian splatting to achieve autonomous, high-fidelity robot mapping. At the core of our approach lies the adaptation of map representations and the design of utility formulations that assess the expected usefulness of candidate viewpoints with respect to specific mapping objectives, such as reducing map uncertainty or enhancing reconstruction fidelity, thereby enabling active perception. Because these mapping techniques differ in their characteristics, we develop a tailored active perception strategy for each one, aligning the view planning module with the underlying map representation. To validate our contributions, we evaluate the proposed methods in simulated and real-world scenarios, demonstrating their strengths in improving mapping efficiency and quality for autonomous mapping tasks.
Overall, this thesis highlights the effectiveness of active perception for learning-based robot mapping. By coupling view planning with learning-based mapping techniques, our work takes an important step forward in the field of active perception for robot mapping, contributing to more efficient and accurate environmental modeling in unknown environments. All methods presented in this thesis have been published in peer-reviewed conference papers and journal articles, underscoring their scientific contribution to the field. To support reproducibility and further research, the corresponding source code has been made publicly available in open-access repositories.
dc.language.iso: eng
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: mobile robotics
dc.subject.ddc: 004 Computer science
dc.subject.ddc: 620 Engineering and mechanical engineering
dc.title: Active Perception for Learning-Based Robot Mapping
dc.type: Dissertation or Habilitation
dc.identifier.doi: https://doi.org/10.48565/bonndoc-758
dc.publisher.name: Universitäts- und Landesbibliothek Bonn
dc.publisher.location: Bonn
dc.rights.accessRights: openAccess
dc.identifier.urn: https://nbn-resolving.org/urn:nbn:de:hbz:5-87456
dc.relation.doi: https://doi.org/10.1109/LRA.2022.3183797
dc.relation.doi: https://doi.org/10.1109/IROS55552.2023.10342226
dc.relation.doi: https://doi.org/10.1109/IROS58592.2024.10801401
dc.relation.doi: https://doi.org/10.1109/LRA.2025.3555149
ulbbn.pubtype: First publication
ulbbnediss.affiliation.name: Rheinische Friedrich-Wilhelms-Universität Bonn
ulbbnediss.affiliation.location: Bonn
ulbbnediss.thesis.level: Dissertation
ulbbnediss.dissID: 8745
ulbbnediss.date.accepted: 14.01.2026
ulbbnediss.institute: Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät: Institut für Geodäsie und Geoinformation (IGG)
ulbbnediss.fakultaet: Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät
dc.contributor.coReferee: Stachniss, Cyrill
ulbbnediss.contributor.orcid: https://orcid.org/0000-0003-2351-9270

