Jin, Liren: Active Perception for Learning-Based Robot Mapping. - Bonn, 2026. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-87456
@phdthesis{handle:20.500.11811/13836,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-87456},
doi = {10.48565/bonndoc-758},
author = {Jin, Liren},
title = {Active Perception for Learning-Based Robot Mapping},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2026,
month = jan,
note = {Autonomous robots need to perceive and understand their environment in order to plan and carry out tasks. A fundamental aspect of this perception capability is the active control of onboard sensor viewpoints to explore the surrounding environment and acquire informative measurements relevant to the task at hand. Unlike passive perception, which follows predefined path patterns or fixed heuristics for exploration, and external supervision, which requires labor-intensive human guidance, active perception involves autonomous decision-making to determine the most valuable viewpoints for collecting measurements based on the robot's current knowledge of the environment. The key step in this process is view planning, which enables the robot to select viewpoints that maximize the expected usefulness of the acquired measurements. This capability is especially relevant in unknown environments, where no prior knowledge is available to inform view planning, and online adaptation can enhance performance for tasks such as localization, object detection, and mapping.
In this thesis, we focus on the task of robot mapping, using robots equipped with onboard sensors to construct spatial representations of their environments. Specifically, we investigate autonomous mapping in unknown environments by integrating active perception strategies. Our goal is to enable robots to actively build accurate spatial representations using sensor measurements. While previous work has studied active perception for robot mapping, many existing approaches do not focus on preserving fine-grained details of the environment, which are crucial for tasks requiring high-fidelity environmental models, including infrastructure inspection and digital twin generation. This largely stems from the use of conventional, discrete map representations, which lead to information loss during the mapping process.
We address this challenge by leveraging learning-based mapping techniques capable of representing the environment in a continuous manner. The main contribution of this thesis is the development of active perception strategies built on such mapping techniques. We explore Gaussian processes, image-based neural rendering, semantic neural radiance fields, and Gaussian splatting to achieve autonomous, high-fidelity robot mapping. At the core of our approach lies the adaptation of map representations and the design of utility formulations that assess the expected usefulness of candidate viewpoints with respect to specific mapping objectives, such as reducing map uncertainty or enhancing reconstruction fidelity, thereby enabling active perception. Because these mapping techniques differ in their characteristics, we develop a tailored active perception strategy for each to align the view planning module with the underlying map representation. To validate our contributions, we evaluate the proposed methods in simulation and real-world scenarios, demonstrating their strengths in improving mapping efficiency and quality for autonomous mapping tasks.
Overall, this thesis highlights the effectiveness of active perception for learning-based robot mapping. By coupling view planning with learning-based mapping techniques, our work takes an important step forward in the field of active perception for robot mapping, contributing to more efficient and accurate environmental modeling in unknown environments. All methods presented in this thesis have been published in peer-reviewed conference papers and journal articles, underscoring their scientific contribution to the field. To support reproducibility and further research, the corresponding source code has been made publicly available in open-access repositories.},
url = {https://hdl.handle.net/20.500.11811/13836}
}