Zeller, Matthias: Radar-Based Scene Understanding for Autonomous Vehicles. - Bonn, 2025. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-82755
@phdthesis{handle:20.500.11811/13118,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-82755},
author = {Zeller, Matthias},
title = {Radar-Based Scene Understanding for Autonomous Vehicles},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2025,
month = jun,
note = {Autonomous vehicles have the potential to revolutionize transportation by reducing accidents caused by human errors, improving efficiency, and enhancing mobility for everyone. Dynamic real-world environments impose several challenges, including varying lighting conditions, adverse weather, and interactions with diverse road users. Therefore, the reliable perception of the surroundings under changing conditions is a fundamental task for safe navigation in dynamic real-world environments. Common perception stacks of modern autonomous driving systems comprise different sensors, such as cameras, LiDARs, and radar sensors, to leverage the advantages and mitigate the limitations of the individual modalities. Cameras and LiDARs face limitations in adverse weather conditions, including rain, fog, and snow. Therefore, radar sensors, which work under these conditions, are critical to enable safe mobility. Radar sensors provide sparse point clouds to locate and identify objects within the surroundings of the autonomous vehicle. Each point in the cloud also contains additional information, such as the Doppler velocity, which is the radial velocity of the object. Consequently, radar point clouds include relevant information to differentiate between moving and static instances within the environment. Dedicated algorithms capable of handling sparse and noisy radar point clouds are fundamental to extracting high-level information.
The main contributions of this thesis are novel and impactful approaches that process radar point clouds to improve scene understanding of autonomous vehicles in real-world environments. We focus on several tasks that contribute to the perception and understanding of the environment. We start with semantic segmentation to extract information about the corresponding classes of objects in radar point clouds. In the second step, we propose a novel approach to address moving object segmentation, which benefits from the fact that a binary classification simplifies the overall segmentation compared to general semantic segmentation. The task is well suited for radar data because of the provided Doppler velocity. Based on the reliable segmentation of moving objects, we develop a novel algorithm for instance segmentation to distinguish individual objects within a scene. The resulting segmentation of moving instances improves scene understanding and includes knowledge about the number of agents.
Since moving instance segmentation leverages the advantages of radar sensors and leads to exceptional results, the predictions are ideal for enhancing scene understanding further. We propose an algorithm to utilize moving instance predictions and reliably associate agents over time, including the tracking of distant objects that only comprise one point. We further use the predictions to predict the semantics of the individual instances. Hence, we propose a novel approach that predicts the semantic classes of the individual agents and utilizes the information to refine the instance assignment.
In sum, our approaches show superior performance on various benchmarks, including diverse environments, and provide optimized modules to enhance scene understanding. All approaches presented in this thesis were published in peer-reviewed conference papers and journal articles, contributing to the advancement of radar-based scene understanding in real-world environments.},
url = {https://hdl.handle.net/20.500.11811/13118}
}