Roggiolani, Gianmarco: Unsupervised Learning for In-Field Phenotyping Leveraging Domain Knowledge. - Bonn, 2026. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-87932
@phdthesis{handle:20.500.11811/13894,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-87932},
author = {Roggiolani, Gianmarco},
title = {Unsupervised Learning for In-Field Phenotyping Leveraging Domain Knowledge},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2026,
month = feb,
note = {The growing world population and the unsustainability of common farming practices are challenging our agricultural production system, which has to cope with the increased demand for food, feed, fuel, and fiber, without draining the natural resources, worsening climate change, or compromising environmental biodiversity. We need to rethink our whole farming system to increase the yield per area unit and improve the sustainability of our methods.
Robotic systems have the potential to offer a more sustainable alternative to standard practices. They can perform targeted weeding instead of uniformly spraying the whole field, thus reducing the use of agrochemicals. Robots can also continuously monitor the state of plants in the field, providing measurements that breeders and agronomists can use to develop more resilient and high-throughput crop varieties.
Robots need robust perception systems to provide accurate in-field measurements. Such perception systems are usually data-driven approaches learning from manually produced examples, also called labeled data. To correctly understand their surroundings, the perception systems need access to vast amounts of labeled data, covering all possible scenarios, i.e., different plant growths, light conditions, soil textures, and crop species. The high cost and time required to produce labeled data are the bottlenecks that limit the adoption of robotic systems.
However, in the agricultural domain, we can exploit prior knowledge about the fields’ arrangements and the plants’ characteristics to enhance the abilities of the perception systems, while simultaneously reducing the need for labeled data for data-driven approaches.
The main contribution of this thesis is a set of novel perception techniques that improve the scene understanding of robotic systems, with a focus on reducing the requirements for manually annotated data. First, we present an approach to identify weeds, crops, single plants, and single leaves using manually annotated data. Then, we show how to exploit knowledge about the agricultural environment to boost the performance of all tasks without additional annotated data. Our third contribution is an approach to distinguish crops from weeds, and our fourth contribution is an approach to identify single plants in the fields; neither requires annotated data. As the fifth contribution, we present how to improve single-leaf segmentation in 3D by exploiting the plant structure. Finally, we present our approach to generate realistic 3D leaves of known lengths and widths to enhance the capabilities of existing trait estimation approaches.
In summary, this thesis contributes to the interpretation of agricultural data for different tasks, from the semantic understanding of crops and weeds to the estimation of leaf traits, such as the width and length of the leaf blade. The computer vision approaches presented in this thesis allow for more accurate identification and measurement of crops, single plants, and single leaves with reduced requirements for manually labeled data. We exploit prior knowledge about the agricultural domain to boost the performance of existing techniques and to produce automatically annotated data to be used in data-driven approaches. We cut the cost and time required to annotate a dataset for semantic understanding to less than half, thus making a concrete step toward more efficient and robust perception systems for farming tasks.},
url = {https://hdl.handle.net/20.500.11811/13894}
}





