
Robotic Vision for Precision Intervention in Horticulture

dc.contributor.advisor: McCool, Chris
dc.contributor.author: Smitt, Claus German
dc.date.accessioned: 2025-01-20T09:42:29Z
dc.date.available: 2025-01-20T09:42:29Z
dc.date.issued: 20.01.2025
dc.identifier.uri: https://hdl.handle.net/20.500.11811/12743
dc.description.abstract: Striving towards optimal sustainable agriculture systems that address the world’s growing demand for food, precision agriculture has emerged as a key strategy. In recent years, robotic systems have gained remarkable capabilities to automate various agricultural tasks. Frequently, agricultural robots use vision-based sensors such as color (RGB) cameras coupled with advanced deep learning models to provide a fine-grained understanding of the environment. However, these robots are deployed in challenging conditions where current techniques fall short of the performance required to estimate high-precision, fine-grained plant-level information. Yet, opportunities exist to greatly enhance the quality of these vision-based approaches by employing robotic vision techniques that exploit not just the RGB camera information but also the estimated scene structure (depth) and coarse robot localization data.
This thesis explores the use of robotic vision to automate agricultural surveillance, focusing in particular on horticultural glasshouse systems. To achieve this, we develop a robotic platform called PATHoBot and demonstrate how it enables tasks such as crop monitoring, 3D panoptic fruit mapping, 4D registration, fruit volume, quantity and quality estimation, autonomous harvesting, and large-scale phenotyping in commercial glasshouses. We then demonstrate how rich robotic information, specifically relative motion and scene geometry (e.g. depth), can be fused with state-of-the-art vision deep learning approaches to make them robust to real-world challenges, yielding highly accurate crop detection, tracking, and segmentation results.
PATHoBot is a crop monitoring robot designed for commercial glasshouses, equipped with a global multi-modal camera array for on-the-fly surveillance of vertical crops and a robotic arm for proximity monitoring and intervention tasks. We first show its utility by generating 3D crop maps and improving a tracking-via-segmentation fruit counting system by exploiting the multi-modal spatial-temporal data it captures. We also propose methods that improve crop monitoring systems by explicitly incorporating spatial-temporal information: by combining scene geometry with the robot’s motion, we estimate how extracted vision features move spatially over time, allowing us to directly incorporate these propagated features into a DNN segmentation model. These approaches achieve improved performance and robustness to real-world conditions across the horticulture and arable farming domains.
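
To make the feature-propagation step concrete, the following is a minimal illustrative sketch (Python with NumPy) of warping a previous frame's feature map into the current frame using depth and the robot's relative motion. It assumes a pinhole camera and uses simple nearest-pixel splatting; the function name and interface are hypothetical and this is not the thesis implementation.

import numpy as np

def propagate_features(prev_feats, prev_depth, K, T_prev_to_curr):
    """Warp a previous frame's feature map into the current frame.

    prev_feats:      (H, W, C) features extracted from the previous frame
    prev_depth:      (H, W) previous-frame depth in metres (0 = missing)
    K:               (3, 3) pinhole camera intrinsics
    T_prev_to_curr:  (4, 4) relative camera motion from robot odometry

    Returns an (H, W, C) feature map aligned with the current frame,
    zero-filled where no previous feature projects.
    """
    H, W, C = prev_feats.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Back-project previous pixels to 3D points using depth and intrinsics.
    z = prev_depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)

    # Move the points by the relative robot/camera motion.
    pts_curr = (T_prev_to_curr @ pts.T).T[:, :3]

    # Project the moved points into the current image plane.
    proj = (K @ pts_curr.T).T
    z_c = np.maximum(proj[:, 2], 1e-6)      # guard against division by zero
    u_c = np.round(proj[:, 0] / z_c).astype(int)
    v_c = np.round(proj[:, 1] / z_c).astype(int)

    # Splat previous features onto their new pixel locations.
    warped = np.zeros_like(prev_feats)
    valid = (z.reshape(-1) > 0) & (pts_curr[:, 2] > 0) \
            & (u_c >= 0) & (u_c < W) & (v_c >= 0) & (v_c < H)
    warped[v_c[valid], u_c[valid]] = prev_feats.reshape(-1, C)[valid]
    return warped

In a recurrent segmentation model, such warped features could then be concatenated with, or carried as a hidden state alongside, the current frame's features, so that temporal context remains spatially aligned with the scene.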
Finally, we introduce PAg-NeRF, a 3D semantic scene understanding model capable of identifying individual fruits and tracking them through strong occlusions, addressing key challenges in agricultural monitoring while achieving strong performance. This system takes object detections and jointly resolves scene geometry, robot pose, object instances, and even object identities (tracking) in a single approach, providing a spatially and temporally consistent, deep-learnt understanding of a field. Our contributions show how robot spatial-temporal information and multi-modal data can be exploited to improve the performance of DNN crop monitoring systems and expand their capabilities, in particular for horticulture domains. This has a direct impact on crop decision-making and automated intervention tasks, ultimately advancing sustainable food production practices. The approaches discussed in this thesis have associated peer-reviewed publications listed below. Furthermore, our paper on explicitly incorporating spatial-temporal information into recurrent models received the best AgriRobotics paper award at the IEEE/RSJ International Conference on Intelligent Robots and Systems 2022 (IROS 2022). Finally, the datasets and implementations of our novel monitoring methods have been publicly released to enable further research.
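
As a rough illustration of how a radiance-field-style model can couple geometry with per-fruit semantics, the sketch below volume-renders color, an instance label, and depth along a single camera ray. It is a generic, simplified example using the standard NeRF rendering equations, not the PAg-NeRF architecture; `field` stands in for whatever learned model is queried, and pose optimization and identity tracking are omitted.

import numpy as np

def render_ray(field, origin, direction, near=0.2, far=3.0, n_samples=64):
    """Volume-render color and per-instance logits along one camera ray.

    field(points) is any learned function mapping (N, 3) points to
    densities (N,), colors (N, 3), and instance logits (N, K); here it
    is a placeholder for a trained model.
    """
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                # (N, 3) sample points
    sigma, rgb, inst_logits = field(pts)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    w = trans * alpha                                    # render weights

    color = (w[:, None] * rgb).sum(axis=0)               # rendered pixel color
    inst = (w[:, None] * inst_logits).sum(axis=0)        # rendered instance logits
    depth = (w * t).sum()                                # expected ray depth
    return color, inst.argmax(), depth

Rendering per-sample instance logits with the same weights used for color ties object identity to the recovered geometry, which is one way a single model can keep fruit identities consistent with the 3D scene across views.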
dc.language.iso: eng
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject.ddc: 004 Computer science
dc.subject.ddc: 620 Engineering and mechanical engineering
dc.title: Robotic Vision for Precision Intervention in Horticulture
dc.type: Dissertation or Habilitation
dc.publisher.name: Universitäts- und Landesbibliothek Bonn
dc.publisher.location: Bonn
dc.rights.accessRights: openAccess
dc.identifier.urn: https://nbn-resolving.org/urn:nbn:de:hbz:5-80638
ulbbn.pubtype: First publication (Erstveröffentlichung)
ulbbnediss.affiliation.name: Rheinische Friedrich-Wilhelms-Universität Bonn
ulbbnediss.affiliation.location: Bonn
ulbbnediss.thesis.level: Dissertation
ulbbnediss.dissID: 8063
ulbbnediss.date.accepted: 05.06.2024
ulbbnediss.institute: Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät : Institut für Landtechnik (ILT)
ulbbnediss.fakultaet: Agrar-, Ernährungs- und Ingenieurwissenschaftliche Fakultät
dc.contributor.coReferee: Roscher, Ribana
ulbbnediss.contributor.orcid: https://orcid.org/0000-0002-7267-7139

