Estrada León, Edgar Santiago: Advanced deep learning methods for quantifying imaging biomarkers in large cohort studies. - Bonn, 2025. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-81355
@phdthesis{handle:20.500.11811/12917,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-81355},
author = {Estrada León, Edgar Santiago},
title = {Advanced deep learning methods for quantifying imaging biomarkers in large cohort studies},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2025,
month = mar,
note = {Understanding the anatomical structures of the body and brain, and their influence on human health throughout the lifespan, is of significant research and clinical interest. Tracking structural changes can provide insights into the physiological and pathological integrity of body organs during aging; such changes may therefore serve as early markers for disease detection. A common method to study the body’s internal organs in vivo is to extract morphometric estimates from non-invasive medical images, such as magnetic resonance imaging (MRI). However, before imaging data can be translated into interpretable image-derived quantitative markers for downstream analysis, the structures of interest must be segmented in the image, either manually or automatically.
Manual segmentation of images is labor-intensive and expensive when large amounts of data must be analyzed (i.e., in large cohort imaging studies); automated segmentation techniques are therefore required. Achieving accurate segmentation of body structures is challenging due to inherent complexities such as large anatomical variation across subjects, partial volume effects, inhomogeneous signals, and the presence of artifacts. Recent advances in machine learning, particularly deep learning-based methods, have enabled reliable and accurate segmentation of multiple structures (e.g., brain structures, eye vessels, and vertebrae). Despite these advances, reliable and thoroughly validated automated methods are still lacking for many anatomical structures, including abdominal adipose tissue, the olfactory bulbs, and hypothalamic sub-structures, which are of interest in the Rhineland Study, an ongoing large population-based cohort study on which this thesis is based.
In this work, we fill this gap by introducing three novel open-source deep learning-based tools for segmenting and quantifying the structures of interest. Each pipeline is tailored to a specific segmentation task, as each task presents unique characteristics and challenges. First, we introduced FatSegNet, a novel pipeline designed for the automated localization and segmentation of adipose tissue on abdominal Dixon MRI scans. The proposed pipeline improves segmentation performance compared to traditional fully convolutional neural networks (F-CNNs) by enhancing feature selectivity within the network through the incorporation of competitive learning. Furthermore, the pipeline presents a novel data-driven approach for multi-view prediction aggregation for scans with anisotropic resolution. Next, we implemented the first tool for the automated segmentation of olfactory bulb tissue in high-resolution (HiRes)/sub-millimeter T2-weighted whole-brain MR images. Our tool improves the detection of fine-grained structures, such as the olfactory bulb, through a novel design that removes redundant information and introduces self-attention layers into competitive F-CNNs, boosting the network’s attention to spatial context. Lastly, we introduced HypVINN, the first tool for automated sub-segmentation of the hypothalamus and adjacent structures on isotropic T1-weighted (T1w) and T2-weighted (T2w) brain MR images. HypVINN extends the capabilities of competitive F-CNNs by enabling input flexibility. Our proposed model builds on the concept of embedding the input modalities into a shared latent space that can be computed at inference time independently of the available modalities. Therefore, HypVINN can generate accurate segmentations of the hypothalamic structures even if only one input modality (T1w or T2w) is available (i.e., hetero-modal segmentation).
All our proposed tools were extensively validated in terms of segmentation accuracy, generalizability to in-domain and out-of-domain scenarios, test-retest reliability, and sensitivity in replicating known volumetric effects of the structures of interest. We demonstrated proof-of-concept of our pipelines in the Rhineland Study, where they have already been integrated into the study’s automated image analysis framework. To date, the tools have processed MRI scans from about 8000 participants of the Rhineland Study, demonstrating the versatility of deep learning methods in solving the desired semantic segmentation tasks in a population-based setting. Our tools are publicly available (https://github.com/Deep-MI), enabling other studies to benefit from our solutions. Our work will directly impact the research community by enabling the reliable assessment of imaging-derived phenotypes for structures that previously lacked robust automated tools for their evaluation.},
url = {https://hdl.handle.net/20.500.11811/12917}
}