Gorgi Zadeh, Shekoufeh: Fast, Accurate and Steerable Segmentation of Drusen in Optical Coherence Tomography. - Bonn, 2020. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-59613
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-59613},
author = {{Shekoufeh Gorgi Zadeh}},
title = {Fast, Accurate and Steerable Segmentation of Drusen in Optical Coherence Tomography},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2020,
month = oct,

note = {Age-related macular degeneration (AMD) is known to be the leading cause of blindness in developed countries. Among the earliest biomarkers of AMD are drusen, which develop between the retinal pigment epithelium (RPE) layer and Bruch's membrane (BM). Drusen size, number, and location are among the most important biomarkers for staging AMD, and assessing them is essential for testing new treatments and identifying AMD risk factors. Optical coherence tomography (OCT) is a 3D imaging technique in which the retinal layer structure and the above-mentioned biomarkers are visible. Particularly in epidemiological studies, which may contain thousands of images, manual drusen quantification in OCT is infeasible; automated segmentation algorithms are therefore necessary.
In this thesis, we first propose a novel multi-scale anisotropic fourth-order diffusion (MAFOD) filter that is well suited for the stable localization of ridges and valleys. It smooths along ridges at multiple scales while sharpening them in the perpendicular direction. Compared to existing diffusion filters, MAFOD better restores the center line of elongated structures, making it a suitable filter for many applications, including the preprocessing of OCT images.
Our motivation for the next work was to create a baseline for our newly developed segmentation approach. We evaluated a state-of-the-art drusen segmentation algorithm proposed by Chen et al. [Medical Image Analysis 17.8 (2013), pp. 1058–1072] on a new dataset and found substantially worse performance than originally reported. We identified multiple factors that might explain this, including the lower axial resolution, the greater diversity of drusen load, and the simultaneous presence of other pathologies in our dataset compared to the dataset described in the original paper. This motivated us to refine the algorithm further by adding additional steps; our refined algorithm significantly improved on the original's performance. In addition, our results highlight the need for more work on the proper replication and validation of algorithms in the field of medical image analysis.
Next, we present what is, to our knowledge, the first CNN-based drusen segmentation pipeline. In particular, we evaluated three different ways of integrating a CNN into the segmentation pipeline and found that all of them outperformed the state-of-the-art method. Among the three proposed pipelines, the most successful one used a CNN trained to segment the RPE and BM layers, combined with shortest-path finding and polynomial fitting.
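Layer-based drusen quantification of the kind described above is commonly done by fitting a low-order polynomial to the segmented RPE as an estimate of the drusen-free baseline and taking the positive deviation of the actual RPE from it. A minimal sketch under that assumption (function name, polynomial degree, and the toy input are illustrative, not the thesis's exact procedure):

```python
import numpy as np

def drusen_from_rpe(rpe_height, degree=2):
    """Per-A-scan drusen height from one B-scan's RPE segmentation.

    rpe_height: 1D array of RPE heights (pixels) per A-scan.
    A polynomial fit serves as a proxy for the healthy RPE position;
    drusen appear as positive deviation from that baseline.
    """
    x = np.arange(len(rpe_height))
    coeffs = np.polyfit(x, rpe_height, degree)
    baseline = np.polyval(coeffs, x)
    # Keep only elevations above the baseline; clip the rest to zero.
    return np.clip(rpe_height - baseline, 0, None)

# Toy example: a flat RPE with one bump standing in for a druse.
rpe = np.zeros(100)
rpe[40:60] += 5.0
drusen = drusen_from_rpe(rpe)
```

In a full pipeline, the fitted baseline would be constrained further (e.g. by the BM segmentation), and the per-A-scan heights aggregated over B-scans into drusen area and volume.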
Our final major contribution is the design of an interactive visual system for refining retinal layer and drusen segmentations. The system allows the user to apply a pretrained CNN to a set of data and then correct the results. To speed up correction, we derived two uncertainty measures from the CNN that guide the user to those images where the segmentation is most likely to have failed. In addition, we designed intelligent tools that take user-specified constraints as well as 3D context information into account to propose improved segmentations. In a small user study, we observed a time reduction of 53% for layer segmentation correction and 73% for drusen segmentation correction compared to state-of-the-art correction tools.},
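The abstract does not spell out the two uncertainty measures. As an illustration only, one standard way to turn a segmentation CNN's softmax output into an image-level score for ranking likely failures is the mean per-pixel entropy (names here are hypothetical, not the thesis's actual measures):

```python
import numpy as np

def mean_entropy(prob_map, eps=1e-8):
    """Image-level uncertainty score from softmax probabilities.

    prob_map: array of shape (H, W, C), class probabilities per pixel.
    Returns the mean per-pixel entropy; higher means less confident.
    """
    p = np.clip(prob_map, eps, 1.0)
    per_pixel = -(p * np.log(p)).sum(axis=-1)  # entropy at each pixel
    return float(per_pixel.mean())

# A maximally uncertain vs. a fully confident 2-class prediction.
uniform = np.full((8, 8, 2), 0.5)
confident = np.zeros((8, 8, 2))
confident[..., 0] = 1.0
scores = [mean_entropy(uniform), mean_entropy(confident)]
# Reviewing images in descending score order surfaces likely failures first.
```

A guided-correction workflow would sort the B-scans by such a score and present the most uncertain ones to the user first.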

url = {http://hdl.handle.net/20.500.11811/8642}

The following terms of use are associated with this resource: