Stomberg, Timo Tjaden: Improving Explanations of Convolutional Neural Networks with Applications to Land Cover Mapping. - Bonn, 2026. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-89424
@phdthesis{handle:20.500.11811/14062,
urn = {urn:nbn:de:hbz:5-89424},
doi = {10.48565/bonndoc-839},
author = {Stomberg, Timo Tjaden},
title = {Improving Explanations of Convolutional Neural Networks with Applications to Land Cover Mapping},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2026,
month = apr,
note = {Convolutional neural networks (CNNs) have revolutionized computer vision and remain a key technology in many satellite imagery applications for environmental monitoring. As these models are integrated into scientific workflows and operational monitoring, questions about their interpretability arise; however, explaining how they generate their predictions remains challenging. Attribution methods such as Grad-CAM and occlusion sensitivity are widely used to explain CNN predictions, yet they often yield differing explanations. These inconsistencies make it difficult to identify reliable explanations and undermine overall trust in machine learning models.
This thesis addresses these challenges by investigating how explanations of CNN-based models can be made more interpretable, consistent, and reliable for remote sensing applications. First, we introduce UH-Net, an interpretable-by-design architecture that incorporates a high-resolution deep layer to combine semantic richness with spatial detail in attribution maps. Second, we conduct a systematic comparison of attribution methods across different CNN architectures and layers to better understand their behavior, strengths, and limitations. Building on these insights, we propose a harmonization method that significantly reduces differences in attribution results across methods and provides more comprehensible explanations. Furthermore, we present two feature-specific attribution methods that achieve an inherent degree of harmonization by design. Finally, we apply our methods to naturalness mapping, to our knowledge among the first to do so with satellite imagery. To this end, we develop a high-quality Sentinel-2 dataset covering both protected and anthropogenic regions in Fennoscandia. Using UH-Net and harmonized attribution maps, we generate and evaluate large-scale naturalness maps and temporal changes across Fennoscandia from 2018 to 2024.
Overall, this work contributes new insights, methods, datasets, and applications for explainable machine learning in remote sensing. By improving the interpretability and consistency of CNN explanations, it advances the responsible and transparent application of machine learning in environmental science.},
url = {https://hdl.handle.net/20.500.11811/14062}
}
