Thesis defense

Fusion-based change detection for remote sensing images of different resolutions and modalities

Vinicius FERRARIS PIGNATARO MAZZEI ALBERT - Team SC - IRIT

Friday 26 October 2018, 10:00 AM
INP-ENSEEIHT, Salle des thèses

Jury

- Paolo Gamba, Professor at the Università degli studi di Pavia - Reviewer
- Abdourrahmane Atto, Associate Professor at Polytech Annecy-Chambéry - Reviewer
- Jocelyn Chanussot, Professor at Grenoble-INP - Examiner
- Jérôme Bobin, Research Engineer at CEA Saclay - Examiner
- Bertrand Le Saux, Research Engineer at ONERA - Examiner
- Marie Chabert, Professor at INP-ENSEEIHT - Thesis supervisor
- Nicolas Dobigeon, Professor at INP-ENSEEIHT - Thesis co-supervisor

Abstract

Change detection is one of the most challenging issues in the analysis of remotely sensed images. It consists in detecting alterations that have occurred in a given scene between images acquired at different times. Archetypal change detection scenarios compare two images acquired by the same kind of sensor, i.e., with the same modality and the same spatial and spectral resolutions. In particular, unsupervised change detection techniques are generally constrained to two multi-band optical images with the same spatial and spectral resolutions. This scenario allows a direct comparison of homologous pixels, such as pixel-wise differencing. However, in some specific cases such as emergency situations, one-off missions, or defense and security applications, the only available images may be of different modalities and different resolutions. These dissimilarities introduce additional issues in the context of operational change detection that are not addressed by most classical methods. In the case of the same modality but different resolutions, state-of-the-art methods reduce to conventional change detection methods after preprocessing steps applied independently to the two images, e.g., resampling operations intended to bring them to the same spatial and spectral resolutions. Nevertheless, these preprocessing steps may discard relevant information, since they do not take into account the strong interplay between the two images.
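To make this baseline concrete, the following Python sketch implements the classical pipeline discussed above for two co-registered multi-band optical images of the same modality; the function name, the bilinear resampling choice and the fixed threshold are illustrative assumptions, not taken from the thesis.

    import numpy as np
    from scipy.ndimage import zoom

    def pixelwise_change_map(img_t1, img_t2, threshold=0.1):
        """Baseline change detection: resample img_t2 onto the grid of
        img_t1, then threshold the magnitude of the per-pixel spectral
        difference. Both inputs are (rows, cols, bands) arrays."""
        # Per-axis resampling factors; the spectral axis is left untouched.
        factors = (img_t1.shape[0] / img_t2.shape[0],
                   img_t1.shape[1] / img_t2.shape[1],
                   1.0)
        img_t2_up = zoom(img_t2.astype(float), factors, order=1)  # bilinear
        # Pixel-wise differencing on the common grid.
        diff = np.linalg.norm(img_t1.astype(float) - img_t2_up, axis=-1)
        return diff > threshold  # boolean change mask

The resampling step is precisely where information may be lost: it is applied to each image independently, without exploiting the interplay between the two acquisitions that the fusion-based approach studied in this thesis takes advantage of.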

The purpose of this thesis is to study how to use the available information more effectively, so as to handle any pair of observed images, whatever their modalities and resolutions, and to develop practical contributions in a change detection context. The main hypothesis underlying the proposed methods, which overcome the weaknesses of classical ones, is that change detection can be achieved through the fusion of the observed images. This work demonstrates that knowing how to properly fuse two images also tells us how to detect changes between them. This strategy is first addressed through a change detection framework based on a three-step procedure: fusion, prediction and detection. Then, benefiting from a joint forward model that describes the two observed images as degraded versions of two (unobserved) latent images sharing the same high spatial and high spectral resolutions, the change detection task is recast as a robust fusion task which enforces the difference between the estimated latent images to be spatially sparse. Finally, the fusion problem is extended to multimodal images. Since the fusion product may not correspond to a physical quantity in this case, both images are modeled as sparse linear combinations of atoms from an overcomplete pair of jointly estimated coupled dictionaries. The change detection task is then cast as a dual code estimation which enforces spatial sparsity in the difference between the codes estimated for each image. Experiments conducted on realistically simulated and real changes illustrate the advantages of the developed methods, both qualitatively and quantitatively, showing that the fusion hypothesis is indeed an effective way to address change detection.
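To give a schematic picture of the robust fusion step, one possible formalization (the notation is ours, not the manuscript's) writes the observed images as degraded versions of a latent image \mathbf{x} and of a changed latent image \mathbf{x} + \boldsymbol{\Delta}, with the change image promoted to be spatially sparse:

    \min_{\mathbf{x},\,\boldsymbol{\Delta}}\;
      \|\mathbf{y}_1 - \mathcal{R}_1(\mathbf{x})\|_2^2
      + \|\mathbf{y}_2 - \mathcal{R}_2(\mathbf{x} + \boldsymbol{\Delta})\|_2^2
      + \lambda \|\boldsymbol{\Delta}\|_{2,1}

where \mathcal{R}_1 and \mathcal{R}_2 denote the spatial and spectral degradation operators of the two sensors, and the \ell_{2,1} norm drives entire pixels of \boldsymbol{\Delta} to zero outside change locations. Similarly, a schematic reading of the multimodal formulation could be

    \min_{\mathbf{A}_1,\,\mathbf{A}_2}\;
      \|\mathbf{Y}_1 - \mathbf{D}_1\mathbf{A}_1\|_F^2
      + \|\mathbf{Y}_2 - \mathbf{D}_2\mathbf{A}_2\|_F^2
      + \mu\,(\|\mathbf{A}_1\|_1 + \|\mathbf{A}_2\|_1)
      + \lambda \|\mathbf{A}_1 - \mathbf{A}_2\|_{2,1}

where \mathbf{D}_1 and \mathbf{D}_2 are the estimated coupled dictionaries and changes are flagged wherever the code difference \mathbf{A}_1 - \mathbf{A}_2 is nonzero; both displays are illustrative readings of the abstract rather than the exact objectives of the thesis.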

 
