Impact: Change Detection in Hyperspectral Image and Remote Sensing

Thursday 11th July 2024

Last year, the work of our Transparent Ocean team was published in the international journal IEEE Transactions on Geoscience and Remote Sensing. The paper, titled ‘CBANet: An End-to-End Cross-Band 2-D Attention Network for Hyperspectral Change Detection in Remote Sensing’, explores the vital role of hyperspectral change detection (HCD), discusses its limitations, and proposes a novel lightweight end-to-end deep-learning network.

Hyperspectral Change Detection 

Change detection (CD) identifies differences in multi-temporal remote sensing (RS) imagery of the same geographic area. In recent years, hyperspectral images (HSI) have been successfully applied to remote sensing observation of the Earth. Combining 2-D spatial information with rich spectral information in the third dimension, HSI captures continuous narrow bands at high spectral resolution. Compared with multi-spectral images and conventional colour images in red-green-blue (RGB), HSI has two advantages: 1) high spectral resolution (10 nm or finer, across hundreds of contiguous bands) over a wide spectral range spanning visible light to short-wave and even mid-infrared; 2) rich spatial and spectral information for effective detection of regions of interest. Hyperspectral change detection (HCD) has therefore become a research hotspot and has been successfully applied in a wide range of applications such as precision agriculture, disaster monitoring, geological survey and biomedical science.
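To make the data format concrete, a toy illustration of a hyperspectral cube is shown below (NumPy, with synthetic values; the sizes are hypothetical and not taken from the paper):

```python
import numpy as np

# Hypothetical hyperspectral cube: H x W spatial pixels, B contiguous
# narrow bands (often hundreds, at ~10 nm spectral resolution or finer).
H, W, B = 64, 64, 200
rng = np.random.default_rng(0)
cube = rng.random((H, W, B))

# Each pixel carries a full spectral signature (a length-B vector),
# unlike an RGB image where B would be 3.
pixel_spectrum = cube[10, 20, :]
assert pixel_spectrum.shape == (B,)
```

This per-pixel spectral signature is what makes subtle material changes detectable that RGB or multi-spectral imagery would miss.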


Limitations of Hyperspectral Change Detection 

Nevertheless, there are still some challenges for HCD tasks: 

1) Most existing change detection methods rely on the difference between the bi-temporal hypercubes, in which the spectral characteristics can be damaged.

2) Existing deep learning models for HCD have a large number of hyperparameters, resulting in redundant information in both the spatial and spectral domains as well as high computational cost.

3) Most HCD methods fail to deal with sparsely distributed changing areas of various sizes.
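The first limitation refers to the classical differencing baseline, where the change map is derived purely from the per-pixel spectral difference. A minimal sketch of that baseline (NumPy, synthetic data; sizes and threshold are illustrative, not from the paper) shows why the spectral shape is lost:

```python
import numpy as np

rng = np.random.default_rng(42)
H, W, B = 32, 32, 100               # illustrative cube size

t1 = rng.random((H, W, B))          # hypercube at time 1
t2 = t1.copy()
t2[8:16, 8:16, :] += 0.5            # inject a synthetic changed region

# Change vector analysis: per-pixel Euclidean norm of the spectral
# difference, then a global threshold. Collapsing the spectrum to a
# single magnitude discards the spectral characteristics, which is
# exactly the weakness noted in limitation 1.
magnitude = np.linalg.norm(t2 - t1, axis=2)
change_map = magnitude > magnitude.mean() + 2 * magnitude.std()
```

On this synthetic example the injected block is flagged as changed, but two different spectral changes with the same magnitude would be indistinguishable.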

To tackle these issues, a lightweight deep learning network, namely CBANet, is proposed. It incorporates a cross-band module that extracts spectral-domain features pixel by pixel, and a new 2-D attention module, based on the traditional self-attention mechanism, for improved extraction of local spatial-spectral features whilst keeping the network compact for efficiency.

The major contributions are summarised as follows: 

1) A cross-band feature extraction module is proposed to extract the mutual and representative features from bi-temporal hypercubes, where a 1×1 convolutional layer is introduced to greatly increase the non-linear characteristics of the feature map (via the subsequent activation function) while keeping its scale unchanged.

2) A 2-D self-attention module is proposed for focused extraction of local spatial-spectral features and improved feature representation and discrimination capability, resulting in enhanced network reliability.  

3) A novel end-to-end lightweight CBANet is proposed which achieves higher detection accuracy with fewer hyperparameters. Its efficacy and efficiency have been fully validated in comprehensive experiments against several state-of-the-art approaches.
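The 1×1 convolution mentioned in contribution 1 is simply a per-pixel linear map across channels, so the spatial size never changes; the non-linearity comes from the activation that follows. A generic sketch of that idea (NumPy, with hypothetical sizes and random weights, not the paper's trained layer):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C_in, C_out = 8, 8, 64, 32            # illustrative sizes

x = rng.random((H, W, C_in))                # input feature map
weight = rng.standard_normal((C_in, C_out)) * 0.1

# A 1x1 convolution transforms every spatial position independently
# with the same channel-mixing matrix, so the H x W spatial scale of
# the feature map is unchanged; only the channel count varies.
y = x @ weight                              # shape (H, W, C_out)
y = np.maximum(y, 0.0)                      # ReLU supplies the non-linearity

assert y.shape == (H, W, C_out)
```

Because the kernel touches only one pixel at a time, it adds expressive power at a fraction of the parameter cost of larger kernels, which is consistent with the paper's emphasis on keeping the network lightweight.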

  

The Proposed Approach

The architecture of the proposed CBANet is presented in Figure 2; it is composed of three main modules: 1) cross-band spectral feature extraction; 2) spectral-spatial feature extraction; and 3) 2-D self-attention-based deep feature extraction.


Figure 2: The architecture of the proposed CBANet model. Figure generated by Transparent Ocean PhD Student Yinhe Li. 
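The 2-D attention module builds on the standard scaled dot-product self-attention mechanism. As a generic sketch of that underlying mechanism, not the paper's exact module, the following treats the flattened spatial positions of a feature map as a set of vectors and re-weights each one by its similarity to all the others (NumPy, random projection weights for illustration):

```python
import numpy as np

def self_attention(x, d_k=16, seed=0):
    """Scaled dot-product self-attention over N feature vectors,
    e.g. the flattened spatial positions of a feature map."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w_q = rng.standard_normal((d, d_k)) / np.sqrt(d)   # query projection
    w_k = rng.standard_normal((d, d_k)) / np.sqrt(d)   # key projection
    w_v = rng.standard_normal((d, d)) / np.sqrt(d)     # value projection

    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_k)                    # (N, N) similarities
    # Numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v                                 # re-weighted features

# A 4x4 feature map with 32 channels, flattened to 16 positions
feats = np.random.default_rng(1).random((16, 32))
out = self_attention(feats)
assert out.shape == (16, 32)
```

The attention weights let every position borrow information from related positions elsewhere in the map, which is how such modules sharpen local spatial-spectral features.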

  

Summary

A novel lightweight end-to-end deep learning-based network, namely CBANet, is proposed for hyperspectral change detection. Within CBANet, the proposed cross-band feature extraction module fully extracts and fuses the spectral information from bi-temporal HSI data, whilst using 1×1 kernels in the convolutional layer for efficiency. In addition, the proposed 2-D self-attention module captures deep spatial-spectral features, improving feature representation and discrimination. Experiments on three publicly available HCD datasets show that CBANet outperforms other benchmarking models, with better stability and a lighter weight than the benchmarking deep learning models. This fully validates the effectiveness and efficiency of the proposed model for the HCD task.

To discover more about how our Transparent Ocean team is solving real-world problems, mitigating risks and reducing costs towards a net zero ocean, view our dedicated Transparent Ocean webpage.