Citation: TANG B C, PALIDAN Tuerxun, BAI J X, et al. Land cover classification method for remote sensing images using CNN and Transformer[J]. Microelectronics & Computer, 2024, 41(4): 64-73. doi: 10.19304/J.ISSN1000-7180.2023.0240

Land cover classification method for remote sensing images using CNN and Transformer

Semantic segmentation of remote sensing images is an effective approach to land cover classification. However, mainstream frameworks suffer from inaccurate edge segmentation and from misclassification caused by a lack of global information. To address these problems, CTHNet, a hybrid network combining Convolutional Neural Networks (CNN) and Transformer, is proposed for land cover classification of remote sensing images; it combines the local detail extraction capability of the CNN with the global information extraction capability of the Transformer. An adaptive fusion module is designed to fuse the CNN and Transformer features at corresponding levels, and its output is fed into the segmentation head to obtain the final prediction. Finally, a boundary detection branch is added to provide edge constraints for the semantic segmentation. Experimental results on two publicly available land cover classification datasets show that the method outperforms current mainstream methods, achieving mean Intersection over Union (mIoU) scores of 90.53% and 64.33%, respectively, and providing better recognition of large targets and boundaries in remote sensing images. A minimal sketch of such a hybrid architecture follows the abstract.
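The sketch below illustrates the general idea described in the abstract: a CNN branch for local detail, a Transformer branch for global context, an adaptive fusion of the two feature streams, a segmentation head, and an auxiliary boundary head. The module names, channel widths, and the gating-based fusion are assumptions for illustration only and are not taken from the CTHNet paper.

# Minimal, illustrative sketch of a CNN + Transformer hybrid segmenter with
# adaptive feature fusion and a boundary branch. All design details below are
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """Two 3x3 convolutions capturing local detail (CNN branch)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TransformerBranch(nn.Module):
    """Flattens a feature map into tokens and applies self-attention (global context)."""
    def __init__(self, dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class AdaptiveFusion(nn.Module):
    """Fuses CNN and Transformer features with a learned per-pixel gate (assumed design)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.Sigmoid())

    def forward(self, f_cnn, f_trans):
        g = self.gate(torch.cat([f_cnn, f_trans], dim=1))
        return g * f_cnn + (1 - g) * f_trans


class HybridSegNet(nn.Module):
    """Hypothetical hybrid segmenter with a semantic head and a boundary head."""
    def __init__(self, in_ch=3, dim=64, num_classes=6):
        super().__init__()
        self.stem = ConvBlock(in_ch, dim)
        self.cnn_branch = ConvBlock(dim, dim)
        self.trans_branch = TransformerBranch(dim)
        self.fusion = AdaptiveFusion(dim)
        self.seg_head = nn.Conv2d(dim, num_classes, 1)   # semantic prediction
        self.boundary_head = nn.Conv2d(dim, 1, 1)        # edge-constraint branch

    def forward(self, x):
        f = self.stem(x)
        f = F.max_pool2d(f, 2)                           # downsample before attention
        fused = self.fusion(self.cnn_branch(f), self.trans_branch(f))
        seg = F.interpolate(self.seg_head(fused), scale_factor=2,
                            mode="bilinear", align_corners=False)
        boundary = F.interpolate(self.boundary_head(fused), scale_factor=2,
                                 mode="bilinear", align_corners=False)
        return seg, boundary


if __name__ == "__main__":
    model = HybridSegNet(num_classes=6)
    seg, boundary = model(torch.randn(1, 3, 128, 128))
    print(seg.shape, boundary.shape)                     # (1, 6, 128, 128), (1, 1, 128, 128)

In training, the boundary output would typically be supervised with an edge map derived from the ground-truth labels, so that the auxiliary loss constrains the segmentation boundaries; the exact loss formulation used by the paper is not given in the abstract.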