Cloud Detection for Pleiades and SPOT 6/7 Imageries Using Modified K-means and Deep Learning

Yudhi Prabowo, Danang Surya Candra, Rachmat Maulana

Abstract


Cloud detection is an important stage in optical remote sensing, since the presence of clouds interferes with subsequent analysis. Many cloud detection methods have been developed, but few exist for high-resolution images, which typically have a limited number of multispectral bands. In this paper, a novel cloud detection method for such images is proposed by integrating an unsupervised algorithm and deep learning. The method has three main steps: (1) pre-processing; (2) segmentation using modified K-means; and (3) cloud detection using a convolutional neural network (CNN). In the segmentation step, the unsupervised K-means algorithm is modified and used to divide pixel values into k clusters. Our modified K-means can separate thin clouds from relatively bright objects in the gray clusters, which are grouped into potential cloud pixels. A CNN is then used to extract multi-scale features from each cluster and classify them into two classes: (1) cloud, comprising thin and thick cloud, and (2) non-cloud. The potential cloud area from the segmentation step guides the CNN output to produce accurate cloud areas. Several Pleiades and SPOT 6/7 images were used to test the reliability of the proposed method. The results show that our K-means modification improves accuracy, and that the proposed method detects cloud and non-cloud areas accurately, achieving the highest accuracy among the compared methods.
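The segmentation step described above can be illustrated with a minimal sketch: a plain 1-D K-means over pixel intensities whose brightest cluster is flagged as potential cloud. This is a simplification under stated assumptions — the paper's actual modification for separating thin cloud from bright objects is not reproduced here, and the quantile initialization, single-band input, and `potential_cloud_mask` helper are illustrative choices, not the authors' implementation.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Plain 1-D K-means on pixel intensities. Centers are initialized at
    evenly spaced quantiles so the example is deterministic."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    labels = np.zeros(values.shape[0], dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # recompute centers; keep an empty cluster's center unchanged
        new_centers = np.array([values[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

def potential_cloud_mask(gray, k=3):
    """Cluster pixel values and flag the brightest cluster as potential cloud."""
    labels, centers = kmeans_1d(gray.ravel(), k=k)
    brightest = int(np.argmax(centers))
    return (labels == brightest).reshape(gray.shape)

# Toy scene: dark ground (40) with a bright 3x3 "cloud" patch (220).
img = np.full((8, 8), 40.0)
img[2:5, 2:5] = 220.0
mask = potential_cloud_mask(img, k=2)   # True over the bright patch only
```

In the full pipeline, a mask like this would only mark *candidate* cloud pixels; the CNN stage then decides which candidates are genuine cloud.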


Keywords


Cloud detection; high resolution; Pleiades; SPOT 6/7; deep learning; modified K-means; convolutional neural network.


References


W. Zhang, P. Tang, and L. Zhao, “Remote Sensing Image Scene Classification Using CNN-CapsNet,” Remote Sens., vol. 11, no. 5, 2019, doi: 10.3390/rs11050494.

Y. You et al., “Building Detection from VHR Remote Sensing Imagery Based on the Morphological Building Index,” Remote Sens., vol. 10, no. 8, pp. 1–22, 2018, doi: 10.3390/rs10081287.

L. Piermattei et al., “Impact of the Acquisition Geometry of Very High-Resolution Pléiades Imagery on the Accuracy of Canopy Height Models over Forested Alpine Regions,” Remote Sens., vol. 10, no. 10, 2018, doi: 10.3390/rs10101542.

G. Marmorino and W. Chen, “Use of WorldView-2 Along-Track Stereo Imagery to Probe a Baltic Sea Algal Spiral,” Remote Sens., vol. 11, no. 7, pp. 1–9, 2019, doi: 10.3390/rs11070865.

J. Marcello, F. Eugenio, J. Martin, and F. Marques, “Seabed Mapping in Coastal Shallow Waters Using High Resolution Multispectral and Hyperspectral Imagery,” Remote Sens., vol. 10, no. 8, 2018, doi: 10.3390/rs10081208.

D. Amitrano et al., “Long-Term Satellite Monitoring of the Slumgullion Landslide Using Space-Borne Synthetic Aperture Radar Sub-Pixel Offset Tracking,” Remote Sens., vol. 11, no. 3, pp. 1–13, 2019, doi: 10.3390/rs11030369.

D. S. Candra, S. Phinn, and P. Scarth, “Automated cloud and cloud-shadow masking for Landsat 8 using multitemporal images in a variety of environments,” Remote Sens., vol. 11, no. 17, 2019, doi: 10.3390/rs11172060.

D. S. Candra, S. Phinn, and P. Scarth, “Cloud and cloud shadow masking for Sentinel-2 using multitemporal images in global area,” Int. J. Remote Sens., vol. 41, no. 8, 2020, doi: 10.1080/01431161.2019.1697006.

S. Qiu, B. He, Z. Zhu, Z. Liao, and X. Quan, “Improving Fmask cloud and cloud shadow detection in mountainous area for Landsats 4-8 images,” Remote Sens. Environ., vol. 199, pp. 107–119, 2017.

D. Frantz, E. Haß, A. Uhl, J. Stoffels, and J. Hill, “Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects,” Remote Sens. Environ., vol. 215, pp. 471–481, 2018, doi: 10.1016/j.rse.2018.04.046.

N. Ghasemian and M. Akhoondzadeh, “Introducing two Random Forest based methods for cloud detection in remote sensing images,” Adv. Space Res., vol. 62, pp. 288–303, 2018, doi: 10.1016/j.asr.2018.04.030.

Y. Chen, R. Fan, M. Bilal, X. Yang, J. Wang, and W. Li, “Multilevel cloud detection for high-resolution remote sensing imagery using multiple convolutional neural networks,” ISPRS Int. J. Geo-Inf., vol. 7, no. 5, 2018, doi: 10.3390/ijgi7050181.

F. Xie, M. Shi, Z. Shi, J. Yin, and D. Zhao, “Multilevel Cloud Detection in Remote Sensing Images Based on Deep Learning,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 10, no. 8, pp. 1–10, 2017.

L. Sun et al., “A cloud detection algorithm-generating method for remote sensing data at visible to short-wave infrared wavelengths,” ISPRS J. Photogramm. Remote Sens., vol. 124, pp. 70–88, 2017, doi: 10.1016/j.isprsjprs.2016.12.005.

J. Cihlar and J. Howarth, “Detection and removal of cloud contamination from AVHRR images,” IEEE Trans. Geosci. Remote Sens., vol. 32, no. 3, pp. 583–589, 1994, doi: 10.1109/36.297976.

B. Lee, L. Di Girolamo, G. Zhao, and Y. Zhan, “Three-dimensional cloud volume reconstruction from the Multi-Angle Imaging SpectroRadiometer,” Remote Sens., vol. 10, no. 11, 2018, doi: 10.3390/rs10111858.

G. J. Jedlovec, S. L. Haines, and F. J. LaFontaine, “Spatial and Temporal Varying Thresholds for Cloud Detection in GOES Imagery,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1705–1717, 2008.

H. Ishida, Y. Oishi, K. Morita, K. Moriwaki, and T. Y. Nakajima, “Development of a support vector machine based cloud detection method for MODIS with the adjustability to various conditions,” Remote Sens. Environ., vol. 205, pp. 390–407, 2018.

P. Li, L. Dong, H. Xiao, and M. Xu, “A cloud image detection method based on SVM vector machine,” Neurocomputing, vol. 169, pp. 34–42, 2015, doi: 10.1016/j.neucom.2014.09.102.

T. Bai, D. Li, K. Sun, Y. Chen, and W. Li, “Cloud detection for high-resolution satellite imagery using machine learning and multi-feature fusion,” Remote Sens., vol. 8, no. 9, pp. 1–21, 2016, doi: 10.3390/rs8090715.

W. Li, R. Dong, H. Fu, and L. Yu, “Large-scale oil palm tree detection from high-resolution satellite images using two-stage convolutional neural networks,” Remote Sens., vol. 11, no. 1, 2019, doi: 10.3390/rs11010011.

M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, “Deep & Dense convolutional neural network for hyperspectral image classification,” Remote Sens., vol. 10, no. 9, pp. 1–28, 2018, doi: 10.3390/rs10091454.

K. A. Korznikov, D. E. Kislov, J. Altman, J. Doležal, A. S. Vozmishcheva, and P. V. Krestov, “Using u-net-like deep convolutional neural networks for precise tree recognition in very high resolution rgb (Red, green, blue) satellite images,” Forests, vol. 12, no. 1, pp. 1–17, 2021, doi: 10.3390/f12010066.

M. Segal-Rozenhaimer, A. Li, K. Das, and V. Chirayath, “Cloud detection algorithm for multi-modal satellite imagery using convolutional neural-networks (CNN),” Remote Sens. Environ., vol. 237, 2020, doi: 10.1016/j.rse.2019.111446.

S. Almabdy and L. Elrefaei, “Deep convolutional neural network-based approaches for face recognition,” Appl. Sci., vol. 9, no. 20, 2019, doi: 10.3390/app9204397.

K. Kamycki, T. Kapuscinski, and M. Oszust, “Data augmentation with suboptimal warping for time-series classification,” Sensors, vol. 20, no. 1, 2020, doi: 10.3390/s20010098.

J. Qi, Y. Yu, L. Wang, J. Liu, and Y. Wang, “An effective and efficient hierarchical K-means clustering algorithm,” Int. J. Distrib. Sens. Networks, vol. 13, no. 8, pp. 1–17, 2017, doi: 10.1177/1550147717728627.

S. Mehta, X. Shen, J. Gou, and D. Niu, “A new nearest centroid neighbor classifier based on k local means using harmonic mean distance,” Information, vol. 9, no. 9, 2018, doi: 10.3390/info9090234.

L. Hu, M. Qin, F. Zhang, Z. Du, and R. Liu, “RSCNN: A cnn-based method to enhance low-light remote-sensing images,” Remote Sens., vol. 13, no. 1, pp. 1–13, 2021, doi: 10.3390/rs13010062.

R. Ba, W. Song, X. Li, Z. Xie, and S. Lo, “Integration of multiple spectral indices and a neural network for burned area mapping based on MODIS data,” Remote Sens., vol. 11, no. 3, 2019, doi: 10.3390/rs11030326.

J. I. Hwang and H. S. Jung, “Automatic ship detection using the artificial neural network and support vector machine from X-Band Sar satellite images,” Remote Sens., vol. 10, no. 11, 2018, doi: 10.3390/rs10111799.

W. Rawat and Z. Wang, “Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review,” Neural Comput., vol. 29, no. 9, pp. 1–98, 2017, doi: 10.1162/neco_a_00990.

W. Li, H. Fu, L. Yu, and A. Cracknell, “Deep learning based oil palm tree detection and counting for high-resolution remote sensing images,” Remote Sens., vol. 9, no. 1, 2017, doi: 10.3390/rs9010022.

Y. Cheng, D. Wang, P. Zhou, and T. Zhang, “Model compression and acceleration for deep neural networks: The principles, progress, and challenges,” IEEE Signal Process. Mag., vol. 35, no. 1, pp. 126–136, 2018, doi: 10.1109/MSP.2017.2765695.

Y. Wang, Y. Li, Y. Song, and X. Rong, “The influence of the activation function in a convolution neural network model of facial expression recognition,” Appl. Sci., vol. 10, no. 5, 2020, doi: 10.3390/app10051897.

Y. Hu, Q. Zhang, Y. Zhang, and H. Yan, “A deep convolution neural network method for land cover mapping: A case study of Qinhuangdao, China,” Remote Sens., vol. 10, no. 12, 2018, doi: 10.3390/rs10122053.

A. Elmes et al., “Accounting for training data error in machine learning applied to earth observations,” Remote Sens., vol. 12, no. 6, pp. 86–88, 2020, doi: 10.3390/rs12061034.

F. Mohammadimanesh, B. Salehi, M. Mahdianpari, E. Gill, and M. Molinier, “A new fully convolutional neural network for semantic segmentation of polarimetric SAR imagery in complex land cover ecosystem,” ISPRS J. Photogramm. Remote Sens., vol. 151, pp. 223–236, 2019, doi: 10.1016/j.isprsjprs.2019.03.015.




DOI: http://dx.doi.org/10.18517/ijaseit.11.6.14312




Published by INSIGHT - Indonesian Society for Knowledge and Human Development