Real-Time On-board Satellite Cloud Cover Detection Hardware Architecture using Spaceborne Remote Sensing Imagery
Vitolo P.; Fasolino A.; Liguori R.; Di Benedetto L.; Rubino A.; Licciardo G. D.
2024-01-01
Abstract
The growing demand for high-resolution hyperspectral satellite imagery, particularly from nano and microsatellites, is driven by a wide range of critical applications, spanning disaster management, agriculture, and military intelligence. The deployment of thousands of satellites into orbit and the resulting massive data production introduce challenges related to stringent edge-device constraints, especially for nanosatellites. These constraints include limited processing capability, power budget, memory, and transmission bandwidth. Clouds, which cover over 66% of the Earth's surface, introduce noise and inaccuracies into satellite imagery, compromising the quality and reliability of image-based systems and potentially producing invalid data, while unnecessarily consuming transmission bandwidth. To address these issues, this paper introduces a real-time on-board satellite cloud cover detection system based on a lightweight neural network. By discarding excessively cloudy images, the proposed approach can improve the efficiency and accuracy of satellite image-based systems. At the same time, it minimizes the data transmitted to the ground, mitigating bandwidth problems and reducing transmission power. The proposed CNN has a compact architecture, requiring fewer than 9 thousand parameters, while maintaining a detection accuracy of 89% when evaluated on the Landsat 8 dataset. An optimized hardware accelerator is designed to meet the on-board constraints of nanosatellites. Post-implementation simulations on a Xilinx Artix 7 FPGA demonstrate state-of-the-art results, with a utilization of about 12 thousand mapped LUTs and 7 thousand FFs, and a power consumption of 116 mW.
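As a rough illustration of what a sub-9k-parameter cloud/no-cloud classifier can look like, the minimal PyTorch sketch below builds a three-layer CNN and counts its parameters. The abstract does not disclose the actual network topology, so the layer widths, the 4-band input, the 64x64 patch size, and the model name TinyCloudNet are assumptions chosen only to stay within the stated parameter budget, not a reproduction of the authors' design.

```python
# Illustrative sketch only: layer sizes, band count, and patch size are
# assumptions picked to stay under a ~9k-parameter budget; the paper's
# actual architecture is not described in the abstract.
import torch
import torch.nn as nn

class TinyCloudNet(nn.Module):
    """Compact CNN for per-image cloud cover classification (cloudy vs. clear)."""
    def __init__(self, in_bands: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 8, kernel_size=3, padding=1),  # 4*8*9 + 8   = 296
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),        # 8*16*9 + 16 = 1168
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),       # 16*16*9 + 16 = 2320
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                           # global average pool
        )
        self.classifier = nn.Linear(16, 2)                     # 16*2 + 2 = 34

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = TinyCloudNet()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")            # ~3.8k, well under 9k
    logits = model(torch.randn(1, 4, 64, 64))   # one 64x64 patch, 4 spectral bands
    print(logits.shape)                         # torch.Size([1, 2])
```

A network of this size is small enough that its weights and activations fit comfortably in on-chip FPGA memory, which is consistent with the low LUT/FF utilization and 116 mW power figure reported for the Artix 7 implementation.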