
    Optimized kernel minimum noise fraction transformation for hyperspectral image classification

    This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction from hyperspectral imagery. The proposed approach is based on the kernel minimum noise fraction (KMNF) transformation, a nonlinear dimensionality reduction method that maps the original data into a higher-dimensional feature space and provides a small number of high-quality features for classification and other post-processing. Noise estimation is an important component of KMNF and is often based on the strong correlation between adjacent pixels. However, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes the spatial information less reliable for noise estimation. This is the main reason that KMNF generally shows unstable performance in feature extraction for classification. To overcome this problem, this paper exploits more accurate noise estimation to improve KMNF: we propose two new noise estimation methods, together with a framework that exploits both spectral and spatial de-correlation. Experimental results on a variety of hyperspectral images indicate that the proposed OKMNF is superior to related dimensionality reduction methods in most cases. Compared to the conventional KMNF, OKMNF yields significant improvements in overall classification accuracy.
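    As a rough illustration of the spatial assumption at issue, the sketch below estimates a noise covariance from horizontal shift differences between adjacent pixels; this is the classic estimator that becomes unreliable when mixed pixels dominate, which is exactly the failure mode the paper addresses. The function name and NumPy layout are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def shift_difference_noise_cov(cube):
        # cube: hyperspectral image as a (rows, cols, bands) array.
        # Assume the scene is locally smooth, so the difference of
        # horizontally adjacent pixels is dominated by noise.
        rows, cols, bands = cube.shape
        diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
        # Halve to compensate for the doubled variance of a difference
        # of two independent noise realizations.
        return 0.5 * (diff.T @ diff) / diff.shape[0]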

    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be made by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained: the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can help characterize land use (urban analysis, precision agriculture), detect damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, and oil spills at sea), and give insight into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate change (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal datasets made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What improvements and new opportunities did the fusion offer? What objectives were addressed, and what solutions were reported? And, from this, what will be the next challenges?
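    As a minimal, hypothetical illustration of one fusion strategy explored across the contests (feature-level stacking of co-registered modalities), the sketch below combines per-pixel features from different sensors and trains a standard classifier; the array names and shapes are assumptions, and contest-winning pipelines are far more elaborate.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fuse_and_classify(hsi, sar, lidar, labels):
        # hsi, sar, lidar: co-registered per-pixel features, each of
        # shape (n_pixels, n_features); labels: (n_pixels,) land-use ids.
        features = np.hstack([hsi, sar, lidar])  # feature-level stacking
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(features, labels)
        return clf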

    A Convolutional Neural Network Designed for Binary Image Classification on a 1U CubeSat

    As of 2020, more than a thousand CubeSats have been launched into space. The nanosatellite standard allows launch providers to utilize empty space in their rockets while giving educational institutions, research facilities, and commercial start-ups the chance to build, test, and operate satellites in orbit. This exponential rise in the number of CubeSats has led to an increasing number of diverse missions, including astrobiology, state-of-the-art technology demonstration, high-revisit-time Earth observation, and space weather. In 2018, NASA's JPL demonstrated the first use of CubeSats in deep space by launching MarCO A and MarCO B, which successfully relayed information from the InSight lander on Mars to Earth. Increasing mission complexity, however, requires increased access to data, yet most CubeSats still rely on extremely low data rates for data transfer. Size, Weight and Power (SWaP) constraints for 1U CubeSats are stringent, and such satellites typically rely on VHF/UHF bands for data transmission. Kyushu Institute of Technology's BIRDS-3 project has a downlink rate of 4800 bps and takes about 2-3 days to reconstruct a 640x480 (VGA) image on the ground. Not only is this process extremely time-consuming and manual, but it also does not guarantee that the downlinked image is usable. There is a need to automatically select quality data and improve the workflow. The purpose of this research is to design a state-of-the-art, novel Convolutional Neural Network (CNN) for automated onboard image classification on CubeSats. The CNN is extremely small, efficient, accurate, and versatile, and is trained on a completely new CubeSat image dataset. It is designed to fulfill the SWaP requirements of a 1U CubeSat so that it can be scaled to fit bigger satellites in the future. The CNN is tested on a never-before-seen BIRDS-3 CubeSat test dataset and benchmarked against SVM, AE, and DBN models. It automates image selection on orbit, prioritizes quality data, and cuts down operation time significantly.
    Doctoral dissertation, Kyushu Institute of Technology (degree no. 工博甲第510号, conferred December 28, 2020). Contents: 1 Introduction | 2 Convolutional Neural Networks | 3 Methodology | 4 Results | 5 Conclusion.
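    As a minimal sketch of the kind of small binary image classifier described above (the framework, input shape, and layer sizes are assumptions, not the dissertation's actual architecture), a SWaP-conscious network might look like this:

    import tensorflow as tf

    def build_tiny_cnn(input_shape=(64, 64, 1)):
        # Deliberately small network to suit a 1U-class compute budget;
        # outputs the probability that a downlink candidate is usable.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(8, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])

    model = build_tiny_cnn()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])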