OCM 2023 - Optical Characterization of Materials : Conference Proceedings
The state of the art in the optical characterization of materials is advancing rapidly. New insights have been gained into the theoretical foundations of this research and exciting developments have been made in practice, driven by new applications and innovative sensor technologies that are constantly evolving.
The great success of past conferences proves the necessity of a platform for the presentation, discussion, and evaluation of the latest research results in this interdisciplinary field.
Deep learning for remote sensing image classification: A survey
Remote sensing (RS) image classification plays an important role in earth observation technology using RS data and has been widely exploited in both military and civil fields. However, due to the characteristics of RS data, such as high dimensionality and the relatively small number of labeled samples available, RS image classification faces great scientific and practical challenges. In recent years, as new deep learning (DL) techniques have emerged, DL-based approaches to RS image classification have achieved significant breakthroughs, offering novel opportunities for the research and development of RS image classification. In this paper, a brief overview of typical DL models is presented first. This is followed by a systematic review of pixel-wise and scene-wise RS image classification approaches based on DL. A comparative analysis of the performance of typical DL-based RS methods is also provided. Finally, the challenges and potential directions for further research are discussed.
RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework
In recent years, remote sensing (RS) vision foundation models such as RingMo have emerged and achieved excellent performance in various downstream tasks. However, the high demand for computing resources limits the application of these models on edge devices, so a more lightweight foundation model is needed to support on-orbit RS image interpretation. Existing methods struggle to achieve lightweight solutions while retaining generalization in RS image interpretation, because RS images contain complex high- and low-frequency spectral components that make traditional single-CNN or Vision Transformer methods unsuitable for the task. Therefore, this paper proposes RingMo-lite, an RS multi-task lightweight network with a CNN-Transformer hybrid framework, which effectively exploits the frequency-domain properties of RS to optimize the interpretation process. In a dual-branch structure, it combines a Transformer module acting as a low-pass filter to extract global features of RS images with a CNN module acting as a stacked high-pass filter to extract fine-grained details effectively. Furthermore, in the pretraining stage, the designed frequency-domain masked image modeling (FD-MIM) combines each image patch's high-frequency and low-frequency characteristics, effectively capturing the latent feature representation in RS data. As shown in Fig. 1, compared with RingMo, the proposed RingMo-lite reduces the parameters by over 60% in various RS image interpretation tasks, while the average accuracy drops by less than 2% in most scenes, and it achieves SOTA performance compared with models of similar size. In addition, our work will be integrated into the MindSpore computing platform in the near future.
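The low-/high-frequency decomposition that motivates the dual-branch design and FD-MIM can be sketched with a simple FFT-based split (a minimal illustration only; the `cutoff` radius and the sharp circular mask are assumptions for this sketch, not details from the paper):

```python
import numpy as np

def frequency_split(patch, cutoff=0.25):
    """Split an image patch into low- and high-frequency components.

    A centered FFT mask keeps frequencies below `cutoff` (a fraction of
    the Nyquist radius) as the low-pass part; the remainder is the
    high-pass part.
    """
    f = np.fft.fftshift(np.fft.fft2(patch))
    h, w = patch.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * (radius <= cutoff))))
    high = patch - low
    return low, high

patch = np.random.default_rng(0).random((32, 32))
low, high = frequency_split(patch)
```

In this picture, a Transformer branch would consume something like `low` (global structure) while a CNN branch targets `high` (fine detail); by construction the two components sum back to the original patch.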
Development of experimental and instrumental systems to study biological systems
Chapters 1-4 of this thesis describe the development of an experimental system to measure diffusion-limited reaction kinetics in a biological environment. About 100 years ago, the relationship between reaction rate and diffusion in homogeneous solution, i.e., water or buffer, was described as a linear relationship by Smoluchowski. Applying this theory naively would suggest that since diffusion coefficients drop by factors of 4-100, the rates of reaction would drop by the same amount. However, recent theory and simulations suggest that this does not hold. Even though biological diffusion coefficients drop to 0.1-20% of those in buffer, these recent studies show that the reaction kinetics are much more weakly affected by the biological environment. Because experimental evidence for reactions under biological diffusion is lacking, there is a great need for information in this area. Here, I describe a protein system, exogenous to E. coli, that will form a dimer in the presence of a small molecule. I also describe the development of a new type of multivariate hyperspectral Raman instrument (MHI); the instrument is developed for studying biological tissues and for high-speed cell sorting applications. The new instrument design has a large speed advantage over traditional Raman instrumentation for rapid chemical imaging. While the MHI can reproduce the functionality of a traditional Raman spectrometer, its true speed advantage is realized after pre-training on known sample components. The MHI uses a spatial light modulator as a programmable optical filter that can be programmed with filters based on multivariate signal processing algorithms, such as PLS, in order to rapidly detect chemical components and create chemical maps. Chapters 5-8 of this thesis describe the development and construction of the MHI, as well as provide proof-of-concept experimental results demonstrating its functionality.
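The SLM-as-programmable-filter idea reduces each chemical readout to a single inner product between the measured spectrum and a precomputed regression vector. The sketch below illustrates that principle with synthetic two-component spectra, using ordinary least squares as a stand-in for PLS (all band positions, widths, and the noise-free mixtures are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two invented pure-component spectra built from Gaussian bands.
wavenumbers = np.linspace(0.0, 1.0, 256)
def band(center, width):
    return np.exp(-((wavenumbers - center) / width) ** 2)

s_a = band(0.3, 0.03) + 0.5 * band(0.7, 0.05)  # "component A"
s_b = band(0.5, 0.04)                          # "component B"

# Training mixtures with known concentrations of component A
# (noise-free here, to keep the sketch exact).
conc = rng.uniform(0.0, 1.0, size=50)
spectra = np.outer(conc, s_a) + np.outer(1.0 - conc, s_b)

# Stand-in for a PLS regression vector (ordinary least squares here);
# this is the pattern that would be programmed onto the SLM.
filt, *_ = np.linalg.lstsq(spectra, conc, rcond=None)

# "Optical" readout: one inner product per measurement.
test_conc = 0.42
test_spec = test_conc * s_a + (1.0 - test_conc) * s_b
estimate = float(test_spec @ filt)
```

A PLS regression vector would play the same role; loading it onto the spatial light modulator lets the optics evaluate the inner product directly, which is where the speed advantage over scanning a full spectrometer comes from.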
Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review
Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want to have an updated overview on how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we want to target the machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends
Comparison of Different Transfer Learning Methods for Classification of Mangrove Communities Using MCCUNet and UAV Multispectral Images
Mangrove-forest classification using deep learning algorithms has attracted increasing attention but remains challenging. In particular, transfer classification of mangrove communities between different regions and different sensors is still poorly studied. To fill this research gap, this study developed a new deep-learning algorithm (encoder-decoder with mixed depth-wise convolution and cascade upsampling, MCCUNet) by modifying the encoder and decoder sections of the DeepLabV3+ algorithm and presented three transfer-learning strategies, namely frozen transfer learning (F-TL), fine-tuned transfer learning (Ft-TL), and sensor-and-phase transfer learning (SaP-TL), to classify mangrove communities using the MCCUNet algorithm and high-resolution UAV multispectral images. This study combined the deep-learning algorithms with recursive feature elimination and principal component analysis (RFE-PCA), using a high-dimensional dataset to map and classify mangrove communities, and evaluated their classification performance. The results of this study showed the following: (1) The MCCUNet algorithm outperformed the original DeepLabV3+ algorithm for classifying mangrove communities, achieving the highest overall classification accuracy (OA), i.e., 97.24%, in all scenarios. (2) The RFE-PCA dimension reduction improved the classification performance of the deep-learning algorithms. The OA of mangrove species using the MCCUNet algorithm improved by 7.27% after adding dimension-reduced texture features and vegetation indices. (3) The Ft-TL strategy enabled the algorithm to achieve better classification accuracy and stability than the F-TL strategy. The highest improvement in the F1-score of Spartina alterniflora was 19.56%, using the MCCUNet algorithm with the Ft-TL strategy. (4) The SaP-TL strategy produced better transfer-learning classifications of mangrove communities between images of different phases and sensors. The highest improvement in the F1-score of Aegiceras corniculatum was 19.85%, using the MCCUNet algorithm with the SaP-TL strategy. (5) All three transfer-learning strategies achieved high accuracy in classifying mangrove communities, with mean F1-scores of 84.37-95.25%.
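The difference between the F-TL and Ft-TL strategies is which parameters are allowed to change on the target data. A minimal sketch of the frozen variant, with a random projection standing in for a pretrained encoder and a synthetic target dataset (everything here is illustrative, not the paper's MCCUNet setup):

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Stand-in for a pretrained encoder: a fixed random projection playing
# the role of frozen convolutional features (purely illustrative).
W_enc = rng.normal(size=(8, 16))

def encode(X):
    return np.maximum(X @ W_enc, 0.0)  # frozen ReLU features

# Toy target-domain data whose labels are, by construction, linearly
# separable in the frozen feature space.
X = rng.normal(size=(200, 8))
v_true = rng.normal(size=16)
y = (encode(X) @ v_true > 0).astype(float)

def train_head(X, y, lr=0.5, steps=500):
    """F-TL sketch: the encoder stays frozen; only the logistic head is fit."""
    F = encode(X)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        w -= lr * F.T @ (sigmoid(F @ w) - y) / len(y)
    return w

w = train_head(X, y)
acc = float(((sigmoid(encode(X) @ w) > 0.5) == (y > 0.5)).mean())
```

Ft-TL would additionally update `W_enc` during target training; F-TL touches only the head, which is cheaper and less prone to overfitting a small target set, matching the trade-off the study reports.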
Brain informed transfer learning for categorizing construction hazards
A transfer learning paradigm is proposed for "knowledge" transfer between the
human brain and convolutional neural network (CNN) for a construction hazard
categorization task. Participants' brain activities are recorded using
electroencephalogram (EEG) measurements when viewing the same images (target
dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned
on the construction scene images. The results reveal that the EEG-pretrained
CNN achieves 9% higher accuracy than a network with the same architecture but randomly initialized parameters on a three-class
classification task. Brain activity from the left frontal cortex exhibits the
highest performance gains, thus indicating high-level cognitive processing
during hazard recognition. This work is a step toward improving machine
learning algorithms by learning from human-brain signals recorded via a
commercially available brain-computer interface. More generalized visual recognition systems can be effectively developed based on this "human-in-the-loop" approach.
Sustainable Agriculture and Advances of Remote Sensing (Volume 1)
Agriculture, as the main source of alimentation and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase global food production, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.
Semi-supervised learning in habitat classification from remotely-sensed imagery
Remote sensing helps monitor and evaluate the state of ecosystems, also covering wilderness areas that can be hard to access for field observations. Wilderness areas, such as those in northern Lapland, are home to endangered species and habitat types. Automatic detection and classification of habitats is a difficult task, as target class distributions are long-tailed and fine-grained and have semantic properties that can be difficult to distinguish even for humans, especially from limited remotely sensed imagery. Training data for building models are often sparse, point-like, and limited to areas accessible by foot. This thesis presents methods for habitat classification from limited data using supervised, unsupervised, and semi-supervised methods. The presented approaches take advantage of the large amounts of unannotated and weakly annotated source data that are available. Convolutional neural networks and random forests are compared, and an ensemble model combining both approaches is shown to increase classification performance. Convolutional neural networks are also used to produce fully unsupervised segmentation maps. The classification and segmentation maps are produced for the entire northern Lapland area.
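One common way to combine a CNN and a random forest, consistent with the ensemble described above though not necessarily the thesis's exact scheme, is soft voting over predicted class probabilities:

```python
import numpy as np

# Hypothetical per-sample class probabilities from the two models
# (3 habitat classes, 4 samples); all values are illustrative only.
p_cnn = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.4, 0.4, 0.2],
                  [0.1, 0.2, 0.7]])
p_rf = np.array([[0.5, 0.4, 0.1],
                 [0.4, 0.3, 0.3],
                 [0.1, 0.6, 0.3],
                 [0.2, 0.2, 0.6]])

def soft_vote(probs, weights=None):
    """Average class probabilities across models (optionally weighted)."""
    stacked = np.stack(probs)
    if weights is not None:
        w = np.asarray(weights, dtype=float)
        return (stacked * w[:, None, None]).sum(axis=0) / w.sum()
    return stacked.mean(axis=0)

p_ens = soft_vote([p_cnn, p_rf])
labels = p_ens.argmax(axis=1)  # ensemble habitat prediction per sample
```

Weighting lets a validation set decide how much to trust each model; with equal weights this reduces to a plain probability average.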
Physics-constrained Hyperspectral Data Exploitation Across Diverse Atmospheric Scenarios
Hyperspectral target detection promises new operational advantages as instrument spectral resolution increases and material discrimination becomes more robust. Resolving surface materials requires fast and accurate accounting of atmospheric effects to increase detection accuracy while minimizing false alarms. This dissertation investigates deep learning methods constrained by the processes governing radiative transfer to efficiently perform atmospheric compensation on data collected by long-wave infrared (LWIR) hyperspectral sensors. These compensation methods depend on generative modeling techniques and permutation-invariant neural network architectures to predict LWIR spectral radiometric quantities. The compensation algorithms developed in this work were examined from the perspective of target detection performance using collected data. These deep learning-based compensation algorithms achieved detection performance comparable to established methods while accelerating the image processing chain by 8x.
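Permutation invariance, the property that lets such architectures process a set of spectral samples regardless of ordering, can be obtained with Deep Sets-style sum pooling. A generic sketch (layer sizes, weights, and nonlinearities are arbitrary choices for illustration, not the dissertation's model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Deep Sets pattern: encode each set element with phi, pool with a
# symmetric sum, then decode the pooled vector with rho.
W_phi = rng.normal(size=(5, 16))
W_rho = rng.normal(size=(16, 1))

def permutation_invariant_net(X):
    """X: (n_elements, 5) set of spectral samples -> scalar prediction."""
    phi = np.tanh(X @ W_phi)   # per-element encoding
    pooled = phi.sum(axis=0)   # symmetric (order-independent) pooling
    return float(np.tanh(pooled) @ W_rho)

X = rng.normal(size=(10, 5))
perm = rng.permutation(10)
# Reordering the set elements does not change the output.
```

Because the pooling step is a sum, any reordering of the input rows yields an identical prediction, which is what makes set-valued spectral inputs tractable.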