16 research outputs found

    A Novel Method for Spectral Similarity Measure by Fusing Shape and Amplitude Features

    Spectral similarity measurement is the basis of spectral information extraction, and the description of spectral features is the key to it. To express spectral shape and amplitude features reasonably, this paper presents definitions of the shape and amplitude feature vectors, constructs the shape feature distance vector and amplitude feature distance vector, proposes a spectral similarity measure that fuses shape and amplitude features (SAF), and establishes the relationship of fusing SAF to Euclidean distance and spectral information divergence. Different measures were tested on the United States Geological Survey (USGS) mineral_beckman_430 data. In general, measures integrating SAF achieve the highest accuracy, followed by measures based on shape features and then measures based on amplitude features. Among the measures integrating SAF, fusing SAF shows the highest accuracy. Fusing SAF expresses the measured result as the inner product of the shape and amplitude feature distance vectors, which integrates spectral shape and amplitude features well. Fusing SAF is superior to other similarity measures that integrate SAF, such as the spectral similarity scale, the spectral pan-similarity measure, and the normalized spectral similarity score (NS3).
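To illustrate the fusion idea described above, the sketch below computes a toy similarity of this type. The concrete feature choices here (band-to-band differences for shape, per-band magnitudes for amplitude) are illustrative placeholders, not the paper's exact feature-vector definitions:

```python
import numpy as np

def fused_saf_distance(x, y):
    """Illustrative shape-and-amplitude fusion distance.

    The paper's exact feature definitions are not reproduced here; this
    sketch uses successive band differences as a shape feature and
    per-band magnitudes as an amplitude feature, fused via the inner
    product of the two distance vectors, as the abstract describes.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Shape feature distance: band-to-band differences capture spectral shape.
    shape_dist = np.abs(np.diff(x) - np.diff(y))
    # Amplitude feature distance: per-band magnitude differences.
    amp_dist = np.abs(x - y)[1:]  # trimmed to align with the diff output
    # Fuse the two distance vectors with an inner product.
    return float(np.dot(shape_dist, amp_dist))
```

Identical spectra score zero; larger values indicate spectra that differ in both shape and amplitude.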

    Quantification of cuttlefish (Sepia officinalis) camouflage : a study of color and luminance using in situ spectrometry

    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of Springer for personal use, not for redistribution. The definitive version was published in Journal of Comparative Physiology A 199 (2013): 211-225, doi:10.1007/s00359-012-0785-3. Cephalopods are renowned for their ability to adaptively camouflage on diverse backgrounds. Sepia officinalis camouflage body patterns have been characterized spectrally in the laboratory but not in the field, due to the challenges of dynamic natural light fields and the difficulty of using spectrophotometric instruments underwater. To assess cuttlefish color match in their natural habitats, we studied the spectral properties of S. officinalis and their backgrounds on the Aegean coast of Turkey using point-by-point in situ spectrometry. Fifteen spectrometry datasets were collected from seven cuttlefish; radiance spectra from animal body components and surrounding substrates were measured at depths shallower than 5 m. We quantified luminance and color contrast of cuttlefish components and background substrates in the eyes of hypothetical di- and trichromatic fish predators. Additionally, we converted radiance spectra to sRGB color space to simulate their in situ appearance to a human observer. Within the range of natural colors at our study site, cuttlefish closely matched the substrate spectra in a variety of body patterns. Theoretical calculations showed that this effect might be more pronounced at greater depths. We also showed that a non-biological method ("Spectral Angle Mapper"), commonly used for spectral shape similarity assessment in the field of remote sensing, shows moderate correlation to biological measures of color contrast. This performance is comparable to that of traditional measures of spectral shape similarity, hue and chroma.
This study is among the first to quantify color matching of camouflaged cuttlefish in the wild. This study was funded by ONR grant N000140610202 to RTH.
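The Spectral Angle Mapper named above has a simple closed form: the angle between two spectra viewed as vectors, which compares spectral shape while ignoring overall intensity. A minimal implementation:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM): angle between two radiance spectra.

    Smaller angles mean more similar spectral shapes; the measure is
    insensitive to overall intensity (amplitude) scaling.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because SAM depends only on direction, a spectrum and a brighter copy of it score an angle of zero, which is exactly why it serves as a shape-only comparison in the study above.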

    Spectral Divergence for Cultural Heritage applications

    Using reflectance spectra allows comparison of the pigment mixtures in paintings. To improve on the current subjective spectral comparison, we propose to use spectral similarity measures. The Kullback-Leibler spectral Pseudo-Divergence (KLPD) is selected for its expected metrological properties. A comparison between subjective and objective assessments is developed for mixtures of pigments from a cultural heritage painting. The results show a strong relationship between the subjective assessments and the objective ones obtained using the KLPD.
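A sketch of a KL spectral pseudo-divergence of the kind used above, under the common formulation in which each spectrum is split into a total energy and a normalised shape; the exact weighting in the paper may differ:

```python
import numpy as np

def klpd(f, g, eps=1e-12):
    """Sketch of a Kullback-Leibler spectral pseudo-divergence.

    Assumption: each spectrum is decomposed into an energy k and a
    normalised shape; the divergence sums a shape term (symmetrised KL
    between the normalised spectra, weighted by the energies) and an
    intensity term comparing the energies themselves.
    """
    f = np.asarray(f, dtype=float) + eps   # eps avoids log(0)
    g = np.asarray(g, dtype=float) + eps
    kf, kg = f.sum(), g.sum()
    fn, gn = f / kf, g / kg
    # Shape term: symmetrised KL divergence of the normalised spectra.
    shape = kf * np.sum(fn * np.log(fn / gn)) + kg * np.sum(gn * np.log(gn / fn))
    # Intensity term: always non-negative, zero when energies match.
    intensity = (kf - kg) * (np.log(kf) - np.log(kg))
    return float(shape + intensity)
```

The decomposition is what gives the measure its appeal for pigment comparison: two mixtures with the same spectral shape but different overall reflectance are separated by the intensity term alone.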

    Robust hyperspectral image reconstruction for scene simulation applications

    This thesis presents the development of a spectral reconstruction method for multispectral (MSI) and hyperspectral (HSI) applications through enhanced dictionary learning and spectral unmixing methodologies. Earth observation/surveillance is largely undertaken by MSI sensors such as Landsat, WorldView, and Sentinel; however, the practical usefulness of MSI data is limited, mainly because of the small number of wave bands that MSI imagery can provide. One means to remedy this major shortcoming is to extend the MSI into HSI without expensive hardware investment. Specifically, spectral reconstruction is one of the most critical elements in applications such as hyperspectral scene simulation, an important technique particularly for defence applications. Scene simulation creates a virtual scene in which the modelling of the materials can be tailored freely, allowing certain parameters of the model to be studied. In the defence sector this is the most cost-effective way to evaluate the vulnerability of soldiers and vehicles before they are deployed on foreign ground. Simulating a hyperspectral scene requires details of the materials in the scene, which are normally not available. The current state of the art therefore tries to make use of MSI satellite data and transform it into HSI for hyperspectral scene simulation. One way to achieve this is through a reconstruction algorithm, commonly known as spectral reconstruction, which turns the MSI into HSI using an optimisation approach. The methodology adopted in this thesis is the development of robust dictionary learning to estimate the endmembers (EMs). Once the EMs are found, the abundances of materials in the scene can subsequently be estimated through a linear unmixing approach.
Conventional approaches to material allocation in most hyperspectral scene simulators have used the Texture Material Mapper (TMM) algorithm, which allocates materials from a spectral library (a database of pre-compiled endmember materials) according to the minimum spectral Euclidean distance to a candidate pixel of the scene. This approach is shown (in this work) to be highly inaccurate, with large scene reconstruction error. This research instead uses a dictionary learning technique for material allocation, solving it as an optimisation problem with the objectives of: (i) reconstructing the scene as closely as possible to the ground truth, with a fraction of the error given by the TMM method; and (ii) learning trace materials (using 2-3 times the number of species, i.e. the intrinsic dimension, of clusters in the scene) to ensure all material species in the scene are included in the reconstruction. Furthermore, two approaches complementing the learned dictionary are proposed in this work: a rapid orthogonal matching pursuit (r-OMP), which enhances the performance of the orthogonal matching pursuit algorithm; and a semi-blind approximation of the irradiance of all pixels in the scene, including those in shaded regions. The main result of this research is a demonstration of the effectiveness of the proposed algorithms on real data sets. The SCD-SOMP method is shown to be capable of learning both background and trace materials, even for a dictionary with a small number of atoms (≈10). The KMSCD method is found to be more versatile, with an overcomplete (non-orthogonal) dictionary capable of learning trace materials with high scene reconstruction accuracy (2x accuracy enhancement over scenes simulated using the TMM method).
Although this work achieves an incremental improvement in spectral reconstruction, its dependence on dictionary training with a hyperspectral data set has been identified as a limitation to be removed in future research.
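The orthogonal matching pursuit that the proposed r-OMP builds on can be sketched as follows. This is the standard greedy procedure, not the thesis's enhanced variant: at each step the atom most correlated with the current residual is selected, and all selected coefficients are refit by least squares.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Minimal orthogonal matching pursuit (OMP).

    D : (bands, atoms) dictionary with unit-norm columns.
    y : (bands,) spectrum to reconstruct sparsely.
    Returns a sparse coefficient vector x such that D @ x ≈ y.
    """
    residual = y.copy()
    support, coefs = [], None
    for _ in range(n_nonzero):
        # Select the atom most correlated with the residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Refit all coefficients on the current support by least squares.
        coefs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coefs
    x = np.zeros(D.shape[1])
    x[support] = coefs
    return x
```

In the scene-simulation setting, `D` would hold learned endmember spectra and `x` the (sparse) material abundances for one pixel; the least-squares refit at each step is what distinguishes OMP from plain matching pursuit.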

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    The book offers a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic, so it requires familiarity with other topics in Artificial Intelligence (AI) such as machine learning, digital image processing, and psychology. This makes it a great opportunity to write a book that covers all of these topics for beginner to professional readers in the field of AI, even those without a background in AI. Our goal is to provide a standalone introduction to FMER analysis, in the form of theoretical descriptions for readers with no background in image processing, together with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments in real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with the different key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expressions recognition, feature extraction, and dimensionality reduction. Comment: This is the second edition of the book.

    Multispectral Image Road Extraction Based Upon Automated Map Conflation

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated by the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing in different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. This crowd-sourced cartography has the advantage of worldwide coverage that is constantly updated. Its road vectors follow only roads, and so can guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths, and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is therefore an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the generation of two road features: a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented.
The spectral measure, called the anisotropy-tunable distance (ATD), differs from conventional measures in that it accounts for both changes of spectral direction and changes of spectral magnitude in a unified fashion. The ATD measure is particularly suitable for differentiating urban targets such as roads and building rooftops. The curvilinear image provides estimates of the width and orientation of potential road segments. Road vectors derived from OpenStreetMap are then conflated to image road features by applying junction matching and intermediate point matching, followed by refinement with mean-shift clustering and morphological processing to produce a road mask with piecewise width estimates. The proposed approach is tested on a set of challenging, large, and diverse image data sets, and its accuracy is assessed. The method is effective for road detection and width estimation, even in challenging scenarios with extensive occlusion.
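The published ATD formula is not reproduced here; the hypothetical sketch below only illustrates the stated idea of unifying spectral direction and spectral magnitude in one measure, with a tunable parameter weighting the two components:

```python
import numpy as np

def atd_sketch(x, y, alpha=0.5):
    """Hypothetical anisotropy-tunable-style distance (not the published ATD).

    Combines a direction term (spectral angle) and a magnitude term
    (difference of vector norms); `alpha` tunes the balance.  alpha=1
    reduces to a pure shape comparison, alpha=0 to a pure amplitude one.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    angle = np.arccos(np.clip(cos, -1.0, 1.0))               # direction change
    magnitude = abs(np.linalg.norm(x) - np.linalg.norm(y))   # amplitude change
    return float(alpha * angle + (1.0 - alpha) * magnitude)
```

Tuning between the two terms is what lets such a measure separate, say, bright rooftops from darker road pavement of the same material class.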

    Adaptive Similarity Measures for Material Identification in Hyperspectral Imagery

    Remotely-sensed hyperspectral imagery has become one of the most advanced tools for analyzing the processes that shape the Earth and other planets. Effective, rapid analysis of high-volume, high-dimensional hyperspectral image data sets demands efficient, automated techniques to identify signatures of known materials in such imagery. In this thesis, we develop a framework for automatic material identification in hyperspectral imagery using adaptive similarity measures. We frame the material identification problem as a multiclass similarity-based classification problem, where our goal is to predict material labels for unlabeled target spectra based upon their similarities to source spectra with known material labels. As differences in capture conditions affect the spectral representations of materials, we divide the material identification problem into intra-domain (i.e., source and target spectra captured under identical conditions) and inter-domain (i.e., source and target spectra captured under different conditions) settings. The first component of this thesis develops adaptive similarity measures for intra-domain settings that measure the relevance of spectral features to the given classification task using small amounts of labeled data. We propose a technique based on multiclass Linear Discriminant Analysis (LDA) that combines several distinct similarity measures into a single hybrid measure capturing the strengths of each of the individual measures. We also provide a comparative survey of techniques for low-rank Mahalanobis metric learning, and demonstrate that regularized LDA yields results competitive with the state of the art at substantially lower computational cost. The second component of this thesis shifts the focus to inter-domain settings, and proposes a multiclass domain adaptation framework that reconciles systematic differences between spectra captured under similar, but not identical, conditions.
Our framework computes a similarity-based mapping that captures structured, relative relationships between classes shared between source and target domains, allowing us to apply a classifier trained on labeled source spectra to classify target spectra. We demonstrate improved domain adaptation accuracy in comparison to recently-proposed multitask learning and manifold alignment techniques in several case studies involving state-of-the-art synthetic and real-world hyperspectral imagery.
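The intra-domain idea of learning a task-adapted similarity from a small amount of labeled data can be sketched in its simplest form: estimate the within-class scatter of the labeled source spectra and use its (regularized) inverse as a Mahalanobis metric, so directions of within-material variability are down-weighted. This is a minimal LDA-flavoured sketch, not the thesis's hybrid-measure method:

```python
import numpy as np

def learn_mahalanobis(X, labels, reg=1e-3):
    """Learn a Mahalanobis distance from labeled spectra.

    Estimates the within-class scatter matrix Sw from the labeled data,
    regularises it for invertibility, and returns a distance function
    d(a, b) = sqrt((a-b)^T Sw^{-1} (a-b)).
    """
    X = np.asarray(X, dtype=float)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(labels):
        Xc = X[np.asarray(labels) == c]
        Xc = Xc - Xc.mean(axis=0)           # center each class
        Sw += Xc.T @ Xc
    M = np.linalg.inv(Sw / len(X) + reg * np.eye(X.shape[1]))

    def dist(a, b):
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        return float(np.sqrt(d @ M @ d))
    return dist
```

Unlabeled target spectra would then be classified by their learned-metric distance to the labeled source spectra, e.g. with a nearest-neighbor rule.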

    A Markov Chain Random Field Cosimulation-Based Approach for Land Cover Post-classification and Urban Growth Detection

    The recently proposed Markov chain random field (MCRF) approach has great potential to significantly improve land cover classification accuracy when used as a post-classification method, by taking advantage of expert-interpreted data and pre-classified image data. This doctoral dissertation explores the effectiveness of the MCRF cosimulation (coMCRF) model in land cover post-classification and further improves it for land cover post-classification and urban growth detection. The intellectual merits of this research include the following aspects: First, by examining the coMCRF method under different conditions, this study provides land cover classification researchers with a solid reference regarding the performance of the coMCRF method for land cover post-classification. Second, this study provides a creative way to reduce the smoothing effect in land cover post-classification by incorporating spectral similarity into the coMCRF method, which should also be applicable to other geostatistical models. Third, developing an integrated framework combining multisource data, spatial statistical models, and morphological operator reasoning for large-area urban vertical and horizontal growth detection from medium-resolution remotely sensed images enables us to detect and study the footprint of vertical and horizontal urbanization, so that we can understand global urbanization from a new angle. Such a technology can be transformative for urban growth studies. The broader impacts of this research are concentrated on several points: The first is that the coMCRF method and the integrated approach will be turned into open-access, user-friendly software with a graphical user interface (GUI) and an ArcGIS tool. Researchers and other users will be able to use them to produce high-quality land cover maps or improve the quality of existing land cover maps.
The second is that these research results will lead to better insight into urban growth in its horizontal and vertical dimensions, as well as the spatial and temporal relationships between urban horizontal and vertical growth and changes in socioeconomic variables. The third is that all products will be archived and shared on the Internet.

    A computational approach to the quantification of animal camouflage

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2014. Evolutionary pressures have led to some astonishing camouflage strategies in the animal kingdom. Cephalopods like cuttlefish and octopus have mastered a rather unique skill: they can rapidly adapt the way their skin looks in color, texture, and pattern, blending in with their backgrounds. Showing a general resemblance to a visual background is one of the many camouflage strategies used in nature. For animals like cuttlefish that can dynamically change the way they look, we would like to be able to determine which camouflage strategy a given pattern serves. For example, does an inexact match to a particular background mean the animal has physiological limitations to the patterns it can show, or is it employing a different camouflage strategy (e.g., disrupting its outline)? This thesis uses a computational and data-driven approach to quantify camouflage patterns of cuttlefish in terms of color and pattern. First, we assess the color match of cuttlefish to the features of its natural background in the eyes of its predators. Then, we study overall body patterns to discover relationships and limitations between chromatic components. To facilitate repeatability of our work by others, we also explore ways of unbiased data acquisition using consumer cameras and conventional spectrometers, the optical imaging instruments most commonly used in studies of animal coloration and camouflage. This thesis makes the following contributions: (1) Proposes a methodology for scene-specific color calibration for the use of RGB cameras for accurate and consistent data acquisition. (2) Introduces an equation relating the numerical aperture and diameter of the optical fiber of a spectrometer to measurement distance and angle, quantifying the degree of spectral contamination.
(3) Presents the first study assessing the color match of cuttlefish (S. officinalis) to its background using in situ spectrometry. (4) Develops a computational approach to pattern quantification using techniques from computer vision, image processing, statistics, and pattern recognition; and introduces Cuttlefish72x5, the first database of calibrated raw (linear) images of cuttlefish. Funding was provided by the National Science Foundation, Office of Naval Research, NIH-NEI, and the Woods Hole Oceanographic Institution Academic Programs Office.
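The fiber-geometry relation in contribution (2) can be approximated with standard geometric optics (this is not necessarily the exact equation derived in the thesis): a bare fiber accepts light within a half-angle of arcsin(NA), so the sampled spot on the substrate grows linearly with measurement distance, and spectra from outside the intended target increasingly contaminate the measurement.

```python
import math

def spot_diameter(fiber_diameter_mm, numerical_aperture, distance_mm):
    """Approximate measurement-spot diameter of a bare spectrometer fiber.

    Geometric-optics sketch: acceptance half-angle = arcsin(NA);
    the spot diameter grows by 2 * distance * tan(half_angle).
    All lengths in millimetres.
    """
    half_angle = math.asin(numerical_aperture)  # acceptance half-angle (rad)
    return fiber_diameter_mm + 2.0 * distance_mm * math.tan(half_angle)
```

For example, a 0.4 mm fiber with NA 0.22 held 10 mm from the substrate samples a spot several millimetres across, which is why measurement distance matters for point-by-point in situ spectrometry of small skin components.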

    Personality Identification from Social Media Using Deep Learning: A Review

    Social media helps people scattered around the world share ideas and information, and thus helps create communities, groups, and virtual networks. Personality identification is significant in many applications, such as detecting the mental state or character of a person, predicting job satisfaction and professional and personal relationship success, and in recommendation systems. Personality is also an important factor in determining individual variation in thoughts, feelings, and conduct. According to the Global social media research survey of 2018, there are approximately 3.196 billion social media users worldwide, and the numbers are estimated to grow rapidly with the spread of mobile smart devices and advances in technology. Support vector machines (SVM), Naive Bayes (NB), multilayer perceptron neural networks, and convolutional neural networks (CNN) are some of the machine learning techniques used for personality identification in the literature. This paper presents various studies that identify the personality of social media users with the help of machine learning approaches, and reviews recent studies that aim to predict the personality of online social media (OSM) users.