    Decision-Making with Heterogeneous Sensors - A Copula Based Approach

    Statistical decision making has wide-ranging applications, from communications and signal processing to econometrics and finance. In contrast to the classical one source-one receiver paradigm, several applications have been identified in the recent past that require acquiring data from multiple sources or sensors. Information from the multiple sensors is transmitted to a remotely located receiver, known as the fusion center, which makes a global decision. Past work has largely focused on fusion of information from homogeneous sensors. This dissertation extends the formulation to the case where the local sensors may possess disparate sensing modalities. Both the theoretical and practical aspects of multimodal signal processing are considered. The first and foremost challenge is to 'adequately' model the joint statistics of such heterogeneous sensors. We propose the use of copula theory for this purpose. Copula models are general descriptors of dependence; they provide a way to characterize the nonlinear functional relationships between the multiple modalities, which are otherwise difficult to formalize. The important problem of selecting the 'best' copula function from a given set of valid copula densities is addressed, especially in the context of binary hypothesis testing problems. Both the training-testing paradigm, in which a training set is assumed to be available for learning the copula models prior to system deployment, and a generalized likelihood ratio test (GLRT) based fusion rule for online selection and estimation of copula parameters are considered. The developed theory is corroborated with extensive computer simulations as well as results on real-world data. Sensor observations (or features extracted thereof) are most often quantized before their transmission to the fusion center for bandwidth and power conservation. A detection scheme is proposed for this problem assuming uniform scalar quantizers at each sensor. The designed rule is applicable to both binary and multibit local sensor decisions. An alternative suboptimal but computationally efficient fusion rule is also designed, which involves injecting a deliberate disturbance into the local sensor decisions before fusion. The rule is based on Widrow's statistical theory of quantization. Addition of controlled noise helps to 'linearize' the highly nonlinear quantization process, thus resulting in computational savings. It is shown that although the introduction of external noise does cause a reduction in the received signal-to-noise ratio, the proposed approach can be highly accurate when the input signals have bandlimited characteristic functions and the number of quantization levels is large. The problem of quantifying neural synchrony using copula functions is also investigated. It has been widely accepted that multiple simultaneously recorded electroencephalographic signals exhibit nonlinear and non-Gaussian statistics. While existing and popular measures such as the correlation coefficient, corr-entropy coefficient, coh-entropy and mutual information are limited to being bivariate and hence applicable only to pairs of channels, measures such as Granger causality, even though multivariate, fail to account for nonlinear inter-channel dependence. The application of copula theory helps alleviate both these limitations. The problem of distinguishing patients with mild cognitive impairment from age-matched control subjects is also considered. Results show that the copula-derived synchrony measures, when used in conjunction with other synchrony measures, improve the detection of Alzheimer's disease onset
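
    As a concrete illustration of the kind of copula-based fusion described above, the sketch below combines two heterogeneous sensor observations through a bivariate Gaussian copula and thresholds the joint log-likelihood ratio. It is a minimal sketch under assumed inputs, not the dissertation's exact rule: the marginal distributions, the copula parameters rho0/rho1, and the function names are illustrative placeholders that would normally be learned from training data.

```python
# Minimal sketch: Gaussian-copula fusion of two heterogeneous sensors for a binary
# hypothesis test. All parameters below are illustrative placeholders.
import numpy as np
from scipy.stats import norm

def gaussian_copula_logpdf(u1, u2, rho):
    """Log-density of a bivariate Gaussian copula evaluated at (u1, u2)."""
    z1, z2 = norm.ppf(u1), norm.ppf(u2)
    return (-0.5 * np.log(1 - rho**2)
            + (2 * rho * z1 * z2 - rho**2 * (z1**2 + z2**2)) / (2 * (1 - rho**2)))

def fused_log_likelihood_ratio(x1, x2, marg0, marg1, rho0, rho1):
    """Joint LLR for two sensors with disparate modalities.

    marg0/marg1: dicts of frozen scipy distributions per sensor under H0/H1
    (stand-ins for marginals fitted offline); rho0/rho1: copula parameters.
    """
    llr_marginals = (marg1["s1"].logpdf(x1) + marg1["s2"].logpdf(x2)
                     - marg0["s1"].logpdf(x1) - marg0["s2"].logpdf(x2))
    llr_copula = (gaussian_copula_logpdf(marg1["s1"].cdf(x1), marg1["s2"].cdf(x2), rho1)
                  - gaussian_copula_logpdf(marg0["s1"].cdf(x1), marg0["s2"].cdf(x2), rho0))
    return llr_marginals + llr_copula

# Toy usage: decide H1 if the fused LLR exceeds a threshold (here, zero).
marg0 = {"s1": norm(0, 1), "s2": norm(0, 2)}
marg1 = {"s1": norm(1, 1), "s2": norm(1.5, 2)}
decide_h1 = fused_log_likelihood_ratio(0.8, 1.2, marg0, marg1, rho0=0.1, rho1=0.6) > 0.0
```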

    A Novel Adaptive LBP-Based Descriptor for Color Image Retrieval

    In this paper, we present two approaches to extract discriminative features for color image retrieval. The proposed local texture descriptors, based on the Radial Mean Local Binary Pattern (RMLBP), are called Color RMCLBP (CRMCLBP) and Prototype Data Model (PDM). RMLBP is a noise-robust descriptor that has been proposed to extract texture features from gray-scale images for texture classification. For the first descriptor, the Radial Mean Completed Local Binary Pattern is applied to each channel of the color space independently. Then, the final descriptor is obtained by concatenating the histograms of the CRMCLBP_S/M/C components of each channel. Moreover, to enhance the performance of the proposed method, the Particle Swarm Optimization (PSO) algorithm is used for feature weighting. The second proposed descriptor, PDM, uses the three outputs of CRMCLBP (CRMCLBP_S, CRMCLBP_M, CRMCLBP_C) as discriminative features for each pixel of a color image. Then, a set of representative feature vectors is selected from each image by applying the k-means clustering algorithm. This set of selected prototypes is compared by means of a new similarity measure to find the most relevant images. Finally, a weighted version of PDM is constructed using the PSO algorithm. Our proposed methods are tested on the Wang, Corel-5k, Corel-10k and Holidays datasets. The results show that our proposed methods make an admissible tradeoff between speed and retrieval accuracy. The first descriptor improves on state-of-the-art color texture descriptors in both aspects. The second one is a very fast retrieval algorithm which extracts discriminative features
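
    To illustrate the per-channel texture descriptor idea, the sketch below computes a plain uniform LBP histogram on each color channel and concatenates them, followed by a weighted distance whose weights would be tuned offline (the paper uses PSO for this). It is only a simplified stand-in: the RMCLBP operators and their S/M/C decomposition are not reproduced, and the function names are illustrative.

```python
# Sketch of a per-channel LBP color descriptor (plain uniform LBP as a stand-in
# for the paper's RMCLBP operators).
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(rgb_image, points=8, radius=1):
    """Concatenate LBP histograms computed independently on each color channel."""
    n_bins = points + 2                      # 'uniform' LBP yields P + 2 distinct codes
    hists = []
    for c in range(3):
        codes = local_binary_pattern(rgb_image[:, :, c], points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)             # final descriptor: one histogram per channel

def weighted_distance(query, target, weights):
    """Weighted L1 distance between descriptors; weights would be optimized offline."""
    return np.sum(weights * np.abs(query - target))
```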

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification beyond the basic approach of tooth-to-tooth matching. The first problem is the automatic classification of teeth into incisors, canines, premolars and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead the decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four teeth classes, and then select the teeth classes that achieve the least energy discrepancy between the novel teeth and their approximations. We exploit teeth neighborhood rules in validating teeth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% teeth-labeling accuracy on a large test dataset of bitewing films. Because dental radiographic films capture projections of distinct teeth, and often multiple views of each of the distinct teeth, in the second problem we look for a scheme that exploits teeth multiplicity to achieve more reliable match decisions when we compare the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of teeth multiplicity for improving teeth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%. In the third problem we study the performance limits of dental identification due to the capabilities of the features. We consider two types of features used in dental identification, namely teeth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach
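
    The eigen-subspace classification step described above can be sketched as follows: project a segmented, vectorized tooth image onto the PCA subspace of each of the four classes and pick the class with the smallest reconstruction error (the energy discrepancy). This is a generic reconstruction-error classifier over assumed, illustrative inputs, not the full pipeline; the string matching and neighborhood-rule validation are omitted.

```python
# Sketch: PCA-subspace tooth classification by minimum reconstruction error.
import numpy as np

def fit_class_subspace(training_vectors, n_components):
    """Mean and top principal directions of one tooth class (rows = flattened images)."""
    mean = training_vectors.mean(axis=0)
    centered = training_vectors - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]            # principal directions as rows

def reconstruction_error(x, mean, basis):
    proj = basis.T @ (basis @ (x - mean))      # projection onto the class subspace
    return np.linalg.norm((x - mean) - proj)

def classify_tooth(x, subspaces):
    """subspaces: dict class_name -> (mean, basis); returns the least-error class."""
    return min(subspaces, key=lambda c: reconstruction_error(x, *subspaces[c]))
```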

    Change detection in optical aerial images by a multilayer conditional mixed Markov model

    In this paper we propose a probabilistic model for detecting relevant changes in registered aerial image pairs taken with time differences of several years and in different seasonal conditions. The introduced approach, called the Conditional Mixed Markov model (CXM), is a combination of a mixed Markov model and a conditionally independent random field of signals. The model integrates global intensity statistics with local correlation and contrast features. A global energy optimization process ensures simultaneously optimal local feature selection and smooth, observation-consistent segmentation. Validation is given on real aerial image sets provided by the Hungarian Institute of Geodesy, Cartography and Remote Sensing and Google Earth
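
    A rough sense of the energy-minimization machinery behind such models can be given with a much simpler stand-in: a per-pixel data term built from change/no-change log-likelihoods plus a Potts smoothness prior, optimized greedily with iterated conditional modes (ICM). This is not the multilayer CXM model itself; the likelihood inputs and the smoothness weight are assumed placeholders.

```python
# Sketch: binary change-map labeling via a data term + Potts prior, optimized with ICM.
import numpy as np

def icm_change_map(log_lik_nochange, log_lik_change, beta=1.5, n_iter=5):
    """log_lik_*: HxW arrays of per-pixel log-likelihoods; returns a 0/1 label map."""
    labels = (log_lik_change > log_lik_nochange).astype(int)
    data = np.stack([-log_lik_nochange, -log_lik_change])   # data energies per label
    h, w = labels.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                neigh = [labels[i2, j2]
                         for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= i2 < h and 0 <= j2 < w]
                # total energy for each candidate label: data term + Potts disagreement
                costs = [data[l, i, j] + beta * sum(n != l for n in neigh) for l in (0, 1)]
                labels[i, j] = int(np.argmin(costs))
    return labels
```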

    Long-term and high-resolution global time series of brightness temperature from copula-based fusion of SMAP enhanced and SMOS data

    Long and consistent soil moisture time series at adequate spatial resolution are key to fostering the application of soil moisture observations and remotely-sensed products in climate and numerical weather prediction models. The two L-band soil moisture satellite missions SMAP (Soil Moisture Active Passive) and SMOS (Soil Moisture and Ocean Salinity) are able to provide soil moisture estimates on global scales at kilometer-scale resolution. However, the SMOS data record spans about 7.5 years (since late 2009), but only at a coarse resolution of 25 km. In contrast, a spatially-enhanced SMAP product is available at a higher resolution of 9 km, but for a shorter time period (since March 2015 only). Being the fundamental observable from passive microwave sensors, reliable brightness temperatures (Tbs) are a mandatory precondition for satellite-based soil moisture products. We therefore develop, evaluate and apply a copula-based data fusion approach for combining SMAP Enhanced (SMAP_E) and SMOS brightness temperature (Tb) data. The approach exploits both linear and non-linear dependencies between the two satellite-based Tb products and allows one to generate conditional SMAP_E-like random samples during the pre-SMAP period. Our resulting global Copula-combined SMOS-SMAP_E (CoSMOP) Tbs are statistically consistent with SMAP_E brightness temperatures, have a spatial resolution of 9 km and cover the period from 2010 to 2018. A comparison with Soil Climate Analysis Network (SCAN) sites over the Contiguous United States (CONUS) domain shows that the approach successfully reduces the average RMSE of the original SMOS data by 15%. At certain locations, improvements of 40% and more can be observed. Moreover, the median NSE can be improved from zero to almost 0.5. Hence, CoSMOP, which will be made freely available to the public, provides a first step towards a global, long-term, high-resolution and multi-sensor brightness temperature product, and thereby also soil moisture
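
    The conditional-sampling idea can be sketched with a bivariate Gaussian copula: transform a SMOS Tb value to a Gaussian score through its marginal CDF, draw from the conditional Gaussian, and map the draws back through the SMAP_E marginal quantile function. This is a minimal sketch of generic copula conditional simulation, not the CoSMOP implementation; the per-pixel marginals and the copula parameter rho are assumed placeholders fitted offline.

```python
# Sketch: conditional simulation of SMAP_E-like Tb samples given a SMOS Tb value,
# under an assumed bivariate Gaussian copula with fitted marginals.
import numpy as np
from scipy.stats import norm

def conditional_smap_like_samples(tb_smos, cdf_smos, ppf_smap, rho, n_samples, rng=None):
    """Sample from the SMAP_E marginal conditioned on SMOS = tb_smos."""
    rng = rng or np.random.default_rng()
    z1 = norm.ppf(cdf_smos(tb_smos))                  # SMOS value as a Gaussian score
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_samples)
    return ppf_smap(norm.cdf(z2))                     # back through the SMAP_E marginal

# Toy usage with Gaussian marginals standing in for fitted per-pixel distributions.
smos = norm(loc=250.0, scale=8.0)
smap = norm(loc=248.0, scale=7.0)
samples = conditional_smap_like_samples(255.0, smos.cdf, smap.ppf, rho=0.9, n_samples=100)
```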

    Non-Gaussian data modeling with hidden Markov models

    In 2015, 2.5 quintillion bytes of data were generated daily worldwide, of which 90% were unstructured data that do not follow any pre-defined model. These data can be found in a great variety of formats, among them texts, images, audio tracks, and videos. With appropriate techniques, this massive amount of data is a goldmine from which one can extract a variety of meaningful embedded information. Among those techniques, machine learning algorithms allow multiple processing possibilities, from compact data representation, to data clustering, classification, analysis, and synthesis, to the detection of outliers. Data modeling is the first step for performing any of these tasks, and the accuracy and reliability of this initial step are thus crucial for subsequently building up a complete data processing framework. The principal motivation behind my work is the over-use of the Gaussian assumption for data modeling in the literature. Though this assumption is probably the best to make when no information about the data to be modeled is available, in most cases studying a few data properties would make other distributions a better assumption. In this thesis, I focus on proportional data that are most commonly known in the form of histograms and that naturally arise in a number of situations, such as in bag-of-words methods. These data are non-Gaussian, and their modeling with distributions belonging to the Dirichlet family, which share common properties, is expected to be more accurate. The models I focus on are hidden Markov models, well known for their capability to handle dynamic, ordered, multivariate data with ease. They have been shown to be very effective in numerous fields for various applications over the last 30 years and have especially become a cornerstone in speech processing. Despite their extensive use in almost all computer vision areas, they are still mainly suited for Gaussian data modeling. I propose here to theoretically derive different approaches for learning hidden Markov models based on mixtures of Dirichlet, generalized Dirichlet, and Beta-Liouville distributions, as well as on mixed data, and for applying them to real-world situations. Expectation-Maximization and variational learning approaches are studied and compared over several data sets, specifically for the task of detecting and localizing unusual events. Hybrid HMMs are proposed to model mixed data with the goal of detecting changes in satellite images corrupted by different noises. Finally, several parametric distances for comparing Dirichlet and generalized Dirichlet-based HMMs are proposed and extensively tested for assessing their robustness. My experimental results show situations in which such models are worth using, but also reveal their strengths and limitations
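
    The core idea of pairing HMMs with Dirichlet emissions for proportional (histogram-like) data can be sketched with a standard forward pass in which the Gaussian emission density is replaced by a Dirichlet density. The sketch below uses fixed, illustrative parameters rather than the EM or variational learning procedures described in the thesis.

```python
# Sketch: log-likelihood of a sequence of simplex-valued observations under an HMM
# with Dirichlet emissions, via the standard forward recursion in log space.
import numpy as np
from scipy.stats import dirichlet

def forward_log_likelihood(obs, start_prob, trans, alphas):
    """obs: T x D rows on the probability simplex; alphas: one Dirichlet parameter vector per state."""
    n_states = len(alphas)
    # per-observation, per-state emission log-densities (T x K)
    emis = np.array([[dirichlet.logpdf(x, a) for a in alphas] for x in obs])
    log_alpha = np.log(start_prob) + emis[0]
    for t in range(1, len(obs)):
        log_alpha = emis[t] + np.array([
            np.logaddexp.reduce(log_alpha + np.log(trans[:, k])) for k in range(n_states)])
    return np.logaddexp.reduce(log_alpha)

# Toy usage with two hypothetical states and 3-bin histogram observations.
obs = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6], [0.2, 0.2, 0.6]])
ll = forward_log_likelihood(obs,
                            start_prob=np.array([0.5, 0.5]),
                            trans=np.array([[0.9, 0.1], [0.2, 0.8]]),
                            alphas=[np.array([8.0, 2.0, 1.0]), np.array([1.0, 2.0, 7.0])])
```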

    Human metrology for person classification and recognition

    Human metrological features generally refer to geometric measurements extracted from humans, such as height, chest circumference or foot length. Human metrology provides an important soft biometric that can be used in challenging situations, such as person classification and recognition at a distance, where hard biometric traits such as fingerprints and iris information cannot easily be acquired. In this work, we first study the question of predictability and correlation in human metrology. We show that partial or available measurements can be used to predict other missing measurements. We then investigate the use of human metrology for the prediction of other soft biometrics, viz. gender and weight. The experimental results based on our proposed copula-based model suggest that human body metrology contains enough information for reliable prediction of gender and weight. Also, the proposed copula-based technique is observed to reduce the impact of noise on prediction performance. We then study the question of whether face metrology can be exploited for reliable gender prediction. A new method based solely on metrological information from facial landmarks is developed. The performance of the proposed metrology-based method is compared with that of a state-of-the-art appearance-based method for gender classification. Results on several face databases show that the metrology-based approach achieves accuracy comparable to that of the appearance-based method. Furthermore, we study the question of person recognition (classification and identification) via whole-body metrology. Using the CAESAR 1D database as a baseline, we simulate intra-class variation with various noise models. The experimental results indicate that, given a sufficient number of features, our metrology-based recognition system can achieve promising performance that is comparable to several recent state-of-the-art recognition systems. We propose a non-parametric feature selection methodology, called the adapted k-nearest neighbor estimator, which does not rely on the intra-class distribution of the query set. This leads to improved results over other nearest neighbor estimators (as feature selection criteria) for a moderate number of features. Finally, we quantify the discrimination capability of human metrology from both individuality and capacity perspectives. Generally, a biometric-based recognition technique relies on the assumption that the given biometric is unique to an individual. However, the validity of this assumption has not yet been generally confirmed for most soft biometrics, such as human metrology. In this work, we first develop two schemes that can be used to quantify the individuality of a given soft-biometric system. Then, a Poisson channel model is proposed to analyze the recognition capacity of human metrology. Our study suggests that the performance of such a system depends more on the accuracy of the ground truth or training set
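
    The copula idea behind predicting one soft biometric (for example, weight) from available body measurements can be sketched as a generic Gaussian-copula regression: map each measurement to a Gaussian score through its empirical CDF, predict the target's score by Gaussian conditioning, then map back through the target's empirical quantiles. This is not the dissertation's exact estimator; all data and names below are illustrative.

```python
# Sketch: Gaussian-copula regression for predicting a missing body measurement.
import numpy as np
from scipy.stats import norm

def to_scores(col):
    """Empirical-CDF Gaussian scores of a 1-D sample."""
    ranks = np.argsort(np.argsort(col)) + 1.0
    return norm.ppf(ranks / (len(col) + 1.0))

def copula_predict(X_train, y_train, x_new):
    """Predict the target value for the new measurement vector x_new."""
    Z = np.column_stack([to_scores(col) for col in X_train.T])
    zy = to_scores(y_train)
    C = np.corrcoef(np.column_stack([Z, zy]), rowvar=False)
    w = np.linalg.solve(C[:-1, :-1], C[:-1, -1])          # Sigma_xx^{-1} Sigma_xy
    # map the new measurements onto the training Gaussian scores by interpolation
    z_new = np.array([np.interp(x_new[j], np.sort(X_train[:, j]), np.sort(Z[:, j]))
                      for j in range(X_train.shape[1])])
    z_pred = float(z_new @ w)                             # conditional mean of the target score
    return float(np.interp(z_pred, np.sort(zy), np.sort(y_train)))  # back to measurement units

# Toy usage with made-up measurements: predict weight from three body measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * [8, 5, 2] + [170, 95, 26]   # height, chest, foot (cm)
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=4, size=200) - 60
print(copula_predict(X, y, np.array([175.0, 100.0, 27.0])))
```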