    Underdetermined Blind Identification for k-Sparse Component Analysis using RANSAC-based Orthogonal Subspace Search

    Sparse component analysis is widely used to solve the underdetermined blind source separation (UBSS) problem. Here, we propose a new underdetermined blind identification (UBI) approach for estimating the mixing matrix in UBSS. Previous approaches either rely on a single dominant component or allow up to k ≤ m−1 active sources at each time instant, where m is the number of mixtures, but impose constraints on the level of the noise replacing inactive sources. Here, we propose an effective, computationally less complex, and more noise-robust UBI approach that tackles these restrictions when k = m−1, based on a two-step scenario: (1) estimating the orthogonal complement subspaces of the overall space and (2) identifying the mixing vectors. For this purpose, an integrated algorithm is presented that solves both steps based on the Gram-Schmidt process and the random sample consensus (RANSAC) method. Experimental results on simulated data show that the proposed method is more effective than existing algorithms.
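
    As a rough illustration of step (1) only, and not the authors' exact algorithm, the sketch below uses RANSAC-style sampling with an SVD-based orthogonal-complement computation standing in for an explicit Gram-Schmidt step: when k = m−1 sources are active, the mixture samples concentrate on hyperplanes spanned by m−1 mixing vectors, and each hyperplane is summarized by its unit normal. All function names, thresholds, and defaults are hypothetical.

```python
import numpy as np

def ransac_hyperplane_normals(X, n_planes, n_iter=2000, tol=1e-2, min_inliers=50, seed=0):
    """Sketch of step (1): estimate unit normals of the (m-1)-dimensional
    concentration hyperplanes of k = m-1 sparse mixtures X (an m x T matrix)."""
    rng = np.random.default_rng(seed)
    m, T = X.shape
    normals = []
    for _ in range(n_iter):
        # sample m-1 mixture columns that hypothetically share one hyperplane
        idx = rng.choice(T, size=m - 1, replace=False)
        # normal = orthogonal complement of their span (one Gram-Schmidt-style
        # completion step, computed here via the SVD null space)
        _, _, vt = np.linalg.svd(X[:, idx].T, full_matrices=True)
        n_vec = vt[-1]
        # inliers: samples (nearly) orthogonal to the candidate normal
        cos = np.abs(n_vec @ X) / (np.linalg.norm(X, axis=0) + 1e-12)
        if (cos < tol).sum() >= min_inliers and all(
                min(np.linalg.norm(n_vec - v), np.linalg.norm(n_vec + v)) > 1e-2
                for v in normals):
            normals.append(n_vec)
        if len(normals) == n_planes:
            break
    return np.array(normals)
```

    In step (2), each mixing vector would then be identified as a direction lying in the intersection of the hyperplanes that contain it, i.e. a direction orthogonal to the corresponding subset of the estimated normals.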

    Forward uncertainty quantification with special emphasis on a Bayesian active learning perspective

    Uncertainty quantification (UQ) in its broadest sense aims at quantitatively studying all sources of uncertainty arising from both computational and real-world applications. Although many subtopics appear in the UQ field, there are typically two major types of UQ problems: forward and inverse uncertainty propagation. The present study focuses on the former, which involves assessing the effects of the input uncertainty in various forms on the output response of a computational model. In total, this thesis reports nine main developments in the context of forward uncertainty propagation, with special emphasis on a Bayesian active learning perspective. The first development is concerned with estimating the extreme value distribution and small first-passage probabilities of uncertain nonlinear structures under stochastic seismic excitations, where a moment-generating function-based mixture distribution approach (MGF-MD) is proposed. As the second development, a triple-engine parallel Bayesian global optimization (T-PBGO) method is presented for interval uncertainty propagation. The third contribution develops a parallel Bayesian quadrature optimization (PBQO) method for estimating the response expectation function, its variable importance and bounds when a computational model is subject to hybrid uncertainties in the form of random variables, parametric probability boxes (p-boxes) and interval models. In the fourth research, of interest is the failure probability function when the inputs of a performance function are characterized by parametric p-boxes. To do so, an active learning augmented probabilistic integration (ALAPI) method is proposed based on offering a partially Bayesian active learning perspective on failure probability estimation, as well as the use of high-dimensional model representation (HDMR) technique. Note that in this work we derive an upper-bound of the posterior variance of the failure probability, which bounds our epistemic uncertainty about the failure probability due to a kind of numerical uncertainty, i.e., discretization error. The fifth contribution further strengthens the previously developed active learning probabilistic integration (ALPI) method in two ways, i.e., enabling the use of parallel computing and enhancing the capability of assessing small failure probabilities. The resulting method is called parallel adaptive Bayesian quadrature (PABQ). The sixth research presents a principled Bayesian failure probability inference (BFPI) framework, where the posterior variance of the failure probability is derived (not in closed form). Besides, we also develop a parallel adaptive-Bayesian failure probability learning (PA-BFPI) method upon the BFPI framework. For the seventh development, we propose a partially Bayesian active learning line sampling (PBAL-LS) method for assessing extremely small failure probabilities, where a partially Bayesian active learning insight is offered for the classical LS method and an upper-bound for the posterior variance of the failure probability is deduced. Following the PBAL-LS method, the eighth contribution finally obtains the expression of the posterior variance of the failure probability in the LS framework, and a Bayesian active learning line sampling (BALLS) method is put forward. The ninth contribution provides another Bayesian active learning alternative, Bayesian active learning line sampling with log-normal process (BAL-LS-LP), to the traditional LS. 
In this method, a log-normal process prior, instead of a Gaussian process prior, is assumed for the beta function so as to account for the non-negativity constraint. In addition, the approximation error resulting from the root-finding procedure is also taken into consideration. In conclusion, this thesis presents a set of novel computational methods for forward UQ, especially from a Bayesian active learning perspective. The developed methods are expected to enrich our toolbox for forward UQ analysis, and the insights gained can stimulate further studies.
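
    The following is a minimal sketch of the generic Bayesian active learning idea that runs through several of these contributions: a Gaussian process surrogate of the performance function, an acquisition rule that targets the limit state, and a plug-in failure-probability estimate over a Monte Carlo population. It is a simplified AK-MCS-style loop written for illustration only, not an implementation of BFPI, PABQ, PBAL-LS, or any other method named above; the performance function g, the kernel, and the stopping threshold are all hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def g(x):
    # hypothetical performance function: failure is the event g(x) < 0
    return 5.0 - x[:, 0] - x[:, 1] ** 2

rng = np.random.default_rng(0)
X_mc = rng.standard_normal((20000, 2))      # Monte Carlo population of the random inputs
X_train = rng.standard_normal((12, 2))      # small initial design
y_train = g(X_train)

for _ in range(40):
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_train, y_train)
    mu, sd = gp.predict(X_mc, return_std=True)
    pf = np.mean(mu < 0)                    # plug-in failure probability estimate
    # U learning function: small values = close to the limit state and/or uncertain
    U = np.abs(mu) / np.maximum(sd, 1e-12)
    if U.min() > 2.0:                       # common heuristic stopping rule
        break
    x_next = X_mc[np.argmin(U)][None, :]
    X_train = np.vstack([X_train, x_next])
    y_train = np.append(y_train, g(x_next))

print(f"estimated failure probability {pf:.4e} using {len(y_train)} model evaluations")
```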

    A novel underdetermined source recovery algorithm based on k-sparse component analysis

    Sparse component analysis (SCA) is a popular method for addressing underdetermined blind source separation in array signal processing applications. We are motivated by problems that arise in applications where the sources are densely sparse (i.e. the number of active sources is high and very close to the number of sensors). The separation performance of current underdetermined source recovery (USR) solutions, including the relaxation and greedy families, degrades as the dimension of the mixing system decreases and the sparsity level (k) increases. In this paper, we present a k-SCA-based algorithm that is suitable for USR in low-dimensional mixing systems. Assuming the sources are at most (m−1)-sparse, where m is the number of mixtures, the proposed method is capable of recovering the sources from the mixtures, given the mixing matrix, using a subspace detection framework. Simulation results show that the proposed algorithm achieves better separation performance under k-SCA conditions than state-of-the-art USR algorithms such as basis pursuit, L1-norm minimization, smoothed L0, the focal underdetermined system solver and orthogonal matching pursuit.
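
    A minimal sketch of the subspace-detection idea, under the stated (m−1)-sparsity assumption and with the mixing matrix known; it is not the paper's exact algorithm. For each mixture sample it selects the combination of k columns of A whose span best explains the sample and then recovers the active sources by least squares on that support. The function name is hypothetical, and the exhaustive search over supports is used only for clarity (it is practical only for small numbers of sources).

```python
import numpy as np
from itertools import combinations

def recover_sources_ksca(X, A, k=None):
    """X: m x T observed mixtures; A: m x n known mixing matrix;
    each column of X is assumed to mix at most k = m-1 active sources."""
    m, T = X.shape
    n = A.shape[1]
    k = (m - 1) if k is None else k
    S = np.zeros((n, T))
    supports = [list(c) for c in combinations(range(n), k)]
    # orthonormal basis of each candidate k-dimensional mixing subspace
    bases = [np.linalg.qr(A[:, idx])[0] for idx in supports]
    for t in range(T):
        x = X[:, t]
        # subspace detection: the support with the smallest projection residual wins
        residuals = [np.linalg.norm(x - Q @ (Q.T @ x)) for Q in bases]
        idx = supports[int(np.argmin(residuals))]
        # least-squares source estimate restricted to the detected support
        S[idx, t] = np.linalg.lstsq(A[:, idx], x, rcond=None)[0]
    return S
```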

    Revolutionizing Future Connectivity: A Contemporary Survey on AI-empowered Satellite-based Non-Terrestrial Networks in 6G

    Non-Terrestrial Networks (NTN) are expected to be a critical component of 6th Generation (6G) networks, providing ubiquitous, continuous, and scalable services. Satellites emerge as the primary enabler for NTN, leveraging their extensive coverage, stable orbits, scalability, and adherence to international regulations. However, satellite-based NTN presents unique challenges, including long propagation delay, high Doppler shift, frequent handovers, spectrum sharing complexities, and intricate beam and resource allocation, among others. The integration of NTNs into existing terrestrial networks in 6G introduces a range of novel challenges, including task offloading, network routing, network slicing, and many more. To tackle all these obstacles, this paper proposes Artificial Intelligence (AI) as a promising solution, harnessing its ability to capture intricate correlations among diverse network parameters. We begin by providing a comprehensive background on NTN and AI, highlighting the potential of AI techniques in addressing various NTN challenges. Next, we present an overview of existing works, emphasizing AI as an enabling tool for satellite-based NTN, and explore potential research directions. Furthermore, we discuss ongoing research efforts that aim to enable AI in satellite-based NTN through software-defined implementations, while also discussing the associated challenges. Finally, we conclude by providing insights and recommendations for enabling AI-driven satellite-based NTN in future 6G networks.

    Digital Image Analysis of Vitiligo for Monitoring of Vitiligo Treatment

    Vitiligo is an acquired pigmentary skin disorder characterized by depigmented macules that result from damage to and destruction of epidermal melanocytes. Visually, the vitiligous areas are paler in contrast to normal skin or completely white due to the lack of the pigment melanin. The course of vitiligo is unpredictable: the vitiligous skin lesions may remain stable for years before worsening. Vitiligo treatments have two objectives: to arrest disease progression and to re-pigment the vitiligous skin lesions. To monitor the efficacy of the treatment, dermatologists observe the disease directly, or indirectly using digital photos. Currently there is no objective method to determine the efficacy of the vitiligo treatment. The Physician's Global Assessment (PGA) scale is the current scoring system used by dermatologists to evaluate the treatment. The scale is based on the degree of repigmentation within lesions over time. This quantitative tool, however, may not help to detect slight changes due to treatment, as it is still largely dependent on the human eye and judgment to produce the scorings. In addition, the PGA score is subjective, as it varies with dermatologists. The progression of vitiligo treatment can be very slow and can take more than 6 months. It is observed that dermatologists find it visually hard to determine the areas of skin repigmentation due to this slow progress, and as a result the observations are made after a longer time frame. The objective of this research is to develop a tool that enables dermatologists to determine and quantify areas of repigmentation objectively over a shorter time frame during treatment. The approaches towards achieving this objective are based on digital image processing techniques. Skin color is due to the combination of skin histological parameters, namely the pigment melanin and haemoglobin. However, in digital imaging, color is produced by combining three different spectral bands, namely red, green, and blue (RGB). It is believed that the spatial distributions of melanin and haemoglobin in a skin image can be separated. It is found that the skin color distribution lies on a two-dimensional melanin-haemoglobin color subspace. In order to determine repigmentation (due to the pigment melanin) it is necessary to perform a conversion from the RGB skin image to this two-dimensional color subspace. Using principal component analysis (PCA) as a dimensionality reduction tool, the two-dimensional subspace can be represented by its first and second principal components. Independent component analysis (ICA) is employed to convert the two-dimensional subspace into skin images that represent skin areas due to melanin and haemoglobin only. In the skin image that represents skin areas due to melanin, vitiligous skin lesions are identified as skin areas that lack melanin. Segmentation is performed to separate the healthy skin and the vitiligous lesions. The difference in the vitiligous surface areas between skin images before and after treatment is expressed as a percentage of repigmentation in each vitiligo lesion. This percentage represents the repigmentation progression of a particular body region. Results of a preliminary and pre-clinical trial study show that our vitiligo monitoring system has been able to determine repigmentation progression objectively, and thus treatment efficacy, on a shorter time cycle. An intensive clinical trial is currently being undertaken in Hospital Kuala Lumpur using our developed system.
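
    The sketch below illustrates the pipeline described above (PCA onto the two-dimensional colour subspace, ICA to separate melanin- and haemoglobin-related images, thresholding to segment lesions, and a repigmentation percentage from before/after images). It is an illustrative simplification using scikit-learn, not the developed system: which independent component corresponds to melanin, its sign, and the segmentation threshold are all assumptions that would need to be verified and calibrated per image.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def melanin_map(rgb_image):
    """Project RGB skin pixels onto a 2-D PCA subspace, then unmix with ICA.
    One component is interpreted (by assumption) as the melanin image."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)
    subspace = PCA(n_components=2).fit_transform(pixels)        # melanin-haemoglobin subspace
    components = FastICA(n_components=2, random_state=0).fit_transform(subspace)
    return components[:, 0].reshape(h, w)                       # assumed melanin component

def repigmentation_percentage(before_rgb, after_rgb, threshold):
    """Lesion pixels are those lacking melanin (below threshold in the melanin map)."""
    lesion_before = (melanin_map(before_rgb) < threshold).sum()
    lesion_after = (melanin_map(after_rgb) < threshold).sum()
    return 100.0 * (lesion_before - lesion_after) / max(lesion_before, 1)
```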

    Model combination by decomposition and aggregation

    Thesis (Ph.D.) by Mingyang Xu, Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2004. Includes bibliographical references (p. 265-282). This thesis focuses on a general problem in statistical modeling, namely model combination. It proposes a novel feature-based model combination method to improve model accuracy and reduce model uncertainty. In this method, a set of candidate models are first decomposed into a group of components or features, and then components are selected and aggregated into a composite model based on data. However, in implementing this new method, some central challenges have to be addressed, including candidate model choice, component selection, data noise modeling, model uncertainty reduction and model locality. In order to solve these problems, some new methods are put forward. In choosing candidate models, criteria are proposed including accuracy, diversity, independence and completeness; corresponding quantitative measures are designed to quantify these criteria, and finally an overall preference score is generated for each model in the pool. Principal component analysis (PCA) and independent component analysis (ICA) are applied to decompose candidate models into components, and multiple linear regression is employed to aggregate components into a composite model. In order to reduce model structure uncertainty, a new concept of fuzzy variable selection is introduced to carry out component selection, which is able to combine the interpretability of classical variable selection and the stability of shrinkage estimators. In dealing with parameter estimation uncertainty, the exponential power distribution is proposed to model unknown non-Gaussian noise, and a parametric weighted least-squares method is devised to estimate parameters in the context of non-Gaussian noise. These two methods are combined to work together to reduce model uncertainty, including both model structure uncertainty and parameter uncertainty. To handle model locality, i.e. that candidate models do not work equally well over different regions, the adaptive fuzzy mixture of local ICA models is developed. Basically, it splits the entire input space into domains, builds local ICA models within each sub-region and then combines them into a mixture model. Many different experiments are carried out to demonstrate the performance of this novel method. Our simulation study and comparison show that this new method meets our goals and outperforms existing methods in most situations.
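
    The sketch below shows only the overall decompose-and-aggregate structure under simplifying assumptions: candidate-model outputs are decomposed into components (plain PCA here, where the thesis also uses ICA) and the components are aggregated into a composite model by ordinary linear regression. The thesis's fuzzy variable selection, exponential power noise model, and weighted least-squares estimation are deliberately omitted, and all names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def combine_models(candidate_predictions, y, n_components):
    """candidate_predictions: (n_samples, n_models) outputs of the candidate models
    at common inputs; y: observed data used to fit the aggregation weights."""
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(candidate_predictions)   # decomposition into components
    reg = LinearRegression().fit(features, y)             # aggregation into a composite model

    def predict(new_predictions):
        # apply the same decomposition, then the fitted aggregation, to new model outputs
        return reg.predict(pca.transform(new_predictions))

    return reg.predict(features), predict
```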

    Efficient multitemporal change detection techniques for hyperspectral images on GPU

    Hyperspectral images contain hundreds of reflectance values for each pixel. Detecting regions of change in multiple hyperspectral images of the same scene taken at different times is of widespread interest for a large number of applications. For remote sensing, in particular, a very common application is land-cover analysis. The high dimensionality of the hyperspectral images makes the development of computationally efficient processing schemes critical. This thesis focuses on the development of change detection approaches at object level, based on supervised direct multidate classification, for hyperspectral datasets. The proposed approaches improve the accuracy of current state-of-the-art algorithms, and their projection onto Graphics Processing Units (GPUs) allows their execution in real-time scenarios.
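
    A minimal CPU sketch of the supervised direct multidate classification strategy: the spectra of the two acquisition dates are stacked pixel-wise and the stacked vectors are fed to a supervised classifier whose classes include the change classes of interest. The thesis works at object level and maps the heavy steps onto GPUs, neither of which is reflected here; the classifier choice and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def multidate_change_classification(img_t1, img_t2, train_mask, train_labels):
    """img_t1, img_t2: (H, W, B) hyperspectral images of the same scene at two dates;
    train_mask: (H, W) boolean mask of labelled pixels; train_labels: their labels."""
    H, W, B = img_t1.shape
    # each pixel is described by its stacked 2B-dimensional multitemporal spectrum
    stacked = np.concatenate([img_t1, img_t2], axis=2).reshape(-1, 2 * B)
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    clf.fit(stacked[train_mask.ravel()], train_labels)
    # classify every pixel into change / no-change (or land-cover transition) classes
    return clf.predict(stacked).reshape(H, W)
```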

    Distributed Learning, Prediction and Detection in Probabilistic Graphs.

    Critical to high-dimensional statistical estimation is exploiting the structure in the data distribution. Probabilistic graphical models provide an efficient framework for representing complex joint distributions of random variables through their conditional dependency graph, and can be adapted to many high-dimensional machine learning applications. This dissertation develops probabilistic graphical modeling techniques for three statistical estimation problems arising in real-world applications: distributed and parallel learning in networks, missing-value prediction in recommender systems, and emerging topic detection in text corpora. The common theme behind all proposed methods is a combination of parsimonious representation of uncertainties in the data, optimization surrogates that lead to computationally efficient algorithms, and fundamental limits of estimation performance in high dimension. More specifically, the dissertation makes the following theoretical contributions: (1) We propose a distributed and parallel framework for learning the parameters in Gaussian graphical models that is free of iterative global message passing. The proposed distributed estimator is shown to be asymptotically consistent, to improve with increasing local neighborhood sizes, and to have a high-dimensional error rate comparable to that of the centralized maximum likelihood estimator. (2) We present a family of latent variable Gaussian graphical models whose marginal precision matrix has a “low-rank plus sparse” structure. Under mild conditions, we analyze the high-dimensional parameter error bounds for learning this family of models using regularized maximum likelihood estimation. (3) We consider a hypothesis testing framework for detecting emerging topics in topic models, and propose a novel surrogate test statistic for the standard likelihood ratio. By leveraging the theory of empirical processes, we prove asymptotic consistency for the proposed test and provide guarantees of the detection performance. PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/110499/1/mengzs_1.pd
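
    A rough illustration of the message-passing-free local estimation idea in contribution (1), written under simplifying assumptions and without the dissertation's consistency analysis: each node inverts only the empirical covariance of its own closed neighborhood and keeps the row of that local precision matrix corresponding to itself. The resulting global matrix is generally asymmetric and would be symmetrized in practice; all names are hypothetical.

```python
import numpy as np

def local_precision_rows(samples, neighborhoods):
    """samples: (n_samples, p) i.i.d. draws from a Gaussian graphical model;
    neighborhoods: dict mapping node i to a list of its graph neighbors."""
    p = samples.shape[1]
    emp_cov = np.cov(samples, rowvar=False)
    K = np.zeros((p, p))
    for i, nbrs in neighborhoods.items():
        idx = [i] + [j for j in nbrs if j != i]
        # invert only the local marginal covariance (no global message passing)
        local_K = np.linalg.inv(emp_cov[np.ix_(idx, idx)])
        K[i, idx] = local_K[0]          # keep node i's own row
    return K
```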

    Mobile Robots Navigation

    Mobile robots navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation by using the sensory information perceived; (iv) localization, as the strategy to estimate the robot position within the spatial map; (v) path planning, as the strategy to find a path towards a goal location, be it optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized within 7 categories, described next.