
    Ecosystem Monitoring and Port Surveillance Systems

    In this project, we aim to build a novel system able to perform sustainable, long-term monitoring of coastal marine ecosystems and to enhance port surveillance capability. The outcomes will be based on the analysis, classification and fusion of a variety of heterogeneous data collected using different sensors (hydrophones, sonars, various camera types, etc.). This manuscript introduces the identified approaches and the system structure. In addition, it focuses on the techniques and concepts developed to deal with several problems related to our project. The new system will address the shortcomings of traditional approaches based on measuring environmental parameters, which are expensive and fail to provide adequate large-scale monitoring. More efficient monitoring will also enable improved analysis of climate change and provide knowledge informing the civil authority's economic relationship with its coastal marine ecosystems.

    Denoising using local projective subspace methods

    In this paper we present denoising algorithms for enhancing noisy signals based on Local ICA (LICA), Delayed AMUSE (dAMUSE) and Kernel PCA (KPCA). The LICA algorithm relies on applying ICA locally to clusters of signals embedded in a high-dimensional feature space of delayed coordinates. The components resembling the signals can be detected by various criteria, such as estimators of kurtosis or the variance of autocorrelations, depending on the statistical nature of the signal. The algorithm proposed can be applied favorably to the problem of denoising multi-dimensional data. Another projective subspace denoising method using delayed coordinates has been proposed recently with the algorithm dAMUSE. It combines the solution of blind source separation problems with denoising efforts in an elegant way and proves to be very efficient and fast. Finally, KPCA represents a non-linear projective subspace method that is also well suited for denoising. Besides illustrative applications to toy examples and images, we provide an application of all the algorithms considered to the analysis of protein NMR spectra.
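
    A minimal sketch of the delayed-coordinate, projective-subspace idea behind these methods is given below (without the local clustering step of LICA): the signal is embedded in delayed coordinates, ICA is applied, components are kept or discarded by a kurtosis criterion, and the result is projected back. The embedding dimension, the kurtosis threshold, the use of scikit-learn's FastICA and the function names are illustrative assumptions, not the paper's exact settings.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        def delay_embed(x, m):
            """Stack m delayed copies of x as the rows of a trajectory matrix."""
            n = len(x) - m + 1
            return np.array([x[i:i + m] for i in range(n)]).T      # shape (m, n)

        def subspace_denoise(x, m=20, kurt_thresh=1.0):
            """Denoise a 1-D signal by ICA in a delayed-coordinate feature space."""
            X = delay_embed(x, m)
            ica = FastICA(n_components=m, random_state=0)
            S = ica.fit_transform(X.T).T                           # sources, shape (m, n)
            # Keep only components whose kurtosis deviates from Gaussianity;
            # the remaining components are treated as noise and zeroed out.
            keep = np.abs(kurtosis(S, axis=1)) > kurt_thresh
            S_clean = np.where(keep[:, None], S, 0.0)
            X_clean = ica.mixing_ @ S_clean + ica.mean_[:, None]   # back-projection
            # Average over the delayed copies to map the trajectory matrix back to 1-D.
            out, counts = np.zeros(len(x)), np.zeros(len(x))
            for i in range(m):
                out[i:i + X_clean.shape[1]] += X_clean[i]
                counts[i:i + X_clean.shape[1]] += 1
            return out / counts

        # Toy usage: a noisy sinusoid.
        t = np.linspace(0, 4 * np.pi, 500)
        rng = np.random.default_rng(0)
        denoised = subspace_denoise(np.sin(t) + 0.3 * rng.standard_normal(t.size))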

    Overcomplete Independent Component Analysis via SDP

    We present a novel algorithm for overcomplete independent component analysis (ICA), where the number of latent sources k exceeds the dimension p of observed variables. Previous algorithms either suffer from high computational complexity or make strong assumptions about the form of the mixing matrix. Our algorithm does not make any sparsity assumption yet enjoys favorable computational and theoretical properties. Our algorithm consists of two main steps: (a) estimation of the Hessians of the cumulant generating function (as opposed to the fourth and higher order cumulants used by most algorithms) and (b) a novel semi-definite programming (SDP) relaxation for recovering a mixing component. We show that this relaxation can be efficiently solved with a projected accelerated gradient descent method, which makes the whole algorithm computationally practical. Moreover, we conjecture that the proposed program recovers a mixing component at the rate k < p^2/4 and prove that a mixing component can be recovered with high probability when k < (2 - epsilon) p log p when the original components are sampled uniformly at random on the hypersphere. Experiments are provided on synthetic data and the CIFAR-10 dataset of real images. (Appears in: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019; 21 pages.)
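
    Step (a) of the pipeline described above can be sketched as follows: the Hessian of the empirical cumulant generating function K(u) = log (1/n) sum_i exp(u^T x_i) is the covariance of the data under exponentially tilted weights. The SDP relaxation of step (b) and the projected accelerated gradient solver are omitted; the probe point, dimensions and source distribution below are hypothetical.

        import numpy as np

        def cgf_hessian(X, u):
            """X: (n, p) samples, u: (p,) probe point. Returns the (p, p) Hessian of K at u."""
            logits = X @ u
            logits -= logits.max()                 # stabilisation; leaves the weights unchanged
            w = np.exp(logits)
            w /= w.sum()                           # exponentially tilted weights
            mean_w = w @ X                         # weighted mean, shape (p,)
            return (X * w[:, None]).T @ X - np.outer(mean_w, mean_w)   # Hess K(u) = Cov_w(x)

        # For x = A s with independent sources, Hess K(u) is a weighted sum of rank-one
        # terms a_k a_k^T, the structure the SDP relaxation exploits to recover one column a_k.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 8))            # p=5 observed dims, k=8 sources (overcomplete)
        S = rng.laplace(size=(10000, 8))           # non-Gaussian sources
        X = S @ A.T
        H = cgf_hessian(X, 0.1 * rng.standard_normal(5))
        print(H.shape)                             # (5, 5)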

    Multiresolution image models and estimation techniques


    A New Basis for Sparse PCA

    The statistical and computational performance of sparse principal component analysis (PCA) can be dramatically improved when the principal components are allowed to be sparse in a rotated eigenbasis. For this, we propose a new method for sparse PCA. In the simplest version of the algorithm, the component scores and loadings are initialized with a low-rank singular value decomposition. Then, the singular vectors are rotated with orthogonal rotations to make them approximately sparse. Finally, soft-thresholding is applied to the rotated singular vectors. This approach differs from prior approaches because it uses an orthogonal rotation to approximate a sparse basis. Our sparse PCA framework is versatile; for example, it extends naturally to the two-way analysis of a data matrix for simultaneous dimensionality reduction of rows and columns. We identify the close relationship between sparse PCA and independent component analysis for separating sparse signals. We provide empirical evidence showing that, for the same level of sparsity, the proposed sparse PCA method is more stable and can explain more variance compared to alternative methods. Through three applications (sparse coding of images, analysis of transcriptome sequencing data, and large-scale clustering of Twitter accounts), we demonstrate the usefulness of sparse PCA in exploring modern multivariate data. (33 pages, 8 figures.)
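
    A minimal sketch of the SVD-rotate-threshold recipe summarised above, assuming varimax as the orthogonal rotation (the paper's exact rotation criterion may differ) and hypothetical data, rank and threshold values.

        import numpy as np

        def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
            """Return an orthogonal rotation matrix R maximising the varimax criterion."""
            p, k = Phi.shape
            R, d_old = np.eye(k), 0.0
            for _ in range(max_iter):
                L = Phi @ R
                u, s, vt = np.linalg.svd(
                    Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0))))
                R, d = u @ vt, s.sum()
                if d_old != 0 and (d - d_old) / d_old < tol:
                    break
                d_old = d
            return R

        def sparse_pca(X, rank=3, thresh=0.1):
            # (1) low-rank SVD initialisation of scores and loadings
            U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
            V = Vt[:rank].T * s[:rank]               # loadings, shape (p, rank)
            # (2) orthogonal rotation towards an approximately sparse basis
            V_rot = V @ varimax(V)
            # (3) soft-thresholding of the rotated loadings
            return np.sign(V_rot) * np.maximum(np.abs(V_rot) - thresh, 0.0)

        X = np.random.default_rng(0).standard_normal((200, 20))
        print(sparse_pca(X).shape)                    # (20, 3)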

    Improved Feature Extraction, Feature Selection, and Identification Techniques That Create a Fast Unsupervised Hyperspectral Target Detection Algorithm

    This research extends the emerging field of hyperspectral image (HSI) target detectors that assume a global linear mixture model (LMM) of HSI and employ independent component analysis (ICA) to unmix HSI images. Via new techniques to fully automate feature extraction, feature selection, and target pixel identification, an autonomous global anomaly detector, AutoGAD, has been developed for potential employment in an operational environment for real-time processing of HSI targets. For dimensionality reduction (initial feature extraction prior to ICA), a geometric solution that effectively approximates the number of distinct spectral signals is presented. The solution is based on the theory of the shape of the eigenvalue curve of the covariance matrix of spectral data containing noise. For feature selection, a subjective definition called significant kurtosis change was previously used to denote the separation between target classes and non-target classes. This research presents two new measures, potential target signal-to-noise ratio (PT SNR) and max pixel score, which are computed for each of the ICA features to create a new two-dimensional feature space in which the overlap between target and non-target classes is reduced compared to the one-dimensional kurtosis-value feature space. Finally, after target feature selection, iterative adaptive noise filtering is applied to the signals. The effect is a reduction in the power of the noise while preserving the power of the target signal prior to target identification, reducing false positive detections. A zero-detection histogram method is applied to the smoothed signals to identify target locations to the user. MATLAB code for the AutoGAD algorithm is provided.
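
    The research itself provides MATLAB code; the Python sketch below only illustrates the core LMM/ICA pipeline (PCA feature extraction, ICA unmixing, and kurtosis-based selection of candidate target components). The PT SNR and max-pixel-score measures, the iterative noise filtering and the zero-detection histogram are not reproduced, and the retained dimensionality and kurtosis cut-off are assumptions.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import PCA, FastICA

        def ica_anomaly_maps(cube, n_components=10, kurt_cutoff=3.0):
            """cube: (rows, cols, bands) hyperspectral image.
            Returns abundance maps of ICA components with strongly non-Gaussian
            (high-kurtosis) distributions, the usual signature of sparse targets."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)
            X = PCA(n_components=n_components).fit_transform(X)    # feature extraction
            S = FastICA(n_components=n_components,
                        random_state=0).fit_transform(X)           # unmixed components
            k = kurtosis(S, axis=0)                                # excess kurtosis per component
            target_like = np.where(k > kurt_cutoff)[0]             # feature selection
            return [S[:, i].reshape(rows, cols) for i in target_like]

        maps = ica_anomaly_maps(np.random.default_rng(0).random((64, 64, 100)))
        print(len(maps), "candidate target maps")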

    Enhancing brain-computer interfacing through advanced independent component analysis techniques

    A brain-computer interface (BCI) is a direct communication system between a brain and an external device in which messages or commands sent by an individual do not pass through the brain's normal output pathways but are detected through brain signals. Severe motor impairments, such as amyotrophic lateral sclerosis, head trauma, spinal injuries and other conditions, may cause patients to lose muscle control and become unable to communicate with the outside environment. No effective cure or treatment has yet been found for these conditions, so using a BCI system to rebuild the communication pathway is a possible alternative solution. Among the different types of BCI, the electroencephalogram (EEG) based BCI is becoming popular due to EEG's fine temporal resolution, ease of use, portability and low set-up cost. However, EEG's susceptibility to noise is a major obstacle to developing a robust BCI. Signal processing techniques such as coherent averaging, filtering, the FFT and AR modelling are used to reduce the noise and extract components of interest, but these methods process the data in the observed mixture domain, in which components of interest and noise are mixed. This limitation means that the extracted EEG signals may still contain residual noise, or conversely that the removed noise may contain part of the EEG signal. Independent component analysis (ICA), a blind source separation (BSS) technique, is able to extract relevant information from noisy signals and separate the underlying sources into independent components (ICs). The most common assumption of ICA is that the source signals are unknown and statistically independent; under this assumption, ICA is able to recover the source signals. Since ICA concepts appeared in the fields of neural networks and signal processing in the 1980s, many ICA applications in telecommunications, biomedical data analysis, feature extraction, speech separation, time-series analysis and data mining have been reported in the literature. In this thesis several ICA techniques are proposed to address two major issues for BCI applications: reducing the recording time needed, in order to speed up signal processing, and reducing the number of recording channels while improving, or at least maintaining, the final classification performance. These improvements will make BCI a more practical prospect for everyday use. The thesis first defines BCI and the diverse BCI models based on different control patterns. After the general idea of ICA is introduced, along with some modifications to it, several new ICA approaches are proposed. The practical work starts with preliminary analyses of the Southampton BCI pilot datasets, using basic and then advanced signal processing techniques. The proposed ICA techniques are then presented on a multi-channel event-related potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel spontaneous-activity based BCI. The final ICA approach examines the possibility of using ICA on just one or a few channel recordings in an ERP-based BCI. The novel ICA approaches for BCI systems presented in this thesis show that ICA is able to accurately and repeatedly extract the relevant information buried within noisy signals, and that the signal quality is enhanced so that even a simple classifier can achieve good classification accuracy.
    In the ERP-based BCI application, after multi-channel ICA, using just eight averages/epochs achieves 83.9% classification accuracy, whereas coherent averaging alone reaches only 32.3%. In the spontaneous-activity based BCI, the multi-channel ICA algorithm effectively extracts discriminatory information from two types of single-trial EEG data, improving classification accuracy by about 25% on average compared to performance on the unpreprocessed data. The single-channel ICA technique on the ERP-based BCI produces much better results than low-pass filtering, and an appropriate number of averages improves the signal-to-noise ratio of P300 activity, which helps to achieve better classification. These advantages will lead to a reliable and practical BCI for use outside the clinical laboratory.
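
    As an illustration of the multi-channel-ICA-then-simple-classifier idea described above, the sketch below runs FastICA on synthetic EEG epochs, keeps the most non-Gaussian components, and classifies with LDA. The channel count, epoch structure, component-selection heuristic and classifier are assumptions for illustration, not the thesis's exact pipeline.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        n_epochs, n_channels, n_samples = 200, 8, 128
        X = rng.standard_normal((n_epochs, n_channels, n_samples))   # stand-in EEG epochs
        y = rng.integers(0, 2, n_epochs)                              # target / non-target labels
        X[y == 1, 0, 40:60] += 1.0                                    # toy "P300-like" deflection

        # Fit ICA on the concatenated data, treating channels as the observed mixtures.
        ica = FastICA(n_components=n_channels, random_state=0)
        ica.fit(X.transpose(0, 2, 1).reshape(-1, n_channels))
        S = np.stack([ica.transform(ep.T).T for ep in X])             # (epochs, components, samples)

        # Keep the most non-Gaussian components as putative ERP sources.
        comp_kurt = np.abs(kurtosis(S, axis=2)).mean(axis=0)          # mean |excess kurtosis| per component
        keep = np.argsort(-comp_kurt)[:3]
        features = S[:, keep, :].reshape(n_epochs, -1)

        # A simple classifier can suffice once the signal quality is enhanced.
        clf = LinearDiscriminantAnalysis().fit(features[:150], y[:150])
        print("held-out accuracy:", clf.score(features[150:], y[150:]))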