
    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques, eliminating the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials

    Estimating Time-Varying Effective Connectivity in High-Dimensional fMRI Data Using Regime-Switching Factor Models

    Recent studies of dynamic brain connectivity rely on sliding-window analysis or time-varying coefficient models, which are unable to capture both smooth and abrupt changes simultaneously. Emerging evidence suggests state-related changes in brain connectivity, where the dependence structure alternates between a finite number of latent states or regimes. Another challenge is the inference of full-brain networks with a large number of nodes. We employ a Markov-switching dynamic factor model in which the state-driven, time-varying connectivity regimes of high-dimensional fMRI data are characterized by lower-dimensional common latent factors that follow a regime-switching process. This enables reliable, data-adaptive estimation of the change-points of connectivity regimes and of the massive dependencies associated with each regime. We consider a switching VAR model to quantify the dynamic effective connectivity. We propose a three-step estimation procedure: (1) extracting the factors using principal component analysis (PCA); (2) identifying dynamic connectivity states using factor-based switching vector autoregressive (VAR) models in a state-space formulation, via the Kalman filter and the expectation-maximization (EM) algorithm; and (3) constructing high-dimensional connectivity metrics for each state based on the subspace estimates. Simulation results show that our proposed estimator outperforms K-means clustering of time-windowed coefficients, providing more accurate estimation of regime dynamics and connectivity metrics in high-dimensional settings. Application to resting-state fMRI data identifies dynamic changes in brain states during rest, and reveals distinct directed connectivity patterns and modular organization in resting-state networks across the different states.
    Comment: 21 pages
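    As a rough illustration of the three-step idea in this abstract, the sketch below extracts PCA factors, fits a lag-1 VAR on the factors within each regime, and lifts the factor-level coefficients back to a full-dimensional connectivity matrix through the loading matrix. The regime labels here are a placeholder (the paper estimates them with a switching state-space model, the Kalman filter, and EM); all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def pca_factors(Y, r):
    """Step 1: extract r principal-component factors from a T x N data matrix Y."""
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    loadings = Vt[:r].T            # N x r loading matrix
    factors = Yc @ loadings        # T x r factor scores
    return factors, loadings

def regime_var(factors, states, n_states):
    """Step 2 (simplified): fit a lag-1 VAR on the factors within each regime,
    assuming a regime label per time point is already available."""
    A = {}
    for k in range(n_states):
        t = np.where(states[1:] == k)[0] + 1            # time points in regime k
        B, *_ = np.linalg.lstsq(factors[t - 1], factors[t], rcond=None)
        A[k] = B.T                                      # f_t ~= A[k] @ f_{t-1}
    return A

def full_connectivity(A_k, loadings):
    """Step 3: lift the r x r factor VAR matrix to an N x N effective-connectivity
    matrix through the loadings (a subspace-based approximation)."""
    return loadings @ A_k @ np.linalg.pinv(loadings)

# toy usage with random data and placeholder regime labels
T, N, r, K = 300, 50, 5, 2
Y = np.random.randn(T, N)
states = np.random.randint(0, K, size=T)
F, L = pca_factors(Y, r)
A = regime_var(F, states, K)
C0 = full_connectivity(A[0], L)                         # N x N connectivity, regime 0
```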

    Model-free reconstruction of neuronal network connectivity from calcium imaging signals

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct approximations to the network structural connectivity from network activity monitored through calcium fluorescence imaging. Because our method is grounded in information theory, it requires no prior assumptions about the statistics of neuronal firing or of neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the effective network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (e.g., bursting or non-bursting). We thus demonstrate how conditioning with respect to the global mean activity improves the performance of our method. [...] Compared to other reconstruction strategies, such as cross-correlation or Granger causality methods, our improved Transfer Entropy approach is remarkably more accurate. In particular, it provides a good reconstruction of the network clustering coefficient, making it possible to discriminate between weakly and strongly clustered topologies, whereas an approach based on cross-correlations invariably detects artificially high levels of clustering. Finally, we demonstrate the applicability of our method to real recordings of in vitro cortical cultures. We show that these networks are characterized by an elevated (although not extreme) level of clustering compared to a random graph, and by markedly non-local connectivity.
    Comment: 54 pages, 8 figures (+9 supplementary figures), 1 table; submitted for publication
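    For orientation, here is a minimal, hedged sketch of a binned transfer-entropy estimator between two fluorescence traces, with an optional mask that implements the idea of conditioning on the global mean activity (e.g., keeping only low-activity, non-bursting frames). The bin count, masking rule, and variable names are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def transfer_entropy(x, y, n_bins=6, mask=None):
    """Estimate TE(x -> y) = sum p(y_{t+1}, y_t, x_t) *
    log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ] with uniform binning."""
    xb = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    triples = np.stack([yb[1:], yb[:-1], xb[:-1]], axis=1)   # (y_{t+1}, y_t, x_t)
    if mask is not None:                                     # keep selected frames only
        triples = triples[mask[:-1]]
    joint = {}
    for tr in map(tuple, triples):
        joint[tr] = joint.get(tr, 0) + 1
    n = len(triples)
    te = 0.0
    for (y1, y0, x0), c in joint.items():
        p_xyz = c / n
        p_y0x0 = sum(v for (a, b, d), v in joint.items() if (b, d) == (y0, x0)) / n
        p_y0 = sum(v for (a, b, d), v in joint.items() if b == y0) / n
        p_y1y0 = sum(v for (a, b, d), v in joint.items() if (a, b) == (y1, y0)) / n
        te += p_xyz * np.log2((p_xyz / p_y0x0) / (p_y1y0 / p_y0))
    return te

# toy usage: y is partially driven by x, so TE(x -> y) should exceed TE(y -> x)
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = 0.6 * np.roll(x, 1) + 0.4 * rng.standard_normal(5000)
global_activity = (x + y) / 2
low_activity = global_activity < np.quantile(global_activity, 0.8)
print(transfer_entropy(x, y, mask=low_activity),
      transfer_entropy(y, x, mask=low_activity))
```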

    Complex networks: new trends for the analysis of brain connectivity

    Today, the human brain can be studied as a whole. Electroencephalography, magnetoencephalography, and functional magnetic resonance imaging provide functional connectivity patterns between different brain areas, and during different pathological and cognitive neuro-dynamical states. In this Tutorial, we review novel complex-network approaches that unveil how brain networks can efficiently manage local processing and global integration for the transfer of information, while at the same time remaining capable of adapting to satisfy changing neural demands.
    Comment: Tutorial paper to appear in the Int. J. Bif. Chaos
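    As a generic illustration (not taken from the Tutorial) of the kind of measures involved, the snippet below turns a functional-connectivity matrix into a graph and computes two standard complex-network summaries of local processing (clustering) and global integration (efficiency). The correlation threshold is an arbitrary choice for the example.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
signals = rng.standard_normal((200, 32))           # 200 time points, 32 brain regions
fc = np.corrcoef(signals, rowvar=False)             # functional connectivity matrix
np.fill_diagonal(fc, 0.0)

adj = (np.abs(fc) > 0.2).astype(int)                # binarize with an arbitrary threshold
G = nx.from_numpy_array(adj)

print("mean clustering (local processing):", nx.average_clustering(G))
print("global efficiency (integration):  ", nx.global_efficiency(G))
```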

    Dynamic Tensor Clustering

    Dynamic tensor data are becoming prevalent in numerous applications. Existing tensor clustering methods either fail to account for the dynamic nature of the data or are inapplicable to a general-order tensor. There is also often a gap between statistical guarantees and computational efficiency in existing tensor clustering solutions. In this article, we aim to bridge this gap by proposing a new dynamic tensor clustering method that takes into account both sparsity and fusion structures, and enjoys strong statistical guarantees as well as high computational efficiency. Our proposal is based on a new structured tensor factorization that encourages both sparsity and smoothness in the parameters along the specified tensor modes. Computationally, we develop a highly efficient optimization algorithm that benefits from substantial dimension reduction. In theory, we first establish a non-asymptotic error bound for the estimator from the structured tensor factorization. Building upon this error bound, we then derive the rate of convergence of the estimated cluster centers and show that the estimated clusters recover the true cluster structures with high probability. Moreover, our proposed method can be naturally extended to co-clustering of multiple modes of the tensor data. The efficacy of our approach is illustrated via simulations and a dynamic brain functional connectivity analysis from an autism spectrum disorder study.
    Comment: Accepted at the Journal of the American Statistical Association
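    A hedged sketch of the general idea, under simplifying assumptions: CP-factorize a (subject x node x node x time) tensor, encourage sparsity by soft-thresholding the node factors and smoothness by averaging the time factor over neighboring points (crude stand-ins for the paper's sparsity and fused penalties), then cluster subjects on their factor loadings with k-means. The shapes, penalty values, and the final k-means step are illustrative, not the proposed estimator.

```python
import numpy as np
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 20, 20, 30))        # subjects x nodes x nodes x time

# low-rank CP factorization of the dynamic tensor
weights, factors = parafac(X, rank=3, n_iter_max=200)
subj_f, node_f1, node_f2, time_f = factors

# sparsity along the node modes: soft-threshold small loadings to zero
lam = 0.1
node_f1 = np.sign(node_f1) * np.maximum(np.abs(node_f1) - lam, 0.0)
node_f2 = np.sign(node_f2) * np.maximum(np.abs(node_f2) - lam, 0.0)

# smoothness (fusion) along the time mode: simple moving average as a stand-in
kernel = np.ones(3) / 3
time_f = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, time_f)

# cluster subjects using their loadings on the low-rank components
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(subj_f)
print(labels)
```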