19 research outputs found

    Vector-Quantization by density matching in the minimum Kullback-Leibler divergence sense

    Abstract—Representation of a large set of high-dimensional data is a fundamental problem in many applications such as communications and biomedical systems. The problem has been tackled by encoding the data with a compact set of code-vectors called processing elements. In this study, we propose a vector quantization technique that encodes the information in the data using concepts derived from information theoretic learning. The algorithm minimizes a cost function based on the Kullback-Leibler divergence to match the distribution of the processing elements with the distribution of the data. The performance of this algorithm is demonstrated on synthetic data as well as on an edge-image of a face. Comparisons are provided with some of the existing algorithms such as LBG and SOM.
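
    The cost and update rule themselves are not reproduced in this listing, but the density-matching idea can be sketched: place a Gaussian Parzen density on the code-vectors, estimate the Kullback-Leibler divergence between the data distribution and that density on the samples, and move the code-vectors down its gradient. The kernel width, learning rate, and initialization in the sketch below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kl_vector_quantize(data, n_codes=16, sigma=0.1, lr=0.05, n_iter=500, seed=0):
    """Gradient descent on a sample estimate of D_KL(p_data || p_code), where p_code is a
    Gaussian Parzen density placed on the code-vectors (processing elements)."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].copy()
    for _ in range(n_iter):
        # Gaussian kernels between every sample and every code-vector
        diff = data[:, None, :] - codes[None, :, :]               # (N, M, D)
        k = np.exp(-(diff ** 2).sum(-1) / (2.0 * sigma ** 2))     # (N, M)
        # responsibility of each code-vector for each sample under p_code
        r = k / (k.sum(axis=1, keepdims=True) + 1e-300)
        # ascending the mean data log-likelihood under p_code equals descending
        # D_KL(p_data || p_code); this is its gradient w.r.t. each code-vector
        grad = (r[:, :, None] * diff).sum(axis=0) / (len(data) * sigma ** 2)
        codes += lr * grad
    return codes
```
    On a two-dimensional synthetic set, kl_vector_quantize(data) spreads the code-vectors so that denser regions receive proportionally more of them, which is the density-matching behaviour the abstract describes.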

    Clustering Approach to Quantify Long-Term Spatio-Temporal Interactions in Epileptic Intracranial Electroencephalography

    Abnormal dynamical coupling between brain structures is believed to be primarily responsible for the generation of epileptic seizures and their propagation. In this study, we attempt to identify the spatio-temporal interactions of an epileptic brain using a previously proposed nonlinear dependency measure. Using a clustering model, we determine the average spatial mappings in an epileptic brain at different stages of a complex partial seizure. Results involving 8 seizures from 2 epileptic patients suggest that there may be a fixed pattern associated with regional spatio-temporal dynamics during the interictal to pre-/post-ictal transition.
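
    The nonlinear dependency measure itself is not given in this listing; as a hedged illustration of the clustering step only, the sketch below vectorizes per-window channel-by-channel dependency matrices and groups them with k-means to expose a few recurring spatial patterns. The number of states and the use of scikit-learn's KMeans are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_dependency_patterns(dep_matrices, n_states=3, seed=0):
    """Group per-window pairwise dependency matrices into a few recurring spatial patterns.
    dep_matrices: array (n_windows, n_channels, n_channels) holding some nonlinear dependency
    measure between intracranial EEG channels (the measure itself is not shown here)."""
    n_win, n_ch, _ = dep_matrices.shape
    iu = np.triu_indices(n_ch, k=1)
    feats = dep_matrices[:, iu[0], iu[1]]             # vectorize the upper triangle of each window
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed).fit(feats)
    centroids = np.zeros((n_states, n_ch, n_ch))
    for k in range(n_states):
        centroids[k][iu] = km.cluster_centers_[k]     # average spatial mapping of each state
        centroids[k] += centroids[k].T                # symmetrize for readability
    return km.labels_, centroids                      # state label per window, mean pattern per state
```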

    A Brute-Force Analytical Formulation of the Independent Components Analysis Solution

    Abstract—Many algorithms based on information theoretic measures and/or temporal statistics of the signals have been proposed for ICA in the literature. There have also been analytical solutions suggested based on predictive modeling of the signals. In this paper, we show that finding an analytical solution for the ICA problem through solving a system of nonlinear equations is possible. We demonstrate that this solution is robust to decreasing sample size and measurement SNR. Nevertheless, finding the root of the nonlinear function proves to be a challenge. Besides the analytical solution approach, we also seek the solution using a least squares approach with the derived analytical equations. Monte Carlo simulations using the least squares approach are performed to investigate the effect of sample size and measurement noise on the performance.
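
    The derived equations themselves are not shown in this listing; as a sketch of the general "write the separation conditions as a nonlinear system and solve it by least squares" idea, the code below whitens a two-channel mixture and finds the remaining rotation angle by driving time-lagged cross-covariances of the outputs to zero. This is a SOBI-style condition that assumes temporally correlated sources and is not necessarily the system of equations derived in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def separate_two_sources(X, lags=(1, 2, 3, 5, 8)):
    """Solve a small nonlinear system for a 2x2 ICA problem with a least squares solver."""
    X = X - X.mean(axis=1, keepdims=True)
    # whiten the observations so that only an unknown rotation remains
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X

    def residuals(theta):
        c, s = np.cos(theta[0]), np.sin(theta[0])
        Y = np.array([[c, -s], [s, c]]) @ Z
        # independent sources stay uncorrelated at nonzero lags: drive these terms to zero
        return [np.mean(Y[0, :-L] * Y[1, L:]) + np.mean(Y[1, :-L] * Y[0, L:]) for L in lags]

    theta = least_squares(residuals, x0=[0.1]).x[0]
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ Z            # recovered sources, up to scale and order
```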

    Noisy dynamics of U(1) lattice gauge theory in ultracold atomic mixtures

    Gauge theories are the governing principles of elementary interactions between matter particles and the mediating gauge fields. Even though they have been studied extensively in the realm of High Energy Physics (HEP), realizing their quantum dynamical aspects still remains an arduous task. Quantum simulators provide a promising approach in this regard by mimicking such systems and mapping them onto other physical platforms that are experimentally accessible. In this thesis, an extension of the experimental studies undertaken to simulate a minimalistic U(1) version of Lattice Gauge Theory (LGT), also known as the Schwinger model, is presented [1]. The experiment conducted therein used an ultracold gas mixture of sodium (23Na) and lithium (7Li), where 23Na realized the gauge field and 7Li realized the matter component. The principle of local gauge invariance, which is a consequence of matter-gauge coupling, was realized through interspecies spin-changing collisions (SCC). A theoretical framework based on a mean-field approach was then used to describe the corresponding experimental data. As the data were acquired over multiple realizations, they exhibited fluctuations which were unaccounted for in the previous description of the model. This thesis provides an account of the data analysis and theoretical treatments that were performed in order to investigate the cause of such fluctuations. Along with a better understanding of the underlying dynamics, this study opens up further insights. For instance, the fluctuations arising from the finite temperature of the atoms might reveal the long-time behavior of the system. Furthermore, fluctuations of technical origin are critical in assessing the stability of an experimental setup, which, when reduced, pave the way to the study of quantum fluctuations, whose role is pivotal in all areas of fundamental physics.

    On Spatio-Temporal Dependency Changes in Epileptic Intracranial EEG: A Statistical Assessment

    Abstract—Pathological manifestations of epilepsy are generally associated with a set of clinical events that possess both spatial and temporal patterns. In this paper, based on a similar hypothesis, we study the evolution of epileptic seizures by analyzing temporal changes in the spatial bindings between various cortical structures. We propose to apply the Mantel statistic to quantitatively analyze the temporal changes in spatial correlation matrices. The Mantel test is applied to 6 complex partial seizures of an epileptic patient. We show that, in 5 of the 6 instances, the spatial structures undergo significant connectivity changes in the 2-hour time interval prior to the occurrence of a seizure.
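
    For reference, the Mantel test correlates the off-diagonal entries of two matrices and assesses significance by permuting the rows and columns of one matrix together; a minimal permutation version (the standard procedure, not the paper's code) is sketched below.

```python
import numpy as np

def mantel_test(A, B, n_perm=9999, seed=0):
    """Mantel test between two symmetric (dis)similarity matrices.
    Returns the matrix correlation r and a permutation p-value."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)                      # upper-triangular (off-diagonal) entries
    a = A[iu]
    r_obs = np.corrcoef(a, B[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                        # permute rows and columns of B together
        hits += abs(np.corrcoef(a, B[np.ix_(p, p)][iu])[0, 1]) >= abs(r_obs)
    return r_obs, (hits + 1) / (n_perm + 1)
```
    Applied to successive spatial correlation matrices, a low p-value indicates that two time windows share significantly similar spatial structure; tracking r over consecutive windows is one way to expose the connectivity changes the abstract reports.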

    Refractive-index sensing using hybrid all-dielectric nanoantennae

    by Sarang Kulkarni Anant and Ravi Hegd

    Vector-Quantization using Information Theoretic Concepts

    Abstract. The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self-Organizing Map (SOM) and the Linde-Buzo-Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm as efficient as those mentioned above can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact equivalent to minimizing a divergence measure between the distribution of the data and the distribution of the processing elements; hence, the algorithm can be seen as a density matching method.
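
    The force-field picture can be made concrete: with Gaussian Parzen windows, the cross "information potential" between data and code-vectors acts as an attraction and the code-vectors' own potential as a mutual repulsion, and descending a divergence between the two densities moves each code-vector along the net force. The Cauchy-Schwarz-type divergence, kernel width, and step size used below are illustrative assumptions rather than the paper's exact free-energy cost.

```python
import numpy as np

def potential_field_vq(data, n_codes=16, sigma=0.2, lr=0.1, n_iter=300, seed=0):
    """Code-vectors move in the potential field of the data: attraction towards the samples,
    repulsion between code-vectors, obtained as the gradient of a Cauchy-Schwarz-type
    divergence between the data and code-vector Parzen densities."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_codes, replace=False)].copy()
    s2 = 2.0 * sigma ** 2            # kernel width arising from products of two Parzen windows
    for _ in range(n_iter):
        dxw = data[:, None, :] - w[None, :, :]                     # (N, M, D)
        dww = w[:, None, :] - w[None, :, :]                        # (M, M, D)
        k_xw = np.exp(-(dxw ** 2).sum(-1) / (2.0 * s2))            # data/code kernels
        k_ww = np.exp(-(dww ** 2).sum(-1) / (2.0 * s2))            # code/code kernels
        v_cross = k_xw.mean()        # ~ integral of p_data * p_code   (attraction term)
        v_code = k_ww.mean()         # ~ integral of p_code ** 2       (repulsion term)
        g_cross = (k_xw[:, :, None] * dxw).mean(0) / (n_codes * s2)           # d v_cross / d w_j
        g_code = 2.0 * (k_ww[:, :, None] * (-dww)).mean(1) / (n_codes * s2)   # d v_code / d w_j
        # gradient descent on D_CS ~ -2 log v_cross + log v_code
        w -= lr * (-2.0 * g_cross / v_cross + g_code / v_code)
    return w
```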

    Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, most of which can be grouped into one of three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (such as reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
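
    A minimal sketch of the perturbation idea (the paper's exact update and normalization may differ): treat each new sample as a rank-one perturbation of the running covariance estimate and correct the current eigenvalue and eigenvector estimates with the first-order matrix perturbation formulas. The forgetting factor eta and the column renormalization are assumptions, and the formulas presume well-separated eigenvalues.

```python
import numpy as np

def perturbation_pca_update(V, lam, x, eta=0.01):
    """One streaming update of an eigendecomposition estimate (V: eigenvector columns,
    lam: eigenvalues) using first-order matrix perturbation of the covariance."""
    # recursive covariance estimate C_new = (1 - eta) * C + eta * x x^T,
    # so the covariance perturbation is dC = eta * (x x^T - C)
    C = (V * lam) @ V.T
    dC = eta * (np.outer(x, x) - C)
    W = V.T @ dC @ V                            # perturbation expressed in the current eigenbasis
    lam_new = lam + np.diag(W)                  # first-order eigenvalue correction
    gap = lam[None, :] - lam[:, None]           # gap[j, i] = lam_i - lam_j
    np.fill_diagonal(gap, np.inf)               # exclude the i == j term from the correction
    V_new = V + V @ (W / gap)                   # first-order eigenvector correction
    V_new /= np.linalg.norm(V_new, axis=0)      # keep the columns approximately unit length
    return V_new, lam_new
```
    Initialized with np.linalg.eigh on a short batch and applied once per incoming sample, the pair (V, lam) then tracks the eigenstructure of the slowly varying covariance without re-diagonalizing at every step.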