19 research outputs found

    Parameter identification in Choquet Integral by the Kullback-Leibler divergence on continuous densities with application to classification fusion.

    Classifier fusion is a means to increase the accuracy and decision-making ability of classification systems by designing a set of base classifiers and then combining their outputs. Here the combination is performed by a non-linear functional dependent on a fuzzy measure, the Choquet integral, which constitutes a vast family of aggregation operators including the minimum, the maximum, and the weighted sum. The main issue before applying the Choquet integral is identifying its 2^M − 2 parameters for M classifiers. We follow a previous work by Kojadinovic and one of the authors in which the identification is performed using an information-theoretic approach: the underlying probability densities are smoothed by fitting continuous parametric models, and the Kullback-Leibler divergence is then used to identify the fuzzy measure. The proposed framework is applied to widely used datasets.
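
    For concreteness, the aggregation step can be sketched in a few lines of Python. This is a minimal illustration, not the paper's code: the two-classifier measure values and scores below are hypothetical, and for M classifiers the measure has 2^M − 2 free values once the empty set and the full set are fixed to 0 and 1.

        import numpy as np

        def choquet_integral(scores, mu):
            # Discrete Choquet integral of M classifier scores with respect to
            # a fuzzy measure mu: a dict from frozensets of classifier indices
            # to [0, 1], with mu[empty set] = 0 and mu[full set] = 1.
            scores = np.asarray(scores, dtype=float)
            order = np.argsort(scores)                 # ascending score order
            total, prev = 0.0, 0.0
            for i in range(len(order)):
                A = frozenset(order[i:].tolist())      # classifiers at or above this level
                total += (scores[order[i]] - prev) * mu[A]
                prev = scores[order[i]]
            return total

        # Hypothetical fuzzy measure on two classifiers {0, 1}: 2^2 - 2 = 2 free values
        mu = {frozenset(): 0.0, frozenset({0}): 0.4,
              frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
        print(choquet_integral([0.7, 0.3], mu))        # fused score: 0.46

    Note that this measure is superadditive (0.4 + 0.5 < 1), so the aggregation rewards agreement between the two classifiers; an additive measure would reduce the integral to a plain weighted sum.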

    Statistical approaches for synaptic characterization

    Synapses are fascinatingly complex transmission units. One of the fundamental features of synaptic transmission is its stochasticity, as neurotransmitter release exhibits variability and possible failures. It is also quantised: postsynaptic responses to presynaptic stimulations are built up of several similar quanta of current, each of them arising from the release of one presynaptic vesicle. Moreover, synapses are dynamic transmission units, as their activity depends on the history of previous spikes and stimulations, a phenomenon known as synaptic plasticity. Finally, synapses exhibit a very broad range of dynamics, features, and connection strengths, depending on neuromodulator concentrations [5], the age of the subject [6], their localization in the CNS or in the PNS, and the type of neurons [7].

    Addressing the complexity of synaptic transmission is a relevant problem for both biologists and theoretical neuroscientists. From a biological perspective, a finer understanding of transmission mechanisms would make it possible to study diseases possibly related to synapses, or to determine the locus of plasticity and homeostasis. From a theoretical perspective, different normative explanations for synaptic stochasticity have been proposed, including its possible role in uncertainty encoding, energy-efficient computation, or generalization during learning. A precise description of synaptic transmission will be critical for the validation of these theories and for understanding the functional relevance of this probabilistic and dynamical release.

    A central issue, common to all these areas of research, is the problem of synaptic characterization. Synaptic characterization (also called synaptic interrogation [8]) refers to a set of methods for exploring synaptic functions, inferring the value of synaptic parameters, and assessing features such as plasticity and modes of release. This doctoral work sits at the crossroads of experimental and theoretical neuroscience: its main aim is to develop statistical tools and methods to improve synaptic characterization, and hence to bring quantitative solutions to biological questions. In this thesis, we focus on model-based approaches to quantifying synaptic transmission, for which different methods are reviewed in Chapter 3. By fitting a generative model of postsynaptic currents to experimental data, it is possible to infer the value of the synapse's parameters. By performing model selection, we can compare different models of a synapse and thus quantify its features. The main goal of this thesis is thus to develop theoretical and statistical tools to improve the efficiency of both model fitting and model selection.

    A first question that often arises when recording synaptic currents is how to precisely observe and measure quantal transmission. As mentioned above, synaptic transmission has been observed to be quantised. Indeed, the opening of a single presynaptic vesicle (and the release of the neurotransmitters it contains) creates a stereotypical postsynaptic current of amplitude q, called the quantal amplitude. As the number of activated presynaptic vesicles increases, the total postsynaptic current increases in step-like increments of amplitude q. Hence, at chemical synapses, the postsynaptic responses to presynaptic stimulations are built up of k quanta of current, where k is a random variable corresponding to the number of open vesicles. The excitatory postsynaptic current (EPSC) thus follows a multimodal distribution, where each component has its mean located at a multiple kq, with k ∈ ℕ, and a width corresponding to the recording noise σ. If σ is large with respect to q, these components fuse into a unimodal distribution, precluding the identification of quantal transmission and the computation of q. How can we characterize the regime of parameters in which quantal transmission can be identified? This question led us to define a practical identifiability criterion for statistical models, which is presented in Chapter 4. In doing so, we also derive a mean-field approach for fast likelihood computation (Appendix A) and discuss the possibility of using the Bayesian Information Criterion (a classically used model selection criterion) with correlated observations (Appendix B).

    A second question, especially relevant for experimentalists, is how to optimally stimulate the presynaptic cell so as to maximize the informativeness of the recordings. The parameters of a chemical synapse (namely, the number of presynaptic vesicles N, their release probability p, the quantal amplitude q, the short-term depression time constant τD, etc.) cannot be measured directly, but can be estimated from the synapse's postsynaptic responses to evoked stimuli. However, these estimates critically depend on the stimulation protocol being used. For instance, if inter-spike intervals are too large, no short-term plasticity will appear in the recordings; conversely, too high a stimulation frequency will deplete the presynaptic vesicles and yield poorly informative postsynaptic currents. How can we perform Optimal Experiment Design (OED) for synaptic characterization? We developed an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments (Chapter 5), and propose a link between our definition of practical identifiability and Optimal Experiment Design for model selection (Chapter 6).

    Finally, a third biological question to which we bring a theoretical answer is how to make sense of the observed organization of synaptic proteins. Microscopy observations have shown that presynaptic release sites and postsynaptic receptors are organized in ring-like patterns, which are disrupted upon genetic mutations. In Chapter 7, we propose a normative approach to this protein organization, and suggest that it might optimize a certain biological cost function (e.g. the mean current or SNR after vesicle release).

    The different theoretical tools and methods developed in this thesis are general enough to be applicable not only to synaptic characterization, but also to different experimental settings and systems studied in physiology. Overall, we expect to democratize and simplify the use of quantitative and normative approaches in biology, thus reducing the cost of experimentation in physiology and paving the way to more systematic and automated experimental designs.
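
    The quantal model described above lends itself to a compact likelihood, sketched below in Python (a minimal illustration with hypothetical parameter values, not the thesis code): k ~ Binomial(N, p) vesicles open on each stimulation, and the recorded EPSC is kq plus Gaussian noise of width σ, so the observations follow a binomial mixture of Gaussians.

        import numpy as np
        from scipy.stats import binom, norm

        def epsc_loglik(currents, N, p, q, sigma):
            # Log-likelihood under the quantal model: k ~ Binomial(N, p)
            # vesicles open, each contributing a quantum q, and the recording
            # adds Gaussian noise of standard deviation sigma.
            k = np.arange(N + 1)
            weights = binom.pmf(k, N, p)                 # P(k vesicles open)
            comps = norm.pdf(currents[:, None], loc=k * q, scale=sigma)
            return np.log(comps @ weights).sum()

        # Hypothetical ground truth and simulated recordings
        rng = np.random.default_rng(0)
        N, p, q, sigma = 5, 0.4, 10.0, 2.0               # sigma << q: modes resolvable
        k_true = rng.binomial(N, p, size=200)
        currents = k_true * q + rng.normal(0.0, sigma, size=200)
        print(epsc_loglik(currents, N, p, q, sigma))

    As σ grows toward q, the k components of this mixture overlap and merge into a unimodal distribution, which is exactly the identifiability regime question addressed in Chapter 4.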

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Acoustic event detection and localization using distributed microphone arrays

    Automatic acoustic scene analysis is a complex task that involves several functionalities: detection (time), localization (space), separation, recognition, etc. This thesis focuses on both acoustic event detection (AED) and acoustic source localization (ASL) when several sources may be simultaneously present in a room. In particular, the experimental work is carried out in a meeting-room scenario. Unlike previous works, which either employed models of all possible sound combinations or additionally used video signals, this thesis tackles the time-overlapping sound problem by exploiting the signal diversity that results from the use of multiple microphone-array beamformers.

    The core of this work is a rather computationally efficient approach that consists of three processing stages. In the first stage, a set of (null-)steering beamformers carries out diverse partial signal separations, using multiple arbitrarily located linear microphone arrays, each composed of a small number of microphones. In the second stage, each beamformer output goes through a classification step, which uses models for all the targeted sound classes (HMM-GMM in the experiments). In the third stage, the classifier scores, whether intra- or inter-array, are combined using a probabilistic criterion (such as MAP) or a machine-learning fusion technique (the fuzzy integral (FI) in the experiments).

    This processing scheme is applied to a set of problems of increasing complexity, defined by the assumptions made regarding the identities (plus time endpoints) and/or positions of the sounds. The thesis starts with the problem of unambiguously mapping identities to positions, continues with AED (positions assumed) and ASL (identities assumed), and ends with the integration of AED and ASL in a single system that needs no assumption about identities or positions.

    The evaluation experiments are carried out in a meeting-room scenario where two sources temporally overlap; one of them is always speech and the other is an acoustic event from a pre-defined set. Two different databases are used: one produced by merging signals actually recorded in the UPC's department smart-room, and the other consisting of overlapping sound signals directly recorded in the same room in a rather spontaneous way.

    From the experimental results with a single array, the proposed detection system performs better than either the model-based system or a blind-source-separation-based system. Moreover, the product-rule-based combination and the FI-based fusion of the scores resulting from the multiple arrays improve the accuracies further. The posterior position assignment, on the other hand, is performed with a very small error rate. Regarding ASL, and assuming an accurate AED system output, the 1-source localization performance of the proposed system is slightly better than that of the widely used SRP-PHAT system working in an event-based mode, and it performs significantly better than the latter in the more complex 2-source scenario. Finally, though the joint system suffers a slight degradation in classification accuracy with respect to the case where the source positions are known, it shows the advantage of carrying out the two tasks, recognition and localization, with a single system, and it allows the inclusion of information about the prior probabilities of the source positions. It is also worth noticing that, although the acoustic scenario used for experimentation is rather limited, the approach and its formalism were developed for a general case in which the number and identities of the sources are not constrained.
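
    The third-stage score fusion is straightforward to sketch. The Python snippet below is a minimal illustration with hypothetical scores, showing only the product-rule combination (the thesis also uses the fuzzy integral, which replaces this independent-product assumption with a Choquet-style aggregation):

        import numpy as np

        def product_rule_fusion(scores):
            # scores: (n_arrays, n_classes) classifier outputs, one row per
            # microphone array (e.g. HMM-GMM class posteriors).
            # The product rule multiplies posteriors across arrays; this is
            # done as a sum of logs for numerical stability.
            log_fused = np.log(scores + 1e-12).sum(axis=0)
            return int(np.argmax(log_fused))             # MAP class index

        # Hypothetical posteriors from three arrays over four event classes
        scores = np.array([[0.10, 0.60, 0.20, 0.10],
                           [0.25, 0.40, 0.25, 0.10],
                           [0.15, 0.55, 0.20, 0.10]])
        print(product_rule_fusion(scores))               # -> class 1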

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
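
    For concreteness, here is a minimal Python sketch of the proper CAR structure underlying this comparison (the adjacency matrix and the values of ρ and τ² are hypothetical): the latent effect φ is Gaussian with precision matrix Q = (D − ρW)/τ², where W is the binary neighborhood matrix of the sites and D holds the neighbor counts.

        import numpy as np

        def car_precision(W, rho, tau2):
            # Precision matrix of a proper CAR latent effect:
            # phi ~ N(0, Q^{-1}) with Q = (D - rho * W) / tau2, where W is the
            # binary adjacency matrix and D = diag(number of neighbours).
            D = np.diag(W.sum(axis=1))
            return (D - rho * W) / tau2

        # Hypothetical map of four sites forming a chain 0-1-2-3
        W = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        Q = car_precision(W, rho=0.9, tau2=1.0)
        print(np.round(np.linalg.inv(Q), 2))   # implied spatial covariance

    In this CAR parameterization ρ has no direct reading as an average correlation between neighboring sites, which is precisely the interpretability advantage the abstract attributes to DAGAR.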

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical: the anatomical and functional structure varies across subjects, so image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, such as the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., combinations of spatially distant voxels are penalized. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
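
    The base alignment step behind such models can be sketched in a few lines of Python. This is a minimal illustration of the unpenalized orthogonal Procrustes estimate on hypothetical data; the paper's contribution is the von Mises-Fisher prior that regularizes this estimate, which is not reproduced here.

        import numpy as np

        def procrustes_align(X, Y):
            # Orthogonal transform R (R @ R.T = I) minimising ||R @ X - Y||_F,
            # i.e. the maximum-likelihood alignment without the prior;
            # X, Y are (voxels, time) data matrices for two subjects.
            U, _, Vt = np.linalg.svd(Y @ X.T)
            return U @ Vt

        # Hypothetical data: subject Y is a rotated, noisy copy of subject X
        rng = np.random.default_rng(1)
        X = rng.normal(size=(10, 50))
        R_true, _ = np.linalg.qr(rng.normal(size=(10, 10)))
        Y = R_true @ X + 0.01 * rng.normal(size=(10, 50))
        R_hat = procrustes_align(X, Y)
        print(np.linalg.norm(R_hat - R_true))   # small recovery error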