
    From Error Probability to Information Theoretic (Multi-Modal) Signal Processing

We propose an information theoretic model that unifies a wide range of existing information theoretic signal processing algorithms in a compact mathematical framework. It is mainly based on stochastic processes, Markov chains, and error probabilities. The proposed framework allows us to discuss revealing analogies and differences between several well-known algorithms and to propose interesting extensions that follow directly from our formalism. We then describe how the theory can be applied to the rapidly emerging field of multi-modal signal processing: we show how our framework can be used efficiently for multi-modal medical image processing and for the joint analysis of multimedia sequences (audio and video).

Role of Alpha Oscillations During a Short-Term Memory Task Investigated by Graph-Based Partitioning

In this study, we investigate the clustering pattern of alpha-band (8–12 Hz) electroencephalogram (EEG) oscillations recorded from healthy individuals during a short-term memory task with three different memory loads. The retention period, during which subjects were asked to memorize a pattern in a square matrix, is analyzed with a graph theoretical approach. The functional coupling among EEG electrodes is quantified via mutual information in the time-frequency plane. A spectral clustering algorithm followed by bootstrapping is used to parcellate memory-related circuits and to identify significant clusters in the brain. The main outcome of the study is that the size of the significant clusters formed by alpha oscillations decreases as the memory load increases. This finding corroborates the active inhibition hypothesis about alpha oscillations.
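The pipeline described above, mutual information as a coupling measure followed by spectral clustering, can be sketched compactly. The following is a minimal illustration, not the authors' code: the filter order, histogram bin count, and cluster count are assumptions, and the bootstrap significance step is only indicated in a comment.

```python
# Sketch: MI affinity between alpha-band EEG channels + spectral clustering.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cluster import SpectralClustering

def band_filter(x, fs, lo=8.0, hi=12.0, order=4):
    """Band-pass each channel to the alpha band (8-12 Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def mutual_info(x, y, bins=16):
    """Histogram estimate of MI between two 1-D signals (in bits)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

def alpha_clusters(eeg, fs, n_clusters=4):
    """eeg: (n_channels, n_samples) retention-period data."""
    alpha = band_filter(eeg, fs)
    n = alpha.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = mutual_info(alpha[i], alpha[j])
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(A)
    return labels  # significance would then be assessed by bootstrapping
```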

    Multiple Access Channels with States Causally Known at Transmitters

It has been recently shown by Lapidoth and Steinberg that strictly causal state information can be beneficial in multiple access channels (MACs). Specifically, it was proved that the capacity region of a two-user MAC with independent states, each known strictly causally to one encoder, can be enlarged by letting the encoders send compressed past state information to the decoder. In this work, a generalization of this strategy is proposed whereby the encoders also compress the past transmitted codewords along with the past state sequences. The proposed scheme uses a combination of long-message encoding, compression of the past state sequences and codewords without binning, and joint decoding over all transmission blocks. The proposed strategy has been recently shown by Lapidoth and Steinberg to strictly improve upon the original one. Capacity results are then derived for a class of channels that includes two-user modulo-additive state-dependent MACs. Moreover, the proposed scheme is extended to state-dependent MACs with an arbitrary number of users. Finally, output feedback is introduced and an example is provided to illustrate the interplay between feedback and the availability of strictly causal state information in enlarging the capacity region.
Comment: Accepted by IEEE Transactions on Information Theory, November 201
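To make the channel class concrete, here is a small simulation sketch (illustrative, not from the paper) of a two-user modulo-additive state-dependent MAC, Y = (X1 + X2 + S) mod q. The alphabet size q and block length are arbitrary choices. It also shows why decoder-side knowledge of the state is valuable: the state's contribution can be removed exactly.

```python
# Sketch of a two-user modulo-additive state-dependent MAC.
import numpy as np

def mod_additive_mac(x1, x2, s, q=4):
    """Channel output Y = (X1 + X2 + S) mod q."""
    return (x1 + x2 + s) % q

rng = np.random.default_rng(0)
n, q = 8, 4
x1 = rng.integers(0, q, n)   # user 1 codeword symbols
x2 = rng.integers(0, q, n)   # user 2 codeword symbols
s = rng.integers(0, q, n)    # i.i.d. state sequence
y = mod_additive_mac(x1, x2, s, q)

# If the decoder learns S (e.g., from compressed past-state information
# sent in later blocks), its effect can be subtracted exactly:
y_clean = (y - s) % q
assert np.array_equal(y_clean, (x1 + x2) % q)
```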

    Detecting the number of components in a non-stationary signal using the Rényi entropy of its time-frequency distributions

A time-frequency distribution provides many advantages in the analysis of multicomponent non-stationary signals. The simultaneous representation of a signal with respect to the time and frequency axes captures the signal's amplitude, frequency, bandwidth, and number of components at each time instant. The Rényi entropy, applied to a time-frequency distribution, is shown to be a valuable indicator of signal complexity. The aim of this paper is to determine which of the treated time-frequency distributions (TFDs), namely the Wigner-Ville distribution, the Choi-Williams distribution, and the spectrogram, has the best properties for estimating the number of components when there is no prior knowledge of the signal. The optimal Rényi entropy parameter α is determined for each TFD. Accordingly, the effects of different time durations, bandwidths, and amplitudes of the signal components on the Rényi entropy are analysed. The concept of a class, when the Rényi entropy is applied to TFDs, is also introduced.
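For a TFD normalized to unit energy with values p, the Rényi entropy is R_α = (1/(1−α)) log2 Σ p^α, and each additional equal-power component adds roughly one bit. The sketch below (my illustration, not the paper's code) computes this on a spectrogram with the common choice α = 3; the synthetic two-tone test signal is an assumption.

```python
# Sketch: Renyi entropy of a normalized spectrogram as a complexity measure.
import numpy as np
from scipy.signal import spectrogram

def renyi_entropy(tfd, alpha=3.0):
    """Renyi entropy (bits) of a nonnegative time-frequency distribution."""
    p = tfd / tfd.sum()          # normalize to unit energy
    p = p[p > 0]
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
one = np.cos(2 * np.pi * 100 * t)         # single-component reference
two = one + np.cos(2 * np.pi * 300 * t)   # two well-separated components
_, _, S1 = spectrogram(one, fs)
_, _, S2 = spectrogram(two, fs)
r1, r2 = renyi_entropy(S1), renyi_entropy(S2)
# Component count relative to the one-component reference: ~2.
print(f"estimated components: {2 ** (r2 - r1):.2f}")
```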

    Mutual information based sensor registration and calibration

Knowledge of calibration, which defines the locations of sensors relative to each other, and of registration, which relates sensor responses due to the same physical phenomena, is essential in order to fuse information from multiple sensors. In this paper, a Mutual Information (MI) based approach for automatic sensor registration and calibration is presented. Unsupervised learning of a nonparametric sensing model by maximizing mutual information between signal streams is used to relate information from different sensors, allowing unknown sensor registration and calibration to be determined. Experiments conducted in an office environment illustrate the effectiveness of the proposed technique. Two laser sensors are used to capture people moving in an arbitrary manner in the environment, and the MI between a number of attributes of the motion is used to relate the signal streams from the sensors. Thus, sensor registration and calibration are achieved without using artificial patterns or pre-specified motions. © 2006 IEEE
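The core idea, choosing the alignment that maximizes MI between two streams, can be illustrated for the simplest case of an unknown time offset. This is a minimal sketch under that assumption, not the paper's full nonparametric method; the shift range and bin count are illustrative.

```python
# Sketch: register two sensor streams by maximizing mutual information
# over candidate time offsets.
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of MI between two 1-D signals (in bits)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

def register(stream_a, stream_b, max_shift=50):
    """Return the shift of stream_b that maximizes MI with stream_a."""
    best_shift, best_mi = 0, -np.inf
    for k in range(-max_shift, max_shift + 1):
        mi = mutual_info(stream_a, np.roll(stream_b, k))
        if mi > best_mi:
            best_shift, best_mi = k, mi
    return best_shift, best_mi
```

The same search-and-score structure extends to richer transformations (rotations, translations between sensor frames) by replacing the shift with a parameterized mapping.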

    Cognitive Access Policies under a Primary ARQ process via Forward-Backward Interference Cancellation

This paper introduces a novel technique for access by a cognitive Secondary User (SU) using best-effort transmission to a spectrum with an incumbent Primary User (PU), which uses Type-I Hybrid ARQ. The technique leverages the primary ARQ protocol to perform Interference Cancellation (IC) at the SU receiver (SUrx). Two IC mechanisms that work in concert are introduced: Forward IC, where SUrx, after decoding the PU message, cancels its interference in the (possible) following PU retransmissions of the same message, to improve the SU throughput; and Backward IC, where SUrx performs IC on previous SU transmissions whose decoding failed due to severe PU interference. Secondary access policies are designed that determine the secondary access probability in each state of the network so as to maximize the average long-term SU throughput by opportunistically leveraging IC, while causing bounded average long-term PU throughput degradation and SU power expenditure. It is proved that the optimal policy prescribes that the SU prioritize its access in the states where SUrx knows the PU message, thus enabling IC. An algorithm is provided to optimally allocate additional secondary access opportunities in the states where the PU message is unknown. Numerical results assess the throughput gain provided by the proposed techniques.
Comment: 16 pages, 11 figures, 2 tables
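The Forward IC step reduces, at the signal level, to subtracting a known interferer before decoding. The snippet below is a toy sketch under simplifying assumptions I am introducing (flat complex channel gains known at SUrx, BPSK symbols); it is not the paper's system model.

```python
# Sketch: forward interference cancellation at the SU receiver once the
# PU codeword is known from a previously decoded ARQ round.
import numpy as np

rng = np.random.default_rng(1)
n = 64
h_p, h_s = 0.9 + 0.2j, 1.0 + 0.0j            # assumed channel gains
x_p = rng.choice([-1.0, 1.0], n) + 0j        # PU codeword (retransmitted)
x_s = rng.choice([-1.0, 1.0], n) + 0j        # SU codeword
noise = rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n)

y = h_p * x_p + h_s * x_s + noise            # SUrx observation

# Forward IC: the known PU contribution is removed before SU decoding,
# leaving an interference-free observation of the SU signal.
y_ic = y - h_p * x_p
print(f"interference power removed: {np.var(h_p * x_p):.2f}")
```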

On the Decombination of Belief Functions

Evidence combination is a kind of decision-level information fusion in the theory of belief functions. Given two basic belief assignments (BBAs) originating from different sources, one can combine them using a combination rule, e.g., Dempster's rule, in the hope of a better decision result. If one only has the combined BBA, how can the original two BBAs be determined? This can be considered a defusion of information. It is useful because, for example, one can analyze the difference or dissimilarity between two information sources based on the BBAs obtained through evidence decombination. Therefore, in this paper, we investigate such a defusion in the theory of belief functions. We find that it is a well-posed problem if one original BBA and the combined BBA are both available, and an under-determined problem if both BBAs to combine are unknown. We propose an optimization-based approach to evidence decombination according to the criterion of divergence maximization. Numerical examples are provided to illustrate and verify the proposed decombination approach, which is expected to be used in applications such as the analysis of differences between information sources when the original BBAs have been discarded, and the performance evaluation of combination rules.
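For reference, decombination inverts the standard forward map. The sketch below implements ordinary Dempster's rule (textbook material, not the paper's decombination algorithm); the frame of discernment and masses are made-up examples. Decombination would search for m2 given m12 and m1, e.g. by the paper's divergence-maximization criterion.

```python
# Sketch: Dempster's rule of combination for BBAs over a finite frame.
from itertools import product

def dempster(m1, m2):
    """Combine two BBAs given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb        # mass on the empty set
    # Dempster normalization by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
AB = A | B
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.3, AB: 0.7}
print(dempster(m1, m2))   # combined BBA m12
```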

    Image segmentation with variational active contours

An important branch of computer vision is image segmentation. Image segmentation aims at extracting meaningful objects from images, either by dividing images into contiguous semantic regions or by extracting one or more specific objects, such as medical structures. The image segmentation task is in general very difficult to achieve, since natural images are diverse and complex, and the way we perceive them varies among individuals. For more than a decade, a promising mathematical framework based on variational models and partial differential equations has been investigated to solve the image segmentation problem. This approach benefits from well-established mathematical theories that allow us to analyze, understand, and extend segmentation methods. Moreover, the framework is defined in a continuous setting, which makes the proposed models independent of the grid of digital images.

This thesis proposes four new image segmentation models based on variational models and the active contours method. The active contours (or snakes) model is increasingly used in image segmentation because it relies on solid mathematical properties and because its numerical implementation can use the efficient level set method to track evolving contours.

The first model proposes to determine global minimizers of the active contour/snake model. Despite its strong theoretical properties, the active contours model suffers from the existence of local minima, which makes the initial guess critical for obtaining satisfactory results. We propose to couple the geodesic/geometric active contours model with the total variation functional and the Mumford-Shah functional to determine global minimizers of the snake model. It is interesting to notice that merging two well-known and "opposite" models, geodesic/geometric active contours based on edge detection and active contours without edges, provides a global minimum for the image segmentation algorithm.

The second model combines deterministic and statistical concepts. We define a non-parametric, unsupervised image classification model based on information theory and the shape gradient method. We show that this new segmentation model conceptually generalizes many existing models based on active contours and on statistical and information theoretic concepts such as mutual information.

The third model is a variational model that extracts from images objects of interest whose geometric shape is given by principal component analysis. The main interest of the proposed model is to combine the three families of active contours, based on edge detection, the segmentation of homogeneous regions, and the integration of a geometric shape prior, in order to exploit the advantages of each family simultaneously.

Finally, the last model presents a generalization of the active contours model in scale spaces, in order to extract structures at different scales of observation. The mathematical framework that allows us to define an evolution equation for active contours in scale spaces comes from string theory, which provides a setting for processing a manifold, such as an active contour, embedded in higher-dimensional Riemannian spaces such as scale spaces. We thus define the energy functional and the evolution equation of the multiscale active contours model, which can evolve in the most well-known scale spaces, such as the linear or the curvature scale space.
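To illustrate the "active contours without edges" building block referenced above, here is a minimal gradient-descent sketch of the Chan-Vese region terms on a level-set function. It is a simplification, not the thesis's models: the curvature regularization is omitted, the update is applied everywhere rather than only near the zero level set, and the toy image and initialization are made up.

```python
# Sketch: region-based (Chan-Vese style) level-set evolution, data
# terms only. The contour is the zero level set of phi.
import numpy as np

def chan_vese_step(phi, image, dt=0.5, lam1=1.0, lam2=1.0):
    """One descent step on the region terms of the Chan-Vese energy."""
    inside, outside = phi > 0, phi <= 0
    c1 = image[inside].mean() if inside.any() else 0.0    # mean inside
    c2 = image[outside].mean() if outside.any() else 0.0  # mean outside
    force = -lam1 * (image - c1) ** 2 + lam2 * (image - c2) ** 2
    return phi + dt * force

# Toy example: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
x, y = np.meshgrid(np.arange(64), np.arange(64))
phi = 25.0 - np.sqrt((x - 32) ** 2 + (y - 32) ** 2)   # init: a disk
for _ in range(100):
    phi = chan_vese_step(phi, img)
segmentation = phi > 0   # converges to the square's interior
```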