
    Information profiles for DNA pattern discovery

    Finite-context modeling is a powerful tool for compressing, and hence for representing, DNA sequences. We describe an algorithm for detecting genomic regularities within a blind discovery strategy. The algorithm uses information profiles built from suitable combinations of finite-context models. We illustrate the method on the genome of the fission yeast Schizosaccharomyces pombe strain 972 h-, unveiling locations of low information content, which are usually associated with DNA regions of potential biological interest. (Full version of the DCC 2014 paper "Information profiles for DNA pattern discovery".)
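
    As a rough illustration of the idea, the sketch below (a simplified, hypothetical implementation, not the authors' tool) builds an order-k finite-context model from the sequence itself, computes the per-base code length -log2 P(symbol | context), and smooths it with a moving average; unusually low values of the resulting profile mark candidate regions of interest. Function and parameter names are made up for the example.

```python
# A simplified, hypothetical sketch of an information profile (not the authors' tool):
# an order-k finite-context model is estimated from the sequence itself, the per-base
# code length -log2 P(symbol | context) is computed under Laplace smoothing over the
# four-letter DNA alphabet, and the result is smoothed with a moving average. Regions
# where the profile dips are the candidates for further inspection.
from collections import defaultdict
import math

def information_profile(seq, k=8, alpha=1.0, window=501):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):                      # count symbol occurrences per context
        counts[seq[i - k:i]][seq[i]] += 1
    bits = []
    for i in range(k, len(seq)):                      # per-base code length in bits
        ctx, sym = seq[i - k:i], seq[i]
        total = sum(counts[ctx].values())
        p = (counts[ctx][sym] + alpha) / (total + 4 * alpha)
        bits.append(-math.log2(p))
    half = window // 2                                # moving-average smoothing
    profile = []
    for j in range(len(bits)):
        win = bits[max(0, j - half):j + half + 1]
        profile.append(sum(win) / len(win))
    return profile
```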

    Estimating Speaking Rate by Means of Rhythmicity Parameters

    In this paper we present a speech rate estimator based on so-called rhythmicity features derived from a modified version of the short-time energy envelope. To evaluate the new method, it is compared with a traditional speech rate estimator based on semi-automatic segmentation. Speech material from the Alcohol Language Corpus (ALC), covering intoxicated and sober speech in different speech styles, provides a statistically sound foundation to test upon. The proposed measure correlates clearly with the semi-automatically determined speech rate and appears to be robust across speech styles and speaker states.
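
    The sketch below gives one plausible reading of a rhythmicity-based rate estimate; it assumes 16 kHz mono input and is not the ALC pipeline or the authors' exact feature set. Syllabic nuclei tend to show up as peaks of a low-pass-filtered short-time energy envelope, so counting those peaks gives a crude syllables-per-second figure.

```python
# A rough sketch of a rhythmicity-style speaking-rate estimate, assuming 16 kHz mono
# input. Not the authors' exact feature set: syllabic nuclei are approximated as
# peaks of a low-pass-filtered short-time energy envelope, and the peak rate is
# returned as a crude syllables-per-second figure.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def speaking_rate(x, fs=16000, frame=0.01):
    hop = int(frame * fs)                                   # 10 ms energy frames
    energy = np.array([np.sum(x[i:i + hop] ** 2)
                       for i in range(0, len(x) - hop, hop)])
    env_fs = 1.0 / frame                                    # envelope sample rate (100 Hz)
    b, a = butter(2, 10.0 / (env_fs / 2), btype="low")      # keep sub-10 Hz modulations
    env = filtfilt(b, a, energy)
    peaks, _ = find_peaks(env, height=0.1 * env.max(),
                          distance=int(0.1 / frame))        # >= 100 ms between nuclei
    return len(peaks) / (len(env) * frame)                  # peaks per second
```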

    Modelling uncertainty in transcriptome measurements enhances network component analysis of yeast metabolic cycle

    Using high throughput DNA binding data for transcription factors and DNA microarray time course data, we constructed four transcription regulatory networks and analysed them using a novel extension to the network component analysis (NCA) approach. We incorporated probe-level uncertainties in gene expression measurements into the NCA analysis through probabilistic principal component analysis (PPCA), and applied the method to data from the yeast metabolic cycle. The analysis shows a statistically significant enhancement of periodicity in a large fraction of the transcription factor activities inferred from the model. For several of these we found literature evidence of post-transcriptional regulation. Accounting for probe-level uncertainty in microarray measurements thus leads to improved network component analysis. The fact that transcription factor profiles show greater periodicity at the activity level than at the corresponding mRNA level for over half the regulators in the networks points to extensive post-transcriptional regulation. ©2009 IEEE.
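
    A minimal sketch of how probe-level variances can enter an NCA-style factorisation is given below. It is hypothetical and much simpler than the paper's PPCA-based treatment: the expression matrix is factorised as E ≈ A S under a fixed TF-gene connectivity pattern, and per-measurement variances act as weights in alternating least-squares updates.

```python
# A hypothetical sketch of uncertainty-weighted network component analysis, much
# simpler than the paper's PPCA-based treatment. E (genes x time points) is factorised
# as E ~= A @ S, the sparsity pattern of A is fixed by the TF-gene connectivity mask,
# and each measurement's variance (here simply given) acts as a weight in alternating
# least-squares updates. Assumes every gene has at least one regulator in the mask.
import numpy as np

def weighted_nca(E, var, mask, n_iter=50):
    genes, times = E.shape
    tfs = mask.shape[1]
    W = 1.0 / var                                   # per-measurement precision
    A = mask * np.random.rand(genes, tfs)
    S = np.random.rand(tfs, times)
    for _ in range(n_iter):
        for g in range(genes):                      # connectivity-constrained rows of A
            idx = mask[g].astype(bool)
            X = S[idx].T * np.sqrt(W[g])[:, None]
            y = E[g] * np.sqrt(W[g])
            A[g, idx] = np.linalg.lstsq(X, y, rcond=None)[0]
        for t in range(times):                      # TF activity profiles S
            X = A * np.sqrt(W[:, t])[:, None]
            y = E[:, t] * np.sqrt(W[:, t])
            S[:, t] = np.linalg.lstsq(X, y, rcond=None)[0]
    return A, S
```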

    Surpassing the Theoretical 1-Norm Phase Transition in Compressive Sensing by Tuning the Smoothed L0 Algorithm

    This is a poster presented at ICASSP 2013 in Vancouver.
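
    The poster concerns the smoothed-L0 (SL0) algorithm for sparse recovery in compressive sensing. For context, the sketch below implements the standard SL0 baseline, not the tuned variant reported in the poster: the L0 count is replaced by a Gaussian surrogate whose width sigma is gradually decreased, alternating small gradient steps with projection back onto the constraint set {x : Ax = y}.

```python
# The standard smoothed-L0 (SL0) baseline, not the tuned variant of the poster. The
# L0 count is replaced by a Gaussian surrogate whose width sigma shrinks geometrically;
# each sigma level alternates a few gradient steps on the surrogate with projection
# back onto the affine constraint set {x : A x = y}.
import numpy as np

def sl0(A, y, sigma_min=1e-3, decay=0.7, inner=3, mu=2.0):
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                   # minimum-L2 starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            grad = x * np.exp(-x ** 2 / (2 * sigma ** 2))
            x = x - mu * grad                        # push small entries toward zero
            x = x - A_pinv @ (A @ x - y)             # re-project onto A x = y
        sigma *= decay
    return x
```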

    Bayesian integration of audio and visual information for multi-target tracking using a CB-member filter

    A new method is presented for the integration of audio and visual information in multiple-target tracking applications. The proposed approach uses a Bayesian filtering formulation and exploits multi-Bernoulli random finite set approximations. The work presented in this paper is the first principled Bayesian estimation approach to sensor fusion problems that involve intermittent sensory data (e.g. audio data for a person who only occasionally speaks). We have examined our method with case studies from the SPEVI database. The results show nearly perfect tracking of people, not only when they are silent but also when they are not visible to the camera (but speaking).
    Reza Hoseinnezhad, Ba-Ngu Vo, Ba-Tuong Vo, David Suter
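
    Stripped of the multi-Bernoulli machinery, the key fusion idea is that intermittent audio simply drops out of the measurement likelihood when it is absent. The sketch below shows this for a single target tracked with a particle filter; it only illustrates the principle, is not the CB-MeMBer filter itself, and the likelihood functions are hypothetical.

```python
# A deliberately simplified, single-target illustration of the fusion principle (the
# paper maintains a multi-Bernoulli random finite set over multiple targets): particle
# weights are updated with the visual likelihood on every frame, and with the audio
# likelihood only on frames where the person actually speaks, so intermittent audio
# drops out of the product instead of breaking the recursion. visual_lik and audio_lik
# are hypothetical likelihood functions.
import numpy as np

def update_weights(particles, weights, visual_lik, audio_lik, z_video, z_audio=None):
    lik = visual_lik(particles, z_video)             # p(video observation | state)
    if z_audio is not None:                          # audio present only while speaking
        lik = lik * audio_lik(particles, z_audio)    # p(audio DOA | state)
    w = weights * lik
    return w / np.sum(w)
```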

    New Negentropy Optimization Schemes for Blind Signal Extraction of Complex Valued Sources

    Blind signal extraction, an active topic in communication signal processing, aims to retrieve source signals through the optimization of contrast functions. Many contrasts based on higher-order statistics, such as kurtosis, are sensitive to outliers. To achieve robust results, nonlinear functions are therefore used as contrasts that approximate the negentropy criterion, a classical measure of non-Gaussianity. However, existing methods generally have a high computational cost, which leads us to address the problem of efficiently optimizing the contrast function. More precisely, we design a novel “reference-based” contrast function built on negentropy approximations and propose a new family of algorithms (Alg.1 and Alg.2) to maximize it. Simulations confirm the convergence of our method to a separating solution, which is also analyzed theoretically. We also validate the complexity analysis showing that Alg.2 has a much lower computational cost than Alg.1 and than existing optimization methods based on the negentropy criterion. Finally, experiments on the separation of single-sideband signals illustrate that our method has good prospects in real-world applications.
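
    For readers unfamiliar with negentropy-based extraction, the sketch below shows a generic one-unit fixed-point iteration with the log-cosh nonlinearity on pre-whitened, real-valued data. It is the standard baseline such methods build on, not the paper's reference-based Alg.1 or Alg.2, and the complex-valued case is omitted for brevity.

```python
# A generic sketch of negentropy-based blind extraction of one source: a standard
# one-unit fixed-point iteration with the log-cosh nonlinearity (g = tanh), not the
# paper's reference-based algorithms, and real-valued rather than complex. X is
# assumed pre-whitened, with shape (sensors, samples).
import numpy as np

def extract_one(X, n_iter=100, tol=1e-6):
    w = np.random.randn(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        u = w @ X
        g, g_prime = np.tanh(u), 1.0 - np.tanh(u) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w   # fixed-point update
        w_new /= np.linalg.norm(w_new)
        if np.abs(np.abs(w_new @ w) - 1.0) < tol:           # converged up to sign
            return w_new, w_new @ X
        w = w_new
    return w, w @ X
```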

    Does capsule quality matter? A comparison study between spherical microphone arrays using different types of omnidirectional capsules

    This study presents an objective comparison between two rigid spherical microphone arrays of exactly the same size that differ only in the type of omnidirectional capsules. An analysis of the simulations and of the encoding process is presented, and known limitations of spherical arrays are discussed, such as the degraded reconstruction of the spherical harmonics due to the size of the sphere and the size and number of the capsules.
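
    The encoding step referred to above projects the capsule pressure signals onto spherical harmonics. The sketch below shows the least-squares (pseudo-inverse) version of that projection under a hypothetical capsule layout; the rigid-sphere radial equalisation, where the size and aliasing limits mentioned in the abstract appear, is omitted.

```python
# A minimal sketch of the spherical-harmonic (ambisonic) encoding step under a
# hypothetical capsule layout: capsule pressures sampled on the sphere are projected
# onto spherical harmonics via the pseudo-inverse of the SH matrix evaluated at the
# capsule directions. The rigid-sphere radial equalisation is deliberately omitted.
import numpy as np
from scipy.special import sph_harm

def sh_encoder(azimuth, colatitude, order):
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            cols.append(sph_harm(m, n, azimuth, colatitude))  # complex SH per capsule
    Y = np.array(cols).T                              # capsules x (order + 1)**2
    return np.linalg.pinv(Y)                          # least-squares encoding matrix

# Usage (hypothetical arrays of capsule directions and signals):
# sh_signals = sh_encoder(az, col, order=3) @ capsule_signals
```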