
    Computation of mutual information from Hidden Markov Models.

    Understanding evolution at the sequence level is one of the major research visions of bioinformatics. To this end, several abstract models, such as Hidden Markov Models, and several quantitative measures, such as the mutual information, have been introduced, thoroughly investigated, and applied to concrete studies in molecular biology. With this contribution we undertake a first step towards merging these approaches (models and measures) for easy and immediate computation, e.g. for a database containing a large number of externally fitted models (such as PFAM). Being able to compute such measures is of paramount importance in data mining, model development, and model comparison. Here we describe how the mutual information of a homogeneous Hidden Markov Model can be computed efficiently, orders of magnitude faster than with a naive, straightforward approach. In addition, our algorithm avoids the sampling issues of real-world sequences, thus allowing for direct comparison of various models. We apply the method to genomic sequences and discuss its properties as well as convergence issues.
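
    The abstract does not spell out the efficient algorithm, but the quantity in question can be made concrete. Below is a minimal sketch, assuming the measure meant here is the pairwise mutual information I(X_t; X_{t+k}) between symbols emitted at distance k by a stationary, homogeneous HMM. It implements only the naive, straightforward computation that the paper reports improving upon, not the authors' faster algorithm; the matrix names A (state transition matrix), B (emission matrix), and the function names are illustrative assumptions.

```python
import numpy as np

def stationary_distribution(A):
    """Stationary distribution of the hidden chain: left eigenvector of A for eigenvalue 1."""
    vals, vecs = np.linalg.eig(A.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def mutual_information_lag_k(A, B, k):
    """Naive I(X_t; X_{t+k}) for a homogeneous HMM.

    A: (n_states, n_states) transition matrix, rows sum to 1.
    B: (n_states, n_symbols) emission matrix, rows sum to 1.
    """
    pi = stationary_distribution(A)
    Ak = np.linalg.matrix_power(A, k)
    joint_states = pi[:, None] * Ak          # P(S_t = i, S_{t+k} = j)
    joint_obs = B.T @ joint_states @ B       # P(X_t = a, X_{t+k} = b)
    marg = pi @ B                            # P(X_t = a) under stationarity
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = joint_obs / np.outer(marg, marg)
        terms = np.where(joint_obs > 0, joint_obs * np.log2(ratio), 0.0)
    return terms.sum()                       # mutual information in bits

# Toy example: a two-state HMM over a binary alphabet.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])
for k in (1, 2, 5, 20):
    print(k, mutual_information_lag_k(A, B, k))
```

    As k grows, the joint distribution factorizes and I(k) decays towards zero, which is where the convergence issues mentioned above become relevant; recomputing the matrix power for every lag is exactly the kind of redundant work a more efficient scheme would avoid.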