
    Intra-individual Variability of the Electrocardiogram: Assessment and exploitation in computerized ECG analysis

    Computer interpretation of the electrocardiogram (ECG) was one of the first applications of computers in health care. The first systems were developed in the early sixties by Pipberger and Caceres. In the last decades, computerized ECG analysis has become one of the most widespread computer applications for decision support in health care. For example, over 50 million ECGs were already analyzed by computer in the United States in 1988. Large research efforts have been made to bring the performance of these systems to a level acceptable for routine use. In an international cooperative study, called 'Common Standards for Quantitative Electrocardiography' (CSE), most present-day ECG computer programs were evaluated. This evaluation was done by comparing measurements and interpretations computed by these systems with those of a panel of experienced cardiologists on the one hand, and with a diagnosis based on clinical evidence, not involving the ECG, on the other. It was concluded that ECG analysis programs can assist clinicians in achieving more uniform and consistent interpretations of ECGs. However, continued testing and refinement was deemed necessary to enhance the performance of these systems. Despite their good diagnostic performance, ECG analysis programs suffer from a number of drawbacks that limit their practical utility. One of the most important shortcomings is their vulnerability to individual ECG variability. Identical ECG signals will result in identical measurements and interpretations, but small (and diagnostically inconsequential) differences between signals may result in an entirely different diagnostic interpretation. This variability can already be a problem when, for example, multiple ECGs of the same patient are interpreted that have been recorded only minutes apart. Unstable interpretations will not only considerably diminish the practical use of ECG computer programs, but users will also lose confidence in their performance. Furthermore, variation in the interpretation of similar ECGs suggests that the accuracy of these programs can still be improved.

    Rewriting and suppressing UMLS terms for improved biomedical term identification

    Background: Identification of terms is essential for biomedical text mining. We concentrate here on the use of vocabularies for term identification, specifically the Unified Medical Language System (UMLS). To make the UMLS more suitable for biomedical text mining we implemented and evaluated nine term rewrite and eight term suppression rules. The rules rely on UMLS properties that have been identified in previous work by others, together with an additional set of new properties discovered by our group during our work with the UMLS. Our work complements the earlier work in that we measure the impact of the different rules on the number of terms identified in a MEDLINE corpus. The number of uniquely identified terms and their frequency in MEDLINE were computed before and after applying the rules. The 50 most frequently found terms, together with a sample of 100 randomly selected terms, were evaluated for every rule.
    Results: Five of the nine rewrite rules were found to generate additional synonyms and spelling variants that correctly corresponded to the meaning of the original terms, and seven of the eight suppression rules were found to suppress only undesired terms. Using the five rewrite rules that passed our evaluation, we were able to identify 1,117,772 new occurrences of 14,784 rewritten terms in MEDLINE. Without rewriting, we recognized 651,268 terms belonging to 397,414 concepts; with rewriting, we recognized 666,053 terms belonging to 410,823 concepts, an increase of 2.8% in the number of terms and of 3.4% in the number of concepts recognized. Using the seven suppression rules, a total of 257,118 undesired terms were suppressed in the UMLS, notably decreasing its size. 7,397 terms were suppressed in the corpus.
    Conclusions: We recommend applying the five rewrite rules and seven suppression rules that passed our evaluation when the UMLS is to be used for biomedical term identification in MEDLINE. A software tool to apply these rules to the UMLS is freely available at http://biosemantics.org/casper.
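
    As an illustration of the rewrite-and-count procedure described above, the Python sketch below applies two toy rewrite rules to a couple of terms and counts whole-word occurrences in a tiny corpus. The rules, terms, and corpus are invented for the example; they are not the actual CASPER rules or the UMLS release format.

    # Minimal sketch, assuming made-up rewrite rules and a toy corpus;
    # not the actual CASPER rule set or UMLS data.
    import re
    from collections import Counter

    def rewrite_term(term):
        """Generate illustrative spelling variants / synonyms for a term."""
        variants = {term}
        if term.lower().endswith(", nos"):          # toy rule: drop a ", NOS" qualifier
            variants.add(term[:-5])
        variants.add(term.replace("'s ", " "))      # toy rule: strip a possessive
        return variants

    def count_occurrences(terms, corpus):
        """Count case-insensitive whole-word occurrences of each term in the corpus."""
        counts = Counter()
        for text in corpus:
            lowered = text.lower()
            for term in terms:
                counts[term] += len(re.findall(r"\b" + re.escape(term.lower()) + r"\b", lowered))
        return counts

    original = {"Alzheimer's disease", "Pneumonia, NOS"}
    rewritten = set().union(*(rewrite_term(t) for t in original))
    corpus = ["Alzheimer disease was studied ...", "The patient had pneumonia."]
    print(count_occurrences(rewritten, corpus))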

    Thesaurus-based disambiguation of gene symbols

    BACKGROUND: Massive text mining of the biological literature holds great promise for relating disparate information and discovering new knowledge. However, disambiguation of gene symbols is a major bottleneck. RESULTS: We developed a simple thesaurus-based disambiguation algorithm that can operate with very little training data. The thesaurus comprises the information from five human genetic databases and MeSH. The extent of the homonym problem for human gene symbols is shown to be substantial (33% of the genes in our combined thesaurus had one or more ambiguous symbols), not only because one symbol can refer to multiple genes, but also because a gene symbol can have many non-gene meanings. A test set of 52,529 MEDLINE abstracts, containing 690 ambiguous human gene symbols taken from OMIM, was automatically generated. Overall accuracy of the disambiguation algorithm was up to 92.7% on the test set. CONCLUSION: The ambiguity of human gene symbols is substantial, not only because one symbol may denote multiple genes but particularly because many symbols have other, non-gene meanings. The proposed disambiguation approach resolves most ambiguities in our test set with high accuracy, including the important gene/not-a-gene decisions. The algorithm is fast and scalable, enabling gene-symbol disambiguation in massive text mining applications.
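
    The Python sketch below illustrates, under simplifying assumptions, the general idea of thesaurus-based disambiguation: an ambiguous symbol is assigned to the candidate gene whose thesaurus-derived context words overlap most with the text around the mention. The toy thesaurus, the word-overlap scoring and the example sentence are made up for illustration and do not reproduce the authors' actual model or data.

    # Hedged sketch: a simple word-overlap heuristic, not the paper's exact algorithm.
    def disambiguate(symbol, text, thesaurus):
        """Return the gene whose thesaurus context shares the most words with the text,
        or None if no candidate has supporting context (i.e. possibly not a gene)."""
        text_words = set(text.lower().split())
        best_gene, best_score = None, 0
        for gene_id, context_words in thesaurus.get(symbol, {}).items():
            score = len(text_words & context_words)
            if score > best_score:
                best_gene, best_score = gene_id, score
        return best_gene

    # Toy thesaurus: the ambiguous symbol "PSA" mapped to two gene senses for illustration.
    thesaurus = {
        "PSA": {
            "KLK3": {"prostate", "antigen", "kallikrein", "serum"},
            "PSAT1": {"phosphoserine", "aminotransferase", "serine"},
        }
    }
    print(disambiguate("PSA", "serum prostate specific antigen levels were elevated", thesaurus))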

    Ambiguity of human gene symbols in LocusLink and MEDLINE: creating an inventory and a disambiguation test collection

    Genes are discovered almost on a daily basis and new names have to be found. Although there are guidelines for gene nomenclature, the naming process is highly creative. Human genes are often named with a gene symbol and a longer, more descriptive term; the short form is very often an abbreviation of the long form. Abbreviations in biomedical language are highly ambiguous, i.e., one gene symbol often refers to more than one gene. Using an existing abbreviation expansion algorithm, we explore MEDLINE for the use of human gene symbols derived from LocusLink. It turns out that just over 40% of these symbols occur in MEDLINE; however, many of these occurrences are not related to genes. In the process of making this inventory, a disambiguation test collection is constructed automatically.
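
    One common way to spot such symbol usages is to look for a parenthesized short form that directly follows a long form spelling it out. The sketch below uses a simplified subsequence check in that spirit; it is only an illustration and not the specific abbreviation expansion algorithm used in this work.

    # Simplified illustration: does a parenthesized symbol follow a long form
    # that contains the symbol's characters in order? (Loose heuristic only.)
    import re

    def looks_like_expanded_symbol(sentence, symbol):
        match = re.search(r"([A-Za-z][\w\- ]+)\s*\(" + re.escape(symbol) + r"\)", sentence)
        if not match:
            return False
        long_form = iter(match.group(1).lower())
        # every character of the symbol must occur, in order, in the preceding phrase
        return all(ch.lower() in long_form for ch in symbol)

    print(looks_like_expanded_symbol("Mutations in breast cancer 1 (BRCA1) were frequent.", "BRCA1"))
    print(looks_like_expanded_symbol("The PSA level (4.2 ng/ml) was high.", "PSA"))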

    Using contextual queries

    Search engines generally treat search requests in isolation. The results for a given query are identical, regardless of the user or the context in which the request was made. An approach is demonstrated that explores implicit contexts obtained from a document the user is reading. The approach inserts functionality into an original (web) document to directly activate context-driven queries that yield related articles from various information sources.
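
    The sketch below shows, under simplifying assumptions, how an implicit context might be turned into a query: frequent content words are extracted from the text the user is reading and joined into a query string that could be sent to an external search service. The stopword list, tokenization and function names are illustrative; the rewriting of the web document itself is not shown.

    # Minimal sketch of deriving a context-driven query from the document being read.
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "of", "and", "in", "for", "to", "is", "are", "with", "that"}

    def contextual_query(document_text, n_terms=5):
        """Build a query string from the most frequent content words of the document."""
        words = [w.lower() for w in re.findall(r"[A-Za-z]{3,}", document_text)]
        content = [w for w in words if w not in STOPWORDS]
        return " ".join(term for term, _ in Counter(content).most_common(n_terms))

    page = "Computerized analysis of the electrocardiogram (ECG) suffers from ECG variability ..."
    print(contextual_query(page))   # e.g. "ecg computerized analysis electrocardiogram suffers"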

    Clustering More than Two Million Biomedical Publications: Comparing the Accuracies of Nine Text-Based Similarity Approaches

    We investigate the accuracy of different similarity approaches for clustering over two million biomedical documents. Clustering large sets of text documents is important for a variety of information needs and applications, such as collection management and navigation, summary and analysis. The few comparisons of clustering results from different similarity approaches have focused on small literature sets and have given conflicting results. Our study was designed to seek a robust answer to the question of which similarity approach would generate the most coherent clusters of a biomedical literature set of over two million documents.
    We used a corpus of 2.15 million recent (2004-2008) records from MEDLINE and generated nine different document-document similarity matrices from information extracted from their bibliographic records, including titles, abstracts and subject headings. The nine approaches consisted of five different analytical techniques with two data sources. The five analytical techniques are cosine similarity using term frequency-inverse document frequency vectors (tf-idf cosine), latent semantic analysis (LSA), topic modeling, and two Poisson-based language models, BM25 and PMRA (PubMed Related Articles). The two data sources were (a) MeSH subject headings and (b) words from titles and abstracts. Each similarity matrix was filtered to keep the top-n highest similarities per document and then clustered using a combination of graph layout and average-link clustering. Cluster results from the nine similarity approaches were compared using (1) within-cluster textual coherence based on the Jensen-Shannon divergence, and (2) two concentration measures based on grant-to-article linkages indexed in MEDLINE.
    PubMed's own related-article approach (PMRA) generated the most coherent and most concentrated cluster solution of the nine text-based similarity approaches tested, followed closely by the BM25 approach using titles and abstracts. Approaches using only MeSH subject headings were not competitive with those based on titles and abstracts.
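
    To make one of the compared approaches concrete, the sketch below computes tf-idf cosine similarities between a few toy documents and keeps only the top-n similarities per document, mirroring the filtering step described above. The toy corpus, tokenization and value of n are assumptions; the graph-layout and average-link clustering steps are omitted.

    # Illustrative tf-idf cosine similarity with top-n filtering; not the study's exact pipeline.
    import math
    import re
    from collections import Counter

    def tfidf_vectors(docs):
        """Turn each document into a unit-length tf-idf vector (term -> weight)."""
        tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
        df = Counter(term for toks in tokenized for term in set(toks))
        vectors = []
        for toks in tokenized:
            tf = Counter(toks)
            vec = {t: tf[t] * math.log(len(docs) / df[t]) for t in tf}
            norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
            vectors.append({t: v / norm for t, v in vec.items()})
        return vectors

    def top_n_similarities(vectors, n=2):
        """For each document, keep only the n highest cosine similarities to other documents."""
        result = []
        for i, vi in enumerate(vectors):
            row = [(j, sum(w * vj.get(t, 0.0) for t, w in vi.items()))
                   for j, vj in enumerate(vectors) if j != i]
            row.sort(key=lambda pair: pair[1], reverse=True)
            result.append(row[:n])
        return result

    docs = ["gene expression in cancer", "cancer gene networks", "ecg signal analysis"]
    print(top_n_similarities(tfidf_vectors(docs)))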

    Biosignal Processing Methods
