
    A random forest system combination approach for error detection in digital dictionaries

    When digitizing a print bilingual dictionary, whether via optical character recognition or manual entry, it is inevitable that errors are introduced into the electronic version that is created. We investigate automating the process of detecting errors in an XML representation of a digitized print dictionary using a hybrid approach that combines rule-based, feature-based, and language model-based methods. We investigate combining methods and show that using random forests is a promising approach. We find that in isolation, unsupervised methods rival the performance of supervised methods. Random forests typically require training data, so we investigate how we can apply random forests to combine individual base methods that are themselves unsupervised without requiring large amounts of training data. Experiments reveal empirically that a relatively small amount of data is sufficient and can potentially be further reduced through specific selection criteria.

    Comment: 9 pages, 7 figures, 10 tables; appeared in Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, April 201
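The combination idea described above can be sketched as follows. This is a minimal, hypothetical illustration (the scores, feature names, and data here are simulated, not the paper's): each unsupervised base method emits a suspicion score per dictionary entry, and a random forest trained on a small labelled subset combines those scores into a final error prediction.

```python
# Hypothetical sketch: combine scores from several unsupervised error
# detectors with a random forest, using only a small labelled sample.
# All data below is simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulated scores from three unsupervised base methods
# (rule-based, feature-based, language-model-based) for 1000 entries.
n = 1000
labels = rng.random(n) < 0.1            # ~10% of entries contain errors
scores = np.column_stack(
    [rng.normal(loc=labels * 1.5, scale=1.0) for _ in range(3)]
)

# Train the combiner on a small labelled subset, echoing the finding
# that relatively little training data suffices.
train = rng.random(n) < 0.05            # ~5% of entries labelled
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(scores[train], labels[train])

pred = clf.predict(scores[~train])      # flag likely-erroneous entries
print("flagged entries:", int(pred.sum()))
```

The base methods need no labels at all; supervision is spent only on the small combiner training set.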

    Detecting Structural Irregularity in Electronic Dictionaries Using Language Modeling

    Dictionaries are often developed using tools that save to Extensible Markup Language (XML)-based standards. These standards often allow high-level repeating elements to represent lexical entries, and utilize descendants of these repeating elements to represent the structure within each lexical entry, in the form of an XML tree. In many cases, dictionaries are published that have errors and inconsistencies that are expensive to find manually. This paper discusses a method for dictionary writers to quickly audit structural regularity across entries in a dictionary by using statistical language modeling. The approach learns the patterns of XML nodes that could occur within an XML tree, and then calculates the probability of each XML tree in the dictionary against these patterns to look for entries that diverge from the norm.

    This material is based upon work supported, in whole or in part, with funding from the United States Government. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the University of Maryland, College Park and/or any agency or entity of the United States Government. Nothing in this report is intended to be and shall not be treated or construed as an endorsement or recommendation by the University of Maryland, United States Government, or the authors of the product, process, or service that is the subject of this report. No one may use any information contained or based on this report in advertisements or promotional materials related to any company product, process, or service or in support of other commercial purposes.
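A toy version of this structural audit can be sketched with a unigram model over each entry's child-tag pattern. The XML sample, tag names, and scoring below are illustrative assumptions, not the paper's actual model: patterns are counted across the whole dictionary, and entries whose structure has low probability stand out as irregular.

```python
# Hypothetical sketch: score each dictionary entry's XML structure
# against patterns learned from the dictionary itself; entries with
# low log-probability diverge from the norm. The sample XML and tag
# names are invented for illustration.
import math
import xml.etree.ElementTree as ET
from collections import Counter

xml = """<dictionary>
  <entry><headword/><pos/><sense/></entry>
  <entry><headword/><pos/><sense/></entry>
  <entry><headword/><sense/></entry>
  <entry><headword/><pos/><sense/></entry>
</dictionary>"""

root = ET.fromstring(xml)

# Learn how often each child-tag sequence occurs across entries.
patterns = Counter(tuple(child.tag for child in entry) for entry in root)
total = sum(patterns.values())

# Score every entry; lower log-probability means more unusual structure.
for i, entry in enumerate(root):
    p = patterns[tuple(child.tag for child in entry)] / total
    print(i, round(math.log(p), 3))
```

Here entry 2, which is missing a `pos` element, receives a noticeably lower score than the other three entries.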

    Correcting Errors in Digital Lexicographic Resources Using a Dictionary Manipulation Language

    We describe a paradigm for combining manual and automatic error correction of noisy structured lexicographic data. Modifications to the structure and underlying text of the lexicographic data are expressed in a simple, interpreted programming language. Dictionary Manipulation Language (DML) commands identify nodes by unique identifiers, and manipulations are performed using simple commands such as create, move, set text, etc. Corrected lexicons are produced by applying sequences of DML commands to the source version of the lexicon. DML commands can be written manually to repair one-off errors or generated automatically to correct recurring problems. We discuss advantages of the paradigm for the task of editing digital bilingual dictionaries.
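The command-script paradigm can be sketched as a tiny interpreter. The command names (`create`, `move`, `set_text`), id scheme, and sample lexicon below are assumptions for illustration only, not DML's actual syntax: nodes are indexed by unique identifiers and a sequence of commands is applied to the source lexicon.

```python
# Hypothetical sketch of a DML-style interpreter. Command names and the
# sample lexicon are invented; the real DML syntax may differ.
import xml.etree.ElementTree as ET

def apply_dml(root, commands):
    """Apply a sequence of (command, *args) tuples to an XML lexicon."""
    index = {n.get("id"): n for n in root.iter() if n.get("id")}
    parents = {child: parent for parent in root.iter() for child in parent}
    for cmd, *args in commands:
        if cmd == "set_text":                   # set a node's text content
            node_id, text = args
            index[node_id].text = text
        elif cmd == "create":                   # add a child under a node
            parent_id, tag, new_id = args
            child = ET.SubElement(index[parent_id], tag, id=new_id)
            index[new_id] = child
            parents[child] = index[parent_id]
        elif cmd == "move":                     # reattach a node elsewhere
            node_id, new_parent_id = args
            node = index[node_id]
            parents[node].remove(node)
            index[new_parent_id].append(node)
            parents[node] = index[new_parent_id]
    return root

lexicon = ET.fromstring('<lexicon id="L"><entry id="e1"/></lexicon>')
apply_dml(lexicon, [
    ("create", "e1", "headword", "h1"),
    ("set_text", "h1", "kitab"),
])
print(ET.tostring(lexicon, encoding="unicode"))
```

Because corrections are scripts rather than destructive edits, the same fix can be replayed against a fresh copy of the source lexicon, and automatically generated scripts can handle recurring problems.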

    Weekends affect mortality risk and chance of discharge in critically ill patients: a retrospective study in the Austrian registry for intensive care.

    BACKGROUND: In this study, we primarily investigated whether ICU admission or ICU stay at weekends (Saturday and Sunday) is associated with a different risk of ICU mortality or chance of ICU discharge than ICU admission or ICU stay on weekdays (Monday to Friday). Secondarily, we analysed whether weekend ICU admission or ICU stay influences risk of hospital mortality or chance of hospital discharge. METHODS: A retrospective study was performed for all adult patients admitted to 119 ICUs participating in the benchmarking project of the Austrian Centre for Documentation and Quality Assurance in Intensive Care (ASDI) between 2012 and 2015. Readmissions to the ICU during the same hospital stay were excluded. RESULTS: In a multivariable competing risk analysis, a strong weekend effect was observed. Patients admitted to ICUs on Saturday or Sunday had a higher mortality risk after adjustment for severity of illness by Simplified Acute Physiology Score (SAPS) 3, year, month of the year, type of admission, ICU, and weekday of death or discharge. Hazard ratios (95% confidence interval) for death in the ICU following admission on a Saturday or Sunday compared with Wednesday were 1.15 (1.08-1.23) and 1.11 (1.03-1.18), respectively. Lower hazard ratios were observed for dying on a Saturday (0.93 (0.87-1.00)) or Sunday (0.85 (0.80-0.91)) compared with Wednesday. This is probably related to the reduced chance of being discharged from the ICU at the weekend (0.63 (0.62-0.64) for Saturday and 0.56 (0.55-0.57) for Sunday). Similar results were found for hospital mortality and hospital discharge following ICU admission. CONCLUSIONS: Patients admitted to ICUs at weekends are at increased risk of death in both the ICU and the hospital, even after rigorous adjustment for severity of illness. Conversely, death in the ICU and discharge from the ICU are significantly less likely at weekends.

    Multilevel Splitting for Estimating Rare Event Probabilities

    We analyze the performance of a splitting technique for the estimation of rare event probabilities by simulation. A straightforward estimator of the probability of an event evaluates the proportion of simulated paths on which the event occurs. If the event is rare, even a large number of paths may produce little information about its probability using this approach. The method we study reinforces promising paths at intermediate thresholds by splitting them into subpaths which then evolve independently. If implemented appropriately, this has the effect of dedicating a greater fraction of the computational effort to informative runs. Under some assumptions about the simulated process, we identify the optimal degree of splitting at each threshold as the rarity of the event increases: it should be set so that the expected number of subpaths reaching each threshold remains roughly constant. Thus implemented, the method is provably effective for rare event simulation. These results follow fr..
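The splitting scheme described above can be sketched on a toy process. Everything here is an illustrative assumption (the process, thresholds, and splitting factor are invented): a random walk with negative drift rarely reaches a high level, and paths that cross each intermediate threshold are split into independent subpaths, with each final success down-weighted by the total splitting factor.

```python
# Hypothetical sketch: multilevel splitting to estimate the probability
# that a random walk with negative drift reaches level 3.0 before
# falling below 0. The process and parameters are invented examples.
import random

random.seed(0)
LEVELS = [1.0, 2.0, 3.0]   # intermediate thresholds; the last is the rare event
SPLIT = 5                  # subpaths spawned at each intermediate threshold

def advance(x, level):
    """Run the walk from x until it crosses `level` (success) or drops below 0."""
    while 0.0 <= x < level:
        x += random.gauss(-0.2, 1.0)   # negative drift makes the event rare
    return x if x >= level else None

def estimate(n_paths=200):
    successes = 0
    for _ in range(n_paths):
        states = [0.5]                 # each path starts halfway to level 1
        for i, level in enumerate(LEVELS):
            # keep only subpaths that reach the next threshold
            states = [y for x in states
                      for y in [advance(x, level)] if y is not None]
            if i < len(LEVELS) - 1:
                # split each survivor into SPLIT independent continuations
                states = [x for x in states for _ in range(SPLIT)]
        successes += len(states)
    # each final success carries weight SPLIT**-(number of splitting stages)
    return successes / (n_paths * SPLIT ** (len(LEVELS) - 1))

print("estimated probability:", estimate())
```

A plain Monte Carlo estimator with the same budget would see very few successes; splitting keeps the expected number of subpaths at each threshold roughly constant, which is the regime the abstract identifies as optimal.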