62,022 research outputs found

    Localization Recall Precision (LRP): A New Performance Metric for Object Detection

    Average precision (AP), the area under the recall-precision (RP) curve, is the standard performance measure for object detection. Despite its wide acceptance, it has a number of shortcomings, the most important of which are (i) the inability to distinguish very different RP curves, and (ii) the lack of directly measuring bounding box localization accuracy. In this paper, we propose 'Localization Recall Precision (LRP) Error', a new metric which we specifically designed for object detection. LRP Error is composed of three components related to localization, false negative (FN) rate and false positive (FP) rate. Based on LRP, we introduce the 'Optimal LRP', the minimum achievable LRP error representing the best achievable configuration of the detector in terms of recall-precision and the tightness of the boxes. In contrast to AP, which considers precisions over the entire recall domain, Optimal LRP determines the 'best' confidence score threshold for a class, which balances the trade-off between localization and recall-precision. In our experiments, we show that, for state-of-the-art (SOTA) object detectors, Optimal LRP provides richer and more discriminative information than AP. We also demonstrate that the best confidence score thresholds vary significantly among classes and detectors. Moreover, we present LRP results of a simple online video object detector which uses a SOTA still image object detector and show that the class-specific optimized thresholds increase accuracy compared with the common approach of using a general threshold for all classes. At https://github.com/cancam/LRP we provide the source code that can compute LRP for the PASCAL VOC and MSCOCO datasets. Our source code can easily be adapted to other datasets as well. Comment: to appear in ECCV 2018
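
    The abstract does not spell out the exact formulas, so the following is a minimal Python sketch of an LRP-style error and a threshold sweep for an Optimal-LRP-style score, assuming matched detections with IoUs for true positives plus FP/FN counts. The `matcher` helper, the detection dictionaries, and the normalization are illustrative assumptions, not the exact ECCV 2018 definition.

```python
def lrp_error(tp_ious, n_fp, n_fn, tau=0.5):
    """Simplified LRP-style error: combines the localization error of true
    positives ((1 - IoU) normalized by (1 - tau)) with the FP and FN counts.
    Illustrative sketch only, not the exact formulation from the paper."""
    n_tp = len(tp_ious)
    total = n_tp + n_fp + n_fn
    if total == 0:
        return 0.0
    loc_term = sum((1.0 - iou) / (1.0 - tau) for iou in tp_ious)
    return (loc_term + n_fp + n_fn) / total


def optimal_lrp(detections, matcher, thresholds):
    """Sweep confidence-score thresholds and keep the minimum LRP-style error.
    `matcher(kept)` is an assumed helper returning (tp_ious, n_fp, n_fn)
    for the detections kept at a given threshold."""
    best = float("inf")
    for t in thresholds:
        kept = [d for d in detections if d["score"] >= t]
        tp_ious, n_fp, n_fn = matcher(kept)
        best = min(best, lrp_error(tp_ious, n_fp, n_fn))
    return best
```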

    Reproducibility of physiological and performance measures from a squash-specific fitness test

    Purpose: We examined the reproducibility of performance and physiological responses on a squash-specific incremental test. Methods: Eight trained squash players habituated to procedures with two prior visits performed an incremental squash test to volitional exhaustion on two occasions 7 days apart. Breath-by-breath oxygen uptake (Vo2) and heart rate were determined continuously using a portable telemetric system. Blood lactate concentration at the end of 4-min stages was assessed to determine lactate threshold. Once the lactate threshold was determined, test speed was increased every minute until volitional exhaustion for assessment of maximal oxygen uptake (Vo2max), maximum heart rate (HRmax), and performance time. Economy was taken as the 60-s mean of Vo2 in the final minute of the fourth stage (below lactate threshold for all participants). Typical error of measurement (TEM) with associated 90% confidence intervals, limits of agreement, paired sample t tests, and least products regression were used to assess the reproducibility of scores. Results: Performance time (TEM 27 s, 4%, 90% CI 19 to 49 s), Vo2max (TEM 2.4 mL·kg−1·min−1, 4.7%, 90% CI 1.7 to 4.3 mL·kg−1·min−1), maximum heart rate (TEM 2 beats·min−1, 1.3%, 90% CI 2 to 4 beats·min−1), and economy (TEM 1.6 mL·kg−1·min−1, 4.1%, 90% CI 1.1 to 2.8 mL·kg−1·min−1) were reproducible. Conclusions: The results suggest that endurance performance and physiological responses to a squash-specific fitness test are reproducible.
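
    As a rough illustration of the reliability statistic quoted above, here is a minimal Python sketch of the typical error of measurement (TEM) computed from paired test-retest scores as the standard deviation of the difference scores divided by the square root of 2. The sample values are hypothetical, and the authors' percentage TEMs may have been derived differently (e.g. from log-transformed data).

```python
import math
from statistics import mean, stdev

def typical_error(test1, test2):
    """Typical error of measurement (TEM) from paired test-retest scores:
    the standard deviation of the difference scores divided by sqrt(2).
    Also returns TEM as a percentage of the grand mean."""
    diffs = [b - a for a, b in zip(test1, test2)]
    tem = stdev(diffs) / math.sqrt(2)
    grand_mean = mean(test1 + test2)
    return tem, 100.0 * tem / grand_mean

# Hypothetical Vo2max values (mL.kg-1.min-1) from two visits of 8 players
visit1 = [51.2, 48.7, 53.0, 49.5, 55.1, 50.3, 47.8, 52.4]
visit2 = [52.0, 47.9, 54.2, 50.1, 53.8, 51.0, 48.5, 51.6]
tem, tem_pct = typical_error(visit1, visit2)
print(f"TEM = {tem:.1f} mL.kg-1.min-1 ({tem_pct:.1f}%)")
```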

    Automated and Interpretable Patient ECG Profiles for Disease Detection, Tracking, and Discovery

    The electrocardiogram or ECG has been in use for over 100 years and remains the most widely performed diagnostic test to characterize cardiac structure and electrical activity. We hypothesized that parallel advances in computing power, innovations in machine learning algorithms, and availability of large-scale digitized ECG data would enable extending the utility of the ECG beyond its current limitations, while at the same time preserving interpretability, which is fundamental to medical decision-making. We identified 36,186 ECGs from the UCSF database that were 1) in normal sinus rhythm and 2) would enable training of specific models for estimation of cardiac structure or function or detection of disease. We derived a novel model for ECG segmentation using convolutional neural networks (CNN) and Hidden Markov Models (HMM) and evaluated its output by comparing electrical interval estimates to 141,864 measurements from the clinical workflow. We built a 725-element patient-level ECG profile using downsampled segmentation data and trained machine learning models to estimate left ventricular mass, left atrial volume, mitral annulus e' and to detect and track four diseases: pulmonary arterial hypertension (PAH), hypertrophic cardiomyopathy (HCM), cardiac amyloid (CA), and mitral valve prolapse (MVP). CNN-HMM derived ECG segmentation agreed with clinical estimates, with median absolute deviations (MAD) as a fraction of observed value of 0.6% for heart rate and 4% for QT interval. Patient-level ECG profiles enabled quantitative estimates of left ventricular and mitral annulus e' velocity with good discrimination in binary classification models of left ventricular hypertrophy and diastolic function. Models for disease detection ranged from an AUROC of 0.94 down to 0.77 for MVP. Top-ranked variables for all models included known ECG characteristics along with novel predictors of these traits/diseases. Comment: 13 pages, 6 figures, 1 Table + Supplement
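
    As a small illustration of the agreement statistic quoted above (median absolute deviation as a fraction of the observed value), here is a hedged Python sketch. The array names, the elementwise normalization by the clinical value, and the sample QT intervals are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def mad_fraction(model_estimates, clinical_values):
    """Median absolute deviation between model-derived interval estimates and
    clinical measurements, expressed as a fraction of the observed (clinical)
    value. Assumed interpretation of the agreement statistic in the abstract."""
    model = np.asarray(model_estimates, dtype=float)
    clinical = np.asarray(clinical_values, dtype=float)
    return float(np.median(np.abs(model - clinical) / clinical))

# Hypothetical QT-interval comparison (milliseconds)
qt_model = [402.0, 388.0, 415.0, 430.0]
qt_clinical = [410.0, 395.0, 400.0, 441.0]
print(f"QT MAD fraction: {mad_fraction(qt_model, qt_clinical):.1%}")
```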

    Coordination Matters: Interpersonal Synchrony Influences Collaborative Problem-Solving

    The authors thank Martha von Werthern and Caitlin Taylor for their assistance with data collection, Cathy Macpherson for her assistance with the preparation of the manuscript, and Mike Richardson, Alex Paxton, and Rick Dale for providing MATLAB code to assist with data analysis. The research was funded by the British Academy (SG131613). Peer reviewed. Publisher PDF.

    Intention Tremor and Deficits of Sensory Feedback Control in Multiple Sclerosis: a Pilot Study

    Background: Intention tremor and dysmetria are leading causes of upper extremity disability in Multiple Sclerosis (MS). The development of effective therapies to reduce tremor and dysmetria is hampered by insufficient understanding of how the distributed, multi-focal lesions associated with MS impact sensorimotor control in the brain. Here we describe a systems-level approach to characterizing sensorimotor control and use this approach to examine how sensory and motor processes are differentially impacted by MS. Methods: Eight subjects with MS and eight age- and gender-matched healthy control subjects performed visually-guided flexion/extension tasks about the elbow to characterize a sensory feedback control model that includes three sensory feedback pathways (one for vision, another for proprioception and a third providing an internal prediction of the sensory consequences of action). The model allows us to characterize impairments in sensory feedback control that contributed to each MS subject's tremor. Results: Models derived from MS subject performance differed from those obtained for control subjects in two ways. First, subjects with MS exhibited markedly increased visual feedback delays, which were uncompensated by internal adaptive mechanisms; stabilization performance in individuals with the longest delays differed most from control subject performance. Second, subjects with MS exhibited misestimates of arm dynamics in a way that was correlated with tremor power. Subject-specific models accurately predicted kinematic performance in a reach and hold task for neurologically-intact control subjects, while simulated performance of MS patients had shorter movement intervals and larger endpoint errors than actual subject responses. This difference between simulated and actual performance is consistent with a strategic compensatory trade-off of movement speed for endpoint accuracy. Conclusions: Our results suggest that tremor and dysmetria may be caused by limitations in the brain's ability to adapt sensory feedback mechanisms to compensate for increases in visual information processing time, as well as by errors in compensatory adaptations of internal estimates of arm dynamics.
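
    To make the role of uncompensated visual feedback delay concrete, here is a toy Python sketch of a single delayed feedback loop stabilizing an elbow angle: with a short delay the angle settles, while a long uncompensated delay produces growing oscillation. The gain, time step, and first-order dynamics are illustrative assumptions, deliberately much simpler than the authors' three-pathway model.

```python
def simulate_delayed_feedback(delay_s, gain=5.0, duration_s=4.0, dt=0.01):
    """Toy single-loop illustration: a first-order controller drives the elbow
    angle toward 0 rad but acts on the angle observed `delay_s` seconds ago.
    Not the authors' model; just a sketch of delay-induced instability."""
    delay_steps = int(round(delay_s / dt))
    angle = 1.0                                     # start 1 rad from target
    history = [angle] * (delay_steps + 1)           # buffer of past angles
    trajectory = []
    for _ in range(int(duration_s / dt)):
        delayed_angle = history[-(delay_steps + 1)]  # what vision reports now
        angle += -gain * delayed_angle * dt          # corrective update
        history.append(angle)
        trajectory.append(angle)
    return trajectory

stable = simulate_delayed_feedback(delay_s=0.05)    # short delay: settles
unstable = simulate_delayed_feedback(delay_s=0.40)  # long delay: oscillates and grows
```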

    Performance Following: Real-Time Prediction of Musical Sequences Without a Score

    © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

    Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best performing model reduces error rate by 50% compared to 100 ms uniform dwell time while maintaining a similar response time. It reduces response time by 60% compared to 300 ms uniform dwell time while maintaining a similar error rate. Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on 30 March 2018, available online: http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of the final published article, please access: https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
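
    As an illustration of the idea of assigning shorter dwell times to more likely hyperlinks, here is a hedged Python sketch that maps estimated selection probabilities to per-link dwell times within fixed bounds. The linear mapping and the 100-300 ms bounds are assumptions for illustration; the paper evaluates several heuristic and probabilistic schemes rather than this exact rule.

```python
def dwell_times(link_probs, min_dwell=0.1, max_dwell=0.3):
    """Assign per-hyperlink dwell times (seconds) from estimated selection
    probabilities: likely links get dwell times near `min_dwell`, unlikely
    links near `max_dwell`. Illustrative mapping, not the paper's model."""
    if not link_probs:
        return {}
    p_max = max(link_probs.values())
    times = {}
    for link, p in link_probs.items():
        # scale probability relative to the most likely link, then
        # interpolate between the long and short dwell times
        rel = p / p_max if p_max > 0 else 0.0
        times[link] = max_dwell - rel * (max_dwell - min_dwell)
    return times

# Hypothetical likelihoods inferred from a gaze-behavior model
probs = {"news": 0.50, "sports": 0.30, "archive": 0.15, "legal": 0.05}
print(dwell_times(probs))
```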

    Selective phenotyping, entropy reduction, and the mastermind game.

    BACKGROUND: With the advance of genome sequencing technologies, phenotyping, rather than genotyping, is becoming the most expensive task when mapping genetic traits. The need for efficient selective phenotyping strategies, i.e. methods to select a subset of genotyped individuals for phenotyping, therefore increases. Current methods have focused either on improving the detection of causative genetic variants or their precise genomic location separately. RESULTS: Here we recognize selective phenotyping as a Bayesian model discrimination problem and introduce SPARE (Selective Phenotyping Approach by Reduction of Entropy). Unlike previous methods, SPARE can integrate the information of previously phenotyped individuals, thereby enabling an efficient incremental strategy. The effective performance of SPARE is demonstrated on simulated data as well as on an experimental yeast dataset. CONCLUSIONS: Using entropy reduction as an objective criterion gives a natural way to tackle both issues of detection and localization simultaneously and to integrate intermediate phenotypic data. We foresee entropy-based strategies as a fruitful research direction for selective phenotyping.
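
    To sketch what an entropy-reduction criterion can look like in code, the following Python snippet greedily picks the next individual to phenotype by minimizing the expected posterior entropy over competing genetic models. The data structures and discrete phenotype outcomes are assumptions in the spirit of SPARE, not its actual implementation.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def pick_next_individual(model_posterior, pheno_likelihood, candidates, outcomes):
    """Greedy entropy-reduction step (assumed data layout):
    model_posterior[m] = P(model m | data so far),
    pheno_likelihood[m][i][y] = P(phenotype y for individual i | model m).
    Returns the candidate whose phenotype is expected to leave the lowest
    posterior entropy over the competing models."""
    best, best_h = None, float("inf")
    for i in candidates:
        expected_h = 0.0
        for y in outcomes:
            # joint probability of each model with outcome y, then normalize
            joint = {m: model_posterior[m] * pheno_likelihood[m][i][y]
                     for m in model_posterior}
            p_y = sum(joint.values())
            if p_y == 0:
                continue
            posterior = [v / p_y for v in joint.values()]
            expected_h += p_y * entropy(posterior)
        if expected_h < best_h:
            best, best_h = i, expected_h
    return best
```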