    Multisensory task demands temporally extend the causal requirement for visual cortex in perception

    Primary sensory areas constitute crucial nodes during perceptual decision making. However, it remains unclear to what extent they mainly constitute a feedforward processing step or are instead continuously engaged in a recurrent network together with higher-order areas. We found that the temporal window in which primary visual cortex is required for the detection of identical visual stimuli was extended when task demands were increased via an additional sensory modality that had to be monitored. Late-onset optogenetic inactivation preserved bottom-up, early-onset responses that faithfully encoded stimulus features, and impaired detection only if it preceded a late, report-related phase of the cortical response. Increased task demands were marked by longer reaction times, and the effect of late optogenetic inactivation scaled with reaction time. Thus, independently of visual stimulus complexity, multisensory task demands determine the temporal requirement for ongoing sensory-related activity in V1, which overlaps with report-related activity.

    Cluster consistency: Simple yet effective robust learning algorithm on large-scale photoplethysmography for atrial fibrillation detection in the presence of real-world label noise

    Obtaining large-scale, well-annotated data is always a daunting challenge, especially in the medical research domain, because of the shortage of domain experts. Instead of relying on human annotation, in this work we use the alarm information generated by bedside monitors to obtain pseudo labels for the concurrent photoplethysmography (PPG) signal. With this strategy we end up with over 8 million 30-second PPG segments. To handle the label noise caused by false alarms, we propose cluster consistency, which uses an unsupervised auto-encoder (hence not subject to label noise) to cluster training samples into a finite number of clusters. The learned cluster membership is then used in the subsequent supervised learning phase to force samples in the same cluster to lie close together in the latent space while samples in different clusters are pushed apart. In our experiments we compare against state-of-the-art algorithms and test on external datasets; the results show the superiority of our method in both classification performance and efficiency.
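
    As a minimal sketch of the cluster-consistency idea, the hypothetical PyTorch loss below assumes that cluster ids have already been assigned by the unsupervised auto-encoder stage (for example by clustering its latent codes); the margin formulation, function name, and defaults are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cluster_consistency_loss(z: torch.Tensor,
                             cluster_ids: torch.Tensor,
                             margin: float = 1.0) -> torch.Tensor:
    """Pull same-cluster embeddings together, push different-cluster ones apart.

    z: (B, D) latent embeddings from the supervised model.
    cluster_ids: (B,) cluster assignments from the auto-encoder stage.
    Hypothetical sketch; not the paper's exact formulation.
    """
    dist = torch.cdist(z, z)                          # (B, B) pairwise distances
    same = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = dist[same & ~eye]                           # same-cluster pairs
    neg = dist[~same]                                 # different-cluster pairs
    pos_term = pos.pow(2).mean() if pos.numel() else z.new_zeros(())
    neg_term = F.relu(margin - neg).pow(2).mean() if neg.numel() else z.new_zeros(())
    return pos_term + neg_term
```

    In training, a term of this kind would be added, with some weight, to the classification loss computed on the noisy pseudo labels.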

    Hybridized neural networks for non-invasive and continuous mortality risk assessment in neonates

    Premature birth is the primary risk factor in neonatal deaths, with the majority of extremely premature babies cared for in neonatal intensive care units (NICUs). Mortality risk prediction in this setting can greatly improve patient outcomes and resource utilization. However, existing schemes often require laborious medical testing and calculation, and are typically only calculated once at admission. In this work, we propose a shallow hybrid neural network for the prediction of mortality risk in 3-day, 7-day, and 14-day risk windows using only birthweight, gestational age, sex, and heart rate (HR) and respiratory rate (RR) information from a 12-h window. As such, this scheme is capable of continuously updating mortality risk assessment, enabling analysis of health trends and responses to treatment. The highest performing scheme was the network that considered mortality risk within 3 days, outperforming state-of-the-art works in the literature and achieving an area under the receiver operating characteristic curve (AUROC) of 0.9336 with a standard deviation of 0.0337 across 5 folds of cross-validation. As such, we conclude that our proposed scheme could readily be used for continuously-updating mortality risk prediction in NICU environments.
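
    The description suggests one plausible shape for such a hybrid model: a small recurrent branch summarising the 12-h HR/RR series fused with a dense branch over the three static features. The PyTorch sketch below is an assumption-laden illustration; the layer sizes, the GRU choice, and all names are hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HybridMortalityNet(nn.Module):
    """Illustrative shallow hybrid net: recurrent vitals branch + static branch."""

    def __init__(self, n_static: int = 3, vitals_dim: int = 2, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(vitals_dim, hidden, batch_first=True)   # HR/RR sequence
        self.static = nn.Sequential(nn.Linear(n_static, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)                      # risk logit

    def forward(self, vitals: torch.Tensor, static: torch.Tensor) -> torch.Tensor:
        # vitals: (B, T, 2) HR/RR samples over the 12-h window
        # static: (B, 3) birthweight, gestational age, sex
        _, h = self.gru(vitals)
        fused = torch.cat([h[-1], self.static(static)], dim=1)
        return self.head(fused)        # apply a sigmoid for a risk probability
```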

    Fairness in Biometrics: a figure of merit to assess biometric verification systems

    Machine learning-based (ML) systems have been widely deployed over the last decade in a myriad of scenarios that affect many aspects of our daily lives. With this vast range of applications, fairness has come into the spotlight because of the social impact such systems can have on minorities. In this work, aspects of fairness in biometrics are addressed. First, we introduce the first figure of merit able to evaluate and compare fairness between multiple biometric verification systems, the so-called Fairness Discrepancy Rate (FDR). A use case with two synthetic biometric systems demonstrates the potential of this figure of merit in extreme cases of fair and unfair behavior. Second, a use case using face biometrics is presented in which several systems are evaluated and compared with this new figure of merit on three public datasets exploring gender and race demographics.
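
    As a rough illustration, and assuming the FDR is built from the largest between-group gaps in false match rate (FMR) and false non-match rate (FNMR) at a given decision threshold, a score of this kind could be computed as below; the function name, the weighting parameter alpha, and the exact combination are assumptions rather than the paper's definition.

```python
from itertools import combinations

def fairness_discrepancy_rate(fmr_by_group: dict,
                              fnmr_by_group: dict,
                              alpha: float = 0.5) -> float:
    """Hypothetical FDR-style score at a fixed decision threshold.

    Maps group -> error rate; returns a value in [0, 1],
    where 1 means no discrepancy between demographic groups.
    """
    a = max(abs(fmr_by_group[g1] - fmr_by_group[g2])       # worst FMR gap
            for g1, g2 in combinations(fmr_by_group, 2))
    b = max(abs(fnmr_by_group[g1] - fnmr_by_group[g2])     # worst FNMR gap
            for g1, g2 in combinations(fnmr_by_group, 2))
    return 1.0 - (alpha * a + (1.0 - alpha) * b)

# e.g. fairness_discrepancy_rate({"f": 0.010, "m": 0.015},
#                                {"f": 0.050, "m": 0.030})  # -> 0.9875
```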

    Systematic review of studies generating individual participant data on the efficacy of drugs for treating soil-transmitted helminthiases and the case for data-sharing

    Preventive chemotherapy and transmission control (PCT) by mass drug administration is the cornerstone of the World Health Organization (WHO)’s policy to control soil-transmitted helminthiases (STHs) caused by Ascaris lumbricoides (roundworm), Trichuris trichiura (whipworm) and the hookworm species (Necator americanus and Ancylostoma duodenale), which together affect over 1 billion people globally. Despite consensus that drug efficacies should be monitored for signs of decline that could jeopardise the effectiveness of PCT, systematic monitoring and evaluation is seldom implemented. Drug trials mostly report aggregate efficacies in groups of participants, but heterogeneities in design complicate classical meta-analyses of these data. Individual participant data (IPD) permit more detailed analysis of drug efficacies, offering increased sensitivity to identify atypical responses potentially caused by emerging drug resistance.