
    Atrial fibrillation signatures on intracardiac electrograms identified by deep learning

    Automatic detection of atrial fibrillation (AF) by cardiac devices is increasingly common, yet it suboptimally groups AF, flutter or tachycardia (AT) together as 'high rate events'. This may delay or misdirect therapy. Objective: We hypothesized that deep learning (DL) can accurately classify AF from AT by revealing electrogram (EGM) signatures. Methods: We studied 86 patients in whom the diagnosis of AF or AT was established at electrophysiological study (25 female, 65 ± 11 years). Custom DL architectures were trained to identify AF using N = 29,340 unipolar and N = 23,760 bipolar EGM segments. We compared DL to traditional classifiers based on rate or regularity. We explained DL using computer models to assess the impact of controlled variations in shape, rate and timing on AF/AT classification in 246,067 EGMs reconstructed from clinical data. Results: DL identified AF with AUC of 0.97 ± 0.04 (unipolar) and 0.92 ± 0.09 (bipolar). Rule-based classifiers misclassified ∼10-12% of cases. DL classification was explained by regularity in EGM shape (13%) or timing (26%), and rate (60%; p < 0.001), and was sensitive to >15% timing variation, <0.48 correlation in beat-to-beat EGM shapes and CL < 190 ms (p < 0.001). Conclusions: Deep learning of intracardiac EGMs can identify AF or AT via signatures of rate, regularity in timing or shape, and specific EGM shapes. Future work should examine if these signatures differ between different clinical subpopulations with AF.
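The rule-based baseline the abstract compares DL against can be sketched as a simple rate/regularity check. This is a minimal illustrative sketch, not the paper's implementation: the function name and the use of the coefficient of variation are assumptions, while the cutoffs (cycle length < 190 ms, >15% timing variation) follow the abstract's reported values.

```python
from statistics import mean, stdev

def rule_based_af_classifier(beat_times_ms, cl_threshold=190.0, timing_var_threshold=0.15):
    """Toy rate/regularity classifier of the kind the study benchmarks DL against.

    beat_times_ms: activation times of successive beats (ms).
    Returns "AF" when the mean cycle length (CL) is short and beat-to-beat
    timing is irregular, otherwise "AT". Thresholds mirror the abstract's
    reported cutoffs; the classifier itself is a hypothetical sketch.
    """
    cycle_lengths = [b - a for a, b in zip(beat_times_ms, beat_times_ms[1:])]
    mean_cl = mean(cycle_lengths)
    # Coefficient of variation as a crude measure of timing irregularity.
    timing_variation = stdev(cycle_lengths) / mean_cl
    if mean_cl < cl_threshold and timing_variation > timing_var_threshold:
        return "AF"
    return "AT"

# Hypothetical rhythms: a slow regular train vs. a fast irregular one.
regular = [i * 250.0 for i in range(10)]
irregular = [0, 160, 340, 470, 660, 790, 1000, 1120, 1330]
```

A classifier this crude is exactly why the abstract reports ∼10-12% misclassification for rule-based methods: any AT that happens to be fast or slightly irregular crosses the thresholds.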

    Self-explaining AI as an alternative to interpretable AI

    The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is often possible to approximate the input-output relations of deep neural networks with a few human-understandable rules, the discovery of the double descent phenomenon suggests that such approximations do not accurately capture the mechanism by which deep neural networks work. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if asked to extrapolate. To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and explanation. For this approach to work, it is important that the explanation actually be related to the decision, ideally capturing the mechanism used to arrive at the decision. Finally, we argue it is important that deep learning based systems include a "warning light" based on techniques from applicability domain analysis to warn the user if a model is asked to extrapolate outside its training distribution. For a video presentation of this talk see https://www.youtube.com/watch?v=Py7PVdcu7WY
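The "warning light" idea from applicability domain analysis can be illustrated with a nearest-neighbour distance check: flag a query that lies far from anything seen in training. Everything here (the function name, the use of Euclidean k-NN distance, and the `k` and `factor` parameters) is an illustrative assumption, not the paper's method.

```python
import math

def extrapolation_warning(x, training_data, k=3, factor=2.0):
    """Hedged sketch of an applicability-domain "warning light".

    Returns True when the query x is much farther from the training set
    than training points typically are from each other, i.e. when the
    model would be extrapolating. k and factor are illustrative choices.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def knn_mean(p, pool):
        # Mean distance from p to its k nearest neighbours in pool,
        # excluding p itself when p is a member of pool.
        ds = sorted(dist(p, q) for q in pool if q is not p)
        return sum(ds[:k]) / k

    # Typical in-distribution scale: average k-NN distance over training points.
    baseline = sum(knn_mean(p, training_data) for p in training_data) / len(training_data)
    return knn_mean(x, training_data) > factor * baseline
```

In practice such checks are run alongside the model's prediction, so the user sees both an answer and a signal of whether the answer came from interpolation or extrapolation.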

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. 
We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.

    People Talking and AI Listening: How Stigmatizing Language in EHR Notes Affect AI Performance

    Electronic health records (EHRs) serve as an essential data source for the envisioned artificial intelligence (AI)-driven transformation in healthcare. However, clinician biases reflected in EHR notes can lead to AI models inheriting and amplifying these biases, perpetuating health disparities. This study investigates the impact of stigmatizing language (SL) in EHR notes on mortality prediction using a Transformer-based deep learning model and explainable AI (XAI) techniques. Our findings demonstrate that SL written by clinicians adversely affects AI performance, particularly so for black patients, highlighting SL as a source of racial disparity in AI model development. To explore an operationally efficient way to mitigate SL's impact, we investigate patterns in the generation of SL through a clinicians' collaborative network, identifying central clinicians as having a stronger impact on racial disparity in the AI model. We find that removing SL written by central clinicians is a more efficient bias reduction strategy than eliminating all SL in the entire corpus of data. This study provides actionable insights for responsible AI development and contributes to understanding clinician behavior and EHR note writing in healthcare.
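The mitigation strategy described above hinges on ranking clinicians by network centrality. As a minimal sketch, the ranking could use degree centrality over a collaboration edge list; the abstract does not specify which centrality measure the authors used, so the measure, function name, and clinician identifiers below are all hypothetical.

```python
from collections import Counter

def central_clinicians(collaboration_edges, top_n=2):
    """Rank clinicians by degree centrality in a collaboration network.

    collaboration_edges: (clinician_a, clinician_b) pairs, e.g. from
    co-signed notes. Degree centrality is a stand-in for whatever
    centrality measure the study actually employed.
    """
    degree = Counter()
    for a, b in collaboration_edges:
        degree[a] += 1
        degree[b] += 1
    return [name for name, _ in degree.most_common(top_n)]

# Hypothetical network: dr_a collaborates with everyone, so ranks first.
edges = [("dr_a", "dr_b"), ("dr_a", "dr_c"), ("dr_a", "dr_d"), ("dr_b", "dr_c")]
```

Under this framing, the "efficient" strategy is to scrub SL only from notes authored by the top-ranked clinicians rather than from the whole corpus.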

    On the use of multi-sensor digital traces to discover spatio-temporal human behavioral patterns

    134 p. Technology is already part of our lives, and every time we interact with it, whether through a phone call, a credit-card payment or our activity on social networks, digital traces are stored. In this thesis we are interested in those digital traces that also record people's geolocation as they carry out their daily activities. This information lets us understand how people interact with the city, which is highly valuable for urban planning, traffic management, public policy and even preventive action against natural disasters. The aim of this thesis is to study human behavioral patterns from digital traces. To that end, three massive datasets are used that record the activity of anonymized users in terms of phone calls, credit-card purchases and social-network activity (check-ins, images, comments and tweets). A methodology is proposed for extracting human behavioral patterns using latent-semantic models, Latent Dirichlet Allocation and Dynamic Topic Models: the first to detect spatial patterns and the second to detect spatio-temporal patterns. Additionally, a set of metrics is proposed to provide an objective method for evaluating the patterns obtained.

    Cognitivism and Innovation in Economics - Two Lectures

    This issue of the Department W.P. reproduces two lectures by Professor Loasby organized by the CISEPS (Centre for Interdisciplinary Studies in Economics, Psychology and the Social Sciences at Bicocca) in collaboration with the IEP, the Istituto di Economia Politica of Bocconi University in Milan. The first lecture was delivered at the University of Milano-Bicocca on 13 October 2003 and the second was staged the day after at Bocconi University. The lectures are reproduced here together with a comment by Dr. Stefano Brusoni of Bocconi and SPRU. Two further comments were presented at the time by Professor Richard Arena of the University of Nice and by Professor Pier Luigi Sacco of the University of Venice. Both deserve gratitude for their active participation in the initiative. Unfortunately it has not been possible to include their comments in printed form.

    In these lectures Brian Loasby opens under the title of the Psychology of Wealth (a title echoing a famous essay by Carlo Cattaneo) and develops an argument in cognitive economics based on Hayek’s theory of the human mind, with significant complements and extensions, mainly from Smith and Marshall. The second lecture provides a discussion of organization and the human mind. It can be read independently, although it is linked to the former. Indeed, in Professor Loasby’s words, “the psychology of wealth leads to a particular perspective on this problem of organization”. The gist of the argument lies in the need to appreciate the significance of an appropriate “balance between apparently conflicting principles: the coherence, and therefore the effectiveness, of this differentiated system requires some degree of compatibility between its elements, but the creation of differentiated knowledge and skills depends on the freedom to make idiosyncratic patterns by thinking and acting in ways which may be radically different from those of many other people”. This dilemma of compatibility vs. independence can find a solution in a variety of contexts, as Loasby’s analysis shows.

    In his comments Richard Arena had focussed on the rationality issues so prominent in Loasby’s text. For example, he had suggested that the cleavage between rational-choice equilibrium and evolutionary order offers ground for new forms of self-organization. Pier Luigi Sacco had emphasized that Loasby’s approach breaks new ground in the economics of culture and paves the way to less simplistic conceptions of endogenous growth than is suggested by the conventional wisdom of current models. Unfortunately, as hinted above, it has proved impossible to include those comments in the present booklet along with Loasby’s lectures. A special obligation must be recorded to Dr. Stefano Brusoni, who prepared a written version of his own comment, which is printed in this booklet and offered to the reader. Among other participants, Roberto Scazzieri of the University of Bologna, Tiziano Raffaelli of the University of Pisa, Luigino Bruni of Bicocca, Riccardo Cappellin of Rome ‘Tor Vergata’ and others offered significant comments during the two sessions of the initiative. The organizers are particularly grateful to Professor Brian Loasby for his active and generous support of the initiative. Together with our colleagues and students we have been able to admire his enthusiasm and intellectual creativity in treating some of the most fascinating topics of contemporary economics.