
    Bending the Automation Bias Curve: A Study of Human and AI-based Decision Making in National Security Contexts

    Uses of artificial intelligence (AI), especially those powered by machine learning, are growing across sectors and societies around the world. How will AI adoption proceed, especially in the international security realm? Research on automation bias suggests that humans can often be overconfident in AI, whereas research on algorithm aversion shows that, as the stakes of a decision rise, humans become more cautious about trusting algorithms. We theorize about the relationship between background knowledge about AI, trust in AI, and how these interact with other factors to influence the probability of automation bias in the international security context. We test these propositions in a preregistered task identification experiment across a representative sample of 9,000 adults in nine countries with varying levels of AI industries. The results strongly support the theory, especially concerning AI background knowledge. A version of the Dunning-Kruger effect appears to be at play: those with the lowest level of experience with AI are slightly more likely to be algorithm-averse, automation bias emerges at low-to-moderate levels of knowledge, and it then levels off as a respondent's AI background reaches the highest levels. Additional results show effects from the task's difficulty, overall AI trust, and whether a human or AI decision aid is described as highly competent or less competent.
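
    The curvilinear ("bent") relationship the abstract describes is the kind of pattern typically captured by a regression with a quadratic term. The following is a minimal sketch, not the authors' code; the variable names, simulated data, and coefficients are illustrative assumptions.

        # Hypothetical sketch: modeling automation bias as a quadratic
        # function of AI background knowledge (Python, statsmodels).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 9000
        knowledge = rng.uniform(0, 10, n)  # self-reported AI background, assumed 0-10 scale
        # Simulated inverted-U: bias rises at moderate knowledge, falls at the extremes
        logit = -1.0 + 0.8 * knowledge - 0.08 * knowledge**2
        followed_ai = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = deferred to the AI aid

        X = sm.add_constant(np.column_stack([knowledge, knowledge**2]))
        fit = sm.Logit(followed_ai, X).fit(disp=False)
        print(fit.params)  # a negative quadratic coefficient indicates the bent curve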

    Lexomic Tools and Methods for Textual Analysis: Providing Deep Access to Digitized Texts

    This project hybridizes traditional humanistic approaches to textual scholarship, such as source study and the analysis of style, with advanced computational and statistical comparative methods, allowing scholars "deep access" to digitized texts and textual corpora. Our multi-disciplinary collaboration enables us to discover patterns in (and between) texts that were previously invisible to traditional methods. Going forward, we will build on the success of our previous Digital Humanities Start-up Grant by further developing tools and documentation (in an open, online community) for applying advanced statistical methodologies to textual and literary problems. At the same time, we will demonstrate the value of the approach by applying the tools and methods to texts from a variety of languages and time periods, including Old English, medieval Latin, and Modern English works from the twentieth-century Harlem Renaissance.
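
    The core lexomic procedure, as commonly described in this group's work, divides texts into chunks, computes relative word frequencies per chunk, and clusters the chunks hierarchically so that passages with a shared source tend to group together. Below is a minimal sketch of that pipeline; the file names and chunk size are hypothetical.

        # Sketch of chunk-and-cluster stylometry (Python, scipy).
        from collections import Counter
        from scipy.cluster.hierarchy import linkage, fcluster

        def chunk(words, size=1000):
            return [words[i:i + size] for i in range(0, len(words), size)]

        # Hypothetical input files; any plain-text corpus works
        texts = {name: open(name).read().split() for name in ("textA.txt", "textB.txt")}
        labeled = [(name, c) for name, words in texts.items() for c in chunk(words)]

        vocab = sorted({w for _, c in labeled for w in c})
        freqs = []
        for _, c in labeled:
            counts = Counter(c)
            freqs.append([counts[w] / len(c) for w in vocab])  # relative frequencies

        Z = linkage(freqs, method="average")          # hierarchical clustering of chunks
        groups = fcluster(Z, t=2, criterion="maxclust")
        print(list(zip((n for n, _ in labeled), groups)))  # co-clustered chunks suggest shared style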

    Adopting AI: How Familiarity Breeds Both Trust and Contempt

    Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice it is human behavior, not technology in a vacuum, that dictates how technology seeps into -- and changes -- societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we examine representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse uses of AI-enabled autonomy, spanning transportation, medicine, and national security, we exploit the inherent variation between these use cases. We find that those with familiarity and expertise with AI and similar technologies were more likely than those with a limited understanding of the technology to support all of the autonomous applications we tested, except weapons. Individuals who had already delegated the act of driving by using ride-share apps were also more positive about autonomous vehicles. However, familiarity cuts both ways: individuals are less likely to support AI-enabled technologies applied directly to their own lives, especially when the technology automates tasks they are already accustomed to performing themselves. Finally, opposition to AI-enabled military applications has slightly increased over time.

    The Medical Informatics Group: Ongoing Research

    Two current research projects within the Medical Informatics Group are described. The first, the Diabetes Data Management Project, has as its major goal the effective analysis, display, and summarization of information relevant to the care of insulin-dependent diabetics. This goal is pursued through the use of quantitative and qualitative modeling techniques, object-oriented graphical display methods, and natural language generation programs. The second research activity, the Hypertext Medical Handbook Project, emphasizes many aspects of electronic publishing and biomedical communication. In particular, the project explores machine-assisted information retrieval by combining user feedback with Bayesian inference networks.
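
    The last point, combining user feedback with an inference-network retrieval model, can be illustrated schematically. This is a toy sketch in the spirit of inference-network retrieval, not the project's code; the document texts, smoothing constant, and feedback update rule are all assumptions.

        # Toy inference-network-style retrieval with relevance feedback (Python).
        from collections import Counter

        docs = {"d1": "insulin dose glucose monitoring summary",
                "d2": "hypertext handbook retrieval feedback network"}

        def term_belief(term, words, mu=0.5):
            # smoothed P(term | document): the belief a document node lends a term node
            return (Counter(words)[term] + mu) / (len(words) + 2 * mu)

        def score(query_weights, doc_text):
            # belief in the query node: weighted sum over its term nodes
            words = doc_text.split()
            return sum(w * term_belief(t, words) for t, w in query_weights.items())

        query = {"retrieval": 1.0, "feedback": 1.0}
        print(sorted(docs, key=lambda d: score(query, docs[d]), reverse=True))

        # user feedback: boost terms appearing in documents judged relevant
        for d in {"d2"}:
            for t in set(docs[d].split()):
                query[t] = query.get(t, 0.0) + 0.5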

    A Psychophysical Comparison of Two Methods for Adaptive Histogram Equalization

    Adaptive histogram equalization (ahe) is a method for adaptive contrast enhancement of digital images proposed by Pizer et al. It is an automatic, reproducible method for the simultaneous viewing of contrast within a digital image with a large dynamic range. Recent experiments have shown that in specific cases there is no significant difference in the ability of ahe and linear intensity windowing to display grey-scale contrast. More recently, Pizer et al. have proposed a variant of ahe that limits the allowed contrast enhancement of the image. Contrast-limited adaptive histogram equalization (clahe) produces images in which the noise content is not excessively enhanced, but in which sufficient contrast is provided for the visualization of structures within the image. Images processed with clahe have a more natural appearance and facilitate the comparison of different areas of an image. However, the reduced contrast enhancement of clahe may hinder the ability of an observer to detect some significant grey-scale contrast. In this work, a psychophysical observer experiment was performed to determine whether there is a significant difference in the ability of ahe and clahe to depict grey-scale contrast. Observers were presented with CT images of the chest processed with ahe and clahe, into some of which subtle artificial lesions had been introduced. The observers were asked to rate their confidence regarding the presence of the lesions; this rating-scale data was analyzed using Receiver Operating Characteristic (ROC) curve techniques. The ROC curves were compared for significant differences in the observers' performances. In this study, no difference was found in the abilities of ahe and clahe to depict contrast information.
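
    The contrast-limiting step that distinguishes clahe from ahe can be sketched concisely: each local histogram is clipped at a ceiling and the excess counts are redistributed before the equalization mapping is built. This is an illustrative single-tile sketch, not the implementation used in the study; the bin count and clip fraction are assumptions.

        # Single-tile clahe-style lookup table (Python, numpy).
        import numpy as np

        def clipped_equalization_lut(tile, n_bins=256, clip_fraction=0.01):
            hist, _ = np.histogram(tile, bins=n_bins, range=(0, 255))
            limit = max(1, int(clip_fraction * tile.size))     # contrast-enhancement ceiling
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess // n_bins  # clip, then redistribute
            cdf = np.cumsum(hist)
            lut = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
            return lut.astype(np.uint8)

        # With a clip_fraction large enough never to bind, this reduces to plain ahe
        tile = np.random.default_rng(1).integers(0, 256, (64, 64))
        enhanced = clipped_equalization_lut(tile)[tile]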

    Dissociable contributions of ventromedial prefrontal and posterior parietal cortex to value-guided choice

    Two long-standing traditions have highlighted cortical decision mechanisms in the parietal and prefrontal cortices of primates, but it has not been clear how these processes differ, or when each cortical region may influence behaviour. Recent data from ventromedial prefrontal cortex (vmPFC) and posterior parietal cortex (PPC) have suggested one possible axis on which the two decision processes might be delineated. Fast decisions may be resolved primarily by parietal mechanisms, whereas decisions made without time pressure may rely on prefrontal mechanisms. Here, we report direct evidence for such dissociation. During decisions under time pressure, a value comparison process was evident in PPC, but not in vmPFC. Value-related activity was still found in vmPFC under time pressure; however, vmPFC represented overall input value rather than compared output value. In contrast, when decisions were made without time pressure, vmPFC transitioned to encode a value comparison while value-related parameters were entirely absent from PPC. Furthermore, under time pressure, decision performance was primarily governed by PPC, while it was dominated by vmPFC at longer decision times. These data demonstrate that parallel cortical mechanisms may resolve the same choices in differing circumstances, and offer an explanation of the diverse neural signals reported in vmPFC and PPC during value-guided choice.

    SpliceMiner: a high-throughput database implementation of the NCBI Evidence Viewer for microarray splice variant analysis

    BACKGROUND: There are many fewer genes in the human genome than there are expressed transcripts. Alternative splicing is the reason. Alternatively spliced transcripts are often specific to tissue type, developmental stage, environmental condition, or disease state. Accurate analysis of microarray expression data and design of new arrays for alternative splicing require assessment of probes at the sequence and exon levels. DESCRIPTION: SpliceMiner is a web interface for querying the Evidence Viewer Database (EVDB). EVDB is a comprehensive, non-redundant compendium of splice variant data for human genes. We constructed EVDB as a queryable implementation of the NCBI Evidence Viewer (EV). EVDB is based on data obtained from NCBI Entrez Gene and EV. The automated EVDB build process uses only complete coding sequences, which may or may not include partial or complete 5' and 3' UTRs, and filters redundant splice variants. Unlike EV, which supports only one-at-a-time queries, SpliceMiner supports high-throughput batch queries and provides results in an easily parsable format. SpliceMiner maps probes to splice variants, effectively delineating the variants identified by a probe. CONCLUSION: EVDB can be queried by gene symbol, genomic coordinates, or probe sequence via a user-friendly web-based tool we call SpliceMiner. The EVDB/SpliceMiner combination provides an interface to human splice variant information and, going beyond the very valuable NCBI Evidence Viewer, supports fluent, high-throughput analysis. Integration of EVDB information into microarray analysis and design pipelines has the potential to improve the analysis and bioinformatic interpretation of gene expression data, for both batch and interactive processing. For example, whenever a gene expression value is recognized as important or appears anomalous in a microarray experiment, the interactive mode of SpliceMiner can be used quickly and easily to check for possible splice variant issues.
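
    The probe-to-variant mapping described above can be illustrated in miniature: determine which isoform sequences contain a probe's sequence. The gene names and sequences below are hypothetical; SpliceMiner itself resolves probes against EVDB rather than in-memory strings.

        # Toy probe-to-splice-variant mapping (Python).
        def map_probe(probe, variants):
            """Return the splice variants whose mRNA sequence contains the probe."""
            return [name for name, seq in variants.items() if probe in seq]

        variants = {
            "GENE1-var1": "ATGGCGTTACCGGATCTGAAGTTC",  # includes exon 2
            "GENE1-var2": "ATGGCGGATCTGAAGTTC",         # exon 2 skipped
        }
        print(map_probe("GCGTTACC", variants))  # hits var1 only: the probe spans exon 2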

    SpliceCenter: A suite of web-based bioinformatic applications for evaluating the impact of alternative splicing on RT-PCR, RNAi, microarray, and peptide-based studies

    BACKGROUND: Over 60% of protein-coding genes in vertebrates express mRNAs that undergo alternative splicing. The resulting collection of transcript isoforms poses significant challenges for contemporary biological assays. For example, RT-PCR validation of gene expression microarray results may be unsuccessful if the two technologies target different splice variants. Effective use of sequence-based technologies requires knowledge of the specific splice variant(s) that are targeted. In addition, the critical roles of alternative splice forms in biological function and in disease suggest that assay results may be more informative if analyzed in the context of the targeted splice variant. RESULTS: A number of contemporary technologies are used for analyzing transcripts or proteins. To enable investigation of the impact of splice variation on the interpretation of data derived from those technologies, we have developed SpliceCenter. SpliceCenter is a suite of user-friendly, web-based applications that includes programs for analysis of RT-PCR primer/probe sets, effectors of RNAi, microarrays, and protein-targeting technologies. Both interactive and high-throughput implementations of the tools are provided. The interactive versions of SpliceCenter tools provide visualizations of a gene's alternative transcripts and probe target positions, enabling the user to identify which splice variants are or are not targeted. The high-throughput batch versions accept user query files and provide results in tabular form. When, for example, we used SpliceCenter's batch siRNA-Check to process the Cancer Genome Anatomy Project's large-scale shRNA library, we found that only 59% of the 50,766 shRNAs in the library target all known splice variants of the target gene, 32% target some but not all, and 9% do not target any currently annotated transcript. CONCLUSION: SpliceCenter (http://discover.nci.nih.gov/splicecenter) provides unique, user-friendly applications for assessing the impact of transcript variation on the design and interpretation of RT-PCR, RNAi, gene expression microarrays, antibody-based detection, and mass spectrometry proteomics. The tools are intended for use by bench biologists as well as bioinformaticists.
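
    The all/some/none breakdown reported for the shRNA library corresponds to a simple per-gene coverage check, sketched below with hypothetical sequences; the published analysis ran against the full set of annotated transcripts rather than toy strings.

        # Toy all/some/none shRNA coverage classification (Python).
        def classify_shrna(target_seq, variant_seqs):
            hits = sum(target_seq in seq for seq in variant_seqs)
            if hits == len(variant_seqs) and hits > 0:
                return "all"
            return "some" if hits > 0 else "none"

        variants = ["ATGGCGTTACCGGATCTG", "ATGGCGGATCTG"]  # two isoforms of one gene
        print(classify_shrna("GGATCTG", variants))  # "all": present in both isoforms
        print(classify_shrna("GTTACC", variants))   # "some": exon-specific target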