    Cancer prediction using graph-based gene selection and explainable classifier

    Several artificial intelligence-based models have been developed for cancer prediction. Despite the promise of artificial intelligence, few models bridge the gap between traditional human-centered prediction and the potential future of machine-centered cancer prediction. In this study, an efficient and effective model is developed for gene selection and cancer prediction. Moreover, this study proposes an artificial intelligence decision system that provides physicians with a simple, human-interpretable set of rules for cancer prediction. In contrast to previous deep learning-based cancer prediction models, which are difficult to explain to physicians because of their black-box nature, the proposed model is based on a transparent and explainable decision forest. The performance of the developed approach is compared with three state-of-the-art cancer prediction methods: TAGA, HPSO, and LL. The reported results on five cancer datasets indicate that the developed model improves the accuracy of cancer prediction and reduces execution time.
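    The abstract names the ingredients of the pipeline (graph-based gene selection, an explainable decision forest yielding human-readable rules) without implementation details. Purely as a hedged illustration, the Python sketch below selects genes by PageRank centrality on a correlation graph (a stand-in for the paper's unspecified graph method) and fits a single shallow decision tree whose rules are printed, standing in for the full decision forest. The synthetic data, the threshold, and every function name here are assumptions, not the authors' method.

```python
# Illustrative sketch only; not the paper's algorithm. Gene selection uses
# PageRank centrality on a gene-gene correlation graph, and a single shallow
# decision tree stands in for the paper's decision forest so its rules stay
# readable. Data, threshold, and names are all hypothetical.
import numpy as np
import networkx as nx
from sklearn.tree import DecisionTreeClassifier, export_text

def select_genes_by_centrality(X, gene_names, k=10, corr_threshold=0.25):
    """Build a gene-gene correlation graph and keep the k most central genes."""
    corr = np.corrcoef(X, rowvar=False)            # genes are the columns
    G = nx.Graph()
    G.add_nodes_from(range(X.shape[1]))
    rows, cols = np.where(np.abs(corr) > corr_threshold)
    G.add_edges_from((i, j) for i, j in zip(rows, cols) if i < j)
    scores = nx.pagerank(G)                        # centrality per gene node
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return top, [gene_names[i] for i in top]

# Synthetic stand-in for an expression matrix (samples x genes) and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))
y = rng.integers(0, 2, size=100)
genes = [f"gene_{i}" for i in range(X.shape[1])]

idx, selected = select_genes_by_centrality(X, genes)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, idx], y)
print(export_text(clf, feature_names=selected))    # human-readable rule set
```

    On real expression data the correlation cutoff would be far higher, and an ensemble of such trees (a decision forest, as in the abstract) would trade some of this one-tree readability for accuracy.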

    Human-Centered Design to Address Biases in Artificial Intelligence

    The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but AI can also exacerbate these issues if not implemented equitably. This perspective identifies potential biases at each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders and applying human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its potential in health care.
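    The life-cycle stages listed above are described abstractly. As one concrete, hedged example of what catching bias at the evaluation stage might look like, the sketch below audits a trained model's accuracy per demographic subgroup; the synthetic data, the `group` attribute, and accuracy as the metric are illustrative assumptions, not the perspective's prescription.

```python
# Hedged illustration of a bias audit at the evaluation stage: compare a
# model's accuracy across demographic subgroups. All data and the `group`
# attribute are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                  # synthetic clinical features
y = (X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)
group = rng.choice(["A", "B"], size=300)       # hypothetical demographic label

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: accuracy = {accuracy_score(y[mask], pred[mask]):.3f}")

# A large gap between groups flags a bias to trace back through earlier
# stages (data collection, annotation) before deployment.
```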

    Returning Integrated Genomic Risk and Clinical Recommendations: The eMERGE Study


    Artificial Intelligence and Patient-Centered Decision-Making

    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in accuracy, reliability, and knowledge. If so, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, given their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.

    From Human-Centered to Social-Centered Artificial Intelligence: Assessing ChatGPT's Impact through Disruptive Events

    Large language models (LLMs) and dialogue agents have existed for years, but the release of recent GPT models has been a watershed moment for artificial intelligence (AI) research and society at large. Immediately recognized for its generative capabilities and versatility, ChatGPT saw widespread adoption thanks to its impressive proficiency across technical and creative domains. While society grapples with the emerging cultural impacts of ChatGPT, critiques of ChatGPT's impact within the machine learning community have coalesced around its performance or other conventional Responsible AI evaluations relating to bias, toxicity, and 'hallucination.' We argue that these latter critiques draw heavily on a particular conceptualization of the 'human-centered' framework, which tends to cast atomized individuals as the key recipients of both the benefits and detriments of technology. In this article, we direct attention to another dimension of LLMs and dialogue agents' impact: their effect on social groups, institutions, and accompanying norms and practices. By illustrating ChatGPT's social impact through three disruptive events, we challenge individualistic approaches in AI development and contribute to ongoing debates around the ethical and responsible implementation of AI systems. We hope this effort will call attention to more comprehensive and longitudinal evaluation tools and compel technologists to go beyond human-centered thinking and ground their efforts through social-centered AI.