
    Transforming to Yoked Neural Networks to Improve ANN Structure

    Most existing classical artificial neural networks (ANNs) are designed as tree structures to imitate biological neural networks. In this paper, we argue that tree connectivity is not sufficient to characterize a neural network: nodes at the same level of a tree cannot be connected to each other, i.e., these neural units cannot share information with each other, which is a major drawback of ANNs. Although ANNs have been extended in recent years to more complex structures, such as directed acyclic graphs (DAGs), these methods still impose a unidirectional, acyclic bias. We propose a method that builds a bidirectional complete graph over the nodes at the same level of an ANN, yoking them into a neural module; we call the resulting model YNN for short. YNN significantly promotes information transfer, which clearly helps to improve performance, and it imitates biological neural networks much better than the traditional ANN. We analyze the structural bias of existing ANNs and show that YNN efficiently eliminates it. In our model, nodes carry out aggregation and transformation of features, while edges determine the flow of information. We further impose an auxiliary sparsity constraint on the distribution of connectedness, which encourages the learned structure to focus on critical connections. Finally, based on the optimized structure, we design a small neural module structure using the minimum-cut technique to reduce the computational burden of the YNN model. This learning process is compatible with existing networks and different tasks. The quantitative experimental results show that the learned connectivity is superior to the traditional NN structure.
    Comment: arXiv admin note: text overlap with arXiv:2008.08261 by other authors
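
    To make the yoking idea concrete, here is a minimal PyTorch sketch of one yoked level: a learnable bidirectional complete graph among same-level nodes with an auxiliary L1-style sparsity penalty on the edge gates. All names here (YokedLevel, edge_logits, sparsity_penalty) are hypothetical illustrations of what the abstract describes, not the authors' implementation.

    import torch
    import torch.nn as nn

    class YokedLevel(nn.Module):
        """Minimal sketch of one yoked level: same-level nodes exchange
        information over a learned bidirectional complete graph."""

        def __init__(self, num_nodes: int, dim: int):
            super().__init__()
            # Learnable logits for every ordered pair of same-level nodes;
            # the self-loop diagonal is masked out in forward().
            self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
            self.transform = nn.Linear(dim, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_nodes, dim)
            n = self.edge_logits.size(0)
            mask = 1.0 - torch.eye(n, device=x.device)
            gate = torch.sigmoid(self.edge_logits) * mask
            # Each node aggregates the features of its yoked peers.
            agg = torch.einsum('ij,bjd->bid', gate, x)
            return torch.relu(self.transform(x + agg))

        def sparsity_penalty(self) -> torch.Tensor:
            # L1-style auxiliary term (the gates are non-negative) that
            # pushes the connectivity toward a few critical connections.
            return torch.sigmoid(self.edge_logits).sum()

    During training, a combined objective such as loss = task_loss + lam * model.sparsity_penalty() would drive most gates toward zero, after which weak edges could be pruned, in the spirit of the minimum-cut reduction the abstract mentions.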

    The First Zagreb Index, Vertex-Connectivity, Minimum Degree And Independent Number in Graphs

    Let G be a simple, undirected, connected graph. Denote by M1(G) and RMTI(G) the first Zagreb index and the reciprocal Schultz molecular topological index of G, respectively. In this paper, we determine the graphs with maximal M1 among all graphs having prescribed vertex-connectivity and minimum degree, vertex-connectivity and bipartition, and vertex-connectivity and vertex-independence number, respectively. As applications, all maximal elements with respect to RMTI are also determined among the above-mentioned graph families.
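
    For reference, the first Zagreb index mentioned above has the standard definition below, together with its well-known edge-sum form; the reciprocal Schultz molecular topological index RMTI has its own definition in the paper and is not reproduced here.

    \[
      M_1(G) \;=\; \sum_{v \in V(G)} \deg(v)^2
             \;=\; \sum_{uv \in E(G)} \bigl(\deg(u) + \deg(v)\bigr).
    \]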

    The Impact of Stigmatizing Language in EHR Notes on AI Performance and Fairness

    Today, there is significant interest in using electronic health record (EHR) data to generate new clinical insights for diagnosis and treatment decisions. However, there are concerns that such data may be biased and may accentuate racial disparities. We study how clinician biases reflected in EHR notes affect the performance and fairness of artificial intelligence (AI) models in the context of mortality prediction for intensive care unit patients. We apply a Transformer-based deep learning model and explainable AI techniques to quantify the negative impacts on performance and fairness. Our findings demonstrate that stigmatizing language (SL) written by clinicians adversely affects AI performance, particularly for black patients, highlighting SL as a source of racial disparity in AI model development. As an effective mitigation approach, removing SL from EHR notes can significantly improve AI performance and fairness. This study provides actionable insights for responsible AI development and contributes to the understanding of clinicians' EHR note-writing practices.
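
    As a rough illustration of the mitigation step the abstract evaluates, the sketch below drops note sentences that match a stigmatizing-language lexicon. The term list and function name are hypothetical placeholders; the study's actual SL lexicon and preprocessing pipeline are not reproduced here.

    import re

    # Hypothetical example terms only; the study derives its SL lexicon
    # from the clinical literature, not from this list.
    STIGMATIZING_TERMS = ["noncompliant", "drug-seeking", "combative"]

    def remove_stigmatizing_sentences(note: str, terms=STIGMATIZING_TERMS) -> str:
        """Drop every sentence of an EHR note that contains an SL term."""
        pattern = re.compile("|".join(map(re.escape, terms)), re.IGNORECASE)
        sentences = re.split(r"(?<=[.!?])\s+", note)
        return " ".join(s for s in sentences if not pattern.search(s))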

    People Talking and AI Listening: How Stigmatizing Language in EHR Notes Affects AI Performance

    Electronic health records (EHRs) serve as an essential data source for the envisioned artificial intelligence (AI)-driven transformation in healthcare. However, clinician biases reflected in EHR notes can lead to AI models inheriting and amplifying these biases, perpetuating health disparities. This study investigates the impact of stigmatizing language (SL) in EHR notes on mortality prediction using a Transformer-based deep learning model and explainable AI (XAI) techniques. Our findings demonstrate that SL written by clinicians adversely affects AI performance, particularly for black patients, highlighting SL as a source of racial disparity in AI model development. To explore an operationally efficient way to mitigate SL's impact, we investigate patterns in the generation of SL through clinicians' collaborative networks, identifying central clinicians as having a stronger impact on racial disparity in the AI model. We find that removing SL written by central clinicians is a more efficient bias-reduction strategy than eliminating all SL in the entire corpus of data. This study provides actionable insights for responsible AI development and contributes to the understanding of clinician behavior and EHR note writing in healthcare.
    Comment: 54 pages, 9 figures
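
    A minimal sketch of the "central clinicians" targeting idea, assuming a collaboration graph whose edges link clinicians who document the same patients. The function, edge format, and use of degree centrality are assumptions for illustration; the paper's own network construction and centrality measure may differ.

    import networkx as nx

    def most_central_clinicians(edges, top_k=10):
        """Rank clinicians by degree centrality in a collaboration graph.

        edges: iterable of (clinician_a, clinician_b) pairs, e.g. two
        clinicians who wrote notes for the same patient.
        """
        G = nx.Graph()
        G.add_edges_from(edges)
        centrality = nx.degree_centrality(G)
        return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

    Targeted mitigation would then remove SL only from notes authored by the returned clinicians, rather than scrubbing the entire corpus.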