    Blockchain-based auditing of legal decisions supported by explainable AI and generative AI tools

    Generative AI tools powered by Large Language Models (LLMs) have demonstrated advanced capabilities in understanding and articulating legal facts at a level approaching that of legal practitioners. However, scholars hold contrasting views on the reliability of the reasoning behind decisions derived from LLMs due to their black-box nature. Law firms are also vigilant about the potential risks of violating confidentiality and inappropriately exposing sensitive legal data through prompts sent to Generative AI. This research attempts to find an equilibrium between responsible usage and the control of human legal professionals over content produced by Generative AI through regular audits. It investigates the potential of Generative AI in drafting correspondence for pre-litigation decisions derived from an eXplainable AI (XAI) algorithm. The research presents an end-to-end process for designing the architecture and methodology of a blockchain-based auditing system that detects unauthorized alterations of data repositories containing the decisions of an XAI model and the automated textual explanations of Generative AI. Automated auditing by blockchain facilitates responsible usage of AI technologies and reduces discrepancies in tracing accountability for adversarial decisions. The research conceptualizes two algorithms: first, strategic on-chain (within the blockchain) and off-chain (outside the blockchain) data storage in compliance with data protection laws and the critical requirements of stakeholders in a legal firm; second, auditing by comparing the unique signatures, as Merkle roots, of files stored off-chain with their immutable blockchain counterparts. A case study on liability cases under tort law demonstrates the system implementation results.
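
    A minimal Python sketch of the second algorithm, assuming the off-chain repository is a directory of files and the on-chain record is a hex-encoded Merkle root retrieved separately; the function names and the leaf-ordering convention are illustrative assumptions, not the paper's specification:

```python
import hashlib
from pathlib import Path

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Reduce leaf hashes to a single root, duplicating the last node on odd levels."""
    if not leaves:
        return sha256(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def audit(repo_dir: str, on_chain_root_hex: str) -> bool:
    """Recompute the root over the off-chain files and compare it with the
    immutable root recorded on-chain; a mismatch signals an unauthorized change."""
    files = sorted(p for p in Path(repo_dir).iterdir() if p.is_file())  # deterministic leaf order
    leaves = [sha256(f.read_bytes()) for f in files]
    return merkle_root(leaves).hex() == on_chain_root_hex
```

    Because the on-chain root is immutable, any tampering with an off-chain decision file or generated explanation changes the recomputed root, so a scheduled run of the audit flags the alteration.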

    Scimitar syndrome - A rare cause of recurrent pneumonia

    Scimitar syndrome is a congenital anomaly characterized by anomalous venous drainage of the right lung into the inferior vena cava. It may be associated with other anomalies, including pulmonary hypoplasia, systemic arterial supply of the right lung, and congenital heart disease. We report an infant with recurrent pneumonia who was found on further workup to have scimitar syndrome. The patient was managed by selective embolization of the artery arising from the celiac trunk that supplied the sequestered lung. This case report highlights that scimitar syndrome should be suspected in a patient with recurrent pneumonia and typical chest X-ray findings.

    Evidential reasoning for preprocessing uncertain categorical data for trustworthy decisions: An application on healthcare and finance

    The uncertainty introduced by discrepant data into AI-enabled decisions is a critical challenge in highly regulated domains such as healthcare and finance. Ambiguity and incompleteness, due to missing values in output and input attributes respectively, are ubiquitous in these domains and can adversely affect underrepresented groups of people in the training data without any intention by the developer to discriminate. The inherently non-numerical nature of categorical attributes, compared with numerical attributes, and the presence of incomplete and ambiguous categorical attributes in a dataset increase the uncertainty in decision-making. This paper addresses the challenges of handling categorical attributes, which previous research has not addressed comprehensively. Three sources of uncertainty in categorical attributes are recognised: informational uncertainty, unforeseeable uncertainty in the decision-task environment, and uncertainty due to a lack of pre-modelling explainability. All three are addressed in the proposed methodology based on maximum likelihood evidential reasoning (MAKER), which can transform and impute incomplete and ambiguous categorical attributes into interpretable numerical features. It utilises the notions of weight and reliability to include, respectively, subjective expert preference over a piece of evidence and the quality of the evidence in a categorical attribute. The MAKER framework strives to integrate the recognised uncertainties into the transformed input data, allowing a model to perceive data limitations during training and to acknowledge doubtful predictions, thereby supporting trustworthy pre-modelling and post-modelling explainability. The ability to handle uncertainty, and its impact on explainability, is demonstrated on real-world healthcare and finance data for different missing-data scenarios across three types of AI algorithms: deep-learning, tree-based, and rule-based models.
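
    A minimal Python sketch of the flavour of such a transformation, not the MAKER framework itself: each categorical value is mapped to per-class belief degrees estimated by maximum likelihood from frequency counts, missing values receive full ignorance mass, and a reliability discount shifts mass from the classes to global ignorance. All names and the simplified discounting scheme are illustrative assumptions:

```python
from collections import Counter

def belief_features(values, labels, classes, reliability=0.9):
    """Map a categorical attribute to per-class belief degrees plus an
    'unknown' (ignorance) mass. Likelihoods are maximum-likelihood
    frequency estimates; None marks a missing value."""
    counts = {c: Counter() for c in classes}
    for v, y in zip(values, labels):
        if v is not None and y is not None:
            counts[y][v] += 1
    features = []
    for v in values:
        if v is None:  # incomplete value: all mass assigned to ignorance
            features.append({**{c: 0.0 for c in classes}, "unknown": 1.0})
            continue
        lik = {c: counts[c][v] / max(1, sum(counts[c].values())) for c in classes}
        total = sum(lik.values()) or 1.0
        beliefs = {c: reliability * lik[c] / total for c in classes}
        beliefs["unknown"] = 1.0 - reliability  # reliability discount
        features.append(beliefs)
    return features

# Hypothetical usage: a smoking-status attribute with one missing entry.
vals = ["never", "current", None, "current"]
labs = ["healthy", "ill", "ill", "ill"]
print(belief_features(vals, labs, classes=["healthy", "ill"]))
```

    The explicit "unknown" mass is what lets a downstream model see how much of the input is genuine evidence versus ignorance, rather than hiding missingness behind an imputed point value.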

    Human-AI Collaboration to Mitigate Decision Noise in Financial Underwriting: A Study on FinTech Innovation in a Lending Firm

    Financial institutions have recognized the value of combining human expertise and AI to create high-performance augmented decision-support systems. Stakeholders at lending firms increasingly acknowledge that plugging data into AI algorithms and eliminating the role of human underwriters through automation, in the expectation of immediate returns on investment from business-process automation, is a flawed strategy. This research emphasizes the necessity of auditing the consistency of the decisions (or professional judgments) made by human underwriters and monitoring the ability of the data to capture a firm's lending policies, laying a strong foundation for a legitimate system before investing millions in AI projects. The judgments made by experts in the past re-emerge as the outcomes or labels in the data used to train and evaluate algorithms. This paper presents Evidential Reasoning-eXplainer, a methodology that estimates probability mass as the extent of support for a given decision on a loan application by jointly assessing multiple independent and conflicting pieces of evidence. It quantifies variability in past decisions by comparing the subjective judgments of underwriters during manual financial underwriting with the outcomes estimated from the data. The consistency analysis improves decision quality by bridging the gap between past inconsistent decisions and the desired ultimate-true decisions. A case study on a specialist lending firm demonstrates the strategic work plan adopted to align underwriters and developers on capturing the correct data and auditing the quality of decisions.
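
    A minimal Python sketch of the consistency-audit idea, assuming each application carries per-criterion support values in [0, 1] and expert-chosen weights; the weighted-product aggregation below is a simplified stand-in for the paper's evidential reasoning combination, and all names are hypothetical:

```python
def combined_support(evidence, weights):
    """Aggregate per-criterion support for 'approve' into one probability
    mass via a normalised weighted product (a simplified stand-in for the
    evidential reasoning combination rule)."""
    approve, decline = 1.0, 1.0
    for name, p in evidence.items():
        w = weights[name]
        approve *= p ** w
        decline *= (1.0 - p) ** w
    return approve / (approve + decline)

def flag_inconsistency(applications, weights, threshold=0.5):
    """Flag loan files where the data-driven support disagrees with the
    underwriter's recorded decision -- candidates for a consistency audit."""
    flags = []
    for app in applications:
        p = combined_support(app["evidence"], weights)
        estimated = "approve" if p >= threshold else "decline"
        if estimated != app["decision"]:
            flags.append((app["id"], round(p, 3), app["decision"]))
    return flags

# Hypothetical example: two criteria with expert-chosen weights.
weights = {"credit_score": 0.6, "loan_to_value": 0.4}
apps = [{"id": 101, "decision": "decline",
         "evidence": {"credit_score": 0.8, "loan_to_value": 0.7}}]
print(flag_inconsistency(apps, weights))  # -> [(101, 0.763, 'decline')]
```

    Flagged files are exactly the ones where past professional judgment and the evidence in the data diverge, which is where an audit of decision noise pays off before the labels are used to train a model.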

    Explainable Artificial Intelligence for Digital Forensics: Opportunities, Challenges and a Drug Testing Case Study

    Forensic analysis is typically a complex and time-consuming process requiring forensic investigators to collect and analyse different pieces of evidence to arrive at a solid recommendation. Our interest lies in forensic drug testing, where the evidence comprises a multitude of experimentally obtained data from samples (e.g. hair or nails), occasionally combined with questionnaire data, with the goal of quantifying the likelihood of drug use. The availability of intelligent data-driven technologies can support holistic decision-making in such scenarios, but this needs to be done in a transparent fashion (as opposed to using black-box models). To this end, this book chapter investigates the opportunities and challenges of developing interactive and eXplainable Artificial Intelligence (XAI) systems to support digital forensics and automate the decision-making process, enabling fast and reliable generation of evidence for the court of law. Relevant XAI techniques and their applications in forensic testing, including feature selection, missing-data handling, and XAI for multi-criteria and interactive learning, are discussed in detail. A case study on a forensic science company demonstrates the real challenges of forensic reporting and the potential of forensic data to pave the way for future research towards XAI-driven digital forensics.
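
    As an illustration of the transparent-model direction discussed above (not the chapter's own method), a minimal Python sketch that fits an interpretable decision tree to synthetic hair-sample features and prints the learned rules; the feature names, data, and labels are entirely hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["EtG_pg_mg", "FAEE_ng_mg", "self_report_score"]  # assumed marker names
X = rng.random((200, 3))                    # synthetic measurements
y = (X[:, 0] > 0.6).astype(int)             # toy label: high EtG -> positive finding

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # human-readable decision rules
```

    Unlike a black-box score, the printed rules can be read, challenged, and cross-examined, which is the property that reporting to a court of law demands.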

    Advances in Developing Therapies to Combat Zika Virus: Current Knowledge and Future Perspectives

    Zika virus (ZIKV) remained largely quiescent for nearly six decades after its first appearance in 1947. ZIKV reappeared after 2007, resulting in the declaration of an international “public health emergency” by the World Health Organization (WHO) in 2016. Until then, ZIKV was considered to induce only mild illness, but it has now been established as the cause of severe clinical manifestations, including fetal anomalies, neurological problems, and autoimmune disorders. Infection during pregnancy can cause congenital brain abnormalities, including microcephaly and neurological degeneration, and, in other cases, Guillain-Barré syndrome, making ZIKV infection a substantial public health concern. Genomic and molecular investigations are underway to characterize ZIKV pathology and its recently enhanced pathogenicity, and to design safe and potent vaccines, drugs, and therapeutics. This review describes progress in the design and development of various anti-ZIKV therapeutics, including drugs targeting virus entry into cells and the helicase protein, nucleosides, inhibitors of the NS3 protein, small molecules, methyltransferase inhibitors, interferons, repurposed drugs, computer-aided drug designs, neutralizing antibodies, convalescent serum, antibodies that limit antibody-dependent enhancement, and herbal medicines. Additionally, covalent inhibitors of viral protein expression and anti-Toll-like receptor molecules are discussed. To counter ZIKV-associated disease, rapid progress is needed in developing novel therapies that work effectively to inhibit ZIKV.