1,023 research outputs found

    Mechanisms of biliary carcinogenesis: A pathogenetic multi-stage cascade towards cholangiocarcinoma

    Carcinomas of the biliary tract are rare cancers developing from the epithelial or blast-like cells lining the bile ducts. A variety of known predisposing factors and recent experimental models of biliary carcinogenesis (e.g., infection with the liver fluke Opisthorchis viverrini, models of chemically induced carcinogenesis and experimental models of pancreaticobiliary maljunction) have elucidated different stages of this complex system of biliary tumorigenesis. Chronic inflammatory processes, generation of active oxygen radicals, altered cellular detoxification mechanisms, activation of oncogenes, functional loss of tumor-suppressor genes and dysregulation of cell proliferation and apoptotic mechanisms have been identified as important contributors in the development of cholangiocarcinomas. In this review, the known mechanisms involved in the carcinogenesis of biliary epithelium are addressed. We divide the topic into four stages: 1) predisposition and risk factors of biliary cancer; 2) genotoxic events and alterations leading to specific DNA damage and mutation patterns; 3) dysregulation of DNA repair mechanisms and apoptosis, permitting survival of mutated cells; and 4) morphological evolution from premalignant biliary lesions to cholangiocarcinoma. Finally, established and hypothetical future therapeutic strategies directed towards specific pathogenetic events during biliary carcinogenesis are addressed.

    Use of a fluorescent bile acid to enhance visualization of the biliary tract and bile leaks during laparoscopic surgery in rabbits

    Background: We set out to determine whether intravenously administered cholylglycylaminofluorescein (CGF), a fluorescent bile acid, would enhance the visualization of the biliary tract and bile leaks in rabbits undergoing laparoscopic cholecystectomy (LC). Methods: CGF was infused at doses of 1, 5, and 10 mg/kg b.w. Biliary recovery was determined spectrophotometrically (six rabbits). For LC (seven rabbits), a blue (fluorescein) filter was attached to the light source, and a fluorescein-emission filter was attached to the charge coupled device (CCD) camera. The biliary tract and a bile leak (made by incising the gallbladder) were observed under standard and fluorescent illumination. Results: Apple-green fluorescence appeared within 2 min and persisted for 30-60 min, enhancing visualization of bile duct anatomy as well as the bile leak. Biliary recovery of CGF at 90 min was high (86-96% of the infused dose). Conclusion: In rabbits, CGF is secreted quantitatively in bile, induces biliary fluorescence, and enhances visualization of the bile ducts and bile leaks when viewed with appropriate filters.

    Characterizing Scales of Genetic Recombination and Antibiotic Resistance in Pathogenic Bacteria Using Topological Data Analysis

    Pathogenic bacteria present a large disease burden on human health. Control of these pathogens is hampered by rampant lateral gene transfer, whereby pathogenic strains may acquire genes conferring resistance to common antibiotics. Here we introduce tools from topological data analysis to characterize the frequency and scale of lateral gene transfer in bacteria, focusing on a set of pathogens of significant public health relevance. As a case study, we examine the spread of antibiotic resistance in Staphylococcus aureus. Finally, we consider the possible role of the human microbiome as a reservoir for antibiotic resistance genes. Comment: 12 pages, 6 figures. To appear in AMT 2014 Special Session on Advanced Methods of Interactive Data Mining for Personalized Medicine.
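    The basic object this abstract relies on can be shown in miniature. The sketch below (a toy example, not the authors' pipeline) computes a zeroth-dimensional persistence barcode over a hypothetical pairwise genetic-distance matrix using union-find: the scales at which components merge form the barcode, while higher-dimensional features (loops) are what signal reticulate events such as recombination.

```python
# Toy sketch: H0 persistence barcode (connected components of a Vietoris-Rips
# filtration) via union-find over a pairwise distance matrix. Real TDA
# pipelines also compute one-dimensional loops (b1), which indicate
# reticulate/recombination events; those need a full boundary-matrix reduction.

def h0_barcode(dist):
    """Return H0 finite death times (births are all 0) for a distance matrix."""
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Process edges in order of increasing distance (the filtration parameter).
    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj          # a merge kills one component at scale d
            deaths.append(d)
    return deaths                    # n-1 finite bars; one component survives

# Hypothetical distances: two tight clusters {0,1} and {2,3}, far apart.
D = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
print(h0_barcode(D))  # -> [1, 1, 9]: two short bars, one long-range merge
```

    Short bars correspond to within-clade variation; the long bar at scale 9 marks the deep split between the two clusters.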

    Towards Explainability for AI Fairness

    AI explainability is becoming indispensable to allow users to gain insights into an AI system’s decision-making process. Meanwhile, fairness is another rising concern: algorithmic predictions may be misaligned with the designer’s intent or with social expectations, for example through discrimination against specific groups. In this work, we provide a state-of-the-art overview of the relations between explanation and AI fairness, and especially of the role of explanation in humans’ fairness judgements. The investigations demonstrate that fair decision making requires extensive contextual understanding, and AI explanations help identify potential variables that are driving the unfair outcomes. It is found that different types of AI explanations affect humans’ fairness judgements differently. Some properties of features and social science theories need to be considered in making sense of fairness with explanations. Different challenges are identified for making responsible AI for trustworthy decision making from the perspective of explainability and fairness.

    Quantitative Shape-Classification of Misfitting Precipitates during Cubic to Tetragonal Transformations: Phase-Field Simulations and Experiments

    The effectiveness of the mechanism of precipitation strengthening in metallic alloys depends on the shapes of the precipitates. Two different material systems are considered: tetragonal γ′′ precipitates in Ni-based alloys and tetragonal θ′ precipitates in Al-Cu alloys. The shape formation and evolution of the tetragonally misfitting precipitates was investigated by means of experiments and phase-field simulations. We employed the method of invariant moments for the consistent shape quantification of precipitates obtained from the simulation as well as those obtained from the experiment. Two well-defined shape quantities are proposed: (i) a generalized measure for the particles’ aspect ratio and (ii) the normalized λ2, as a measure for shape deviations from an ideal ellipse of the given aspect ratio. Considering the size dependence of the aspect ratio of γ′′ precipitates, we find good agreement between the simulation results and the experiment. Further, the precipitates’ in-plane shape is defined as the central 2D cut through the 3D particle in a plane normal to the tetragonal c-axis of the precipitate. The experimentally observed in-plane shapes of γ′′ precipitates can be quantitatively reproduced by the phase-field model. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
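    The moment-based aspect ratio mentioned above can be sketched as follows. This is an illustrative second-moment construction, not necessarily the authors' exact generalized measure: the eigenvalues of the 2D second-central-moment matrix are proportional to the squared semi-axes of the best-fitting ellipse, so the square root of their ratio generalizes length/width for arbitrary particle shapes.

```python
# Illustrative sketch: aspect ratio of a 2D particle (given as pixel
# coordinates) from second-order central moments. The eigenvalues of the
# moment matrix [[mu20, mu11], [mu11, mu02]] are proportional to the squared
# semi-axes of the equivalent ellipse.
import math

def aspect_ratio(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Second central moments.
    mu20 = sum((p[0] - cx) ** 2 for p in points) / n
    mu02 = sum((p[1] - cy) ** 2 for p in points) / n
    mu11 = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # Eigenvalues of the 2x2 moment matrix via trace/determinant.
    tr, det = mu20 + mu02, mu20 * mu02 - mu11 ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lmax, lmin = tr / 2 + disc, tr / 2 - disc
    return math.sqrt(lmax / lmin)  # assumes a non-degenerate (2D) shape

# Hypothetical elongated "plate": pixels with x in -3..3, y in -1..1.
plate = [(x, y) for x in range(-3, 4) for y in range(-1, 2)]
print(round(aspect_ratio(plate), 2))  # -> 2.45 (i.e., sqrt(6))
```

    Because the measure is built from central moments, it is invariant to translation and rotation of the particle, which is what makes simulation and experiment directly comparable.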

    Expectation Maximization in Deep Probabilistic Logic Programming

    Probabilistic Logic Programming (PLP) combines logic and probability for representing and reasoning over domains with uncertainty. Hierarchical Probabilistic Logic Programming (HPLP) is a recent PLP language whose clauses are hierarchically organized, forming a deep neural network or arithmetic circuit. Inference in HPLP is done by circuit evaluation, and learning is therefore cheaper than in generic PLP languages. We present in this paper an Expectation Maximization algorithm, called Expectation Maximization Parameter learning for HIerarchical Probabilistic Logic programs (EMPHIL), for learning HPLP parameters. The algorithm converts an arithmetic circuit into a Bayesian network and performs the belief propagation algorithm over the corresponding factor graph.
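    For intuition about the EM loop at the heart of such algorithms, the sketch below shows EM on the classic two-coin Bernoulli mixture. This is a generic illustration, not EMPHIL itself: the E-step computes expected values of the latent variables (here, which coin produced each trial; in EMPHIL, via belief propagation over the factor graph), and the M-step re-estimates the parameters from those expectations.

```python
# Minimal EM sketch (two-coin Bernoulli mixture), for intuition only.
# Each trial is (heads, tosses) from one of two coins with unknown biases.

def em_two_coins(trials, theta=(0.6, 0.4), iters=50):
    ta, tb = theta
    for _ in range(iters):
        # E-step: posterior responsibility that each trial came from coin A.
        heads_a = tosses_a = heads_b = tosses_b = 0.0
        for heads, n in trials:
            la = ta ** heads * (1 - ta) ** (n - heads)
            lb = tb ** heads * (1 - tb) ** (n - heads)
            wa = la / (la + lb)
            heads_a += wa * heads
            tosses_a += wa * n
            heads_b += (1 - wa) * heads
            tosses_b += (1 - wa) * n
        # M-step: maximum-likelihood update of both biases.
        ta, tb = heads_a / tosses_a, heads_b / tosses_b
    return ta, tb

# Hypothetical data: three high-heads trials and two low-heads trials.
trials = [(9, 10), (8, 10), (2, 10), (1, 10), (9, 10)]
ta, tb = em_two_coins(trials)
print(round(ta, 2), round(tb, 2))
```

    The data separate cleanly, so EM converges to near-hard assignments: the high-heads trials pool into one coin estimate and the low-heads trials into the other.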

    Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics

    The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences, such as poor use of limited, valuable resources in medical diagnosis, financial decision-making, and other high-stakes domains. Therefore, the issue of ML explanation has experienced a surge in interest, from the research community through to application domains. While numerous explanation methods have been explored, there is a need for evaluations that quantify the quality of explanation methods, determine whether and to what extent the offered explainability achieves the defined objective, and compare available explanation methods so as to suggest the best explanation for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from a review of definitions of explainability; the identified properties are then used as objectives that evaluation metrics should achieve. The survey found that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness or fidelity of explainability. The survey also demonstrated that subjective measures, such as trust and confidence, have been embraced as the focal point for the human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic, and that it is not possible to define a single implementation of evaluation metrics that can be applied to all explanation methods.
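    One of the fidelity-style metrics this survey covers can be illustrated with a minimal sketch. This is a generic surrogate-agreement measure under hypothetical models and data, not a metric prescribed by the survey: fidelity here is the fraction of inputs on which an interpretable surrogate reproduces the black-box model's decisions.

```python
# Illustrative sketch of a fidelity metric for explanations: how often an
# interpretable surrogate model agrees with the black box it explains.
# The models and data below are hypothetical placeholders.

def fidelity(black_box, surrogate, inputs):
    """Fraction of inputs on which the surrogate mimics the black box."""
    agree = sum(black_box(x) == surrogate(x) for x in inputs)
    return agree / len(inputs)

# Toy "black box": approves when score > 0.5, unless the flag is set.
black_box = lambda x: x[0] > 0.5 and not x[1]
# Simple rule-based surrogate that ignores the flag entirely.
surrogate = lambda x: x[0] > 0.5

inputs = [(0.9, False), (0.8, True), (0.3, False), (0.7, False), (0.2, True)]
print(fidelity(black_box, surrogate, inputs))  # -> 0.8
```

    A fidelity below 1.0 localizes exactly where the explanation misrepresents the model: here, the surrogate disagrees precisely on flagged high-score inputs, revealing the variable the simple rule omits.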