
    Computational Approaches for Estimating Life Cycle Inventory Data

    Data gaps in life cycle inventory (LCI) are stumbling blocks for investigating the life cycle performance and impacts of emerging technologies. It can be tedious, expensive, and time-consuming for LCI practitioners to collect LCI data or to wait for experimental data to become available. I propose a computational approach to estimate missing LCI data using link prediction techniques from network science. LCI data in ecoinvent 3.1 are used to test the method. The proposed approach is based on the similarities between different processes or environmental interventions in the LCI database. By comparing two processes' material inputs and emission outputs, I measure the similarity of these processes. I hypothesize that similar processes tend to have similar material inputs and emission outputs, which are the life cycle inventory data I want to estimate. In particular, I measure similarity using four metrics: average difference, Pearson correlation coefficient, Euclidean distance, and SimRank, each with or without data normalization. I test these four metrics and the normalization method for their performance in estimating missing LCI data. The results show that processes in the same industrial classification have higher similarities, which validates the approach of measuring the similarity between unit processes. I remove a small set of data (from one data point to 50) for each process and then use the rest of the LCI data to train the model for estimating the removed data. Approximately 80% of the removed data can be successfully estimated with less than 10% error. This study is the first attempt to find an effective computational method for estimating missing LCI data. It is anticipated that this approach will significantly transform LCI compilation and LCA studies in the future.
    Master of Science, Natural Resources and Environment, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/134693/3/Cai_Jiarui_Document.pd
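    The thesis itself is not excerpted here, but the similarity-based estimation it describes can be sketched in a few lines. In the sketch below, the exchange-matrix layout, the function names, and the k-nearest-neighbour weighting are illustrative assumptions rather than details taken from the work:

```python
import numpy as np

def pearson_similarity(p, q):
    """Pearson correlation between two processes' exchange vectors
    (material inputs and emission outputs); higher = more similar."""
    return np.corrcoef(p, q)[0, 1]

def euclidean_similarity(p, q):
    """Convert Euclidean distance into a similarity in (0, 1]."""
    return 1.0 / (1.0 + np.linalg.norm(p - q))

def estimate_missing(X, target, flow, sim=pearson_similarity, k=5):
    """Estimate the missing exchange X[target, flow] from the k processes
    most similar to `target` on all other flows.

    X      : (n_processes, n_flows) exchange matrix from an LCI database
    target : row index of the process with the missing value
    flow   : column index of the missing exchange
    """
    known = [j for j in range(X.shape[1]) if j != flow]
    sims = []
    for i in range(X.shape[0]):
        if i == target:
            continue
        s = sim(X[target, known], X[i, known])
        sims.append((s, X[i, flow]))
    top = sorted(sims, reverse=True)[:k]          # k most similar processes
    w = np.array([max(s, 0.0) for s, _ in top])   # ignore negative correlations
    v = np.array([val for _, val in top])
    return float(np.average(v, weights=w)) if w.sum() > 0 else float(v.mean())
```

    The leave-out evaluation the abstract describes would then remove known entries one at a time, call estimate_missing on each, and compare the estimate against the held-out value.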

    Understanding from Machine Learning Models

    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.

    Evidential Reasoning & Analytical Techniques In Criminal Pre-Trial Fact Investigation

    This thesis is the work of the author and is concerned with the development of a neo-Wigmorean approach to evidential reasoning in police investigation. The thesis evolved out of dissatisfaction with cardinal aspects of traditional approaches to police investigation, practice, and training. Five main weaknesses were identified: firstly, a lack of a theoretical foundation for police training and practice in the investigation of crime and evidence management; secondly, evidence was treated on the basis of its source rather than its inherent capacity for generating questions; thirdly, the role of inductive elimination was underused and misunderstood; fourthly, concentration on single, isolated cases rather than on the investigation of multiple cases; and fifthly, the credentials of evidence were often assumed rather than considered, assessed, and reasoned within the context of argumentation. Inspiration from three sources was used to develop the work: firstly, John Henry Wigmore provided new insights into the nature of evidential reasoning and formal methods for the construction of arguments; secondly, developments in biochemistry provided new insights into natural methods of storing and using information; thirdly, the science of complexity provided new insights into the complex nature of collections of data that could be developed into complex systems of information and evidence. This thesis is an application of a general methodology supported by new diagnostic and analytical techniques. The methodology was embodied in a software system called the Forensic Led Intelligence System (FLINTS). My standpoint is that of a forensic investigator with an interest in how evidential reasoning can improve the operation we call investigation. New areas of evidential reasoning are in progress, and these are discussed, including a new application in software designed by the author: MAVERICK. There are three main themes: firstly, how a broadened conception of evidential reasoning supported by new diagnostic and analytical techniques can improve the investigation and discovery process; secondly, an explanation of how a greater understanding of the roles and effects of different styles of reasoning can assist the user; and thirdly, a range of concepts and tools presented for the combination, comparison, construction, and presentation of evidence in imaginative ways. Taken together, these are intended to provide examples of a new approach to the science of evidential reasoning. Originality lies in four key areas: 1. extending and developing Wigmorean techniques to police investigation and evidence management; 2. developing existing approaches to single-case analysis and introducing an intellectual model for multi-case analysis; 3. introducing a new model for police training in investigative evidential reasoning; 4. introducing a new software system, FLINTS, to manage evidence in multi-case approaches using forensic scientific evidence.

    How Evidential is the Epistemic Conditional?

    This paper aims to reassess how, and whether, the epistemic conditional in French relates to evidentiality, focusing on its use in reportative and non-reportative declarative sentences as well as in conjectural polar questions. It is proposed that the epistemic conditional developed from the modal hypothetical use, which accounts for its ability to establish an epistemic frame. The epistemic conditional is defined as a construction that conveys an assumption whatever the source of information. It is claimed that the epistemic conditional does not primarily encode information source. Although the epistemic nature of the epistemic conditional makes it prone to draw on reportative evidence, it is not primarily an evidential marker. Nonetheless, the epistemic conditional is claimed to have indirect evidential and mirative extensions. Rather than the type of information source, the conditional encodes the speaker's lack of control over information, which affects her level of commitment. Such an approach makes it possible to handle the different uses of the epistemic conditional in declarative sentences as well as in conjectural questions in a unified way.

    Synergies between machine learning and reasoning - An introduction by the Kay R. Amel group

    This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have developed quite separately over the last four decades. First, some common concerns are identified and discussed, such as the types of representation used, the roles of knowledge and data, the lack or the excess of information, and the need for explanations and causal understanding. Then the survey is organised into seven sections covering most of the territory where KRR and ML meet. We start with a section dealing with prototypical approaches from the literature on learning and reasoning: Inductive Logic Programming, Statistical Relational Learning, and Neurosymbolic AI, where ideas from rule-based reasoning are combined with ML. We then focus on the use of various forms of background knowledge in learning, ranging from additional regularisation terms in loss functions, to the problem of aligning symbolic and vector-space representations, to the use of knowledge graphs for learning. The next section describes how KRR notions may benefit learning tasks: for instance, constraints can be used, as in declarative data mining, to influence the learned patterns; semantic features can be exploited in low-shot learning to compensate for the lack of data; or analogies can be taken advantage of for learning purposes. Conversely, another section investigates how ML methods may serve KRR goals. For instance, one may learn special kinds of rules such as default rules, fuzzy rules, or threshold rules, or special types of information such as constraints or preferences. The section also covers formal concept analysis and rough-set-based methods. Yet another section reviews various interactions between Automated Reasoning and ML, such as the use of ML methods in SAT solving to make reasoning faster. A further section deals with work related to model accountability, including explainability and interpretability, fairness, and robustness. Finally, a section covers work on handling imperfect or incomplete data, including the problem of learning from uncertain or coarse data, the use of belief functions for regression, a revision-based view of the EM algorithm, the use of possibility theory in statistics, and the learning of imprecise models. This paper thus aims at a better mutual understanding of research in KRR and ML, and of how the two can cooperate. The paper is completed by an abundant bibliography.
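    As one concrete illustration of the "background knowledge as regularisation" meeting point the survey mentions, a loss function can be augmented with a penalty for violating a known domain rule. The sketch below is hypothetical: the monotonicity rule, function name, and weighting are invented for illustration, not taken from the paper:

```python
import numpy as np

def knowledge_regularised_loss(y_pred, y_true, x, lam=0.1):
    """Squared error plus a penalty for violating an assumed piece of
    background knowledge: here, the (hypothetical) domain rule that the
    prediction should not decrease as feature 0 increases."""
    mse = np.mean((y_pred - y_true) ** 2)
    order = np.argsort(x[:, 0])                   # sort samples by feature 0
    diffs = np.diff(y_pred[order])                # successive prediction gaps
    violation = np.sum(np.clip(-diffs, 0, None))  # penalise decreases only
    return mse + lam * violation
```

    Minimising such a loss pushes the learned model toward the symbolic constraint without hard-coding it, which is the general idea behind knowledge-informed regularisation terms.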

    The Flawed Probabilistic Foundation of Law and Economics


    Data mining for decision support with uncertainty on the airplane

    Get PDF
    This study describes the formalization of the medical decision-making process under uncertainty, underpinned by conditional preferences, the theory of evidence, and the exploitation of high-utility patterns in data mining. To assist a decision maker, the medical process (clinical pathway) was implemented using a Conditional Preferences Base (CPB). Then, for knowledge engineering, a Dempster-Shafer ontology integrating uncertainty underpinned by evidence theory was built. Beliefs from different sources are established with the use of data mining. The result is recorded in an In-flight Electronic Health Record (IEHR). The IEHR contains evidential items corresponding to the variables determining the management of medical incidents. Finally, to manage tolerance to uncertainty, a belief fusion algorithm was developed. There is an inherent risk in the practice of medicine that can affect the conditions of medical activities (for diagnostic or therapeutic purposes). The management of uncertainty is also an integral part of decision-making processes in the medical field. Different models of medical decisions under uncertainty have been proposed. Much of the current literature on these models pays particular attention to health economics, inspired by how uncertainty is managed in economic decisions. However, these models fail to consider the purely medical aspect of the decision, which remains poorly characterized. Moreover, the models achieving interesting decision outcomes are those considering the patient's health variables together with other variables such as the costs associated with care services. These models are aimed at defining health policy (health economics) without deep consideration of the uncertainty surrounding medical practices and the associated technologies. Our approach is to integrate the management of uncertainty into clinical reasoning models such as the clinical pathway and to exploit the relationships between the determinants of incident management using data mining tools. To this end, how healthcare professionals see and conceive uncertainty was investigated. This allowed us to identify the characteristics of decision makers under uncertainty and to understand the different forms and representations of uncertainty. Furthermore, we defined what an in-flight medical incident is and why its management is a decision-under-uncertainty problem. This is the first phase of common data mining, which provides an evidential transaction basis. Subsequently, evidential and ontological reasoning to manage this uncertainty was established in order to support decision-making processes on the airplane.
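    The abstract does not spell out the belief fusion algorithm; the classical operation underlying Dempster-Shafer fusion is Dempster's rule of combination, sketched below under the assumption that the paper builds on it. The medical hypotheses in the usage example are invented for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment. Masses are dicts mapping frozensets of hypotheses
    to belief mass; mass assigned to conflicting (empty-intersection) pairs
    is discarded and the remainder renormalised."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Sources are totally conflicting")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources assessing a hypothetical in-flight incident:
m1 = {frozenset({"cardiac"}): 0.6, frozenset({"cardiac", "respiratory"}): 0.4}
m2 = {frozenset({"respiratory"}): 0.3, frozenset({"cardiac", "respiratory"}): 0.7}
print(dempster_combine(m1, m2))
```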