1,974 research outputs found

    Using evidential reasoning to make qualified predictions of software quality

    Software quality is commonly characterised in a top-down manner. High-level notions such as quality are decomposed into hierarchies of sub-factors, ranging from abstract notions such as maintainability and reliability to lower-level notions such as test coverage or team size. Assessments of abstract factors are derived from relevant sources of information about their respective lower-level sub-factors, by surveying sources such as metrics data and inspection reports. This can be difficult because (1) evidence might not be available, (2) interpretations of the data with respect to certain quality factors may be subject to doubt and intuition, and (3) there is no straightforward means of blending hierarchies of heterogeneous data into a single coherent and quantitative prediction of quality. This paper shows how Evidential Reasoning (ER) - a mathematical technique for reasoning about uncertainty and evidence - can address this problem. It enables the quality assessment to proceed in a bottom-up manner: low-level assessments make any uncertainty explicit and are automatically propagated up to higher-level 'belief functions' that accurately summarise the developer's opinion and make explicit any doubt or ignorance.
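
    Evidential Reasoning builds on Dempster-Shafer belief functions, so the bottom-up propagation described above can be sketched with Dempster's rule of combination. The Python below is a minimal illustration, not the paper's full ER algorithm: it fuses two hypothetical low-level assessments of a quality factor, with ignorance kept explicit as mass on the whole set of grades. The grades, sources, and numbers are invented for the example.

```python
from itertools import product

# Frame of discernment: quality grades for a factor.
GRADES = ("poor", "average", "good")

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions over subsets of GRADES.

    Masses are dicts mapping frozensets of grades to mass in [0, 1];
    mass on the full set GRADES represents explicit ignorance.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass on contradictory evidence
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence")
    # Normalise away the conflicting mass (Dempster's rule).
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two low-level assessments of, say, maintainability: metrics data
# strongly suggest "good"; an inspection report is far less sure.
metrics = {frozenset({"good"}): 0.7, frozenset(GRADES): 0.3}
inspection = {frozenset({"good", "average"}): 0.5, frozenset(GRADES): 0.5}

belief = dempster_combine(metrics, inspection)
for subset, mass in sorted(belief.items(), key=lambda kv: -kv[1]):
    print(set(subset), round(mass, 3))
```

    Running the sketch yields belief 0.7 on 'good', 0.15 on 'good or average', and 0.15 of residual ignorance: doubt survives the aggregation instead of being hidden, which is the point of the belief-function summary.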

    Monitoring sources of event memories: A cross-linguistic investigation

    When monitoring the origins of their memories, people tend to mistakenly attribute memories generated from internal processes (e.g., imagination, visualization) to perception. Here, we ask whether speaking a language that obligatorily encodes the source of information might help prevent such errors. We compare speakers of English to speakers of Turkish, a language that obligatorily encodes information source (direct/perceptual vs. indirect/hearsay or inference) for past events. In our experiments, participants reported having seen events that they had only inferred from post-event visual evidence. In general, error rates were higher when visual evidence that gave rise to inferences was relatively close to direct visual evidence. Furthermore, errors persisted even when participants were asked to report the specific sources of their memories. Crucially, these error patterns were equivalent across language groups, suggesting that speaking a language that obligatorily encodes source of information does not increase sensitivity to the distinction between perception and inference in event memory.

    Explanation in Science

    Scientific explanation is an important goal of scientific practice. Philosophers have proposed a striking diversity of seemingly incompatible accounts of explanation, from deductive-nomological to statistical relevance, unification, pragmatic, causal-mechanical, mechanistic, causal intervention, asymptotic, and model-based accounts. In this dissertation I apply two novel methods to reexamine our evidence about scientific explanation in practice and thereby address the fragmentation of philosophical accounts. I start by collecting a data set of 781 articles from one year of the journal Science. Using automated text mining techniques I measure the frequency and distribution of several groups of philosophically interesting words, such as 'explain', 'cause', 'evidence', 'theory', 'law', 'mechanism', and 'model'. I show that 'explain' words are much more common in scientific writing than in other genres, occurring in roughly half of all articles, and that their use is very often qualified or negated. These results about the use of words complement traditional conceptual analysis. Next I use random samples from the data set to develop a large number of small case studies across a wide range of scientific disciplines. I use a sample of 'explain' sentences to develop and defend a new general philosophical account of scientific explanation, and then test my account against a larger set of randomly sampled sentences and abstracts. Five coarse categories can classify the explanans and explananda of my cases: data, entities, kinds, models, and theories. The pair of the categories of the explanans and explanandum indicates the form of an explanation. The explain-relation supports counterfactual reasoning about the dependence of qualities of the explanandum on qualities of the explanans, but for each form there is a different core relation between explanans and explanandum that supports the explain-relation. Causation, modelling, and argument are the core relations for different forms of scientific explanation between different categories of explanans and explananda. This flexibility allows me to resolve some of the fragmentation in the philosophical literature. I provide empirical evidence to show that my general philosophical account successfully describes a wide range of scientific practice across a large number of scientific disciplines.
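
    As a rough illustration of the kind of corpus measurement described (the dissertation's actual word lists and pipeline are not reproduced here), the sketch below counts sentences containing a few philosophically interesting word families and flags how often those sentences are hedged or negated. The regular expressions and hedge list are assumptions made for the example.

```python
import re
from collections import Counter

# Hypothetical word families; stems match inflected forms
# ("explains", "explanation", "causal", ...).
FAMILIES = {
    "explain": r"\bexplain\w*|\bexplanat\w*",
    "cause": r"\bcaus\w*",
    "evidence": r"\bevidence\b|\bevidential\b",
    "mechanism": r"\bmechanis\w*",
    "model": r"\bmodel\w*",
}
# Crude markers of qualified or negated use.
HEDGES = re.compile(r"\b(may|might|could|partly|not|cannot|helps? to)\b", re.I)

def profile(articles):
    """Count, per word family, how many sentences use it at all and how
    often an occurrence sits in a hedged or negated sentence."""
    counts, hedged = Counter(), Counter()
    for text in articles:
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            for name, pat in FAMILIES.items():
                if re.search(pat, sentence, re.I):
                    counts[name] += 1
                    if HEDGES.search(sentence):
                        hedged[name] += 1
    return counts, hedged

articles = [
    "These data may partly explain the observed decline.",
    "The model explains the mechanism of transport.",
]
print(profile(articles))
```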

    Toward an Analysis of the Abductive Moral Argument for God’s Existence: Assessing the Evidential Quality of Moral Phenomena and the Evidential Virtuosity of Christian Theological Models

    The moral argument for God’s existence is perhaps the oldest and most salient of the arguments from natural theology. In contemporary literature, there has been a focus on the abductive version of the moral argument. Although the mode of reasoning, abduction, has been articulated, there has not been a robust articulation of the individual components of the argument. Such an articulation would include the data quality of moral phenomena, the theoretical virtuosity of theological models that explain the moral phenomena, and how both contribute to the likelihood of moral arguments. The goal of this paper is to provide such an articulation. Our method is to catalog the phenomena, sort them by their location on the emergent hierarchy of sciences, then describe how the ecumenical Christian theological model exemplifies evidential virtues in explaining them. Our results show that moral arguments are of neither the highest nor the lowest quality, yet they can be assented to on a principled level of investigation, especially given existential considerations.

    (Im)probable stories: combining Bayesian and explanation-based accounts of rational criminal proof

    A key question in criminal trials is, ‘may we consider the facts of the case proven?’ Partially in response to miscarriages of justice, philosophers, psychologists and mathematicians have considered how we can answer this question rationally. The two most popular answers are the Bayesian and the explanation-based accounts. Bayesian models cast criminal evidence in terms of probabilities. Explanation-based approaches view the criminal justice process as a comparison between causal explanations of the evidence. Such explanations usually take the form of scenarios – stories about how a crime was committed. The two approaches are often seen as rivals. However, this thesis argues that both perspectives are necessary for a good theory of rational criminal proof. By comparing scenarios, we can, among other things, determine what the key evidence is, how the items of evidence interrelate, and what further evidence to collect. Bayesian probability theory helps us pinpoint when we can and cannot conclude that a scenario is likely to be true. This thesis considers several questions regarding criminal evidence from this combined perspective, such as: can a defendant sometimes be convicted on the basis of an implausible guilt scenario? When can we assume that we are not overlooking scenarios or evidence? Should judges always address implausible innocence scenarios of the accused? When is it necessary to look for new evidence? How do we judge whether an eyewitness is reliable? By combining the two theories, we arrive at new insights on how to rationally reason about these and other questions surrounding criminal evidence.
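
    The Bayesian half of the combined account can be made concrete with odds-form updating. The sketch below is a minimal illustration with invented numbers, not a model from the thesis: each item of evidence contributes a likelihood ratio comparing how expected it is under the guilt scenario versus the innocence scenario.

```python
# Odds-form Bayes: posterior odds = prior odds x product of likelihood
# ratios. All figures below are invented for illustration.

def posterior_odds(prior_odds, likelihood_ratios):
    """Update the odds of the guilt scenario item by item."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# P(evidence | guilt scenario) / P(evidence | innocence scenario):
evidence_lrs = [
    20.0,  # fingerprint on the weapon
    5.0,   # eyewitness places the suspect at the scene
    0.5,   # alibi testimony (favours innocence)
]

odds = posterior_odds(prior_odds=0.01, likelihood_ratios=evidence_lrs)
prob = odds / (1 + odds)
print(f"posterior odds {odds:.2f}, P(guilt scenario) = {prob:.2f}")
```

    With these made-up figures the guilt scenario ends at roughly a one-in-three probability, illustrating the thesis's broader point: individually incriminating items can still leave a scenario well short of proof.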

    Safety Culture Monitoring: A Management Approach for Assessing Nuclear Safety Culture Health Performance Utilizing Multiple-Criteria Decision Analysis

    Nuclear power plants are among the most technologically complex of all energy facilities. This complexity reflects the precision needed in design, maintenance and operations to harness the energy of the atom safely, reliably and economically. Nuclear energy thus requires consistent, high levels of organizational performance by the highly skilled professionals who operate and maintain nuclear power plants (Nuclear Energy Institute [NEI], 2014, p. 1). A key element for achieving consistent, high levels of performance in a nuclear organization is its safety culture.

    Nuclear safety culture is for an organization what character and personality are for an individual: a feature that is made visible primarily through behaviors and espoused values. Nuclear safety culture is undergoing constant change. It represents the collective behaviors of the organization, which change as the organization and its members change and apply themselves to their daily activities. As problems arise, the organization learns from them. Successes and failures become ingrained in the organization’s nuclear safety culture and form the basis on which the organization conducts business. These behaviors are taught to new members of the organization as the correct way to perceive, think, act and feel (NEI, 2014, p. 1).

    Nuclear Safety Culture (NSC) is defined as the core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize safety over competing goals to ensure protection of people and the environment (Institute of Nuclear Power Operations [INPO], 2012a, p. iv). Thus, nuclear safety culture depends on every employee, from the board of directors to the control room operator, to the field technician in the switchyard, to the security officers and to contractors on site. That is, nuclear safety culture is affected by everything we say and everything we do. Nuclear safety is a collective responsibility, meaning no one in the organization is exempt from the obligation to ensure nuclear safety first (NEI, 2014, p. 1).

    Furthermore, NSC is a leadership responsibility. Leaders reinforce safety culture at every opportunity so that the health of safety culture is not taken for granted. Leaders frequently measure the health of safety culture with a focus on trends rather than absolute values. Leaders communicate what constitutes a healthy safety culture and ensure everyone understands his or her role in its promotion. Leaders recognize that safety culture is not all or nothing but is, rather, constantly moving along a continuum. As a result, there is comfort in discussing safety culture within the organization as well as with outside groups, such as regulatory agencies (INPO, 2012a). That is, NSC, like everything else, rises and falls based on leadership (Maxwell, 1998).

    In order to facilitate a healthy NSC, which is the sine qua non of safe nuclear plant operation, the leadership team needs to understand its present health in order to address NSC issues. It has been said, “To manage risk, one has first to comprehend it” (Gheorghe, 2005, p. xvii). Equally, in order to manage the nuclear safety culture of an organization we must first comprehend it. The goal of this research is to provide an ongoing holistic, objective, transparent and safety-focused process to identify early indications of potential problems linked to culture. The process uses a cross-section of available data (e.g., the corrective action program, performance trends, NRC inspections, industry evaluations, nuclear safety culture assessments, self-assessments, audits, operating experience, workforce issues, the employee concerns program and other process inputs). These data are then analyzed using a Multiple-Criteria Decision Analysis (MCDA) methodology that incorporates the belief degrees of the management team, leading to insights that may translate directly into corrective actions.
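
    A minimal sketch of the kind of belief-degree aggregation such a process relies on (a simplified stand-in for the dissertation's full MCDA model): each data source grades safety culture health on a common scale with belief degrees, and a weighted sum yields a single distribution whose trend can be tracked over time. The source names, weights, and numbers are invented for illustration.

```python
# Common grading scale for safety culture health.
GRADES = ("declining", "stable", "healthy")

sources = {
    # weight, belief degrees over GRADES (a row may sum to < 1;
    # the remainder is explicit "don't know").
    "corrective_action_program": (0.30, (0.2, 0.5, 0.3)),
    "nrc_inspections":           (0.25, (0.1, 0.4, 0.5)),
    "self_assessments":          (0.25, (0.3, 0.4, 0.2)),
    "employee_concerns":         (0.20, (0.4, 0.4, 0.1)),
}

def aggregate(sources):
    """Weighted-sum aggregation of belief degrees across data sources."""
    total_w = sum(w for w, _ in sources.values())
    agg = [0.0] * len(GRADES)
    for w, beliefs in sources.values():
        for i, b in enumerate(beliefs):
            agg[i] += (w / total_w) * b
    return dict(zip(GRADES, agg))

health = aggregate(sources)
print({g: round(v, 3) for g, v in health.items()})
# Read as a trend, not an absolute value: compare this distribution
# quarter over quarter to spot early signs of decline.
```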

    Thirty years of Artificial Intelligence and Law: the second decade

    The first issue of the Artificial Intelligence and Law journal was published in 1992. This paper provides commentaries on nine significant papers drawn from the Journal’s second decade. Four of the papers relate to reasoning with legal cases: introducing contextual considerations, predicting outcomes on the basis of natural language descriptions of the cases, comparing different ways of representing cases, and formalising precedential reasoning. One introduces a method of analysing arguments that was to become very widely used in AI and Law, namely argumentation schemes. Two relate to ontologies for the representation of legal concepts, and two take advantage of the increasing availability of legal corpora in this decade, to automate document summarisation and to mine arguments.

    Decision Making Analysis for an Integrated Risk Management Framework of Maritime Container Port Infrastructure and Transportation Systems

    This research proposes a risk management framework and develops generic risk-based decision-making and risk-assessment models for dealing with potential Hazard Events (HEs) and risks associated with uncertainty for Operational Safety Performance (OSP) in container terminals and maritime ports. The study is formulated in three main sections.

    Section 1: Risk Assessment. In the first phase, all HEs are identified through a literature review and a human knowledge base and expertise. In the second phase, a Fuzzy Rule Base (FRB) is developed using the proportion method to assess the most significant HEs identified. The FRB leads to the development of a generic risk-based model incorporating the FRB and a Bayesian Network (BN) into a Fuzzy Rule Base Bayesian Network (FRBN) method, using Hugin software, to evaluate each HE individually and prioritise their specific risk estimations locally. The third phase demonstrates the FRBN method with a case study. The fourth phase concludes the section with a generic risk-based model incorporating FRBN and Evidential Reasoning to form an FRBER method, using the Intelligent Decision System (IDS) software, to evaluate all HEs aggregated collectively for their Risk Influence (RI) globally, again demonstrated with a case study. In addition, a new sensitivity analysis method is developed to rank the HEs based on their True Risk Influence (TRI), considering both their specific risk estimations locally and their RI globally.

    Section 2: Risk Model Simulations. The first phase explains the construction of the simulation model Bayesian Network Artificial Neural Networks (BNANNs), which is formed by applying Artificial Neural Networks (ANNs). In the second phase, the simulation model Evidential Reasoning Artificial Neural Networks (ERANNs) is constructed. The final phase in this section integrates the BNANNs and ERANNs so that they can predict the risk magnitude for HEs and provide a panoramic view of the risk inference from both perspectives, local and global.

    Section 3: Risk Control Options. This is the last link, finalising the risk-management-based methodology cycle in this study. The Analytic Hierarchy Process (AHP) method is used to determine the relative weights of all criteria identified in the first phase. The last phase develops a risk control options method by incorporating Fuzzy Logic (FL) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to form an FTOPSIS method.

    The novelty of this research is that it provides an effective risk management framework for OSP in container terminals and maritime ports. In addition, it provides an efficient safety prediction tool that eases the processes in the methods and techniques used within the risk management framework by applying the ANN concept to simulate the risk models.
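
    As one concrete piece of the final step, the sketch below implements plain (crisp) TOPSIS for ranking risk control options; the study's FTOPSIS method layers fuzzy scores on top of this idea. The options, criteria, weights, and scores here are invented for illustration.

```python
import numpy as np

# rows = control options, columns = criteria (cost, effectiveness, time).
scores = np.array([
    [7.0, 8.0, 6.0],   # option A: extra crane inspections
    [4.0, 9.0, 3.0],   # option B: automated gate screening
    [8.0, 5.0, 7.0],   # option C: revised stacking procedure
])
weights = np.array([0.3, 0.5, 0.2])       # AHP-style criterion weights
benefit = np.array([False, True, True])   # cost criterion: lower is better

# 1. Vector-normalise each criterion column, then apply weights.
v = weights * scores / np.linalg.norm(scores, axis=0)

# 2. Ideal and anti-ideal points, respecting criterion direction.
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))

# 3. Closeness coefficient: distance to anti-ideal over total distance;
#    higher means closer to the ideal solution.
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

for name, c in zip("ABC", closeness):
    print(f"option {name}: closeness {c:.3f}")
```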