12,208 research outputs found

    Hypotheses that attribute false beliefs: A two‐part epistemology

    Get PDF
    Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content of that state, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first question, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-averse. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, two key problems concerning what it is for a belief state to have one semantic content rather than another.
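    The role AIC plays in guarding against over-fitting can be made concrete with a small numerical sketch. The example below is not from the paper; the data, the polynomial model family, and the Gaussian-error form of AIC (n·ln(RSS/n) + 2k, up to an additive constant) are illustrative assumptions. It fits polynomials of increasing degree to noisy linear data and shows how the 2k penalty steers model selection away from over-fitted models.

```python
# Minimal sketch: AIC-based model selection over polynomial degrees.
# For Gaussian errors, AIC = n*ln(RSS/n) + 2k up to an additive constant,
# where k counts the fitted coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)  # true model is linear

def aic_for_degree(deg):
    coeffs = np.polyfit(x, y, deg)        # fit a degree-`deg` polynomial
    resid = y - np.polyval(coeffs, x)
    rss = float(np.sum(resid ** 2))
    k = deg + 1                           # number of fitted coefficients
    n = x.size
    return n * np.log(rss / n) + 2 * k

scores = {deg: aic_for_degree(deg) for deg in range(1, 8)}
best = min(scores, key=scores.get)
print(scores)
print("degree preferred by AIC:", best)   # a low degree should win; higher degrees pay the 2k penalty
```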

    Ways of Applying Artificial Intelligence in Software Engineering

    Full text link
    As Artificial Intelligence (AI) techniques have become more powerful and easier to use, they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs, it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI, but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate those risks. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy, which categorises applications according to their point of AI application, the type of AI technology used, and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. The results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and in creating strategies for its use.
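    As a rough illustration only (not the authors' tooling), a classification along the taxonomy's three dimensions could be recorded as a simple data structure. The concrete enum values below are placeholders; the paper defines its own categories and automation levels.

```python
# Illustrative sketch of recording an AI-SEAL-style classification:
# each application is tagged along the taxonomy's three dimensions.
from dataclasses import dataclass
from enum import Enum

class ApplicationPoint(Enum):       # where in the software effort AI is applied (placeholder values)
    PROCESS = "process"
    PRODUCT = "product"
    RUNTIME = "runtime"

class AITechnology(Enum):           # type of AI technology used (placeholder values)
    RULE_BASED = "rule-based"
    MACHINE_LEARNING = "machine learning"
    DEEP_LEARNING = "deep learning"

@dataclass
class AISEALClassification:
    paper: str
    point: ApplicationPoint
    technology: AITechnology
    automation_level: int           # illustrative integer scale; the paper defines its own levels

example = AISEALClassification(
    paper="Example RAISE workshop paper",
    point=ApplicationPoint.PRODUCT,
    technology=AITechnology.MACHINE_LEARNING,
    automation_level=4,
)
print(example)
```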

    Vigilance and control

    Get PDF
    We sometimes fail unwittingly to do things that we ought to do. And we are, from time to time, culpable for these unwitting omissions. We provide an outline of a theory of responsibility for unwitting omissions. We emphasize two distinctive ideas: (i) many unwitting omissions can be understood as failures of appropriate vigilance; and (ii) the sort of self-control implicated in these failures of appropriate vigilance is valuable. We argue that the norms that govern vigilance and the value of self-control explain culpability for unwitting omissions.

    Word-decoding as a function of temporal processing in the visual system.

    Get PDF
    This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits, with a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that presented Landolt C targets randomly in four cardinal orientations, at three radial distances from a focus point, at eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.
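    The core analysis described here, correlating flicker fusion thresholds with the three decoding scores, can be sketched as follows. The data are synthetic stand-ins, not the study's measurements, and the effect sizes are assumed purely for illustration.

```python
# Sketch with synthetic data: Pearson correlations between critical flicker
# fusion (CFF) thresholds and three decoding scores.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                           # the study recruited 40 participants
cff = rng.normal(35.0, 4.0, n)                   # CFF thresholds in Hz (illustrative values)
noise = lambda s: rng.normal(0.0, s, n)
scores = {
    "reading-word": 0.8 * cff + noise(2.0),
    "nonsense-word": 0.7 * cff + noise(2.5),
    "non-linguistic": 0.6 * cff + noise(3.0),
}

for name, score in scores.items():
    r = np.corrcoef(cff, score)[0, 1]            # Pearson correlation coefficient
    print(f"CFF vs {name} decoding: r = {r:.2f}")
```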

    From patterned response dependency to structured covariate dependency: categorical-pattern-matching

    Get PDF
    Data generated from a system of interest typically consists of measurements from an ensemble of subjects across multiple response and covariate features, and is naturally represented by one response matrix against one covariate matrix. Each of these two matrices will typically contain heterogeneous data types: continuous, discrete, and categorical. Here the matrix is used as a practical platform that ideally keeps hidden dependency among subjects and between features intact on its lattice. Response and covariate dependency is individually computed and expressed through multiscale blocks via a newly developed computing paradigm named Data Mechanics. We propose a categorical pattern-matching approach to establish causal linkages in the form of information flows from patterned response dependency to structured covariate dependency. The strength of an information flow is evaluated by applying combinatorial information theory. This unified platform for system knowledge discovery is illustrated through five data sets. In each illustrative case, an information flow is demonstrated as an organization of discovered knowledge loci via emergent visible and readable heterogeneity. This unified approach fundamentally resolves many long-standing issues in data analysis, including statistical modeling, multiple responses, renormalization, and feature selection, without imposing man-made structures or distributional assumptions. The results reported here reinforce the idea that linking patterns of response dependency to structures of covariate dependency is the true philosophical foundation underlying data-driven computing and learning in the sciences.
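    As a hedged illustration of scoring such an information flow (not the authors' Data Mechanics implementation), one simple option is the mutual information between block memberships found on the response side and those found on the covariate side. The label vectors below are toy data.

```python
# Sketch: mutual information between two categorical block-label patterns,
# used here as a stand-in for scoring a response-to-covariate information flow.
import numpy as np

def mutual_information(a, b):
    """Mutual information (in nats) between two categorical label vectors."""
    a, b = np.asarray(a), np.asarray(b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= joint.sum()                       # joint distribution p(i, j)
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / np.outer(pa, pb)[mask])))

# Toy block memberships for 12 subjects on the response and covariate sides.
response_blocks  = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 1, 0]
covariate_blocks = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0, 1, 0]
print(f"MI between block patterns: {mutual_information(response_blocks, covariate_blocks):.3f} nats")
```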

    Digital Engineering Effectiveness

    Get PDF
    Excerpt from the Proceedings of the Nineteenth Annual Acquisition Research Symposium.
    The 2018 release of the DoD’s Digital Engineering (DE) strategy and the success of applying DE methods in the mechanical and electrical engineering domains motivate application of DE methods in other product development workflows, such as systems and/or software engineering. The expected benefits of this are improved communication and traceability with reduced rework and risk. Organizations have demonstrated advantages of DE methods many times over by using model-based design and analysis methods, such as Finite Element Analysis (FEA) or SPICE (Simulation Program with Integrated Circuit Emphasis), to conduct detailed evaluations earlier in the process (i.e., shifting left). However, other domains such as embedded computing resources for cyber-physical systems (CPS) have not yet effectively demonstrated how to incorporate relevant DE methods into their development workflows. Although there is broad support for SysML and there has been significant advancement in specific tools, e.g., MathWorks®, ANSYS®, and Dassault tool offerings, and standards like Modelica and AADL, the DE benefits to CPS engineering have not been broadly realized. In this paper, we will explore why CPS developers have been slow to embrace DE, how DE methods should be tailored to achieve their stakeholders’ goals, and how to measure the effectiveness of DE-enabled workflows.
    Approved for public release; distribution is unlimited.

    Exploring Blockchain Adoption Supply Chains: Opportunities and Challenges

    Get PDF

    Governance and Competence

    Get PDF
    Transaction cost economics faces serious problems concerning the way it deals, or fails to deal, with bounded rationality, the efficiency of outcomes, trust, innovation, learning, and the nature of knowledge. The competence view yields an alternative perspective on the purpose and boundaries of the firm. However, the competence view cannot ignore issues of governance, and in spite of serious criticism, transaction cost economics yields useful concepts to deal with it. This article aims to contribute to the development of theory and empirical research that connects governance and competence perspectives.
    Keywords: governance; learning; organization; inter-organizational relations; inter-firm alliances
    • 
