
    Flight crew aiding for recovery from subsystem failures

    Some of the conceptual issues associated with pilot aiding systems are discussed, and an implementation of one component of such an aiding system is described. It is essential that the format and content of the information the aiding system presents to the crew be compatible with the crew's mental models of the task. It is proposed that, in order to cooperate effectively, both the aiding system and the flight crew should have consistent information processing models, especially at the point of interface. A general information processing strategy, developed by Rasmussen, was selected to serve as the bridge between the human's and the aiding system's information processes. The development and implementation of a model-based situation assessment and response generation system for commercial transport aircraft are described. The current implementation is a prototype that concentrates on engine and control surface failure situations and consequent flight emergencies. The aiding system, termed the Recovery Recommendation System (RECORS), uses a causal model of the relevant subset of the flight domain to simulate the effects of these failures and to generate appropriate responses, given the current aircraft state and the constraints of the current flight phase. Since detailed information about the aircraft state may not always be available, the model represents the domain at varying levels of abstraction and uses the less detailed abstraction levels to make inferences when exact information is not available. The structure of this model is described in detail.
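    The idea of reasoning over a causal model at several abstraction levels can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the RECORS implementation: a situation assessor uses detailed engine parameters when they are available and falls back to a coarser abstraction when they are not. All names, fields, and thresholds are invented for illustration.

```python
# Hypothetical sketch (not RECORS): a two-level abstraction hierarchy for
# situation assessment. Detailed reasoning is preferred; coarse reasoning
# is used when exact sensor values are missing.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AircraftState:
    # Detailed parameters may be unavailable (None) in flight.
    n1_rpm_pct: Optional[float] = None        # engine fan speed, % of rated (invented field)
    egt_celsius: Optional[float] = None       # exhaust gas temperature (invented field)
    thrust_asymmetry: Optional[bool] = None   # coarse cue derived from yaw trend (invented field)


def assess(state: AircraftState) -> str:
    """Return a situation label, preferring the detailed abstraction level."""
    # Level 1: detailed causal reasoning, only if exact values are known.
    if state.n1_rpm_pct is not None and state.egt_celsius is not None:
        if state.n1_rpm_pct < 20 and state.egt_celsius < 200:
            return "engine flameout suspected"
        if state.egt_celsius > 950:
            return "engine overtemperature"
        return "engines nominal"
    # Level 2: coarse abstraction when detailed data are missing.
    if state.thrust_asymmetry:
        return "probable engine failure (side undetermined)"
    return "insufficient data; assume nominal and keep monitoring"


if __name__ == "__main__":
    print(assess(AircraftState(thrust_asymmetry=True)))
    print(assess(AircraftState(n1_rpm_pct=15.0, egt_celsius=150.0)))
```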

    Rethinking Explanation

    This volume is a product of the international research project Theory of Explanation, which was funded by the Joint Committee for Nordic Research Councils for the Humanities and the Social Sciences (NOS-HS). The project started in 2001 and operated for three years, organizing a number of workshops on scientific explanation in Norway, Iceland, Sweden and Finland. The workshops included presentations by people involved in the project and by invited guests. Both groups are represented in this volume, which brings together some of the papers presented at these meetings. The central theme of the research project was scientific explanation, but it was approached from many different angles. This plurality of approaches is also visible in the present volume. The authors share a joint interest in explanation, but not the same theoretical or methodological assumptions. As a whole, this volume shows that, although the theory of explanation has been a major industry within the philosophy of science, there are still both conceptual problems to be solved and fresh philosophical ideas to explore. The papers in this volume have been divided into two broad groups. Part 1 consists of papers dealing with general issues in the theory of explanation, while the papers in Part 2 focus on some more specific problems.

    Still Looking for Audience Costs

    A pair of recent studies, motivated largely by limitations in the research designs of previous projects, offers evidence that the authors interpret as contradicting audience cost theory. Although we share the authors’ ambivalence about audience costs, we are not convinced by their evidence. What one seeks in looking for audience costs is evidence of a causal mechanism, not just of a causal effect. Historical case studies can be better suited to detecting causal mechanisms than quantitative methods, and these two studies claim to be examining causal mechanisms. Yet process tracing is much less effective in assessing audience costs than Trachtenberg and others believe. After outlining the relevant problems, we encourage scholars to theorize about, and test more carefully, the key micro-foundations of audience cost theory.

    Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models

    In this perspective paper, we first comprehensively review existing evaluations of Large Language Models (LLMs) using both standardized tests and ability-oriented benchmarks. We pinpoint several problems with current evaluation methods that tend to overstate the capabilities of LLMs. We then articulate what artificial general intelligence should encompass beyond the capabilities of LLMs. We propose four characteristics of generally intelligent agents: 1) they can perform unlimited tasks; 2) they can generate new tasks within a context; 3) they operate based on a value system that underpins task generation; and 4) they have a world model reflecting reality, which shapes their interaction with the world. Building on this viewpoint, we highlight the missing pieces in artificial general intelligence, namely the unity of knowing and acting. We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations. Additionally, knowledge acquisition is not solely reliant on passive input but requires repeated trial and error. We conclude by outlining promising future research directions in the field of artificial general intelligence.

    Hearing meanings: the revenge of context

    According to the perceptual view of language comprehension, listeners typically recover high-level linguistic properties such as utterance meaning without inferential work. The perceptual view is subject to the Objection from Context: since utterance meaning is massively context-sensitive, and context-sensitivity requires cognitive inference, the perceptual view is false. In recent work, Berit Brogaard provides a challenging reply to this objection. She argues that in language comprehension context-sensitivity is typically exercised not through inferences but rather through top-down perceptual modulations or perceptual learning. This paper provides a complete formulation of the Objection from Context and evaluates Brogaard's reply to it. Drawing on conceptual considerations and empirical examples, we argue that the exercise of context-sensitivity in language comprehension does, in fact, typically involve inference.

    Big Data as a Technology-to-think-with for Scientific Literacy

    This research aimed to identify indications of scientific literacy resulting from a didactic and investigative interaction with the Google Trends Big Data software by first-year students from a high school in Novo Hamburgo, Southern Brazil. Both the teaching strategies and the research interpretations rest on four theoretical backgrounds. Firstly, Bunge's epistemology, which provides a thorough characterization of Science that was central to our study. Secondly, the conceptual framework of scientific literacy of Fives et al., which makes our teaching focus precise and concise and supports one of our methodological tools: the SLA (scientific literacy assessment). Thirdly, the "crowdledge" construct from dos Santos, which gives meaning to our study as it makes the development of scientific literacy itself a versatile means of attending to contemporary sociotechnological and epistemological phenomena. Finally, the learning principles of Papert's Constructionism inspired our educational activities. Our educational actions consisted of students, divided into two classes, investigating phenomena chosen by them. A triangulation process was used to integrate quantitative and qualitative methods in analysing the assessment results. The experimental design consisted of post-tests only, and the experimental variable was the way of accessing the world. The experimental group interacted with the world by analysing temporal and regional plots of interest in terms or topics searched on Google. The control class did 'placebo' interactions with the world through on-site observations of bryophytes, fungi, or other organisms in the schoolyard. As a general result of our research, a constructionist environment based on Big Data analysis proved to be a richer strategy for developing scientific literacy than free schoolyard exploration.
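    For readers who want to reproduce the kind of temporal and regional interest analysis the students performed, the sketch below uses pytrends, an unofficial Python client for Google Trends. The study itself had students work with the Google Trends web interface, and the search term used here is an arbitrary example, not one taken from the study; requests may also be rate-limited by Google.

```python
# Illustrative sketch only: fetch interest-over-time and interest-by-region
# data for an example search term, restricted to Brazil, via pytrends.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=180)  # timezone offset is an arbitrary choice
pytrends.build_payload(["dengue"], timeframe="today 5-y", geo="BR")  # example term

over_time = pytrends.interest_over_time()                       # weekly interest, 0-100 scale
by_region = pytrends.interest_by_region(resolution="REGION")    # interest per Brazilian state

print(over_time.tail())
print(by_region.sort_values("dengue", ascending=False).head())
```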

    Shortcuts in Employment Discrimination Law

    Are employment discrimination plaintiffs viewed by society and by judges with increased skepticism? This article urges that the same-actor inference, the stray-comment doctrine, and strict temporal-nexus requirements, as courts have applied them, make up a larger and dangerous trend in employment discrimination jurisprudence: courts reverting to special, judge-made shortcuts to curtail or even bypass the analysis necessary to justify the disposal or proper adjudication of a case. This shorthand across different doctrines reveals a willingness of the judiciary to substitute monolithic assumptions for the individualized, reasoned analyses mandated by the relevant antidiscrimination legislation. This article contrasts the shortcuts trend in employment discrimination jurisprudence with the presumptions and inferences that have traditionally been afforded to plaintiffs suing under traditional tort law. It also explores the potential root causes of the skepticism and hostility with which judges have regarded employment discrimination plaintiffs, as opposed to the way in which they have regarded traditional tort plaintiffs.