78,440 research outputs found

    Reasoned modelling critics: turning failed proofs into modelling guidance

    The activities of formal modelling and reasoning are closely related. While the rigour of building formal models brings significant benefits, formal reasoning remains a major barrier to the wider acceptance of formalism within design. We propose reasoned modelling critics, an approach which aims to abstract away from the complexities of low-level proof obligations and to provide high-level modelling guidance to designers when proofs fail. Inspired by proof planning critics, the technique combines proof-failure analysis with modelling heuristics. We present the details of our proposal, implement them in a prototype, and outline future plans.
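    The abstract does not describe the critic mechanism itself, so the sketch below only illustrates the general idea of mapping a proof-failure pattern to high-level modelling guidance. It is a minimal Python sketch under assumed inputs: the failure-pattern names, the advice strings, and the advise function are invented for illustration and are not the authors' actual critics.

    # Toy "modelling critic" lookup: assumed proof-failure patterns mapped to
    # high-level modelling suggestions. All patterns and advice are illustrative.
    FAILURE_CRITICS = {
        "invariant_violated_by_event": "Strengthen the event guard or weaken the invariant.",
        "missing_witness_for_parameter": "Add a witness relating abstract and concrete parameters.",
        "undischarged_well_definedness": "Guard the partial expression before it is evaluated.",
    }

    def advise(failed_obligation: str) -> str:
        """Return modelling guidance for a recognised failure pattern."""
        return FAILURE_CRITICS.get(
            failed_obligation,
            "No critic matched; inspect the proof obligation directly.",
        )

    print(advise("invariant_violated_by_event"))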

    Decision-making and problem-solving methods in automation technology

    The state of the art in the automation of decision making and problem solving is reviewed. The information upon which the report is based was derived from literature searches, visits to university and government laboratories performing basic research in the area, and a 1980 conference on the subject sponsored by Langley Research Center. It is the contention of the authors that the technology in this area is being generated primarily by research in three disciplines: Artificial Intelligence, Control Theory, and Operations Research. Under the assumption that the state of the art in decision making and problem solving is reflected in the problems being solved, specific problems and methods of their solution are often discussed to elucidate particular aspects of the subject. Synopses of the following major topic areas comprise most of the report: (1) detection and recognition; (2) planning and scheduling; (3) learning; (4) theorem proving; (5) distributed systems; (6) knowledge bases; (7) search; (8) heuristics; and (9) evolutionary programming.

    Detect the unexpected: a science for surveillance

    Purpose – The purpose of this paper is to outline a strategy for research development focused on addressing the neglected role of visual perception in real-life tasks such as policing surveillance and command-and-control settings. Approach – The scale of the surveillance task in modern control rooms is expanding as technology increases input capacity at an accelerating rate. The authors review recent literature highlighting the difficulties that apply to modern surveillance and give examples of how poor detection of the unexpected can be, and how surprising this deficit can be. Perceptual phenomena such as change blindness are linked to the perceptual processes undertaken by law-enforcement personnel. Findings – A scientific programme is outlined for how detection deficits can best be addressed in the context of a multidisciplinary collaborative agenda between researchers and practitioners. The development of a cognitive research field specifically examining the occurrence of perceptual “failures” provides an opportunity for policing agencies to relate laboratory findings in psychology to their own fields of day-to-day enquiry. Originality/value – The paper shows, with examples, where interdisciplinary research may best be focussed on evaluating practical solutions and on generating useable guidelines on procedure and practice. It also argues that these processes should be investigated in real and simulated context-specific studies to confirm the validity of the findings in these new applied scenarios.

    On the role of Prognostics and Health Management in advanced maintenance systems

    The advanced use of Information and Communication Technologies is changing the way that systems are managed and maintained. A great number of techniques and methods have emerged in the light of these advances, allowing accurate knowledge of a system's condition evolution and remaining useful life. These advances are recognized as outcomes of an innovative discipline, nowadays discussed under the term Prognostics and Health Management (PHM). In order to analyze how maintenance will change by using PHM, a conceptual model built upon three views is proposed. The model highlights: (i) how PHM may impact the definition of maintenance policies; (ii) how PHM fits within Condition Based Maintenance (CBM); and (iii) how PHM can be integrated into Reliability Centered Maintenance (RCM) programs. The conceptual model is the research finding of this review note and helps to discuss the role of PHM in advanced maintenance systems.
    EU Framework Programme Horizon 2020, 645733 - Sustain-Owner - H2020-MSCA-RISE-201
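    The note itself stays at the conceptual level, but the remaining-useful-life (RUL) idea at the core of PHM can be illustrated with a minimal sketch: fit a linear trend to a degradation signal and extrapolate to a failure threshold. The signal, the threshold of 1.0, and the choice of a linear model are assumptions made here for illustration; they are not taken from the review.

    # Minimal RUL sketch: fit a linear degradation trend and extrapolate to an
    # assumed failure threshold. Data and model choice are illustrative only.
    import numpy as np

    def estimate_rul(times, health_indicator, failure_threshold):
        """Estimate remaining useful life by linear extrapolation of degradation."""
        slope, intercept = np.polyfit(times, health_indicator, 1)
        if slope <= 0:
            return float("inf")  # no degradation trend detected
        time_of_failure = (failure_threshold - intercept) / slope
        return max(time_of_failure - times[-1], 0.0)

    # Example: a component whose health indicator grows roughly linearly.
    t = np.arange(10.0)
    h = 0.05 * t + 0.02 * np.random.default_rng(0).standard_normal(10)
    print(f"Estimated RUL: {estimate_rul(t, h, failure_threshold=1.0):.1f} time units")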

    Increasing resilience of ATM networks using traffic monitoring and automated anomaly analysis

    Systematic network monitoring can be the cornerstone for the dependable operation of safety-critical distributed systems. In this paper, we present our vision for informed anomaly detection through network monitoring and resilience measurements to increase the operators' visibility of ATM communication networks. We raise the question of how to determine the optimal level of automation in this safety-critical context, and we present a novel passive network monitoring system that can reveal network utilisation trends and traffic patterns in diverse timescales. Using network measurements, we derive resilience metrics and visualisations to enhance the operators' knowledge of the network and traffic behaviour, and allow for network planning and provisioning based on informed what-if analysis.
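    The abstract does not detail the resilience metrics, so the following is only a hedged sketch of one common way to flag unusual traffic from passive measurements: a rolling z-score over link-utilisation samples. The window size, the threshold of 3.0, and the flag_anomalies helper are assumptions for illustration, not the monitoring system described in the paper.

    # Illustrative anomaly flagging over traffic-utilisation samples using a
    # rolling mean/standard deviation and a z-score threshold (assumed values).
    from collections import deque
    from statistics import mean, stdev

    def flag_anomalies(samples, window=30, z_threshold=3.0):
        """Yield (index, value) for samples far outside the recent rolling window."""
        recent = deque(maxlen=window)
        for i, x in enumerate(samples):
            if len(recent) >= 5:  # wait for a few baseline points
                mu, sigma = mean(recent), stdev(recent)
                if sigma > 0 and abs(x - mu) / sigma > z_threshold:
                    yield i, x
            recent.append(x)

    # Example: steady utilisation with a single burst at index 50.
    traffic = [100 + (i % 3) for i in range(50)] + [400] + [100] * 10
    print(list(flag_anomalies(traffic)))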

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Negative Results in Computer Vision: A Perspective

    A negative result is when the outcome of an experiment or a model is not what is expected, or when a hypothesis does not hold. Despite being often overlooked in the scientific community, negative results are results and they carry value. While this topic has been extensively discussed in other fields such as the social sciences and biosciences, less attention has been paid to it in the computer vision community. The unique characteristics of computer vision, particularly its experimental aspect, call for a special treatment of this matter. In this paper, I will address what makes negative results important, how they should be disseminated and incentivized, and what lessons can be learned from cognitive vision research in this regard. Further, I will discuss issues such as computer vision and human vision interaction, experimental design and statistical hypothesis testing, explanatory versus predictive modeling, performance evaluation, model comparison, as well as computer vision research culture.

    On the testability of WCAG 2.0 for beginners

    Web accessibility for people with disabilities is a highly visible area of research in the field of ICT accessibility, including many policy activities across many countries. The commonly accepted guidelines for web accessibility (WCAG 1.0) were published in 1999 and have been extensively used by designers, evaluators and legislators. W3C-WAI published a new version of these guidelines (WCAG 2.0) in December 2008. One of the main goals of WCAG 2.0 was testability, that is, WCAG 2.0 should be either machine testable or reliably human testable. In this paper we present an educational experiment performed during an intensive web accessibility course. The goal of the experiment was to assess the testability of the 25 level-A success criteria of WCAG 2.0 by beginners. To do this, the students had to manually evaluate the accessibility of the same web page. The result was that only eight success criteria could be considered reliably human testable when the evaluators were beginners. We also compare our experiment with a similar study published recently. Our work is not a conclusive experiment, but it does suggest some parts of WCAG 2.0 to which special attention should be paid when training accessibility evaluators.
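    As a side note on what "machine testable" can mean in practice, the sketch below shows one commonly automated check in the spirit of success criterion 1.1.1 (non-text content): flagging img elements that lack an alt attribute. The MissingAltChecker class and the sample markup are illustrative assumptions and are not part of the experiment reported in the paper.

    # Minimal sketch of a machine-testable accessibility check: report <img>
    # elements without an alt attribute. The sample HTML is illustrative only.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            attributes = dict(attrs)
            if tag == "img" and "alt" not in attributes:
                self.missing.append(attributes.get("src", "<no src>"))

    page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
    checker = MissingAltChecker()
    checker.feed(page)
    print("Images missing alt text:", checker.missing)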

    Reviews

    Sally Brown, Steve Armstrong and Gail Thompson (eds.), Motivating Students, London: Kogan Page, 1998. ISBN: 0-7494-2494-X. Paperback, 214 pages. £18.99.
