
    DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning

    We present DRLViz, a visual analytics interface for interpreting the internal memory of an agent (e.g. a robot) trained using deep reinforcement learning. This memory is composed of large temporal vectors updated as the agent moves in an environment, and it is not trivial to understand due to the number of dimensions, dependencies on past vectors, spatial/temporal correlations, and co-correlations between dimensions. It is often referred to as a black box, as only inputs (images) and outputs (actions) are intelligible to humans. Using DRLViz, experts are assisted in interpreting decisions through memory reduction interactions, and in investigating the role of parts of the memory when errors have been made (e.g. a wrong direction). We report on DRLViz applied in the context of video game simulators (ViZDoom) for a navigation scenario with item gathering tasks. We also report on experts' evaluation using DRLViz, on the applicability of DRLViz to other scenarios and navigation problems beyond simulation games, and on its contribution to the interpretability and explainability of black box models in the field of visual analytics.
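
    The memory trace such a tool visualizes is, in essence, a timestep-by-timestep log of a recurrent policy's hidden state. The sketch below is not DRLViz's actual pipeline; the model sizes, random observations, and PCA projection are illustrative assumptions showing how such a trace can be recorded and reduced to two dimensions for inspection.

```python
# Illustrative sketch (not DRLViz itself): log the hidden state of a recurrent
# policy at every timestep, then project the trace to 2-D for visual inspection.
import torch
import numpy as np
from sklearn.decomposition import PCA

OBS_DIM, HIDDEN_DIM, N_ACTIONS, T = 64, 128, 4, 200  # hypothetical sizes

cell = torch.nn.LSTMCell(OBS_DIM, HIDDEN_DIM)        # stand-in recurrent policy core
head = torch.nn.Linear(HIDDEN_DIM, N_ACTIONS)        # action logits read off the memory

h = torch.zeros(1, HIDDEN_DIM)
c = torch.zeros(1, HIDDEN_DIM)
memory_trace, actions = [], []

with torch.no_grad():
    for t in range(T):
        obs = torch.randn(1, OBS_DIM)                # placeholder for an encoded frame
        h, c = cell(obs, (h, c))                     # update the agent's memory
        actions.append(int(head(h).argmax(dim=-1)))
        memory_trace.append(h.squeeze(0).numpy().copy())

trace = np.stack(memory_trace)                       # shape (T, HIDDEN_DIM)
xy = PCA(n_components=2).fit_transform(trace)        # 2-D layout of the memory over time
print(trace.shape, xy.shape, actions[:10])
```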

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and of its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
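
    The counterfactual idea described above can be phrased as an optimization: find the smallest perturbation of an input that moves the model's prediction to the desired outcome. The sketch below is a minimal illustration on a toy classifier; the synthetic data, loss weights, target probability and Nelder-Mead search are assumptions for demonstration, not the authors' prescribed method.

```python
# Minimal sketch of a counterfactual search: find the smallest change to an
# input x that moves a classifier's decision to the desired outcome.
# Model, data, and loss weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy "loan approved" rule
clf = LogisticRegression().fit(X, y)

x0 = np.array([-1.0, -0.5])                  # rejected applicant
target_prob, lam = 0.6, 1.0                  # push the approval probability past 0.5

def objective(x_prime):
    # trade off closeness to the target outcome against distance from x0
    p = clf.predict_proba(x_prime.reshape(1, -1))[0, 1]
    return (p - target_prob) ** 2 + lam * np.sum((x_prime - x0) ** 2)

x_cf = minimize(objective, x0, method="Nelder-Mead").x
print("original:", x0, "counterfactual:", x_cf, "change needed:", x_cf - x0)
```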

    Stay by thy neighbor? Social organization determines the efficiency of biodiversity markets with spatial incentives

    Market-based conservation instruments, such as payments, auctions or tradable permits, are environmental policies that create financial incentives for landowners to engage in voluntary conservation on their land. But what if ecological processes operate across property boundaries and land use decisions on one property influence ecosystem functions on neighboring sites? This paper examines how to account for such spatial externalities when designing market-based conservation instruments. We use an agent-based model to analyze different spatial metrics and their implications for land use decisions in a dynamic cost environment. The model contains a number of alternative submodels which differ in incentive design and in the social interactions of agents, the latter including both coordinating and cooperating behavior. We find that incentive design and social interactions have a strong influence on the spatial allocation and the costs of the conservation market. Comment: 11 pages, 6 figures
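
    As a rough illustration of a spatial incentive of the kind studied here (this is not the paper's model; the grid size, cost distribution, base payment and per-neighbour agglomeration bonus are all hypothetical), the sketch below iterates landowners' best responses until the land-use pattern stabilizes.

```python
# Toy illustration: landowners on a grid conserve their parcel when a base
# payment plus a per-neighbour agglomeration bonus exceeds their private
# opportunity cost. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N = 20                                        # grid of N x N parcels
cost = rng.uniform(0.5, 1.5, size=(N, N))     # heterogeneous opportunity costs
base_payment, bonus = 0.8, 0.15               # flat payment + spatial incentive
conserved = np.zeros((N, N), dtype=bool)

def conserved_neighbours(grid, i, j):
    # count conserved parcels in the 4-neighbourhood (von Neumann)
    total = 0
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < N and 0 <= nj < N and grid[ni, nj]:
            total += 1
    return total

# best-response iteration until the land-use pattern stops changing
for _ in range(50):
    updated = conserved.copy()
    for i in range(N):
        for j in range(N):
            payoff = base_payment + bonus * conserved_neighbours(conserved, i, j)
            updated[i, j] = payoff >= cost[i, j]
    if np.array_equal(updated, conserved):
        break
    conserved = updated

print("conserved parcels:", int(conserved.sum()), "of", N * N)
```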

    Modelling the Product Development performance of Colombian Companies

    Organised by: Cranfield University. This paper presents a general model of the Product Development Process (PDP) in the metal-mechanics industry in Barranquilla, Colombia, a sector that contributes significantly to the productivity of this industrial city. The case study drew on a five-company sample. The main goal was to model the current conditions of the PDP according to the Concurrent Engineering philosophy. The companies were selected according to their productive profile in order to contrast differences in the structure of their productive processes, the formation of multidisciplinary teams, the integration of different areas, customers and suppliers into the PDP, and their human resources, information, technology and marketing constraints.

    BriskStream: Scaling Data Stream Processing on Shared-Memory Multicore Architectures

    We introduce BriskStream, an in-memory data stream processing system (DSPS) specifically designed for modern shared-memory multicore architectures. BriskStream's key contribution is an execution plan optimization paradigm, namely RLAS, which takes the relative location (i.e., NUMA distance) of each pair of producer-consumer operators into consideration. We propose a branch-and-bound based approach with three heuristics to solve the resulting nontrivial optimization problem. The experimental evaluation demonstrates that BriskStream yields much higher throughput and better scalability than existing DSPSs on multicore architectures when processing different types of workloads. Comment: To appear in SIGMOD'19
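
    A heavily simplified sketch of the flavour of this placement problem is shown below (this is not RLAS itself; the operators, traffic rates, NUMA distances and per-socket capacity are hypothetical): branch and bound assigns operators to sockets while pruning partial plans whose communication cost already exceeds the best complete plan found so far.

```python
# Simplified sketch: place producer/consumer operators on NUMA sockets so that
# the sum of edge traffic weighted by NUMA distance is minimised, via branch
# and bound with a simple cost-based bound. All numbers are hypothetical.
SOCKETS = [0, 1]
CAPACITY = 2                                   # at most two operators per socket (illustrative)
NUMA_DISTANCE = {(0, 0): 1, (1, 1): 1, (0, 1): 2, (1, 0): 2}
OPERATORS = ["source", "parse", "join", "sink"]
EDGES = {("source", "parse"): 10, ("parse", "join"): 8, ("join", "sink"): 5}  # tuples/sec

def partial_cost(assignment):
    # communication cost of all edges whose endpoints are already placed
    return sum(traffic * NUMA_DISTANCE[(assignment[u], assignment[v])]
               for (u, v), traffic in EDGES.items()
               if u in assignment and v in assignment)

best = {"cost": float("inf"), "plan": None}

def branch(assignment, remaining):
    if partial_cost(assignment) >= best["cost"]:      # bound: prune dominated branches
        return
    if not remaining:
        best["cost"], best["plan"] = partial_cost(assignment), dict(assignment)
        return
    op, rest = remaining[0], remaining[1:]
    for s in SOCKETS:
        if sum(1 for v in assignment.values() if v == s) < CAPACITY:
            assignment[op] = s
            branch(assignment, rest)
            del assignment[op]

branch({}, OPERATORS)
print("placement:", best["plan"], "cost:", best["cost"])
```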

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and a discussion is given of the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process. Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
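
    As a minimal, purely illustrative example of folding expert judgement into a reliability assessment (this is not the framework discussed in the paper; the Beta-prior encoding of the expert's view and the test counts are assumptions), one could update an expert's prior with test evidence and check the probability that a reliability requirement is met:

```python
# Illustrative sketch only: encode an expert's judgement about a component's
# reliability as a Beta prior, update it with test evidence, and report the
# probability that a stated requirement is met. All numbers are hypothetical.
from scipy import stats

prior_a, prior_b = 19.0, 1.0          # hypothetical encoding of the expert's view (mean 0.95)
successes, failures = 48, 2           # hypothetical test evidence from the design phase

post = stats.beta(prior_a + successes, prior_b + failures)
requirement = 0.90                    # reliability target for this phase

print("posterior mean reliability:", round(post.mean(), 3))
print("P(reliability >= requirement):", round(1 - post.cdf(requirement), 3))
```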

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, while for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. They present one of their current research topics and provide some insight into the focus of RCEC. The selection of topics and the editing are intended to make the book of special interest to educational researchers, psychometricians and practitioners in educational assessment.

    Selecting and implementing overview methods: implications from five exemplar overviews

    This is the final version of the article, available from BioMed Central via the DOI in this record.

    Background: Overviews of systematic reviews are an increasingly popular method of evidence synthesis; however, there is a lack of clear guidance for completing overviews, and a number of methodological challenges remain. At the UK Cochrane Symposium 2016, the methodological challenges of five overviews were explored. Using data from these five overviews, practical implications are proposed to support the methodological decision making of authors writing protocols for future overviews.

    Methods: The methods of the five exemplar overviews, and their justification, were tabulated and compared with areas of debate identified within the current literature. Key methodological challenges and implications for the development of overview protocols were generated and synthesised into a list, then discussed and refined until there was consensus.

    Results: The methodological features of three Cochrane overviews, one overview of diagnostic test accuracy and one mixed-methods overview are summarised. Methods of selecting reviews and extracting data were similar. Either the AMSTAR or the ROBIS tool was used to assess the quality of included reviews. The GRADE approach was most commonly used to assess the quality of evidence within the reviews. Eight key methodological challenges were identified from the exemplar overviews. There was good agreement between our findings and emerging areas of debate within a recently published synthesis. Implications for the development of protocols for future overviews were identified.

    Conclusions: Overviews are a relatively new methodological innovation, and there are currently substantial variations in the methodological approaches used within different overviews. There are considerable methodological challenges for which optimal solutions are not necessarily yet known. Lessons learnt from five exemplar overviews highlight a number of methodological decisions which may be beneficial to consider during the development of an overview protocol.

    Funding: The overview conducted by Pollock [19] was supported by a project grant from the Chief Scientist Office of the Scottish Government. The overview conducted by McClurg [21] was supported by a project grant from the Physiotherapy Research Foundation. The overview by Hunt [22] was supported as part of doctoral programme funding by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula (PenCLAHRC). The overview conducted by Estcourt [20] was supported by an NIHR Cochrane Programme Grant for the Safe and Appropriate Use of Blood Components. The overview conducted by Brunton [23] was commissioned by the Department of Health as part of an ongoing programme of work on health policy research synthesis. Alex Pollock is employed by the Nursing, Midwifery and Allied Health Professions (NMAHP) Research Unit, which is supported by the Chief Scientist Office of the Scottish Government. Pauline Campbell is supported by the Chief Nurses Office of the Scottish Government.