
    Influence of State-Variable Constraints on Partially Observable Monte Carlo Planning

    Online planning methods for partially observable Markov decision processes (POMDPs) have recently gained much interest. In this paper, we propose the introduction of prior knowledge in the form of (probabilistic) relationships among discrete state-variables for online planning based on the well-known POMCP algorithm. In particular, we propose the use of hard constraint networks and probabilistic Markov random fields to formalize state-variable constraints, and we extend the POMCP algorithm to take advantage of these constraints. Results on a case study based on Rocksample show that the use of this knowledge provides significant improvements to the performance of the algorithm. The extent of this improvement depends on the amount of knowledge encoded in the constraints and reaches 50% of the average discounted return in the most favorable cases that we analyzed.
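    As a rough illustration of the idea, the sketch below (Python) shows how hard state-variable constraints can prune a particle-based belief so that planning only considers consistent states. The rejection-sampling scheme and names such as sample_consistent_particles are illustrative assumptions, not necessarily the paper's mechanism.

        import random

        def satisfies(state, constraints):
            # every hard constraint is a predicate over the state variables
            return all(c(state) for c in constraints)

        def sample_consistent_particles(prior, constraints, n=1000, max_tries=100000):
            # rejection sampling: keep only states consistent with prior knowledge
            particles, tries = [], 0
            while len(particles) < n and tries < max_tries:
                tries += 1
                s = prior()
                if satisfies(s, constraints):
                    particles.append(s)
            return particles

        # toy Rocksample-like example: two rocks known to have equal value
        prior = lambda: {"rock1": random.random() < 0.5,
                         "rock2": random.random() < 0.5}
        constraints = [lambda s: s["rock1"] == s["rock2"]]
        belief = sample_consistent_particles(prior, constraints, n=100)
        assert all(s["rock1"] == s["rock2"] for s in belief)

    Filtering inconsistent states concentrates the belief on the states the prior knowledge permits, which is one plausible route to the improved returns the abstract reports.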

    Standardization of a Volumetric Displacement Measurement for Two-Body Abrasion Scratch Test Data Analysis

    A limitation has been identified in the existing test standards used for making controlled, two-body abrasion scratch measurements based solely on the width of the resultant score on the surface of the material. A new, more robust method is proposed for analyzing a surface scratch that takes into account the full three-dimensional profile of the displaced material. To accomplish this, a set of four volume displacement metrics is systematically defined by normalizing the overall surface profile to statistically denote the area of relevance, termed the Zone of Interaction (ZOI). From this baseline, the depth of the trough and the height of the ploughed material are factored into the overall deformation assessment. Proof-of-concept data were collected and analyzed to demonstrate the performance of this proposed methodology. This technique takes advantage of advanced imaging capabilities that now allow resolution of the scratched surface to be quantified in greater detail than was previously achievable. A quantified understanding of fundamental particle-material interaction is critical to anticipating how well components can withstand prolonged use in highly abrasive environments, specifically for our intended applications on the surface of the Moon and other planets or asteroids, as well as in similarly demanding, harsh terrestrial settings.
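    The abstract does not spell out the four metrics, but a minimal sketch of the kind of volume bookkeeping such a method implies might look like the following (Python; the variable names, the metric choices, and the ZOI construction are illustrative assumptions, not the standard's definitions).

        import numpy as np

        def displacement_metrics(height, pixel_area, zoi_mask):
            # height: 2-D map of surface heights, 0 = undisturbed baseline
            # zoi_mask: boolean Zone of Interaction (ZOI) selecting relevant pixels
            h = np.where(zoi_mask, height, 0.0)
            trough = -h[h < 0].sum() * pixel_area    # volume below the baseline
            pileup = h[h > 0].sum() * pixel_area     # volume ploughed above it
            return {"trough_volume": trough,
                    "pileup_volume": pileup,
                    "total_disturbed": trough + pileup,
                    "net_displaced": trough - pileup}

        # toy scratch: a Gaussian groove flanked by ploughed ridges
        x = np.linspace(-3, 3, 200)
        X, _ = np.meshgrid(x, x)
        height = -np.exp(-X**2) + 0.3 * np.exp(-(np.abs(X) - 1.5)**2)
        zoi = np.abs(X) < 2.5   # in practice chosen statistically from the profile
        print(displacement_metrics(height, pixel_area=(6 / 200)**2, zoi_mask=zoi))

    Separating trough volume from pile-up volume is what distinguishes this kind of analysis from a width-only measurement: two scratches of equal width can displace very different amounts of material.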

    A framework for distributed managing uncertain data in RFID traceability networks

    The ability to track and trace individual items, especially through large-scale and distributed networks, is the key to realizing many important business applications such as supply chain management, asset tracking, and counterfeit detection. Networked RFID (radio frequency identification), which uses the Internet to connect otherwise isolated RFID systems and software, is an emerging technology to support traceability applications. Despite its promising benefits, many challenges remain to be overcome before these benefits can be realized. One significant challenge centers on dealing with the uncertainty of raw RFID data. In this paper, we propose a novel framework to effectively manage the uncertainty of RFID data in large-scale traceability networks. The framework consists of a global object-tracking model and a local RFID data-cleaning model. In particular, we propose a Markov-based model for tracking objects globally and a particle-filter-based approach for processing noisy, low-level RFID data locally. Our implementation validates the proposed approach, and the experimental results show its effectiveness.
    Jiangang Ma, Quan Z. Sheng, Damith Ranasinghe, Jen Min Chuah and Yanbo W
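    A minimal bootstrap particle filter in the spirit of the local cleaning model might look like the sketch below (Python; the location set, read rates, and motion model are illustrative assumptions, not the paper's).

        import random

        LOCATIONS = ["dock", "shelf_A", "shelf_B"]
        P_STAY = 0.8          # object tends to stay put between epochs
        P_READ_TRUE = 0.7     # a reader detects a tag actually in range
        P_READ_FALSE = 0.05   # spurious read of a tag located elsewhere

        def step(loc):
            # simple motion model: stay or jump to a random location
            return loc if random.random() < P_STAY else random.choice(LOCATIONS)

        def likelihood(loc, reads):
            # reads: {location: bool} raw antenna observations for this epoch
            p = 1.0
            for r, seen in reads.items():
                p_detect = P_READ_TRUE if r == loc else P_READ_FALSE
                p *= p_detect if seen else (1.0 - p_detect)
            return p

        def filter_step(particles, reads):
            moved = [step(p) for p in particles]             # predict
            weights = [likelihood(p, reads) for p in moved]  # weight by reads
            total = sum(weights) or 1.0
            # resample proportionally to the weights (bootstrap filter)
            return random.choices(moved, weights=[w / total for w in weights],
                                  k=len(particles))

        particles = [random.choice(LOCATIONS) for _ in range(500)]
        for reads in [{"dock": True, "shelf_A": False, "shelf_B": False},
                      {"dock": True, "shelf_A": True, "shelf_B": False}]:
            particles = filter_step(particles, reads)
        belief = {loc: particles.count(loc) / len(particles) for loc in LOCATIONS}
        print(belief)   # cleaned location estimate despite noisy reads

    The filter turns unreliable raw reads into a posterior over locations, which a global Markov tracking model can then consume instead of the noisy reads themselves.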

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude beyond what is currently available, and in some cases greater. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
    Comment: 77 pages, 13 figures; draft report, subject to further revision