
    2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k ($2^{18}$) processors. We present error analysis and scientific application results from a series of more than ten 69-billion-particle ($4096^3$) cosmological simulations, accounting for $4 \times 10^{20}$ floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods. Comment: 12 pages, 8 figures, 77 references; to appear in Proceedings of SC '13
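
    The abstract names the technique but not its mechanics; for orientation, below is a minimal Python sketch of the Barnes-Hut-style opening-angle test that hashed oct-tree codes such as HOT/2HOT build upon. The Node layout, Plummer softening, and opening parameter are illustrative assumptions, not the paper's actual data structures or its error-controlled multipole acceptance criterion.

        # A minimal sketch of the opening-angle ("multipole acceptance") test at
        # the heart of treecode N-body methods. Illustrative only: 2HOT's hashed
        # oct-tree, higher-order multipoles, and error bounds are not shown.
        from dataclasses import dataclass, field
        import numpy as np

        THETA = 0.5  # opening parameter: smaller = more accurate, more cell openings

        @dataclass
        class Node:
            mass: float
            com: np.ndarray                        # center of mass of the cell
            size: float                            # side length of the cell
            children: list = field(default_factory=list)

        def accel(pos, node, eps=1e-2):
            """Gravitational acceleration at `pos` due to `node` (G = 1)."""
            d = node.com - pos
            r = np.sqrt(d @ d + eps**2)            # Plummer-softened distance
            if not node.children or node.size / r < THETA:
                return node.mass * d / r**3        # far enough: use the cell monopole
            return sum(accel(pos, c, eps) for c in node.children)  # else open the cell

        # Example: acceleration at the origin from one far-away point-mass cell.
        leaf = Node(mass=1.0, com=np.array([10.0, 0.0, 0.0]), size=1.0)
        print(accel(np.zeros(3), leaf))            # ~ [0.01, 0, 0]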

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages. 1 of USQCD whitepapers

    Automated Measurement of Heavy Equipment Greenhouse Gas Emission: The case of Road/Bridge Construction and Maintenance

    Road/bridge construction and maintenance projects are major contributors to greenhouse gas (GHG) emissions such as carbon dioxide (CO2), mainly due to extensive use of heavy-duty diesel construction equipment and large-scale earthworks and earthmoving operations. Heavy equipment is a costly resource, and its underutilization can result in significant budget overruns. A practical way to cut emissions is to reduce the time equipment spends doing non-value-added activities and/or idling. Recent research into the monitoring of automated equipment using sensors and Internet-of-Things (IoT) frameworks has leveraged machine learning algorithms to predict the behavior of tracked entities. In this project, end-to-end deep learning models were developed that learn to accurately classify the activities of construction equipment based on vibration patterns picked up by accelerometers attached to the equipment. Data was collected from two types of real-world construction equipment used extensively in road/bridge construction and maintenance projects: excavators and vibratory rollers. Three different deep learning models were developed and their validation accuracies compared: a baseline convolutional neural network (CNN); a hybrid convolutional-recurrent long short-term memory network (LSTM); and a temporal convolutional network (TCN). Results indicated that the TCN model performed best, followed by the LSTM model, with the CNN model performing worst. The TCN model achieved over 83% validation accuracy in recognizing activities. Using deep learning methodologies can significantly increase emission estimation accuracy for heavy equipment and help decision-makers reliably evaluate the environmental impact of heavy civil and infrastructure projects. Reducing the carbon footprint and fuel use of heavy equipment in road/bridge projects has direct and indirect impacts on health and the economy. Public infrastructure projects can leverage the proposed system to reduce the environmental cost of infrastructure projects.
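
    For orientation, here is a minimal Python/PyTorch sketch of a temporal convolutional network (the architecture the report found most accurate) applied to windows of 3-axis accelerometer data. The window length, channel widths, number of activity classes, and the use of non-causal padding are assumptions made to keep the sketch short, not the project's reported configuration.

        # A tiny TCN-style classifier: stacked dilated 1-D convolutions over
        # fixed windows of accelerometer samples. Symmetric (non-causal) padding
        # is used for brevity; production TCNs typically use causal padding.
        import torch
        import torch.nn as nn

        class TinyTCN(nn.Module):
            def __init__(self, n_classes=4, in_ch=3, hidden=32):
                super().__init__()
                layers, ch = [], in_ch
                for d in (1, 2, 4, 8):  # growing dilation = long temporal receptive field
                    layers += [nn.Conv1d(ch, hidden, kernel_size=3, padding=d, dilation=d),
                               nn.ReLU()]
                    ch = hidden
                self.tcn = nn.Sequential(*layers)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                 # x: (batch, 3, window_len)
                h = self.tcn(x)                   # (batch, hidden, window_len)
                return self.head(h.mean(dim=2))   # pool over time, score each activity

        # Example: classify a batch of 2-second windows sampled at 100 Hz.
        logits = TinyTCN()(torch.randn(8, 3, 200))  # -> shape (8, 4)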

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than currently available capacity, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision

    Working Papers: Astronomy and Astrophysics Panel Reports

    The papers of the panels appointed by the Astronomy and Astrophysics Survey Committee are compiled. These papers were advisory to the survey committee and represent the opinions of the members of each panel in the context of their individual charges. The following subject areas are covered: radio astronomy, infrared astronomy, optical/IR from the ground, UV-optical from space, interferometry, high energy from space, particle astrophysics, theory and laboratory astrophysics, solar astronomy, planetary astronomy, computing and data processing, policy opportunities, benefits to the nation from astronomy and astrophysics, status of the profession, and science opportunities.

    Enabling adaptive scientific workflows via trigger detection

    Next-generation architectures necessitate a shift away from traditional workflows in which the simulation state is saved at prescribed frequencies for post-processing analysis. While the need to shift to in situ workflows has been acknowledged for some time, much of the current research is focused on static workflows, where the analysis that would have been done as a post-process is performed concurrently with the simulation at user-prescribed frequencies. More recently, research efforts have been striving to enable adaptive workflows, in which the frequency, composition, and execution of computational and data manipulation steps dynamically depend on the state of the simulation. Adapting the workflow to the state of the simulation in such a data-driven fashion places extremely strict efficiency requirements on the analysis capabilities used to identify transitions in the workflow. In this paper we build upon earlier work on trigger detection using sublinear techniques to drive adaptive workflows. Here we propose a methodology to detect the time when sudden heat release occurs in simulations of turbulent combustion. Our proposed method provides an alternative metric that can be used along with our earlier metric to increase the robustness of trigger detection. We show the effectiveness of our metric empirically for predicting heat release in two use cases. Comment: arXiv admin note: substantial text overlap with arXiv:1506.0825
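
    The abstract does not spell out the sublinear metrics themselves; as an illustrative stand-in, the Python sketch below fires a trigger when a cheap scalar summary of the heat-release-rate field jumps sharply between steps. The quantile summary and the jump threshold are assumptions for illustration, not the paper's method.

        # Illustrative trigger detection: summarize each heat-release-rate (HRR)
        # snapshot with a high quantile and fire when it jumps by `jump`x.
        import numpy as np

        def detect_trigger(hrr_snapshots, q=0.99, jump=2.0):
            """Return the first step whose q-quantile HRR jumps by `jump`x, else None."""
            prev = None
            for step, field in enumerate(hrr_snapshots):
                ind = np.quantile(field, q)   # cheap scalar summary of the field
                if prev is not None and prev > 0 and ind / prev > jump:
                    return step               # sudden heat release: adapt the workflow
                prev = ind
            return None

        # Example on synthetic data: quiescent fields, then a burst at step 3.
        fields = [s * np.ones(1000) for s in (1.0, 1.1, 1.2, 5.0)]
        print(detect_trigger(fields))         # -> 3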

    Towards a Smart World: Hazard Levels for Monitoring of Autonomous Vehicles’ Swarms

    This work explores the creation of quantifiable indices to monitor the safe operation and movement of families of autonomous vehicles (AVs) in restricted highway-like environments. Specifically, it explores the creation of ad-hoc rules for monitoring the lateral and longitudinal movement of multiple AVs based on behavior that mimics swarm and flock movement (or particle swarm motion). This exploratory work is sponsored by the Emerging Leader Seed grant program of the Mineta Transportation Institute and aims at investigating the feasibility of adapting particle swarm motion to the control of families of autonomous vehicles. In particular, it explores how particle swarm approaches can be augmented by setting safety thresholds and fail-safe mechanisms to avoid collisions in off-nominal situations. This concept leverages the integration of the notion of hazard and danger levels (i.e., measures of the “closeness” to a given accident scenario, typically used in robotics) with the concept of safety distance and separation/collision avoidance for ground vehicles. A draft implementation of four hazard level functions indicates that safety thresholds can be set up to autonomously trigger lateral and longitudinal motion control through three main rules, based respectively on speed, heading, and braking distance, to steer the vehicle and maintain separation/avoid collisions in families of autonomous vehicles. The concepts presented here can be used to set up a high-level framework for developing artificial intelligence algorithms that can serve as a backup to standard machine learning approaches for the control and steering of autonomous vehicles. Although there are no constraints on the concept’s implementation, this work is expected to be most relevant for highly automated Level 4 and Level 5 vehicles, capable of communicating with each other and operating in the presence of a ground control center that monitors the swarm’s operations.
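
    As a concrete illustration of the hazard-level idea (not the study's calibrated functions), the Python sketch below maps the ratio of stopping distance to the actual gap behind a lead vehicle onto a bounded hazard index and triggers braking past a threshold; the deceleration, reaction time, and threshold values are assumptions.

        # One hazard-level rule of the braking-distance kind described above:
        # hazard in [0, 1], where 1 means stopping distance meets or exceeds the gap.
        def braking_hazard(speed, gap, decel=6.0, t_react=0.8):
            """speed in m/s, gap in m; assumed decel (m/s^2) and reaction time (s)."""
            stop_dist = speed * t_react + speed**2 / (2 * decel)  # reaction + braking
            return min(1.0, stop_dist / max(gap, 1e-6))

        def control_action(speed, gap, threshold=0.7):
            """Fail-safe rule: brake autonomously once the hazard level crosses the bar."""
            return "BRAKE" if braking_hazard(speed, gap) >= threshold else "MAINTAIN"

        # Example: 25 m/s (90 km/h) with a 40 m gap -> hazard 1.0, action BRAKE.
        print(braking_hazard(25.0, 40.0), control_action(25.0, 40.0))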

    Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 6: Accelerator Capabilities

    These reports present the results of the 2013 Community Summer Study of the APS Division of Particles and Fields ("Snowmass 2013") on the future program of particle physics in the U.S. Chapter 6, on Accelerator Capabilities, discusses the future progress of accelerator technology, including issues for high-energy hadron and lepton colliders, high-intensity beams, electron-ion colliders, and necessary R&D for future accelerator technologies. Comment: 26 pages

    Proceedings of the Salford Postgraduate Annual Research Conference (SPARC) 2011

    These proceedings bring together a selection of papers from the 2011 Salford Postgraduate Annual Research Conference (SPARC). They include papers from PhD students in the arts and social sciences, business, computing, science and engineering, education, environment, built environment, and health sciences. Contributions from Salford researchers are published here alongside papers from students at the Universities of Anglia Ruskin, Birmingham City, Chester, De Montfort, Exeter, Leeds, Liverpool, Liverpool John Moores, and Manchester.