
    Prioritization methodology for roadside and guardrail improvement: Quantitative calculation of safety level and optimization of resources allocation

    Attention to road safety issues has grown rapidly in recent decades. The experience gained in this field reveals the importance of considering safety aspects when allocating resources for roadside and guardrail improvement, a complex process that often involves conflicting objectives. This work defines an innovative methodology for calculating and analysing a numerical risk factor for a road. The method considers the geometry, accident rate, and traffic of the examined road, together with four categories of elements/defects to which resources can be allocated to improve road safety (safety barriers, discrete obstacles, continuous obstacles, and water drainage). The analysis yields a hazard index that can be used in decision-making processes. A case study is presented in which the roadsides of a 995 km road network are analysed using cost-benefit analysis to prioritize possible rehabilitation works. The results highlight that interventions are best targeted at roads in the higher risk classes, where rehabilitation works (i.e., new barrier installation, removal and new barrier installation, and new terminal installation) maximize the safety benefit. The proposed method is quantitative, avoiding weak and unreliable results, and it provides a broad view of the problem, offering a useful tool for road management bodies.
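
    As a rough illustration of the kind of quantitative prioritization the abstract describes, the sketch below combines exposure factors and defect scores into a hazard index and ranks candidate works by benefit-cost ratio. The field names, weights, and scoring rule are assumptions for illustration only and do not reproduce the paper's actual formula.

```python
# Illustrative sketch only: the weights, fields, and scoring rule below are
# hypothetical and do not reproduce the paper's hazard-index formula.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    geometry_score: float   # 0..1, higher = more demanding geometry
    accident_rate: float    # accidents per million vehicle-km
    aadt: float             # annual average daily traffic
    defect_scores: dict     # per category: barriers, discrete/continuous obstacles, drainage

def hazard_index(seg: Segment, w_geom=0.3, w_acc=0.4, w_traffic=0.3) -> float:
    """Combine exposure and defect severity into a single risk factor (assumed weights)."""
    exposure = (w_geom * seg.geometry_score
                + w_acc * min(seg.accident_rate / 10.0, 1.0)
                + w_traffic * min(seg.aadt / 20000.0, 1.0))
    defect = sum(seg.defect_scores.values()) / max(len(seg.defect_scores), 1)
    return exposure * defect

def prioritize(segments, works):
    """Rank candidate works (expected risk reduction vs. cost) by benefit-cost ratio."""
    ranked = []
    for seg, (risk_reduction, cost) in zip(segments, works):
        ranked.append((risk_reduction * hazard_index(seg) / cost, seg.name))
    return sorted(ranked, reverse=True)
```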

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
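
    As a minimal illustration of the task-graph structure described above (discrete tasks as nodes, explicit input/output dependencies as edges), the sketch below dispatches tasks as their dependencies complete. The scheduling policy and names are invented for illustration and are not taken from Blue Waters middleware or any specific MTC system.

```python
# Toy many-task graph: a task becomes runnable once all of its input
# dependencies have completed. Purely illustrative; not Blue Waters middleware.
from collections import defaultdict, deque

def topological_dispatch(tasks, deps):
    """tasks: {name: callable}; deps: {name: [names it depends on]}."""
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    dependents = defaultdict(list)
    for t, parents in deps.items():
        for p in parents:
            dependents[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()          # a real MTC system would dispatch these in parallel
        tasks[t]()                   # run the task (here: just a function call)
        order.append(t)
        for child in dependents[t]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order

# Example: two independent preprocessing tasks feeding one analysis task.
ran = topological_dispatch(
    {"prep_a": lambda: None, "prep_b": lambda: None, "analyze": lambda: None},
    {"analyze": ["prep_a", "prep_b"]},
)
```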

    JPEG steganography with particle swarm optimization accelerated by AVX

    Digital steganography aims at hiding secret messages in digital data transmitted over insecure channels. The JPEG format is prevalent in digital communication, and images are often used as cover objects in digital steganography. Optimization methods can improve the properties of images with an embedded secret but introduce additional computational complexity to their processing. In this work, AVX instructions available in modern CPUs are used to accelerate the data-parallel operations that are part of image steganography with advanced optimizations.
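
    The acceleration described here relies on AVX intrinsics in compiled code; as a language-neutral analogue of the same data-parallel idea, the sketch below updates an entire particle swarm with whole-array operations, with NumPy vectorization standing in for SIMD lanes. The objective function and PSO coefficients are placeholders, not the paper's embedding-distortion criterion.

```python
# Data-parallel PSO update: whole-swarm array operations stand in for the
# AVX SIMD lanes of the paper's compiled implementation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 64, 16
w, c1, c2 = 0.72, 1.49, 1.49          # common PSO coefficients (not from the paper)

def fitness(x):                        # placeholder objective, not embedding distortion
    return np.sum(x * x, axis=1)

pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), fitness(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel                    # all particles advance in one vectorized step
    val = fitness(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]
```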

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) support for transitioning codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) a workforce built up and trained to develop and use simulations and analysis in support of HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.

    21st Century Simulation: Exploiting High Performance Computing and Data Analysis

    This paper identifies, defines, and analyzes the limitations imposed on Modeling and Simulation by outmoded paradigms in computer utilization and data analysis. The authors then discuss two emerging capabilities to overcome these limitations: High Performance Parallel Computing and Advanced Data Analysis. First, parallel computing, in supercomputers and Linux clusters, has proven effective by providing users an advantage in computing power; this has been characterized as a ten-year lead over the use of single-processor computers. Second, advanced data analysis techniques are both necessitated and enabled by this leap in computing power. JFCOM's JESPP project is one of the few simulation initiatives to effectively embrace these concepts. The challenges facing the defense analyst today have grown to include the need to consider operations among non-combatant populations, to focus on impacts to civilian infrastructure, to differentiate combatants from non-combatants, and to understand non-linear, asymmetric warfare. These requirements stretch both current computational techniques and data analysis methodologies. In this paper, documented examples and potential solutions are advanced, and the authors discuss paths to successful implementation based on their experience. Reviewed technologies include parallel computing, cluster computing, grid computing, data logging, operations research, database advances, data mining, evolutionary computing, genetic algorithms, and Monte Carlo sensitivity analyses. The modeling and simulation community has significant potential to provide more opportunities for training and analysis. Simulations must include increasingly sophisticated environments, better emulations of foes, and more realistic civilian populations. Overcoming the implementation challenges will produce dramatically better insights for trainees and analysts. High Performance Parallel Computing and Advanced Data Analysis promise increased understanding of future vulnerabilities to help avoid unneeded mission failures and unacceptable personnel losses. The authors set forth road maps for rapid prototyping and adoption of advanced capabilities, and they discuss the beneficial impact of embracing these technologies, as well as the risk mitigation required to ensure success.
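
    One of the techniques listed above, Monte Carlo sensitivity analysis, can be sketched briefly: sample uncertain inputs, run the simulation many times, and rank inputs by how strongly they move the output. The inputs, stand-in model, and sensitivity measure below are hypothetical and are not drawn from the JESPP project.

```python
# Toy Monte Carlo sensitivity screen: vary uncertain inputs and see which one
# moves a simulation output most (illustrative; not the JESPP tooling).
import numpy as np

rng = np.random.default_rng(1)

def sim_output(detection_range, crowd_density, comms_delay):
    # stand-in for an expensive simulation run (assumed linear response)
    return detection_range * 0.8 - crowd_density * 1.5 - comms_delay * 0.3

n = 10_000
inputs = {
    "detection_range": rng.normal(10, 2, n),
    "crowd_density":   rng.normal(5, 1, n),
    "comms_delay":     rng.normal(2, 0.5, n),
}
out = sim_output(**inputs)
# Rank inputs by absolute correlation with the output (a crude sensitivity index).
sensitivity = {k: abs(np.corrcoef(v, out)[0, 1]) for k, v in inputs.items()}
print(sorted(sensitivity.items(), key=lambda kv: -kv[1]))
```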

    Analyzing helicopter evasive maneuver effectiveness against rocket-propelled grenades

    It has long been acknowledged that military helicopters are vulnerable to ground-launched threats, in particular the RPG-7 rocket-propelled grenade. Current helicopter threat mitigation strategies rely on a combination of operational tactics and selectively placed armor plating, which can help to mitigate but not entirely remove the threat. In recent years, however, a number of active protection systems designed to protect land-based vehicles from rocket and missile fire have been developed. These systems all use a sensor suite to detect, track, and predict the threat trajectory, which is then employed in the computation of an intercept trajectory for a defensive kill mechanism. Although a complete active protection system in its current form is unsuitable for helicopters, in this paper it is assumed that the active protection system's track and threat trajectory prediction subsystem could be used offline as a tool to develop tactics and techniques to counter the threat from rocket-propelled grenade attacks. It is further proposed that such a maneuver can be found by solving a pursuit–evasion differential game. Because the first stage in solving this problem is developing the capability to evaluate the game, nonlinear dynamic and spatial models of the helicopter, the RPG-7 round, and the gunner, together with evasion strategies, were developed and integrated into a new simulation engine. Analysis of the results from representative vignettes demonstrates that the simulation yields the value of the engagement pursuit–evasion game. It is also shown that, in the majority of cases, survivability can be significantly improved by performing an appropriate evasive maneuver. Consequently, this simulation may be used as an important tool for both designing and evaluating evasive tactics and is the first step in designing a maneuver-based active protection system, leading to improved rotorcraft survivability.
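
    To illustrate the flavor of evaluating such an engagement by simulation, the sketch below runs a simple two-dimensional kinematic pursuit (a constant-speed round under pure-pursuit guidance) against an evader executing a lateral turn, and reports the closest approach. The dynamics, guidance law, and numbers are placeholder assumptions, far simpler than the paper's nonlinear helicopter, RPG-7, and gunner models.

```python
# Minimal 2-D kinematic pursuit-evasion sketch: a constant-speed "round" steers
# toward the target while the target performs a lateral evasive turn.
# Illustrative only; not the paper's nonlinear models or evasion strategies.
import numpy as np

dt, t_end = 0.01, 6.0
round_pos, round_speed = np.array([0.0, 0.0]), 115.0      # m, m/s (placeholder values)
heli_pos, heli_vel = np.array([300.0, 50.0]), np.array([0.0, 40.0])
evade_accel = 9.81 * 1.5                                   # assumed lateral acceleration

min_miss = np.inf
for _ in range(int(t_end / dt)):
    los = heli_pos - round_pos
    dist = np.linalg.norm(los)
    min_miss = min(min_miss, dist)
    round_pos = round_pos + round_speed * dt * los / dist  # pure-pursuit guidance
    # Evader turns: rotate its velocity at a rate limited by lateral acceleration.
    turn_rate = evade_accel / np.linalg.norm(heli_vel)
    ang = turn_rate * dt
    rot = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    heli_vel = rot @ heli_vel
    heli_pos = heli_pos + heli_vel * dt

print(f"closest approach: {min_miss:.1f} m")
```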

    Adapting and Optimizing the Systemic Model of Banking Originated Losses (SYMBOL) Tool to the Multi-core Architecture

    Currently, the multi-core system is a predominant architecture in the computational world. This opens new possibilities to speed up statistical and numerical simulations, but it also introduces many challenges to deal with. In order to improve the performance metrics, several key points must be considered, such as core communications, data locality, dependencies, and memory size. This paper describes a series of optimization steps applied to the SYMBOL model to enhance its performance and scalability. SYMBOL is a micro-founded statistical tool which analyses the consequences of bank failures, taking into account the available safety nets, such as deposit guarantee schemes or resolution funds. However, in its original version the tool has some computational weaknesses, because its execution time grows considerably when it is run with large input data (e.g. large banking systems) or when the value of the stopping criterion, i.e. the number of default scenarios to be considered, is scaled up. Our intention is to develop a tool (extendable to other models with similar characteristics) in which a set of serial (e.g. deleting redundancies, loop unrolling) and parallel strategies (e.g. OpenMP and GPU programming) come together to obtain shorter execution times and scalability. The tool uses automatic configuration to make the best use of the available resources on the basis of the characteristics of the input datasets. Experimental results, obtained by varying the size of the input dataset and the stopping criterion, show the considerable improvement one can obtain by using the new tool, with execution time reductions of up to 96% with respect to the original serial version.
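
    Since the default scenarios are independent, the scenario loop is the natural place for the parallel strategies mentioned above; for reference, a 96% reduction in execution time corresponds to roughly a 25x speedup over the serial baseline. The sketch below shows the same scenario-level parallelism in Python, with a process pool as a stand-in for the paper's OpenMP/GPU implementation; the loss model and parameters are placeholders, not the actual SYMBOL model.

```python
# Analogue of scenario-level parallelism: independent default scenarios are
# embarrassingly parallel, so they can be farmed out to workers.
# Illustrative Python/multiprocessing sketch; the paper uses OpenMP and GPUs.
import numpy as np
from multiprocessing import Pool

def run_scenario(seed, n_banks=100):
    """Placeholder for one default scenario (toy loss model, not the real SYMBOL)."""
    rng = np.random.default_rng(seed)
    shocks = rng.standard_normal(n_banks)
    losses = np.maximum(shocks - 2.0, 0.0)      # losses beyond an assumed capital threshold
    return losses.sum()

if __name__ == "__main__":
    n_scenarios = 100_000                        # plays the role of the stopping criterion
    with Pool() as pool:
        totals = pool.map(run_scenario, range(n_scenarios), chunksize=1_000)
    print("mean systemic loss:", np.mean(totals))
```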

    Optimizing agent-based transmission models for infectious diseases


    Fire Safety and Management Awareness

    To ensure a healthy lifestyle, fire safety awareness and protocols are essential. The population boom, economic crunches, and excessive exploitation of nature have increased the possibility of destruction due to fire. Computational simulations of case studies and the incorporation of fire safety protocols into daily routines can help avoid such mishaps.