    BarrierPoint: sampled simulation of multi-threaded applications

    Sampling is a well-known technique to speed up architectural simulation of long-running workloads while maintaining accurate performance predictions. A number of sampling techniques have recently been developed that extend well-known single-threaded techniques to allow sampled simulation of multi-threaded applications. Unfortunately, prior work is limited to non-synchronizing applications (e.g., server throughput workloads); requires functional simulation of the entire application with a detailed cache hierarchy, which limits the overall simulation speedup potential; leads to different units of work across different processor architectures, which complicates performance analysis; or requires massive machine resources to achieve reasonable simulation speedups. In this work, we propose BarrierPoint, a sampling methodology that accelerates simulation by leveraging globally synchronizing barriers in multi-threaded applications. BarrierPoint collects microarchitecture-independent code and data signatures to determine the most representative inter-barrier regions, called barrierpoints. BarrierPoint estimates total application execution time (and other performance metrics of interest) through detailed simulation of these barrierpoints only, leading to substantial simulation speedups. Barrierpoints can be simulated in parallel, use fewer simulation resources, and define fixed units of work for performance comparisons across processor architectures. Our evaluation of BarrierPoint using the NPB and PARSEC benchmarks reports average simulation speedups of 24.7x (and up to 866.6x) with an average simulation error of 0.9% (2.9% at most). On average, BarrierPoint reduces the number of simulation machine resources needed by 78x.
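
    As a rough illustration of the sampling idea above (a minimal sketch under assumptions, not the paper's exact pipeline), the code below clusters inter-barrier regions by their microarchitecture-independent signature vectors, picks the region closest to each cluster centroid as its barrierpoint, and extrapolates whole-program time from the cluster weights. The choice of k-means and all names here are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        def pick_barrierpoints(signatures: np.ndarray, n_clusters: int):
            """signatures: one row per inter-barrier region (e.g. code/data feature vectors)."""
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(signatures)
            reps, weights = [], []
            for c in range(n_clusters):
                members = np.flatnonzero(km.labels_ == c)
                # Representative = member region closest to the cluster centroid.
                dist = np.linalg.norm(signatures[members] - km.cluster_centers_[c], axis=1)
                reps.append(int(members[np.argmin(dist)]))
                weights.append(len(members))
            return reps, weights

        def estimate_total_time(reps, weights, simulate_region):
            """simulate_region(i) -> detailed-simulation runtime of region i (seconds)."""
            # Only the representatives are simulated in detail; each result is
            # weighted by the number of regions its cluster stands in for.
            return sum(w * simulate_region(i) for i, w in zip(reps, weights))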

    Running real time distributed simulations under Linux and CERTI

    This paper presents experiments and results on enforcing real-time distributed simulations in accordance with the High Level Architecture (HLA). Simulations were run using CERTI, an open source middleware, as the Run Time Infrastructure (RTI). Models were distributed over computers running various available versions of the 2.6 Linux kernel. The studies and experiments relied on a real case study: the simulation of an "in formation" flight of observation satellites. This case study brings up real applicative needs in real-time distributed simulations and real configurations of simulators and models. Two simulations of the "in formation" flight of satellites were studied. The study consisted of modelling the behaviour of the simulators and running these models using various kernel and middleware operating mechanisms and services. Time measurements were performed for each test, giving results on the ability of the simulation to meet its real-time requirements.

    Spatial-temporal data modelling and processing for personalised decision support

    The purpose of this research is to model dynamic data without losing any of the temporal relationships, and to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of relationships between variables, is proposed. In brief, the architecture transfers spatio-temporal data patterns from a multidimensional input stream into internal patterns in a spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the specific needs of the situation. The architecture was implemented in MATLAB© as several individual modules linked together to form NeuCube (M1). The methodology has been applied to two real-world case studies: first, to data for the prediction of stroke occurrences on an individual basis; second, to ecological data for aphid pest abundance prediction. The two main objectives when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings are significant for health care management and environmental control. As the case studies represent vastly different application fields, they reveal the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, both increasing the level of reliance that can be placed on the model created and enhancing our understanding of the complexities of the world around us without the need for oversimplification.
    Keywords: Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction.
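
    One common front-end step in spiking-neural-network pipelines of this kind is to convert each continuous input channel into spike trains before feeding the reservoir. The sketch below shows a generic threshold-based encoder; it is an illustrative assumption, not necessarily the exact encoder used in NeuCube (M1).

        import numpy as np

        def threshold_encode(series: np.ndarray, threshold: float) -> np.ndarray:
            """series: (T, C) multichannel time series -> (T, C) spikes in {-1, 0, +1}."""
            spikes = np.zeros(series.shape, dtype=np.int8)
            diffs = np.diff(series, axis=0)          # temporal change per channel
            spikes[1:][diffs > threshold] = 1        # upward excursion  -> positive spike
            spikes[1:][diffs < -threshold] = -1      # downward excursion -> negative spike
            return spikes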

    Temporal and Spatial Turbulent Spectra of MHD Plasma and an Observation of Variance Anisotropy

    The nature of MHD turbulence is analyzed through both temporal and spatial magnetic fluctuation spectra. A magnetically turbulent plasma is produced in the MHD wind-tunnel configuration of the Swarthmore Spheromak Experiment (SSX). The power of magnetic fluctuations is projected into directions perpendicular and parallel to a local mean field; the ratio of these quantities shows the presence of variance anisotropy, which varies as a function of frequency. Comparisons among magnetic, velocity, and density spectra are also made, demonstrating that the energy of the observed turbulence is primarily seeded by magnetic fields created during plasma production. Direct spatial spectra are constructed using multi-channel diagnostics and compared to frequency spectra converted to spatial scales using the Taylor hypothesis. Evidence for dissipation at ion inertial length scales is also discussed, as well as the role laboratory experiments can play in understanding turbulence typically studied in space settings such as the solar wind. Finally, all turbulence results are shown to compare fairly well to a Hall-MHD simulation of the experiment. Comment: 17 pages, 17 figures, submitted to the Astrophysical Journal.
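
    As a hedged illustration of two of the analysis steps mentioned above, the sketch below converts a single-point frequency spectrum to wavenumbers with the Taylor hypothesis (k = 2*pi*f / U, with U the bulk flow speed) and forms a variance-anisotropy ratio from fluctuations projected onto the local mean field. The array names and the flow-speed input are assumptions, not SSX data.

        import numpy as np

        def taylor_wavenumbers(freqs_hz: np.ndarray, flow_speed: float) -> np.ndarray:
            # Taylor hypothesis: temporal fluctuations at frequency f map to
            # spatial scales with wavenumber k = 2*pi*f / U.
            return 2.0 * np.pi * freqs_hz / flow_speed

        def variance_anisotropy(b_field: np.ndarray, sample_rate_hz: float):
            """b_field: (N, 3) magnetic time series -> (frequencies, P_perp / P_parallel)."""
            b0 = b_field.mean(axis=0)
            b0_hat = b0 / np.linalg.norm(b0)              # local mean-field direction
            db = b_field - b0                             # fluctuations about the mean
            b_par = db @ b0_hat                           # parallel component
            b_perp = db - np.outer(b_par, b0_hat)         # perpendicular part
            freqs = np.fft.rfftfreq(len(db), d=1.0 / sample_rate_hz)
            p_par = np.abs(np.fft.rfft(b_par)) ** 2
            p_perp = (np.abs(np.fft.rfft(b_perp, axis=0)) ** 2).sum(axis=1)
            return freqs, p_perp / np.maximum(p_par, 1e-30)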

    Wait-Free Global Virtual Time Computation in Shared Memory Time-Warp Systems

    Global Virtual Time (GVT) is a powerful abstraction used to discriminate which events belong (and which do not belong) to the past history of a parallel/distributed computation. For high-performance simulation systems based on the Time Warp synchronization protocol, where concurrent simulation objects are allowed to process their events speculatively and causal consistency is achieved via rollback/recovery techniques, GVT is used to determine which portion of the simulation can be considered committed. Hence it is the basis for actuating memory recovery (e.g., of obsolete logs that were taken in order to support state recoverability) and non-revocable operations (e.g., I/O). For shared-memory implementations of simulation platforms based on the Time Warp protocol, the reference GVT algorithm is the one presented by Fujimoto and Hybinette [1]. However, this algorithm relies on critical sections that make it non-wait-free and can hamper scalability. In this article we present a wait-free shared-memory GVT algorithm that requires no critical section. Rather, correct coordination across the processes while computing the GVT value is achieved via atomic memory operations, namely compare-and-swap. The price paid by our proposal is an increase in the number of GVT computation phases, as opposed to the single phase required by the proposal in [1]. However, as we show via the results of an experimental study, the wait-free nature of the phases carried out in our GVT algorithm pays off by reducing the actual cost relative to the proposal in [1].
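
    The compare-and-swap pattern referred to above can be sketched as follows: each worker folds its local virtual time into a shared minimum through a CAS retry loop, with no locks on the update path. This is a minimal illustration of the primitive, not the article's multi-phase algorithm; since Python has no hardware CAS, a lock-guarded helper merely emulates the atomicity of a single CAS instruction.

        import threading

        class AtomicMin:
            """Stand-in for a shared word updated with compare-and-swap."""
            def __init__(self, value: float):
                self._value = value
                self._lock = threading.Lock()   # emulates CAS atomicity only

            def get(self) -> float:
                with self._lock:
                    return self._value

            def compare_and_swap(self, expected: float, new: float) -> bool:
                with self._lock:
                    if self._value == expected:
                        self._value = new
                        return True
                    return False

        def fold_local_min(shared: AtomicMin, local_virtual_time: float) -> None:
            # Classic CAS retry loop: finish once our value is installed or is
            # no longer smaller than the currently published minimum.
            while True:
                current = shared.get()
                if local_virtual_time >= current:
                    return
                if shared.compare_and_swap(current, local_virtual_time):
                    return

        # Usage: during a GVT phase every worker calls
        # fold_local_min(gvt_candidate, its_local_virtual_time); once all have
        # contributed, gvt_candidate.get() holds the phase's minimum.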

    D-SPACE4Cloud: A Design Tool for Big Data Applications

    Recent years have seen a steep rise in data generation worldwide, with the development and widespread adoption of several software projects targeting the Big Data paradigm. Many companies currently engage in Big Data analytics as part of their core business activities; nonetheless, there are no tools or techniques to support the design of the underlying hardware configuration backing such systems. In particular, the focus of this report is on Cloud-deployed clusters, which represent a cost-effective alternative to on-premises installations. We propose a novel tool implementing a battery of optimization and prediction techniques, integrated so as to efficiently assess several alternative resource configurations and determine the minimum-cost cluster deployment satisfying QoS constraints. Further, an experimental campaign conducted on real systems shows the validity and relevance of the proposed method.
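
    The kind of design-space exploration described above can be pictured with a minimal sketch: enumerate candidate cluster sizes, predict the job duration with a performance model, and keep the cheapest configuration that meets the deadline (QoS) constraint. The Amdahl-style duration model, prices, and names below are illustrative assumptions, not the tool's actual predictors.

        from dataclasses import dataclass

        @dataclass
        class Config:
            vm_type: str
            n_vms: int
            hourly_cost: float

        def predict_duration_h(n_vms: int, serial_h: float, parallel_h: float) -> float:
            # Amdahl-style model: fixed serial part plus perfectly divisible work.
            return serial_h + parallel_h / n_vms

        def cheapest_feasible(vm_prices: dict, deadline_h: float,
                              serial_h: float, parallel_h: float, max_vms: int = 64):
            best = None
            for vm_type, price in vm_prices.items():
                for n in range(1, max_vms + 1):
                    duration = predict_duration_h(n, serial_h, parallel_h)
                    if duration > deadline_h:
                        continue                      # violates the QoS constraint
                    cost = n * price * duration       # pay-per-use cluster cost
                    if best is None or cost < best[0]:
                        best = (cost, Config(vm_type, n, price))
            return best

        # Example (hypothetical prices):
        # cheapest_feasible({"small": 0.20, "large": 0.40},
        #                   deadline_h=2.0, serial_h=0.3, parallel_h=6.0)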

    Cost minimization for unstable concurrent products in multi-stage production line using queueing analysis

    This research and the resulting contribution were carried out at Assumption University of Thailand, which partially supported the publication financially. Purpose: The paper applies queueing theory to evaluate a multi-stage production line process with concurrent goods. The intention of this article is to evaluate the efficiency of product assembly in the production line. Design/Methodology/Approach: To raise the efficiency of the assembly line, it is necessary to control the performance of the individual stations. The arrival stream of concurrent products piles up before flowing to each station. All experiments are based on queueing network analysis. Findings: A performance analysis for unstable concurrent sub-items in the production line is presented. The proposed analysis is based on improving the total sub-production time by reducing the queueing time at each station. Practical implications: The collected data are the number of workers, incoming and outgoing sub-products, the throughput rate, and the processing time of each station. The front loading station unpacks product items into concurrent sub-items via an operator and automatically sorts them by RFID tag or bar code identifiers. Simulation-based experiments are compared and validated against results from a real-system approximation. Originality/Value: The work offers an alternative way to improve the efficiency of the operation at each station with minimum cost.
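
    As a hedged illustration of the per-station analysis, the sketch below treats each station as an M/M/1 queue, for which the mean wait in queue is Wq = rho / (mu - lambda) with utilisation rho = lambda / mu; lowering Wq at the bottleneck stations lowers the total sub-production time. The single-server M/M/1 assumption is ours; the paper analyses a multi-station queueing network.

        def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
            """Steady-state M/M/1 metrics for one station (rates per unit time)."""
            if arrival_rate >= service_rate:
                raise ValueError("unstable station: arrival rate must be below service rate")
            rho = arrival_rate / service_rate            # utilisation
            wq = rho / (service_rate - arrival_rate)     # mean wait in queue
            w = 1.0 / (service_rate - arrival_rate)      # mean time in system
            lq = arrival_rate * wq                       # mean queue length (Little's law)
            return {"utilisation": rho, "wait_in_queue": wq,
                    "time_in_system": w, "queue_length": lq}

        def total_flow_time(stations) -> float:
            """stations: list of (arrival_rate, service_rate) pairs for a serial line."""
            return sum(mm1_metrics(lam, mu)["time_in_system"] for lam, mu in stations)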

    Tidal Barrier and the Asymptotic Mass of Proto Gas-Giant Planets

    Extrasolar planets found with radial velocity surveys have masses ranging from several Earth masses to several Jupiter masses. While mass accretion onto protoplanetary cores in weak-line T-Tauri disks may eventually be quenched by a global depletion of gas, such a mechanism is unlikely to have stalled the growth of some known planetary systems which contain relatively low-mass, close-in planets along with more massive, longer-period companions. Here, we suggest a potential solution for this conundrum. In general, supersonic infall of surrounding gas onto a protoplanet is only possible interior to both its Bondi and Roche radii. At a critical mass, a protoplanet's Bondi and Roche radii are equal to the disk thickness. Above this mass, the protoplanet's tidal perturbation induces the formation of a gap. Although disk gas may continue to diffuse into the gap, the azimuthal flux across the protoplanet's Roche lobe is quenched. Using two different schemes, we present the results of numerical simulations and analysis to show that the accretion rate increases rapidly with the ratio of the protoplanet's Roche radius to its Bondi radius, or equivalently to the disk thickness. In regions with low geometric aspect ratios, gas accretion is quenched at relatively low protoplanetary masses. This effect is important for determining the gas-giant planets' mass function, the distribution of their masses within multiple-planet systems around solar-type stars, and for suppressing the emergence of gas giants around low-mass stars.
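
    The two length scales invoked above can be made concrete with standard textbook definitions: the Bondi radius R_B = G*M_p/c_s^2, the Roche (Hill) radius R_H = a*(M_p/(3*M_star))^(1/3), and the disk scale height H = c_s/Omega. The sketch below evaluates them; the Jupiter-at-5.2-AU inputs and the aspect ratio of 0.05 are our assumptions, not values taken from the paper.

        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        M_SUN = 1.989e30     # kg
        M_JUP = 1.898e27     # kg
        AU = 1.496e11        # m

        def scales(m_planet: float, m_star: float = M_SUN,
                   a: float = 5.2 * AU, aspect_ratio: float = 0.05):
            """Return (Bondi radius, Roche/Hill radius, disk scale height) in metres."""
            v_kepler = (G * m_star / a) ** 0.5
            c_s = aspect_ratio * v_kepler                 # isothermal sound speed
            omega = v_kepler / a                          # orbital angular frequency
            h_disk = c_s / omega                          # equals aspect_ratio * a
            r_bondi = G * m_planet / c_s ** 2
            r_roche = a * (m_planet / (3.0 * m_star)) ** (1.0 / 3.0)
            return r_bondi, r_roche, h_disk

        # Setting R_B = H gives a critical mass M_c = c_s^2 * H / G, and setting
        # R_H = H gives M_c = 3 * M_star * aspect_ratio**3; above these masses the
        # tidal barrier (gap formation) limits further supersonic gas accretion.
        print(scales(M_JUP))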