
    Analytic Performance Modeling and Analysis of Detailed Neuron Simulations

    Big science initiatives are trying to reconstruct and model the brain by attempting to simulate brain tissue at larger scales and with increasingly more biological detail than previously thought possible. The exponential growth of parallel computer performance has been supporting these developments, and at the same time maintainers of neuroscientific simulation code have strived to optimally and efficiently exploit new hardware features. Current state-of-the-art software for the simulation of biological networks has so far been developed using performance engineering practices, but a thorough analysis and modeling of the computational and performance characteristics, especially in the case of morphologically detailed neuron simulations, is lacking. Other computational sciences have successfully used analytic performance engineering and modeling methods to gain insight into the computational properties of simulation kernels, aid developers in performance optimizations and eventually drive co-design efforts, but to our knowledge a model-based performance analysis of neuron simulations has not yet been conducted. We present a detailed study of the shared-memory performance of morphologically detailed neuron simulations based on the Execution-Cache-Memory (ECM) performance model. We demonstrate that this model can deliver accurate predictions of the runtime of almost all the kernels that constitute the neuron models under investigation. The gained insight is used to identify the main governing mechanisms underlying performance bottlenecks in the simulation. The implications of this analysis for the optimization of neural simulation software and, eventually, the co-design of future hardware architectures are discussed. In this sense, our work represents a valuable conceptual and quantitative contribution to understanding the performance properties of biological network simulations. Comment: 18 pages, 6 figures, 15 tables
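
    As a concrete illustration of the model named here, the Python sketch below computes a single-core ECM runtime prediction under the usual non-overlapping machine assumption, together with the core count at which memory bandwidth is predicted to saturate. All cycle counts are hypothetical placeholders, not values from the paper.

        # Minimal sketch of a single-core ECM (Execution-Cache-Memory) runtime
        # prediction. All inputs are hypothetical cycle counts, not measurements
        # from the paper.
        import math

        def ecm_prediction(t_ol, t_nol, t_l1l2, t_l2l3, t_l3mem):
            """Predicted cycles for one core, non-overlapping machine model.

            t_ol   -- in-core cycles that overlap with data transfers (arithmetic)
            t_nol  -- in-core cycles that do not overlap (load/store issue)
            t_l1l2, t_l2l3, t_l3mem -- data transfer cycles between memory levels
            """
            return max(t_ol, t_nol + t_l1l2 + t_l2l3 + t_l3mem)

        def saturation_cores(t_ecm, t_l3mem):
            """Core count at which memory bandwidth is predicted to saturate."""
            return math.ceil(t_ecm / t_l3mem)

        # Hypothetical streaming kernel (cycles per cache line of work).
        t = ecm_prediction(t_ol=8, t_nol=4, t_l1l2=3, t_l2l3=4, t_l3mem=10)
        print("single-core prediction:", t, "cy/CL")
        print("cores to saturate memory bandwidth:", saturation_cores(t, t_l3mem=10))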

    GTFRC, a TCP friendly QoS-aware rate control for diffserv assured service

    This study addresses end-to-end congestion control support over the DiffServ Assured Forwarding (AF) class. The resulting Assured Service (AS) provides a minimum level of throughput guarantee. In this context, this article describes a new end-to-end mechanism for continuous transfer based on TCP-Friendly Rate Control (TFRC). The proposed approach modifies TFRC to take the negotiated QoS into account. This mechanism, named gTFRC, is able to reach the minimum throughput guarantee regardless of the flow's RTT and target rate. Simulation measurements and an implementation over a real QoS testbed demonstrate the efficiency of this mechanism in both over-provisioned and exactly-provisioned networks. In addition, we show that the gTFRC mechanism can be used in the same DiffServ/AF class with TCP or TFRC flows.
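
    The heart of the mechanism is reconciling TFRC's equation-based sending rate with the negotiated target rate. The Python sketch below combines the standard TFRC throughput equation from RFC 3448 with a simple floor at the target rate; this floor rule is an illustrative reading of the guarantee, not necessarily the exact gTFRC algorithm, and the flow parameters are hypothetical.

        # Hedged sketch: TFRC throughput equation (RFC 3448) with a floor at the
        # negotiated target rate. The floor rule is illustrative, not the paper's
        # exact algorithm.
        from math import sqrt

        def tfrc_rate(s, rtt, p, t_rto=None):
            """TFRC sending rate in bytes/s for packet size s, round-trip time
            rtt (seconds) and loss event rate p; t_rto defaults to 4*rtt."""
            if p <= 0:
                return float("inf")  # no loss observed: the equation does not limit the rate
            t_rto = 4 * rtt if t_rto is None else t_rto
            denom = rtt * sqrt(2 * p / 3) + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p)
            return s / denom

        def gtfrc_rate(s, rtt, p, target_rate):
            """Never send below the negotiated target rate (illustrative rule)."""
            return max(tfrc_rate(s, rtt, p), target_rate)

        # Hypothetical flow: 1500-byte packets, 200 ms RTT, 2% loss, 1 Mbit/s target.
        x = gtfrc_rate(s=1500, rtt=0.2, p=0.02, target_rate=125_000)
        print(f"sending rate: {x / 1000:.1f} kB/s")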

    Data communication network at the ASRM facility

    The main objective of the report is to present the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi. This report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing-critical and manufacturing-non-critical. The manufacturing-critical buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B 1000. The manufacturing-non-critical buildings will be connected by 10BASE-FL to the Business Information System (BIS) in the main computing center. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing-critical hub and one of the OIS hubs. The network structure described in this report will be the basis for simulations to be carried out next year. Comdisco's Block Oriented Network Simulator (BONeS) will be used for the network simulation. The main aim of the simulations will be to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
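
    To make the simulation input concrete, the sketch below captures the site topology described above as a small Python data structure of the kind such a simulation could be driven by. The structure and field names are illustrative assumptions, not taken from the report.

        # Hedged sketch: the ASRM site network topology as a simple data structure.
        # Field names and layout are illustrative, not from the report.
        SITE_TOPOLOGY = {
            "backbone": {"type": "logical FDDI ring", "center": "B 1000"},
            "hosts": {
                "OIS": {"role": "Operational Information System", "attach": "FDDI"},
                "BIS": {"role": "Business Information System", "attach": "10BASE-FL"},
            },
            "building_classes": {
                "manufacturing-critical": {"link": "FDDI", "connects_to": "OIS"},
                "manufacturing-non-critical": {"link": "10BASE-FL", "connects_to": "BIS"},
            },
            # Workcells reach the Area Supervisory Computers (ASCs) through the
            # nearest manufacturing-critical hub and one of the OIS hubs.
            "workcell_path": ["workcell", "nearest critical hub", "OIS hub", "ASC"],
        }

        for cls, cfg in SITE_TOPOLOGY["building_classes"].items():
            print(f"{cls}: {cfg['link']} -> {cfg['connects_to']}")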

    Active Traffic Management Case Study: Phase 1

    This study developed a systematic approach for using data from multiple sources to provide active traffic management solutions. The feasibility of two active traffic management solutions is analyzed in this report: ramp metering and real-time crash risk estimation and prediction. Using a combined dataset containing traffic, weather, and crash data, this study assessed crash likelihood on urban freeways and evaluated the economic feasibility of providing a ramp metering solution. A case study of freeway segments in Omaha, Nebraska, was conducted. The impact of rain, snow, congestion, and other factors on crash risk was analyzed using a binary probit model, and one of the major findings from the sensitivity analysis was that a one-mile-per-hour increase in speed is associated with a 7.5% decrease in crash risk. FREEVAL was used to analyze the economic feasibility of the ramp metering implementation strategy. A case study of a 6.3-mile segment of I-80 near downtown Omaha showed that, after applying ramp metering, travel time decreased from 9.3 minutes to 8.1 minutes and crash risk decreased by 37.5% during rush hours. The benefits of reduced travel time and crash costs can easily offset the cost of implementing ramp metering for this road section. The results from the real-time crash risk prediction models developed for the studied road section are promising. A sensitivity analysis was conducted on different models and on different temporal and spatial windows for estimating and predicting crash risk. An adaptive boosting (AdaBoost) model using a 10-minute historical window of speeds obtained from 0.25 miles downstream and 0.75 miles upstream was found to be the most accurate estimator of crash risk.
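
    The Python sketch below illustrates the real-time estimation step described above: an AdaBoost classifier trained on a 10-minute window of detector speeds from downstream and upstream of the segment. The data is synthetic, and the feature layout and model settings are assumptions rather than the study's actual configuration.

        # Hedged sketch of the real-time crash-risk estimator: AdaBoost on a
        # 10-minute window of speeds (0.25 mi downstream + 0.75 mi upstream).
        # Synthetic data; feature layout and settings are assumptions.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Ten 1-minute average speeds per detector (downstream + upstream) = 20 features.
        n_obs = 2000
        speeds = rng.normal(loc=55.0, scale=8.0, size=(n_obs, 20))
        # Synthetic label: lower speeds (congestion) imply a higher crash likelihood.
        p_crash = 1.0 / (1.0 + np.exp(0.2 * (speeds.mean(axis=1) - 45.0)))
        crash = (rng.random(n_obs) < p_crash).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(
            speeds, crash, test_size=0.25, random_state=0)
        model = AdaBoostClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print("held-out accuracy:", round(model.score(X_test, y_test), 3))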

    A control theoretic approach for security of cyber-physical systems

    In this dissertation, several novel defense methodologies for cyber-physical systems are proposed. First, a special type of cyber-physical system, the RFID system, is considered, for which a lightweight mutual authentication and ownership management protocol is proposed to protect data confidentiality and integrity. Then, since protecting data confidentiality and integrity alone is insufficient to guarantee security in cyber-physical systems, we turn to the development of a general framework for designing security schemes for cyber-physical systems in which the cyber system states affect the physical system and vice versa. We then apply this general framework by selecting the traffic flow as the cyber system state and propose a novel attack detection scheme capable of capturing abnormalities in the traffic flow of the communication links caused by a class of attacks. In addition, an attack detection scheme capable of detecting both sensor and actuator attacks is proposed for the physical system in the presence of network-induced delays and packet losses. Next, an attack detection scheme is proposed for the case where the network parameters are unknown, using an optimal Q-learning approach. Finally, this attack detection and accommodation scheme is further extended to the case where the network is modeled as a nonlinear system with unknown system dynamics --Abstract, page iv
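
    For orientation, the Python sketch below implements a generic residual-based (observer) detector, a common baseline for flagging sensor or actuator attacks on a linear physical system. It is not the specific scheme developed in the dissertation, and the system matrices, gain, and threshold are hypothetical.

        # Hedged sketch: generic residual-based attack detector. An observer
        # predicts the next output; a residual above a threshold flags a possible
        # sensor/actuator attack. All matrices and the threshold are hypothetical.
        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 0.9]])  # state transition (hypothetical)
        C = np.array([[1.0, 0.0]])              # output matrix (hypothetical)
        L = np.array([[0.5], [0.1]])            # observer gain (hypothetical)
        THRESHOLD = 0.5                         # residual threshold (hypothetical)

        def detect(y_measured, x_hat):
            """One observer step: return (attack_flag, residual, new estimate)."""
            y_pred = C @ x_hat
            residual = float(np.abs(y_measured - y_pred))
            x_hat = A @ x_hat + L @ (y_measured - y_pred)  # Luenberger update
            return residual > THRESHOLD, residual, x_hat

        x_hat = np.zeros((2, 1))
        for k, y in enumerate([0.0, 0.05, 0.1, 1.5, 1.6]):  # last readings spoofed
            flag, r, x_hat = detect(np.array([[y]]), x_hat)
            print(f"k={k} residual={r:.2f} attack={flag}")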

    Application-level optimization of end-to-end data transfer throughput

    For large-scale distributed applications, effective use of the available network throughput and optimization of data transfer speed are crucial for end-to-end application performance. Today, many regional and national optical networking initiatives such as LONI, ESnet, and Teragrid provide high-speed network connectivity to their users. However, the majority of users fail to obtain even a fraction of the theoretical speeds promised by these networks due to issues such as sub-optimal protocol tuning, disk bottlenecks on the sending and/or receiving ends, and processor limitations. This implies that having high-speed networks in place is important but not sufficient for the improvement of end-to-end data transfer throughput; being able to effectively use these high-speed networks is becoming more and more important. Optimization of the underlying protocol parameters at the application layer (i.e., opening multiple parallel TCP streams, tuning the TCP buffer size and I/O block size) is one way of improving the network transfer throughput. On the other hand, end-to-end data transfer throughput bottlenecks on high-performance networking systems occur mostly at the participating storage systems rather than in the network. The performance of a storage system heavily depends on the speed of its disk and CPU subsystems. Thus, it is critical to estimate the storage system's bandwidth at both endpoints in addition to the network bandwidth. The disk bottleneck can be eliminated by the use of multiple disks (data striping), and the CPU bottleneck can be eliminated by the use of multiple processors (parallelism). In this dissertation, we develop application-level models to predict the best combination of protocol parameters for optimal network performance, including the number of parallel data streams and the protocol buffer size, and we integrate disk and CPU speed parameters into the performance model to predict the optimal degree of disk and CPU striping for the best end-to-end data throughput. These models will be made available to the community for use in data transfer tools, schedulers, and high-level planners.
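
    The Python sketch below illustrates the kind of application-level model described above: estimate the aggregate throughput of n parallel TCP streams with the Mathis formula, cap it by the network and end-system (disk/CPU) limits, and pick the smallest stream count that reaches the best estimate. The formula choice and all numbers are illustrative assumptions, not the dissertation's fitted models.

        # Hedged sketch: predict end-to-end throughput as a function of the number
        # of parallel TCP streams, capped by network and end-system limits.
        # Formula choice and all parameter values are illustrative assumptions.
        from math import sqrt

        MATHIS_C = 1.22  # constant from the Mathis et al. TCP throughput model

        def stream_throughput(mss_bytes, rtt_s, loss_rate):
            """Single-stream TCP throughput estimate (bytes/s), Mathis formula."""
            return (mss_bytes / rtt_s) * (MATHIS_C / sqrt(loss_rate))

        def end_to_end_throughput(n_streams, mss_bytes, rtt_s, loss_rate,
                                  network_capacity, disk_bw, cpu_bw):
            """Aggregate estimate for n streams, limited by network and endpoints."""
            aggregate = n_streams * stream_throughput(mss_bytes, rtt_s, loss_rate)
            return min(aggregate, network_capacity, disk_bw, cpu_bw)

        def best_stream_count(max_streams=32, **params):
            """Smallest stream count reaching the highest predicted throughput."""
            return max(range(1, max_streams + 1),
                       key=lambda n: end_to_end_throughput(n, **params))

        # Hypothetical transfer: 10 Gbit/s network, 50 ms RTT, slow disks on one end.
        params = dict(mss_bytes=1460, rtt_s=0.05, loss_rate=1e-4,
                      network_capacity=1.25e9, disk_bw=80e6, cpu_bw=2e9)
        n = best_stream_count(**params)
        print("suggested parallel streams:", n)
        print("predicted throughput (MB/s):",
              round(end_to_end_throughput(n, **params) / 1e6, 1))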