    The AMASS approach for assurance and certification of critical systems

    Safety-critical systems are subject to rigorous assurance and certification processes to guarantee that they do not pose unreasonable risks to people, property, or the environment. The associated activities are usually complex and time-consuming, and thus need adequate support for their execution. These activities are becoming even more challenging as the systems evolve towards open, interconnected systems with new features, e.g. Internet connectivity, and new assurance needs, e.g. compliance with several assurance standards for different dependability attributes. This requires the development of novel approaches for cost-effective assurance and certification. With the overall goal of lowering assurance and certification costs in the face of rapidly changing features and market needs, the AMASS project has created and consolidated the de facto European-wide open solution for assurance and certification of critical systems. This has been achieved by establishing a novel holistic and reuse-oriented approach for architecture-driven assurance, multi-concern assurance, and seamless interoperability between assurance and engineering activities along with third-party activities. This paper introduces the main elements of the AMASS approach and how to use and benefit from them. The work leading to this paper has received funding from the AMASS project (H2020-ECSEL grant agreement no. 692474; Spain's MINECO ref. PCIN-2015-262).

    A platform for evaluation of multiprocessors in throughput-oriented systems

    Multiprocessors are widely used today as a way to achieve high performance using commodity microprocessors. Since different applications place different demands on the system, performance evaluation in the design phase of such systems is important. The most common method is the use of simulators. This thesis presents a simulator of a multiprocessor for throughput-oriented applications, using a distribution-driven simulation method. With this method, hardware parameters, and to some extent software parameters, can be studied. A case study is performed, focusing on performance issues for various characteristics of a multiprocessor implementation of a telecom server that today is a uniprocessor. The results show that the key bottlenecks for this particular system lie in the parallelisation of the software, which is written with a uniprocessor environment in mind. Providing sufficient bus bandwidth is also an important factor. Other issues investigated, such as scheduling techniques, three different architectural choices, and the degree of memory interleaving, are found to have a smaller impact on performance.
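
    The distribution-driven method lends itself to a compact illustration: instead of replaying an instruction trace, job arrivals and service demands are drawn from probability distributions and pushed through a simple model of N processors sharing a bus. The sketch below is a hypothetical analogue of such a simulator, not the thesis's actual tool; all distributions and parameters are illustrative assumptions.

    ```python
    import random

    # Minimal sketch of a distribution-driven multiprocessor simulation:
    # jobs arrive with exponential inter-arrival times, take one shared-bus
    # transaction, then compute on the earliest-free CPU. All parameters
    # are hypothetical.
    def simulate(n_cpus, bus_time=0.2, arrival_mean=0.5, service_mean=1.0,
                 n_jobs=10_000, seed=42):
        rng = random.Random(seed)
        cpu_free = [0.0] * n_cpus     # time at which each CPU becomes idle
        bus_free = 0.0                # the shared bus serves one job at a time
        t = 0.0
        makespan = 0.0
        for _ in range(n_jobs):
            t += rng.expovariate(1.0 / arrival_mean)      # next job arrives
            i = min(range(n_cpus), key=lambda k: cpu_free[k])
            start = max(t, cpu_free[i])
            bus_start = max(start, bus_free)              # queue for the bus
            bus_free = bus_start + bus_time
            cpu_free[i] = bus_free + rng.expovariate(1.0 / service_mean)
            makespan = max(makespan, cpu_free[i])
        return n_jobs / makespan      # throughput in jobs per time unit

    for n in (1, 2, 4, 8):
        print(f"{n} CPUs: {simulate(n):.2f} jobs/unit")
    ```

    Sweeping bus_time against the processor count in a model of this kind is enough to reproduce the qualitative finding that bus bandwidth, rather than processor count, eventually caps throughput.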

    Techniques to Reduce Thread-Level Speculation Overhead

    The traditional single-core processors are being replaced by chip multiprocessors (CMPs), where several processor cores are integrated on a single chip. While this is beneficial for multithreaded applications and multiprogrammed workloads, CMPs do not provide performance improvements for single-threaded applications. Thread-level speculation (TLS) has been proposed as a way to improve single-thread performance on such systems. TLS is a technique where programs are aggressively parallelized at run-time: threads speculate on data and control dependences, but have to be squashed and started over in case of a dependence violation. Unfortunately, various sources of overhead create a major performance problem for TLS. This thesis quantifies the impact of overheads on the performance of TLS systems, and suggests remedies in the form of a number of overhead-reduction techniques. These techniques target run-time parallelization that does not require recompilation of sequential binaries. The main source of parallelism investigated in this work is module continuations, i.e. functions or methods are run in parallel with the code following the call instruction. Loops are another source. Run-length prediction, a technique aimed at reducing the number of short threads, is introduced. An accurate predictor that avoids short threads, or dynamically unrolls loops to increase thread lengths, is shown to improve speedup for most of the benchmark applications. Another novel technique is misspeculation prediction, which can remove most of the TLS overhead by reducing the number of misspeculations. The interaction between thread-level parallelism and instruction-level parallelism is studied: in many cases, both sources can be exploited for additional performance gains, but in some cases there is a trade-off. Communication overhead and memory-level parallelism are found to play an important role. For some applications, prefetching from threads that are squashed contributes more to speedup than parallel execution. Finally, faster inter-thread communication is found to give simultaneous multithreaded (SMT) processors an advantage as the basis for TLS machines.
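
    Run-length prediction admits a very small sketch: a table indexed by the speculative thread's start address remembers how long that thread ran last time, and a fork is only performed when the predicted length amortizes the spawn overhead. The code below is a hypothetical last-value variant for illustration; the table size, indexing, and threshold are assumptions, not the thesis's actual design.

    ```python
    # Sketch of a last-value run-length predictor for TLS thread spawning.
    # All parameters are illustrative assumptions.
    SPAWN_OVERHEAD = 200        # assumed cycles to fork a speculative thread
    TABLE_SIZE = 256

    class RunLengthPredictor:
        def __init__(self):
            # last observed run length per thread start PC, direct-mapped
            self.table = [0] * TABLE_SIZE

        def _index(self, start_pc):
            return (start_pc >> 2) % TABLE_SIZE

        def should_spawn(self, start_pc):
            # fork only if the thread is predicted to run long enough
            # to pay back the spawn overhead
            return self.table[self._index(start_pc)] > SPAWN_OVERHEAD

        def update(self, start_pc, observed_length):
            # last-value prediction: remember the most recent run length
            self.table[self._index(start_pc)] = observed_length

    pred = RunLengthPredictor()
    pred.update(0x4004F0, 1500)          # a long module continuation
    pred.update(0x400A20, 50)            # a short one, not worth a fork
    print(pred.should_spawn(0x4004F0))   # True
    print(pred.should_spawn(0x400A20))   # False
    ```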

    Limits on Speculative Module-level Parallelism in Imperative and Object-oriented Programs on CMP Platforms

    This paper considers program modules, e.g. procedures, functions, and methods, as the basic unit for exploiting speculative parallelism in existing codes. We analyze how much inherent and exploitable parallelism exists in a set of C and Java programs on a set of chip-multiprocessor architecture models, and identify which inherent program features, as well as architectural deficiencies, limit the speedup. Our data complement previous limit studies by indicating that the programming style, object-oriented versus imperative, does not seem to have any noticeable impact on the achievable speedup. Further, we show that as few as eight processors are enough to exploit all of the inherent parallelism. However, the memory-level data dependence resolution and thread management mechanisms of recent CMP proposals may impose overheads that severely limit the speedup obtained.
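
    The module-continuation model can be mimicked in software to make the mechanism concrete: the callee runs in one thread while the code after the call runs speculatively, and a read-write conflict forces a squash and sequential re-execution. The sketch below is a hypothetical software analogue (the CMP proposals studied here track dependences in hardware); every name in it is illustrative.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Conceptual sketch of speculative module-level parallelism: the callee
    # and its continuation run in parallel; if the callee writes a location
    # the continuation read, the continuation is squashed and re-executed.
    def speculate(memory, callee, continuation):
        read_set, write_set = set(), set()

        def tracked_read(key):          # the continuation reads through this
            read_set.add(key)
            return memory[key]

        with ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(callee, memory, write_set)  # non-speculative
            result = continuation(tracked_read)              # speculative
            future.result()

        if read_set & write_set:
            # dependence violation: squash and re-execute sequentially
            result = continuation(lambda k: memory[k])
        return result

    mem = {"x": 1, "y": 10}

    def callee(memory, write_set):
        memory["x"] = 2
        write_set.add("x")

    print(speculate(mem, callee, lambda read: read("y") + 1))  # 11: commits
    print(speculate(mem, callee, lambda read: read("x") + 1))  # 3: squashed
    ```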

    Evaluating the Safety Impact of Network Disturbances for Remote Driving with Simulation-Based Human-in-the-Loop Testing

    One vital safety aspect of advanced vehicle features is ensuring that the interaction with human users will not cause accidents. For remote driving, the human operator is physically removed from the vehicle, instead controlling it from a remote control station over a wireless network. This work presents a methodology to inject network disturbances into this communication and analyse the effects on vehicle manoeuvrability. A driving simulator, CARLA, was connected to a driving station to allow human-in-the-loop testing. NETEM was used to inject faults to emulate network disturbances. Time-To-Collision (TTC) and Steering Reversal Rate (SRR) were used as the main metrics to assess manoeuvrability. Clear negative effects on the ability to safely control the vehicle were observed on both TTC and SRR for 5% packet loss, and collision analysis shows that 50 ms communication delay and 5% packet loss resulted in crashes for our test setup. The presented methodology can be used as part of a safety evaluation or in the design loop of remote driving or remote assistance vehicle features. This work was partly supported by the VALU3S project, which has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 876852. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Czech Republic, Germany, Ireland, Italy, Portugal, Spain, Sweden, Turkey.
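
    The fault-injection half of the setup maps directly onto Linux's netem queueing discipline, and SRR is easy to compute offline from a logged steering trace. The sketch below is a minimal illustration assuming an interface named eth0, a 2-degree reversal threshold, and a fixed-rate angle log; these values are assumptions, not the paper's exact configuration.

    ```python
    import subprocess

    # Inject delay and packet loss with Linux netem (requires root).
    # Interface name and values are illustrative assumptions.
    def apply_netem(iface="eth0", delay_ms=50, loss_pct=5.0):
        subprocess.run(
            ["tc", "qdisc", "add", "dev", iface, "root", "netem",
             "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
            check=True,
        )

    def clear_netem(iface="eth0"):
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root", "netem"],
                       check=True)

    # Steering Reversal Rate: direction changes of at least threshold_deg,
    # reported per minute of driving.
    def steering_reversal_rate(angles_deg, sample_hz, threshold_deg=2.0):
        reversals = 0
        direction = 0             # +1 angle increasing, -1 decreasing
        extremum = angles_deg[0]  # extreme angle since the last reversal
        for a in angles_deg[1:]:
            if direction != -1 and a >= extremum:
                extremum, direction = a, 1
            elif direction != 1 and a <= extremum:
                extremum, direction = a, -1
            elif direction == 1 and a <= extremum - threshold_deg:
                reversals += 1    # clear switch from turning one way...
                direction, extremum = -1, a
            elif direction == -1 and a >= extremum + threshold_deg:
                reversals += 1    # ...to turning the other way
                direction, extremum = 1, a
        return reversals / (len(angles_deg) / sample_hz / 60.0)

    # Usage: a 10 Hz trace containing two clear reversals
    trace = [0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3]
    print(steering_reversal_rate(trace, sample_hz=10))  # 2 reversals / 0.02 min
    ```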