
    Simulation based formal verification of cyber-physical systems

    Cyber-Physical Systems (CPSs) have become an intrinsic part of the 21st-century world. Systems like Smart Grids, Transportation, and Healthcare help us run our lives and businesses smoothly, successfully and safely. Since malfunctions in these CPSs can have serious, expensive, sometimes fatal consequences, System-Level Formal Verification (SLFV) tools are vital to minimise the likelihood of errors occurring during the development process and beyond. Their applicability is supported by the increasingly widespread use of Model Based Design (MBD) tools. MBD enables the simulation of CPS models in order to check for their correct behaviour from the very initial design phase. The disadvantage is that SLFV for complex CPSs is an extremely time-consuming process, which typically requires several months of simulation. Current SLFV tools aim to accelerate the verification process by running multiple simulators simultaneously. To this end, they compute all the scenarios in advance in such a way as to split and simulate them in parallel. Furthermore, they compute optimised simulation campaigns in order to simulate common prefixes of these scenarios only once, thus avoiding redundant simulation. Nevertheless, there are still limitations that prevent a more widespread adoption of SLFV tools. Firstly, current tools cannot optimise simulation campaigns from existing datasets of collected scenarios. Secondly, there are currently no methods to predict the time required to complete the SLFV process. This inability to predict the length of the process makes scheduling verification activities highly problematic. In this thesis, we present how we are able to overcome these limitations with the use of a simulation campaign optimiser and an execution time estimator. The optimiser tool is aimed at speeding up the SLFV process by using a data-intensive algorithm to obtain optimised simulation campaigns from existing datasets that may contain a large quantity of collected scenarios. The estimator tool is able to accurately predict the execution time needed to simulate a given simulation campaign by using an effective, machine-independent method.
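
    The campaign optimisation idea described above (simulating common scenario prefixes only once) can be illustrated with a small sketch. The sketch below merges scenarios into a prefix tree and counts the simulation steps saved; it assumes scenarios are finite sequences of hashable input commands, and it does not reproduce the thesis's data-intensive optimiser or its campaign format.

        # Prefix-sharing sketch: merge scenarios into a trie so common prefixes
        # are stored (and would be simulated) only once.
        from collections import defaultdict

        def build_prefix_tree(scenarios):
            """Insert each scenario (a sequence of input commands) into a trie."""
            tree = lambda: defaultdict(tree)
            root = tree()
            for scenario in scenarios:
                node = root
                for command in scenario:
                    node = node[command]
            return root

        def count_simulation_steps(node):
            """Each trie edge is one simulation step; shared prefixes count once."""
            return sum(1 + count_simulation_steps(child) for child in node.values())

        scenarios = [("a", "b", "c"), ("a", "b", "d"), ("a", "e")]
        root = build_prefix_tree(scenarios)
        print(sum(len(s) for s in scenarios))   # 8 steps without prefix sharing
        print(count_simulation_steps(root))     # 5 steps when prefixes are shared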

    Neuro-Fuzzy Algorithm Implemented In Altera’s FPGA For Mobile Robot’s Obstacle Avoidance Mission.

    This paper presents an obstacle avoidance program for a mobile robot that incorporates a neuro-fuzzy algorithm, implemented on an Altera™ Field Programmable Gate Array (FPGA) DE2 development board.
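
    As a rough illustration of the fuzzy-inference step such a controller performs (here in plain software, before any mapping to FPGA logic), the sketch below fuzzifies two assumed distance sensors and defuzzifies a steering angle. The membership functions, rule base, sensor names and ranges are illustrative assumptions, not the paper's design.

        def triangular(x, a, b, c):
            """Triangular membership function rising from a to b and falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def steering_command(left_dist_cm, right_dist_cm):
            """Two illustrative rules: obstacle near on one side -> steer to the other."""
            near_left = triangular(left_dist_cm, -1.0, 0.0, 50.0)
            near_right = triangular(right_dist_cm, -1.0, 0.0, 50.0)
            # Weighted-average defuzzification over steering angles (+30 deg = turn right).
            weights = [near_left, near_right]
            angles = [30.0, -30.0]
            total = sum(weights)
            return sum(w * a for w, a in zip(weights, angles)) / total if total else 0.0

        print(steering_command(20.0, 80.0))   # obstacle close on the left -> positive angle (turn right)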

    Operational Semantics of the Marte Repetitive Structure Modeling Concepts for Data-Parallel Applications Design

    This paper presents an operational semantics of the repetitive model of computation, which is the basis for the repetitive structure modeling (RSM) package defined in the standard UML Marte profile. It also deals with the semantics of an RSM extension for control-oriented design. The goal of this semantics is to serve as a formal support for i) reasoning about the behavioral properties of models specified in Marte with RSM, and ii) defining correct-by-construction model transformations for the production of executable code in a model-driven engineering framework.
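
    The repetitive model of computation that RSM builds on expresses a computation as an elementary task repeated over multidimensional arrays, with tilers (origin, paving and fitting matrices) describing which array elements each repetition reads or writes. The sketch below illustrates that tiling rule for a small one-dimensional case; the simplifications (no modulo wrap-around, illustrative matrices) are assumptions, and it is not the operational semantics defined in the paper.

        import numpy as np

        def extract_tile(array, origin, paving, fitting, rep_index, pattern_shape):
            """Pattern element i of repetition r is array[origin + paving @ r + fitting @ i]
            (simplified: no modulo wrap-around on the array bounds)."""
            tile = np.empty(pattern_shape, dtype=array.dtype)
            for i in np.ndindex(*pattern_shape):
                idx = origin + paving @ np.array(rep_index) + fitting @ np.array(i)
                tile[i] = array[tuple(idx)]
            return tile

        data = np.arange(12)                 # 1-D input array of 12 elements
        origin = np.array([0])
        paving = np.array([[3]])             # successive repetitions start 3 elements apart
        fitting = np.array([[1]])            # elements inside a pattern are adjacent
        for r in range(4):
            print(extract_tile(data, origin, paving, fitting, (r,), (3,)))
        # [0 1 2], [3 4 5], [6 7 8], [9 10 11]: four repetitions of a 3-element pattern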

    The parallel computation of Morse-Smale complexes

    Topology-based techniques are useful for multi-scale exploration of the feature space of scalar-valued functions, such as those derived from the output of large-scale simulations. The Morse-Smale (MS) complex, in particular, allows robust identification of gradient-based features, and therefore is suitable for analysis tasks in a wide range of application domains. In this paper, we develop a two-stage algorithm to construct the Morse-Smale complex in parallel, the first stage independently computing local features per block and the second stage merging to resolve global features. Our implementation is based on MPI and a distributed-memory architecture. Through a set of scalability studies on the IBM Blue Gene/P supercomputer, we characterize the performance of the algorithm as block sizes, process counts, merging strategy, and levels of topological simplification are varied, for datasets that vary in feature composition and size. We conclude with a strong scaling study using scientific datasets computed by combustion and hydrodynamics simulations.
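
    The two-stage structure (independent per-block computation followed by a merge that resolves global features) is a general pattern that can be sketched on a much simpler feature. The example below finds local minima of a 1-D scalar field block by block with a one-sample halo and merges the per-block results; it uses a Python process pool rather than MPI and does not attempt the gradient-flow and Morse-Smale machinery of the paper.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def local_minima(job):
            """Stage 1: local minima strictly inside one block (halo samples excluded)."""
            block, offset = job
            return [offset + i for i in range(1, len(block) - 1)
                    if block[i] < block[i - 1] and block[i] < block[i + 1]]

        def parallel_minima(field, block_size=1000):
            # Split into blocks with a one-sample halo so boundary neighbourhoods are visible.
            jobs = []
            for start in range(0, len(field), block_size):
                lo = max(start - 1, 0)
                hi = min(start + block_size + 1, len(field))
                jobs.append((field[lo:hi], lo))
            with ProcessPoolExecutor() as pool:
                partial = pool.map(local_minima, jobs)
            # Stage 2: merge per-block results into the global feature set.
            return sorted(set(i for part in partial for i in part))

        if __name__ == "__main__":
            field = np.sin(np.linspace(0.0, 40.0, 10_000))
            print(parallel_minima(field)[:5])

    In this toy case every interior sample lies strictly inside exactly one block, so the merge is a plain union; the paper's merge stage is more involved because gradient paths can cross block boundaries.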

    Performance Evaluation of PQC in TLS 1.3 under Varying Network Characteristics

    Quantum computers could break currently used asymmetric cryptographic schemes in a few years using Shor's algorithm. These schemes are used in numerous protocols and applications to secure authenticity as well as key agreement, and quantum-safe alternatives are urgently needed. NIST therefore initiated a standardization process. This requires intensive evaluation, also with regard to performance and integrability. Here, integration into TLS 1.3 plays an important role, since it is used for 90% of all Internet connections. In the present work, algorithms for quantum-safe key exchange during the TLS 1.3 handshake were reviewed. The focus is on the influence of dedicated network parameters such as transmission rate or packet loss, in order to gain insights regarding the suitability of the algorithms under corresponding network conditions. For the implementation, a framework by Paquin et al. was extended to emulate network scenarios and capture the handshake duration for selected algorithms. It is shown that the evaluated candidates Kyber, Saber and NTRU, as well as the alternative NTRU Prime, have a very good overall performance and partly undercut the handshake duration of classical ECDH. The choice of a higher security level or hybrid variants does not make a significant difference here. This is not the case with alternatives such as FrodoKEM, SIKE, HQC or BIKE, which have individual disadvantages and whose respective performance varies greatly depending on the security level and hybrid implementation. This is especially true for the data-intensive algorithm FrodoKEM. In general, the prevailing network characteristics should be taken into account when choosing a scheme and variant. Furthermore, it becomes clear that the performance of the handshake is influenced by external factors such as TCP mechanisms or MTU, which, if configured appropriately, could compensate for possible disadvantages due to PQC. (Master's thesis, 160 pages, in German.)
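
    The measured quantity is the duration of the TLS 1.3 handshake under different key-exchange algorithms and network conditions. As a point of reference, the sketch below times a single classical TLS 1.3 handshake with Python's standard ssl module; measuring the post-quantum candidates, as in the thesis, additionally requires a PQC-enabled TLS stack (the extended framework by Paquin et al.) and network emulation, neither of which is reproduced here.

        import socket, ssl, time

        def handshake_time(host, port=443):
            """Time one TLS 1.3 handshake; the TCP connect is excluded from the measurement."""
            ctx = ssl.create_default_context()
            ctx.minimum_version = ssl.TLSVersion.TLSv1_3
            with socket.create_connection((host, port), timeout=5) as raw:
                start = time.perf_counter()
                with ctx.wrap_socket(raw, server_hostname=host) as tls:
                    return time.perf_counter() - start, tls.version()

        duration, version = handshake_time("example.org")   # hypothetical test endpoint
        print(f"{version} handshake took {duration * 1000:.1f} ms")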

    A Reconfigurable Computing Solution to the Parameterized Vertex Cover Problem

    Active research has been done in the past two decades in the field of computational intractability. This thesis explores parallel implementations on a reconfigurable computing (RC) platform for fixed-parameter tractable (FPT) algorithms. Reconfigurable hardware implementations of algorithms for solving NP-Complete problems have been of great interest in research for the past few years. However, most of the research that has been done targets exact algorithms for solving problems of this nature. Although such implementations have generated good results, it should be kept in mind that the input sizes were small. Moreover, most of these implementations are instance-specific in nature, making it mandatory to generate a different circuit for every new problem instance. In this work, we present an efficient and scalable algorithm that breaks out of the conventional instance-specific approach towards a more general parameterized approach to solving such problems. We present approaches based on the theory of fixed-parameter tractability. The prototype problem used as a case study here is the classic vertex cover problem. The hardware implementation has demonstrated speedups of the order of 100x over the software version of the vertex cover algorithm.
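
    The parameterized approach for vertex cover is usually the bounded search tree: as long as an edge is uncovered, one of its two endpoints must be in any cover, so branching on that choice gives at most 2^k leaves for parameter k. The sketch below is that textbook recursion in plain Python; the thesis maps this kind of branching onto reconfigurable hardware, which the sketch does not attempt.

        def vertex_cover(edges, k):
            """Return a vertex cover of size <= k, or None if none exists (bounded search tree)."""
            if not edges:
                return set()
            if k == 0:
                return None                      # an edge remains uncovered but the budget is spent
            u, v = edges[0]                      # branch on an arbitrary uncovered edge
            for chosen in (u, v):
                rest = [e for e in edges if chosen not in e]
                sub = vertex_cover(rest, k - 1)
                if sub is not None:
                    return sub | {chosen}
            return None

        edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
        print(vertex_cover(edges, 2))            # a cover of size 2, e.g. {1, 3}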

    Novel Parallelization Techniques for Computer Graphics Applications

    Increasingly complex and data-intensive algorithms in computer graphics applications require software engineers to find ways of improving performance and scalability to satisfy the requirements of customers and users. Parallelizing and tailoring each algorithm of each specific application is a time-consuming task, and the resulting implementation is domain-specific because it cannot be reused outside the specific problem for which the algorithm is defined. Identifying reusable parallelization patterns that can be extrapolated and applied to other algorithms is essential in order to provide consistent parallelization improvements and reduce the development time of evolving a sequential algorithm into a parallel one. This thesis focuses on defining general and efficient parallelization techniques and approaches that can be followed in order to parallelize complex 3D graphics algorithms. These parallelization patterns can be readily applied to convert most kinds of sequential, complex, data-intensive algorithms into parallel ones, obtaining consistent optimization results. The main idea in the thesis is to use multi-threading techniques to improve the parallelization and core utilization of 3D algorithms. Most 3D algorithms apply similar, repetitive, independent operations on vast amounts of 3D data, which makes them well suited to multi-threaded parallelization techniques. The efficiency of the proposed idea is tested on two common computer graphics algorithms: hidden-line removal and collision detection. Both are data-intensive algorithms whose conversion from a sequential to a multi-threaded implementation is challenging, due to their complexity and the fact that elements in their data have different sizes and complexities, producing workload imbalances and asymmetries between processing elements. The results show that the proposed principles and patterns can be applied to both algorithms, transforming their sequential implementations into multi-threaded ones and obtaining consistent optimization results proportional to the number of processing elements. From the work done in this thesis, it is concluded that the suggested parallelization warrants further study and development in order to extend its usage to heterogeneous platforms such as a Graphics Processing Unit (GPU). OpenCL is the most feasible framework to explore in the future due to its interoperability among different platforms.
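
    A concrete instance of the general pattern described above (the same independent operation applied to many 3D elements of very different cost) is dynamic scheduling over a worker pool, so that idle workers pull the next element instead of receiving a fixed pre-assigned share. The sketch below uses a Python process pool as a stand-in for the thesis's native threads, because CPython's GIL prevents CPU-bound thread-level speedups; the element sizes and the per-element operation are illustrative assumptions.

        from concurrent.futures import ProcessPoolExecutor
        import math, random

        def process_element(size):
            """Stand-in for a per-element 3D operation whose cost grows with element size."""
            return sum(math.sqrt(i + 1.0) for i in range(size))

        if __name__ == "__main__":
            # Imbalanced workload: element sizes span more than two orders of magnitude.
            elements = [random.randint(100, 50_000) for _ in range(200)]
            # chunksize=1 hands out one element at a time, so expensive and cheap
            # items interleave across workers instead of being pre-assigned in blocks.
            with ProcessPoolExecutor(max_workers=8) as pool:
                results = list(pool.map(process_element, elements, chunksize=1))
            print(len(results))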