
    Optimisation-based verification process of obstacle avoidance systems for unmanned vehicles

    This thesis deals with safety verification analysis of collision avoidance systems for unmanned vehicles. The safety of the vehicle depends on the collision avoidance algorithms and their associated control laws, and it must be proven that these algorithms and controllers function correctly under all nominal conditions, under various failure conditions, and in the presence of possible variations in the vehicle and the operational environment. The exhaustive-search approaches in wide use today are not suitable for safety analysis of autonomous vehicles because of the large number of possible variations and the complexity of the algorithms and systems. To address this, a new optimisation-based verification method is developed to verify the safety of collision avoidance systems. The proposed method formulates the worst-case analysis problem arising in the verification of collision avoidance systems as an optimisation problem and employs optimisation algorithms to search for the worst cases automatically. The minimum distance to the obstacle during the collision avoidance manoeuvre is defined as the objective function, and a realistic simulation comprising the detailed vehicle dynamics, the operational environment, the collision avoidance algorithm and the low-level control laws is embedded in the optimisation process. This enables the verification process to take into account parameter variations in the vehicle, changes in the environment, uncertainties in sensors and, in particular, the mismatch between the model used for developing the collision avoidance algorithms and the real vehicle. It is shown that the resulting simulation-based optimisation problem is non-convex and may have many local optima. To illustrate and investigate the proposed optimisation-based verification process, the potential field method and a decision-making collision avoidance method are chosen as candidate obstacle avoidance techniques for the verification study. Five benchmark case studies are investigated in this thesis: static obstacle avoidance for a simple unicycle robot, moving obstacle avoidance for a Pioneer 3DX robot, and a 6-degrees-of-freedom fixed-wing Unmanned Aerial Vehicle with static and moving collision avoidance algorithms. It is shown that although a local method for nonlinear optimisation is quite efficient, it is not able to find the most dangerous situation. The results in this thesis show that, among all the global optimisation methods investigated, the DIviding RECTangles (DIRECT) method provides the most promising performance for verification of collision avoidance functions, in terms of its guaranteed capability of finding worst-case scenarios.
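
    As an illustration of the simulation-in-the-loop worst-case search described above (a minimal sketch, not the thesis code), the example below assumes a toy potential-field avoidance model and hypothetical uncertainty parameters (obstacle position and a wind disturbance); SciPy's direct solver implements the DIviding RECTangles method.

        # Illustrative sketch: worst-case search over uncertain parameters using
        # simulation-based global optimisation with DIRECT (DIviding RECTangles).
        import numpy as np
        from scipy.optimize import direct, Bounds  # direct() available in SciPy >= 1.9

        def min_separation(params):
            """Hypothetical closed-loop simulation: returns the minimum distance to
            the obstacle over the whole avoidance manoeuvre for one parameter vector
            (obstacle position and a wind disturbance)."""
            obs_x, obs_y, wind = params
            pos = np.array([0.0, 0.0])            # toy unicycle-like robot at the origin
            obstacle = np.array([obs_x, obs_y])
            closest = np.inf
            for _ in range(400):                  # fixed-step simulation loop
                repulse = pos - obstacle
                dist = np.linalg.norm(repulse)
                closest = min(closest, dist)
                goal_dir = np.array([1.0, 0.0])   # drive towards a goal along +x
                steer = goal_dir + 2.0 * repulse / (dist**3 + 1e-6)  # potential field
                heading = np.arctan2(steer[1], steer[0])
                pos = pos + 0.05 * np.array([np.cos(heading), np.sin(heading)])
                pos[1] += 0.01 * wind             # disturbance acting on the vehicle
            return closest

        # The optimiser searches for the parameter combination that minimises the
        # separation, i.e. the most dangerous scenario within the assumed ranges.
        bounds = Bounds([5.0, -2.0, -1.0], [15.0, 2.0, 1.0])
        result = direct(min_separation, bounds, maxfun=2000)
        print("worst-case parameters:", result.x, "worst-case separation:", result.fun)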

    On Optimization-Based Falsification of Cyber-Physical Systems

    In what is commonly referred to as cyber-physical systems (CPSs), computational and physical resources are closely interconnected. An example is the closed-loop behavior of perception, planning, and control algorithms executing on a computer and interacting with a physical environment. Many CPSs are safety-critical, and it is thus important to guarantee that they behave according to given specifications that define the correct behavior. CPS models typically include differential equations, state machines, and code written in general-purpose programming languages. This heterogeneity makes it generally infeasible to use analytical methods to evaluate the system's correctness; model-based testing of a simulation of the system is more viable. Optimization-based falsification is an approach that uses a simulation model to automatically check for the existence of input signals that make the CPS violate given specifications. Quantitative semantics estimate how far the specification is from being violated for a given scenario. The decision variables in the optimization problems are parameters that determine the type and shape of the generated input signals. This thesis contributes to the increased efficiency of optimization-based falsification in four ways. (i) A method for using multiple quantitative semantics during optimization-based falsification. (ii) A direct-search approach, called line-search falsification, that prioritizes extreme values, which are known to often falsify specifications, and that has a good balance between exploration and exploitation of the parameter space. (iii) An adaptation of Bayesian optimization that allows for injecting prior knowledge and uses a special acquisition function aimed at finding falsifying points rather than the global minimum. (iv) An investigation of different input-signal parameterizations and their coverage of the parameter space and of the time and frequency domains. The proposed methods have been implemented and evaluated on standard falsification benchmark problems. Based on these empirical studies, we show the efficiency of the proposed methods. Taken together, they are important contributions to the falsification of CPSs and enable a more efficient falsification process.
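
    The sketch below illustrates the general falsification loop described above under simplifying assumptions: a hypothetical first-order speed model, a piecewise-constant input parameterization, and SciPy's differential evolution as the optimizer (none of these are the thesis's own tools or benchmarks). The quantitative robustness of the specification is minimized; a negative value indicates a falsifying input.

        # Illustrative sketch: falsification of a simple closed-loop model by
        # minimising the quantitative robustness of a specification.
        import numpy as np
        from scipy.optimize import differential_evolution

        DT, STEPS = 0.1, 300

        def simulate(control_points):
            """Hypothetical CPS model: a first-order speed loop driven by a throttle
            request. The decision variables are control points of a piecewise-constant
            input signal, i.e. one style of input parameterization."""
            request = np.repeat(control_points, STEPS // len(control_points))
            speed = np.zeros(len(request))
            for k in range(1, len(request)):
                # crude plant + controller: speed chases the (scaled) throttle request
                speed[k] = speed[k - 1] + DT * (1.2 * request[k] - 0.5 * speed[k - 1])
            return speed

        def robustness(control_points):
            """Quantitative semantics of the spec 'speed is always below 120':
            the smallest margin over the trace. Negative => spec violated."""
            speed = simulate(control_points)
            return np.min(120.0 - speed)

        # Search for an input signal that drives the robustness below zero,
        # i.e. a counterexample to the specification.
        bounds = [(0.0, 100.0)] * 5              # five piecewise-constant segments
        result = differential_evolution(robustness, bounds, maxiter=50, seed=0, tol=1e-9)
        print("lowest robustness found:", result.fun)
        print("falsified!" if result.fun < 0 else "no violation found within budget")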

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the semiconductor landscape of the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
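
    As a concrete flavour of the software-level approximations such a survey classifies, the toy sketch below shows loop perforation: a fraction of loop iterations is skipped and the result is rescaled, trading accuracy for reduced work (the data and error metric are illustrative assumptions, not taken from the article).

        # Toy sketch of loop perforation, a software-level approximation technique:
        # execute only a subset of loop iterations and rescale the result.
        import numpy as np

        def mean_exact(samples):
            total = 0.0
            for x in samples:                        # every iteration executed
                total += x
            return total / len(samples)

        def mean_perforated(samples, skip=2):
            total, used = 0.0, 0
            for i in range(0, len(samples), skip):   # only every `skip`-th iteration
                total += samples[i]
                used += 1
            return total / used                      # estimate rescaled to the kept work

        data = np.random.default_rng(0).normal(10.0, 2.0, 100_000)
        exact = mean_exact(data)
        approx = mean_perforated(data, skip=4)       # roughly 4x less loop work
        print(f"exact={exact:.4f}  approx={approx:.4f}  rel.err={abs(approx - exact) / exact:.2%}")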