    Automated Test Case Generation as a Many-Objective Optimisation Problem with Dynamic Selection of the Targets

    Test case generation is intrinsically a multi-objective problem, since the goal is to cover multiple test targets (e.g., branches). Existing search-based approaches either consider one target at a time or aggregate all targets into a single fitness function (the whole-suite approach). Multi- and many-objective optimisation algorithms (MOAs) have never been applied to this problem, because existing algorithms do not scale to the number of coverage objectives typically found in real-world software. In addition, the final goal for MOAs is to find alternative trade-off solutions in the objective space, while in test generation the interesting solutions are only those test cases covering one or more uncovered targets. In this paper, we present DynaMOSA (Dynamic Many-Objective Sorting Algorithm), a novel many-objective solver specifically designed to address the test case generation problem in the context of coverage testing. DynaMOSA extends our previous many-objective technique MOSA (Many-Objective Sorting Algorithm) with dynamic selection of the coverage targets based on the control dependency hierarchy. This extension makes the approach more effective and efficient when the search budget is limited. We carried out an empirical study on 346 Java classes using three coverage criteria (statement, branch, and strong mutation coverage) to assess the performance of DynaMOSA with respect to the whole-suite approach (WS), its archive-based variant (WSA), and MOSA. The results show that DynaMOSA outperforms WSA in 28% of the classes for branch coverage (+8% more coverage on average) and in 27% of the classes for mutation coverage (+11% more killed mutants on average). It outperforms WS in 51% of the classes for statement coverage, leading to +11% more coverage on average. Moreover, DynaMOSA outperforms its predecessor MOSA for all three coverage criteria in 19% of the classes, with +8% more code coverage on average.
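
    As a rough illustration of the dynamic target selection idea, the Python sketch below activates a coverage target only once the target it is control-dependent on has been covered; the names (control_dependencies, fitness) and the single-parent assumption are illustrative simplifications, not the authors' EvoSuite implementation.

        # Sketch of dynamic target selection in the spirit of DynaMOSA.
        # control_dependencies maps each target to the single target it is
        # control-dependent on (a simplification); fitness(test, target) is an
        # assumed branch-distance-style function where 0.0 means covered.

        def update_targets(covered, all_targets, control_dependencies):
            """Only uncovered targets whose control-dependent parent is already
            covered (or that have no parent) are exposed to the search."""
            active = set()
            for target in all_targets:
                parent = control_dependencies.get(target)  # None for root targets
                if target not in covered and (parent is None or parent in covered):
                    active.add(target)
            return active

        def search_step(population, covered, all_targets, control_dependencies, fitness):
            """One generation: evaluate tests only against the active targets,
            record newly covered ones, then re-expand the active set."""
            active = update_targets(covered, all_targets, control_dependencies)
            for test in population:
                for target in active:
                    if fitness(test, target) == 0.0:
                        covered.add(target)
            return update_targets(covered, all_targets, control_dependencies)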

    Dorylus: An Ant Colony Based Tool for Automated Test Case Generation

    Automated test generation to cover all branches within a program is a hard task. We present Dorylus, a test suite generation tool that uses ant colony optimisation, guided by coverage. Dorylus constructs a continuous domain over which it conducts independent, multi-objective search employing a lightweight, dynamic, path-based input dependency analysis. We compare Dorylus with EvoSuite with respect to both coverage and speed using two corpora. The first benchmark contains string-based programs, where our results demonstrate that Dorylus improves over EvoSuite on branch coverage and is 50% faster on average. The second benchmark consists of 936 Java programs from SF110 and suggests Dorylus generalises well, as it achieves 79% coverage on average whereas the best-performing of three EvoSuite algorithms reaches 89%.
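
    As a rough sketch of coverage-guided ant colony search, the loop below samples starting points with probability proportional to pheromone, perturbs them, and deposits pheromone in proportion to the branches an input covers; the coverage() oracle and the numeric-input assumption are placeholders and do not reproduce Dorylus's continuous-domain, path-based dependency analysis.

        import random

        # Toy pheromone-guided input search. coverage(x) is an assumed oracle
        # returning the number of branches executed by input x.
        def aco_search(seed, coverage, ants=10, iterations=50, evaporation=0.1):
            pheromone = {seed: 1.0}
            best, best_cov = seed, coverage(seed)
            for _ in range(iterations):
                for _ in range(ants):
                    # Pick a starting point with probability proportional to pheromone.
                    points, weights = zip(*pheromone.items())
                    start = random.choices(points, weights=weights)[0]
                    candidate = start + random.gauss(0, 1.0)  # local perturbation
                    cov = coverage(candidate)
                    pheromone[candidate] = pheromone.get(candidate, 0.0) + cov
                    if cov > best_cov:
                        best, best_cov = candidate, cov
                # Evaporate old pheromone so the colony keeps exploring.
                pheromone = {p: (1 - evaporation) * t for p, t in pheromone.items()}
            return best, best_cov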

    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. The paper first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of an IMS.

    Pipelined genetic propagation

    Genetic Algorithms (GAs) are a class of numerical and combinatorial optimisers which are especially useful for solving complex non-linear and non-convex problems. However, the required execution time often limits their application to small-scale or latency-insensitive problems, so techniques to increase the computational efficiency of GAs are needed. FPGA-based acceleration has significant potential for speeding up genetic algorithms, but existing FPGA GAs are limited by the generational approaches inherited from software GAs. Many parts of the generational approach do not map well to hardware, such as the large shared population memory and the intrinsic loop-carried dependency. To address this problem, this paper proposes a new hardware-oriented approach to GAs, called Pipelined Genetic Propagation (PGP), which is intrinsically distributed and pipelined. PGP represents a GA solver as a graph of loosely coupled genetic operators, which allows the solution to be scaled to the available resources, and also to dynamically change topology at run-time to explore different solution strategies. Experiments show that pipelined genetic propagation is effective in solving seven different applications. Our PGP design is 5 times faster than a recent FPGA-based GA system, and 90 times faster than a CPU-based GA system.
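
    The contrast with a generational GA can be illustrated by a small steady-state, pipeline-style loop in which individuals flow through operator stages via a queue instead of a shared generational population; the stage layout and the one-max fitness are assumptions for illustration, and the paper's PGP design is an FPGA dataflow graph rather than software.

        from collections import deque
        import random

        def one_max(bits):            # toy fitness: number of 1-bits
            return sum(bits)

        def mutate(bits, rate=0.05):  # flip each bit with a small probability
            return [b ^ (random.random() < rate) for b in bits]

        def pipeline_ga(n_bits=32, pool=16, steps=2000):
            # Individuals circulate through a queue; each step emulates a
            # selection stage feeding a mutation stage, with no global generation.
            stream = deque([[random.randint(0, 1) for _ in range(n_bits)]
                            for _ in range(pool)])
            best = max(stream, key=one_max)
            for _ in range(steps):
                a, b = stream.popleft(), stream.popleft()  # selection stage
                parent = a if one_max(a) >= one_max(b) else b
                child = mutate(parent)                      # mutation stage
                stream.extend([parent, child])              # back into the pipe
                if one_max(child) > one_max(best):
                    best = child
            return best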

    Automated multi-objective calibration of biological agent-based simulations

    Computational agent-based simulation (ABS) is increasingly used to complement laboratory techniques in advancing our understanding of biological systems. Calibration, the identification of parameter values that align simulation with biological behaviours, becomes challenging as increasingly complex biological domains are simulated. Complex domains cannot be characterised by single metrics alone, rendering simulation calibration a fundamentally multi-metric optimisation problem that typical calibration techniques cannot handle. Yet calibration is an essential activity in simulation-based science; the baseline calibration forms a control for subsequent experimentation and hence is fundamental to the interpretation of results. Here, we develop and showcase a method, built around multi-objective optimisation, for calibrating ABSs against complex target behaviours that require several metrics (termed objectives) to characterise. Multi-objective calibration (MOC) delivers the sets of parameter values representing optimal trade-offs in simulation performance against each metric, in the form of a Pareto front. We use MOC to calibrate a well-understood immunological simulation against both established a priori and previously unestablished target behaviours. Furthermore, we show that simulation-borne conclusions are broadly, but not entirely, robust to adopting baseline parameter values from different extremes of the Pareto front, highlighting the importance of MOC's identification of numerous calibration solutions. We also devise a method, not previously possible, for detecting overfitting in a multi-objective context, which saves computational effort by terminating MOC when no improved solutions will be found. MOC can significantly impact biological simulation, adding rigour to and speeding up an otherwise time-consuming calibration process and highlighting inappropriate biological capture by simulations that cannot be well calibrated. As such, it produces more accurate simulations that generate more informative biological predictions.
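
    The core of such a calibration loop, extracting the non-dominated (Pareto-optimal) parameter sets from a pool of evaluated candidates, can be sketched as follows; the parameter name and metric values are invented purely for illustration.

        # Each candidate is a (parameters, objectives) pair, where the objectives
        # are error metrics against the target behaviours (lower is better).
        def dominates(a, b):
            """a dominates b if it is no worse on every objective and better on at least one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(candidates):
            """Return the non-dominated subset: the optimal calibration trade-offs."""
            return [(params, objs) for params, objs in candidates
                    if not any(dominates(other, objs) for _, other in candidates)]

        # Example with two metrics (e.g. error in cell count and in cell velocity).
        candidates = [({"rate": 0.1}, (0.30, 0.10)),
                      ({"rate": 0.2}, (0.20, 0.20)),
                      ({"rate": 0.3}, (0.25, 0.25))]  # dominated by the second
        print(pareto_front(candidates))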

    On Quality in Radiotherapy Treatment Plan Optimisation

    Radiotherapy is one of the essential treatments used in the fight against cancer. The goal of radiotherapy is to deliver a high dose of ionising radiation to the tumour volume and at the same time minimise the effect on healthy tissue by reducing the radiation to critical organs. This contradiction is challenging and has been driving the research and development of the treatments. Over the last two decades, there has been tremendous technical development in radiotherapy. The rapid increase in computational power introduced treatment plan optimisation and intensity-modulated radiotherapy (IMRT). IMRT made it possible to shape the radiation dose distribution closely around the target volume, avoiding critical organs to a greater extent. Rotational implementations of IMRT, e.g. Volumetric Modulated Arc Therapy (VMAT), further improved this “dose shaping” ability. With these techniques increasing the ability to produce better treatment plans, there was a need for evaluation tools to compare treatment plan quality. A plan can be judged by how well it fulfils the prescription and dose-volume constraints, ideally based on treatment outcome. In this work, this is denoted Required Plan Quality: the minimum quality to accept a plan for clinical treatment. If a plan does not fulfil all the dose-volume constraints, there should be a clear priority of which constraints are crucial to achieve. On the other hand, if the constraints are easily fulfilled, there might be a plan of better quality, limited only by the treatment system's ability to find and deliver it. This is denoted Attainable Plan Quality in this work: the quality possible to achieve with a given treatment system for a specific patient group. In the work described in this thesis, the so-called Pareto front method was used to search for the attainable plan quality in order to compare different treatment planning systems and optimisation strategies, more specifically a fall-back planning system for backup planning and an optimiser to find the best possible beam angles. The Pareto method utilises a set of plans to explore the trade-off between the target and nearby risk organs. Pareto plan generation is time-consuming if done manually. The Pareto method was therefore used in software that automated the plan generation, allowing for a more accurate representation of the trade-off. The software was used to investigate the attainable plan quality for prostate cancer treatments. In the last two publications in this thesis, machine learning approaches were developed to predict a treatment plan closer to the attainable plan quality than a manually generated plan. In the thesis, tools have been developed to help move treatment plan quality from the Required Plan Quality towards the Attainable Plan Quality, i.e. the best quality achievable with the current system.

    Queue scheduling the Alan Cousins Telescope

    The Alan Cousins Telescope is a 0.75-m automatic photoelectric telescope situated at the South African Astronomical Observatory in Sutherland. The telescope was designed and built to execute a range of photometry programmes, but is used mainly for the long-term monitoring of variable stars. In addition, there is the potential for target-of-opportunity observations of unanticipated events, such as gamma-ray bursts, and anticipated events such as occultations. Ultimately the telescope is intended to be a fully robotic telescope with limited operational support needs. Some progress toward this goal has been made with a full hardware interface that allows queued execution of observations. The next phase is the implementation of an automated scheduler that will generate a queue of valid observations for each night of observation. Queue scheduling algorithms are widely used in astronomy, and the aim of this dissertation is to present a strawman scheduler that will generate the nightly observation queue. The main design of the scheduler is based on a merit-based system implemented at the STELLA robotic observatory, paired with the scheduling algorithms used by SOFIA. The main drawback of the telescope is that it does not currently accommodate dynamically changing weather conditions. As a consequence, the main scheduling constraints are observation parameters, instrument ability and, for monitoring-type observations, observation time-window constraints.
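
    A merit-based queue of the kind described can be sketched as follows; the merit terms (priority, airmass, time window) and the way they are combined are assumptions made for illustration, not the dissertation's actual merit function.

        # Toy merit-based queue builder: infeasible observations score zero,
        # feasible ones are ranked by priority and (assumed) airmass.
        def merit(obs, now):
            if not (obs["window_start"] <= now <= obs["window_end"]):
                return 0.0
            if not obs["instrument_available"]:
                return 0.0
            return obs["priority"] / obs["airmass"]  # higher priority, lower airmass win

        def build_queue(observations, now):
            scored = [(merit(o, now), o) for o in observations]
            return [o for m, o in sorted(scored, key=lambda x: -x[0]) if m > 0]

        night_queue = build_queue([
            {"name": "variable-star monitor", "priority": 3, "airmass": 1.2,
             "window_start": 0, "window_end": 6, "instrument_available": True},
            {"name": "GRB follow-up", "priority": 5, "airmass": 1.8,
             "window_start": 2, "window_end": 4, "instrument_available": True},
        ], now=3)
        print([o["name"] for o in night_queue])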

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete and in-depth evaluation of CNN-to-FPGA toolflows. (Accepted for publication in ACM Computing Surveys (CSUR).)