    TreeToReads - a pipeline for simulating raw reads from phylogenies.

    Background: Using phylogenomic analysis tools for tracking pathogens has become standard practice in academia, public health agencies, and large industries. Starting from the same raw read genomic data, several different approaches are used to infer phylogenetic trees, including SNP pipelines, wgMLST approaches, k-mer algorithms, whole-genome alignment, and others. Each of these has advantages and disadvantages: some have been extensively validated, some are faster, and some have higher resolution. A few of these analysis approaches are well integrated into the regulatory processes of US Federal agencies (e.g. the FDA's SNP pipeline for tracking foodborne pathogens). However, despite extensive validation on benchmark datasets and comparison with other pipelines, we lack methods for fully exploring the effects of the multiple parameter values in each pipeline that can affect whether the correct phylogenetic tree is recovered.
    Results: To resolve this problem, we offer a program, TreeToReads, which can generate raw read data from mutated genomes simulated under a known phylogeny. This simulation pipeline allows direct comparisons of simulated and observed data in a controlled environment. At each step of these simulations, researchers can vary parameters of interest (e.g., input tree topology, amount of sequence divergence, rate of indels, read coverage, distance of the reference genome, etc.) to assess the effects of various parameter values on correctly calling SNPs and reconstructing an accurate tree.
    Conclusions: Such critical assessments of the accuracy and robustness of analytical pipelines are essential to progress in both research and applied settings.
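    The core idea, mutating a genome along each branch of a known tree and collecting the sequences at the tips, can be sketched in a few lines. This is a toy illustration, not the actual TreeToReads implementation (which also handles indels, read simulation, and real tree formats); the tree encoding and SNP counts below are made up:

```python
import random

def mutate(seq, n_snps, rng):
    """Return seq with n_snps substitutions at distinct random positions."""
    seq = list(seq)
    for pos in rng.sample(range(len(seq)), n_snps):
        seq[pos] = rng.choice([b for b in "ACGT" if b != seq[pos]])
    return "".join(seq)

def simulate(tree, seq, rng):
    """Walk a tree given as [(branch_snp_count, subtree_or_tip_name), ...],
    mutating the parent sequence along each branch; returns {tip: genome}."""
    tips = {}
    for n_snps, child in tree:
        child_seq = mutate(seq, n_snps, rng)
        if isinstance(child, str):       # leaf: record the tip genome
            tips[child] = child_seq
        else:                            # internal node: recurse
            tips.update(simulate(child, child_seq, rng))
    return tips

rng = random.Random(42)
ref = "".join(rng.choice("ACGT") for _ in range(1000))
# topology ((A,B),C) with per-branch SNP counts
tree = [(5, [(3, "A"), (4, "B")]), (8, "C")]
genomes = simulate(tree, ref, rng)
```

    Because the tree and per-branch mutation counts are known, the resulting tip genomes can be fed to any SNP pipeline and the recovered tree compared against the truth.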

    On minimising the maximum expected verification time

    Cyber Physical Systems (CPSs) consist of hardware and software components. To verify that the whole (i.e., software + hardware) system meets the given specifications, exhaustive simulation-based approaches (Hardware In the Loop Simulation, HILS) can be used effectively by first generating all relevant simulation scenarios (i.e., sequences of disturbances) and then actually simulating all of them (verification phase). When considering the whole verification activity, we see that the above-mentioned verification phase is repeated until no error is found. Accordingly, in order to minimise the time taken by the whole verification activity, in each verification phase we should, ideally, start by simulating scenarios witnessing errors (counterexamples). Of course, knowing the set of such scenarios beforehand is not feasible. In this paper we show how to select scenarios so as to minimise the Worst Case Expected Verification Time.
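    The abstract does not spell out the selection strategy, but a classic illustration of the underlying idea, assuming each scenario has an independent error-exposure probability p and a simulation time t (both hypothetical here), is the greedy p/t ordering:

```python
def schedule(scenarios):
    """Order scenarios to minimise the expected time to hit the first
    counterexample: highest error probability per unit simulation
    time (p/t) goes first (a standard exchange-argument result)."""
    return sorted(scenarios, key=lambda s: s["p"] / s["t"], reverse=True)

def expected_time(order):
    """Expected simulation time until the first failing scenario,
    assuming scenarios fail independently with probability p."""
    total, survive = 0.0, 1.0
    for s in order:
        total += survive * s["t"]   # scenario s runs only if all before passed
        survive *= 1.0 - s["p"]
    return total

# Hypothetical scenarios: p = chance of exposing an error, t = sim time
scenarios = [{"p": 0.1, "t": 10.0}, {"p": 0.5, "t": 1.0}]
order = schedule(scenarios)         # runs the cheap, error-prone scenario first
```

    Swapping any adjacent pair in the p/t order can only increase the expected time, which is why the greedy ordering is optimal under the independence assumption.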

    Using genetic algorithms to generate test sequences for complex timed systems

    The generation of test data for state-based specifications is a computationally expensive process. This problem is magnified if we consider that time constraints have to be taken into account to govern the transitions of the studied system. The main goal of this paper is to introduce a complete methodology, supported by tools, that addresses this issue by representing the test data generation problem as an optimisation problem. We use heuristics to generate test cases. In order to assess the suitability of our approach we consider two different case studies: a communication protocol and the scientific application BIPS3D. We give details concerning how the test case generation problem can be presented as a search problem and automated. Genetic algorithms (GAs) and random search are used to generate test data and evaluate the approach. GAs outperform random search and seem to scale well as the problem size increases. It is worth mentioning that we use a very simple fitness function that can be easily adapted for use with other evolutionary search techniques.
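    As a rough illustration of the search formulation (not the paper's actual tool or fitness function), a minimal GA evolving fixed-length input sequences over a symbol alphabet might look like this; the fitness function here is a stand-in and would be replaced by one measuring, e.g., transition coverage of the timed model:

```python
import random

def ga(fitness, genome_len, alphabet, pop=30, gens=50, rng=None):
    """Minimal generational GA: elitism, tournament selection,
    one-point crossover, per-gene mutation."""
    rng = rng or random.Random(0)
    popn = [[rng.choice(alphabet) for _ in range(genome_len)]
            for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        nxt = scored[:2]                          # keep the two best as-is
        while len(nxt) < pop:
            a, b = (max(rng.sample(scored, 3), key=fitness)  # tournament
                    for _ in range(2))
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [rng.choice(alphabet) if rng.random() < 0.05 else g
                     for g in child]              # 5% per-gene mutation
            nxt.append(child)
        popn = nxt
    return max(popn, key=fitness)

# Placeholder fitness: count occurrences of a target symbol
best = ga(lambda seq: seq.count("a"), genome_len=10, alphabet="abc")
```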

    Aerated blast furnace slag filters for enhanced nitrogen and phosphorus removal from small wastewater treatment plants

    Rock filters (RF) are a promising alternative natural wastewater treatment technology for upgrading WSP effluent. However, the application of RF to the removal of the eutrophic nutrients nitrogen and phosphorus is very limited. Accordingly, the overall objective of this study was to develop a low-cost RF system for enhanced nutrient removal from WSP effluents, capable of producing effluents that comply with the requirements of the EU Urban Waste Water Treatment Directive (UWWTD) (91/271/EEC) and suitable for small communities. Therefore, a combination system comprising a primary facultative pond and an aerated rock filter (ARF) system, either vertically or horizontally loaded, was investigated at the University of Leeds' experimental station at Esholt Wastewater Treatment Works, Bradford, UK. Blast furnace slag (BFS) and limestone were selected for use in the ARF system owing to their high potential for P removal and their low cost. This study involved three major experiments: (1) a comparison of aerated vertical-flow and horizontal-flow limestone filters for nitrogen removal; (2) a comparison of an aerated limestone + blast furnace slag (BFS) filter and aerated BFS filters for nitrogen and phosphorus removal; and (3) a comparison of vertical-flow and horizontal-flow BFS filters for nitrogen and phosphorus removal. The vertical upward-flow ARF system was found to be superior to the horizontal-flow ARF system in terms of nitrogen removal, mostly through bacterial nitrification processes, in both the aerated limestone and BFS filter studies. The BFS filter medium (which is low-cost) showed a much higher potential for removing phosphorus from pond effluent than the limestone medium. As a result, the combination of a vertical upward-flow ARF system and an economical and effective P-removal filter medium, such as BFS, was found to be an ideal option for the total removal of both nitrogen and phosphorus from wastewater.
In parallel with these experiments, studies on the effective life of the aerated BFS filter and the major in-filter phosphorus removal pathways were carried out. From standard batch experiments on the Pmax adsorption capacity of BFS, as well as six months of data on daily average P-removal, the effective life of the aerated BFS filter was found to be 6.5 years. Scanning electron microscopy and X-ray diffraction spectrometric analyses of the surfaces of BFS, particulate, and sediment samples revealed that the apparent mechanisms of P-removal in the filter are adsorption onto the amorphous oxide phase of the BFS surface and precipitation within the filter.
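    The effective-life estimate follows from simple bookkeeping: the total phosphorus adsorption capacity of the medium divided by the average daily phosphorus removal. Every number below is an illustrative placeholder, not the study's data:

```python
# All values are assumed for illustration only.
p_max = 0.95          # g P adsorbed per kg BFS (assumed batch-test capacity)
media_mass = 2000.0   # kg of BFS in the filter (assumed)
daily_removal = 0.80  # g P removed per day (assumed long-term average)

capacity = p_max * media_mass               # total g P the medium can hold
life_years = capacity / daily_removal / 365
print(f"estimated effective life: {life_years:.1f} years")
```

    The same calculation with the study's measured Pmax, media mass, and six-month average removal yields its 6.5-year figure.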

    The pros and cons of using SDL for creation of distributed services

    In a competitive market for the creation of complex distributed services, time to market, development cost, maintenance and flexibility are key issues. Optimizing the development process is very much a matter of optimizing the technologies used during service creation. This paper reports on the experience gained in the Service Creation projects SCREEN and TOSCA on the use of the language SDL for efficient service creation.

    Artificial table testing dynamically adaptive systems

    Dynamically Adaptive Systems (DAS) are systems that modify their behavior and structure in response to changes in their surrounding environment. Critical mission systems increasingly incorporate adaptation and response to the environment; examples include disaster relief and space exploration systems. These systems can be decomposed into two parts: the adaptation policy, which specifies how the system must react to environmental changes, and the set of possible variants to which the system can be reconfigured. A major challenge for testing these systems is the combinatorial explosion of variants and environment conditions to which the system must react. In this paper we focus on testing the adaptation policy and propose a strategy for the selection of environmental variations that can reveal faults in the policy. Artificial Shaking Table Testing (ASTT) is a strategy inspired by shaking table testing (STT), a technique widely used in civil engineering to evaluate buildings' structural resistance to seismic events. ASTT makes use of artificial earthquakes that simulate violent changes in the environmental conditions and stress the system's adaptation capability. We model the generation of artificial earthquakes as a search problem in which the goal is to optimize different types of environmental variations.
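    A minimal sketch of that search formulation, with a made-up adaptation policy and a simple hill climb standing in for the paper's actual optimisation, could look like this: the search maximises how often an environment sequence forces the system to reconfigure.

```python
import random

def adaptations(env_seq, threshold=5):
    """Toy adaptation policy: the system reconfigures whenever the
    environment signal jumps by more than `threshold` between steps."""
    return sum(1 for a, b in zip(env_seq, env_seq[1:]) if abs(a - b) > threshold)

def shake(env_seq, rng, steps=200, lo=0, hi=10):
    """Hill-climb an 'artificial earthquake': mutate one environment
    value at a time, keeping changes that trigger at least as many
    adaptations as before."""
    best = list(env_seq)
    for _ in range(steps):
        cand = list(best)
        cand[rng.randrange(len(cand))] = rng.randint(lo, hi)
        if adaptations(cand) >= adaptations(best):
            best = cand
    return best

rng = random.Random(1)
quake = shake([5] * 10, rng)   # start from a calm, flat environment
```

    The evolved sequence stresses the policy far more than the calm starting point, mirroring how an earthquake stresses a building more than ambient vibration.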

    Validate implementation correctness using simulation: the TASTE approach

    High-integrity systems operate in hostile environments and must guarantee continuous operation, even if unexpected events happen. In addition, these systems have stringent requirements that must be validated and correctly translated from high-level specifications down to code. All these constraints make the overall development process more time-consuming, especially as the number of system functions keeps increasing over the years. As a result, engineers must validate the system implementation and check that its execution conforms to the specifications. To do so, a traditional approach consists in manual instrumentation of the implementation code to trace system activity while operating. However, this can be error-prone because the modifications are not automatic but made by hand; furthermore, such modifications may affect the actual behavior of the system. In this paper, we present an approach to validate a system implementation by comparing execution against simulation. For that purpose, we adapt TASTE, a set of tools that eases system development by automating each step as much as possible. In particular, TASTE automates system implementation from functional models (system functions with their properties: period, deadline, priority, etc.) and deployment models (processors, buses, devices to be used). We tailored this tool-chain to create traces during system execution. The generated output shows the activation time of each task, the usage of communication ports (size of the queues, instants of events pushed/pulled, etc.), and other relevant execution metrics to be monitored. As a consequence, system engineers can check implementation correctness by comparing simulation and execution metrics.
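    Comparing the two trace sets can be as simple as checking per-task activation times against a jitter tolerance. The trace format below is hypothetical, not TASTE's actual output:

```python
def check_trace(simulated, executed, tol_ms=2.0):
    """Compare per-task activation times from simulation vs execution;
    flag any activation drifting more than tol_ms from the model."""
    issues = []
    for task, sim_times in simulated.items():
        exe_times = executed.get(task, [])
        if len(sim_times) != len(exe_times):
            issues.append((task, "activation count mismatch"))
            continue
        for i, (s, e) in enumerate(zip(sim_times, exe_times)):
            if abs(s - e) > tol_ms:
                issues.append((task, f"activation {i} off by {e - s:.1f} ms"))
    return issues

# Hypothetical traces: activation times in ms for two periodic tasks
sim = {"sensor": [0, 100, 200], "ctrl": [10, 110, 210]}
exe = {"sensor": [0.4, 100.9, 201.1], "ctrl": [10.2, 115.0, 210.3]}
```

    Here the second activation of the `ctrl` task drifts 5 ms from the simulated schedule and would be flagged, while the `sensor` task stays within tolerance.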