
    Dynamic data flow testing

    Data flow testing is a form of testing that identifies data flow relations as test objectives. It has recently attracted new interest in the context of testing object-oriented systems, since data flow information is well suited to capture relations among object states and can thus provide useful information for testing method interactions. Unfortunately, classic data flow testing, which is based on static analysis of the source code, fails to identify many important data flow relations due to the dynamic nature of object-oriented systems. This thesis presents Dynamic Data Flow Testing, a technique that rethinks data flow testing to suit modern object-oriented software. Dynamic Data Flow Testing stems from empirical evidence that we collect on the limits of classic data flow testing techniques. We investigate these limits by means of Dynamic Data Flow Analysis, a dynamic implementation of data flow analysis that computes sound data flow information on program traces. Comparing information collected by static analysis of the code with information observed dynamically on execution traces, we find that classic static analysis misses a significant amount of data flow information corresponding to relevant behaviors that should be tested. In view of these results, we propose Dynamic Data Flow Testing, which exploits the synergies between dynamic analysis, static reasoning, and test case generation to automatically extend a test suite with test cases that exercise the complex state-based interactions between objects. The technique computes precise data flow information with Dynamic Data Flow Analysis and processes the dynamic information to infer new test objectives, from which it generates new test cases. The generated test cases exercise relevant behaviors that are otherwise missed by both the original test suite and test suites that satisfy classic data flow criteria.
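
    To make the test objectives concrete, here is a minimal Python sketch (the Buffer class and process function are hypothetical, not taken from the thesis) of a def-use pair over object state that static analysis struggles to identify, because the pair only exists when two references alias the same object:

```python
# Hypothetical example: a data flow relation that only materializes at run time.

class Buffer:
    def __init__(self):
        self.data = []            # def of `data` (initial state)

    def write(self, x):
        self.data.append(x)       # def: mutates the object's state

    def read(self):
        return list(self.data)    # use: reads the state written by write()

def process(src, dst):
    # If src and dst alias the same Buffer, the write() below also defines
    # the state that src.read() uses -- a def-use pair that static analysis
    # cannot establish without precise alias information.
    dst.write("x")
    return src.read()

b = Buffer()
assert process(b, b) == ["x"]     # the aliasing def-use pair is exercised
```

    Dynamic Data Flow Analysis observes the definition in write() and the use in read() on the executed trace, so the pair becomes a test objective even though no static alias analysis established it.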

    Traffic generator for firewall testing

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2009. Includes bibliographical references (leaves 52-56). Text in English; abstract in Turkish and English. ix, 92 leaves.
    Firewalls stand at the front line of a computer network to restrict unauthorized access. The desired security level is determined by a policy and implemented by a firewall, which must not only be effective but also provide stable and reliable service. Testing is required to verify the security level of the system. The objective of this thesis is to test a firewall with software testing techniques, taking into consideration both the nominated policy and the firewall itself. The iptables software was examined and tested with two different algorithms that were modified according to software testing techniques. Packets sent toward the Firewall Under Test (FUT) were compared with the packets that passed through the FUT, and the test results were observed. The security performance of the modified algorithms proved successful.
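
    The send-and-compare idea can be pictured with a short Python sketch using Scapy (an illustration of the concept, not the thesis tool; the interface name, target address, and policy-derived test cases are assumptions, and the script needs privileges to send and sniff raw packets):

```python
# Sketch: send policy-derived probes at the FUT and check which ones pass.
import time
from scapy.all import IP, TCP, send, AsyncSniffer

FUT_TARGET = "10.0.1.10"   # host behind the firewall (assumed topology)

test_cases = [
    # (dst_port, expected_to_pass) -- derived from the firewall policy
    (80,  True),    # policy allows HTTP
    (22,  False),   # policy blocks SSH
    (443, True),    # policy allows HTTPS
]

def run_case(port, expected):
    # Listen on the inside interface, behind the FUT (assumed name).
    sniffer = AsyncSniffer(
        iface="eth1",
        lfilter=lambda p: p.haslayer(TCP) and p[TCP].dport == port)
    sniffer.start()
    time.sleep(0.1)                       # let the sniffer come up
    send(IP(dst=FUT_TARGET) / TCP(dport=port, flags="S"), verbose=0)
    time.sleep(1.0)                       # give the probe time to arrive
    passed = len(sniffer.stop()) > 0
    return passed == expected

for port, expected in test_cases:
    ok = run_case(port, expected)
    print(f"port {port}: {'OK' if ok else 'POLICY VIOLATION'}")
```

    A probe that arrives on the inside interface when the policy says it should be blocked, or vice versa, indicates a mismatch between the policy and the rule set under test.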

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior, the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring which code elements are executed by a test suite. The code elements that are not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites achieving high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software and exposed previously unknown failures in a safety-critical software component.
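
    The outer, goal-oriented loop of such a technique can be sketched in a few lines of Python (a toy illustration only: ARC directs the search with DSE, abstraction refinement, and coarsening, whereas random input generation stands in for the generator here, and the instrumented function is made up):

```python
# Sketch: keep a worklist of uncovered branches, generate inputs until each
# branch is covered or the budget is exhausted.
import random

def program_under_test(x, trace):
    if x > 100:    trace.add("b1_true")
    else:          trace.add("b1_false")
    if x % 7 == 0: trace.add("b2_true")
    else:          trace.add("b2_false")

ALL_BRANCHES = {"b1_true", "b1_false", "b2_true", "b2_false"}

def generate_tests(budget=1000):
    covered, suite = set(), []
    for _ in range(budget):
        if covered == ALL_BRANCHES:
            break
        x = random.randint(-1000, 1000)
        trace = set()
        program_under_test(x, trace)
        if trace - covered:          # input reached a new branch: keep it
            suite.append(x)
            covered |= trace
    return suite, ALL_BRANCHES - covered   # tests + still-uncovered goals

suite, uncovered = generate_tests()
print("suite:", suite, "uncovered goals:", uncovered)
```

    Goals that stay uncovered after the budget is spent are candidates for infeasibility analysis, which is where ARC's reachability reasoning takes over.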

    Model-based symbolic design space exploration at the electronic system level: a systematic approach

    In this thesis, a novel, fully systematic approach is proposed that addresses automated design space exploration at the electronic system level. The problem is formulated as a multi-objective optimization problem and encoded symbolically using Answer Set Programming (ASP). Several specialized solvers are tightly coupled as background theories with the foreground ASP solver under the ASP modulo Theories (ASPmT) paradigm. By utilizing the ASPmT paradigm, the search is executed entirely systematically and the disparate synthesis steps can be coupled to explore the search space effectively.
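
    The flavor of such a symbolic encoding can be shown with a minimal sketch using the clingo Python API (illustrative only: the tasks, processing elements, and costs are invented, the objective is single-valued for brevity, and the background theory coupling of ASPmT is omitted):

```python
# Sketch: map each task to exactly one processing element, minimizing cost.
import clingo

PROGRAM = """
task(t1;t2;t3).  pe(cpu;fpga).
cost(t1,cpu,4). cost(t1,fpga,2).
cost(t2,cpu,3). cost(t2,fpga,5).
cost(t3,cpu,6). cost(t3,fpga,1).
1 { map(T,P) : pe(P) } 1 :- task(T).       % exactly one PE per task
#minimize { C,T : map(T,P), cost(T,P,C) }.  % single-objective stand-in
#show map/2.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
# Each reported model improves on the previous one; the last is optimal.
ctl.solve(on_model=lambda m: print("mapping:", m, "cost:", m.cost))
```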

    HARDWARE AND SOFTWARE ARCHITECTURES FOR ENERGY- AND RESOURCE-EFFICIENT SIGNAL PROCESSING SYSTEMS

    For a large class of digital signal processing (DSP) systems, design and implementation of hardware and software is challenging due to stringent constraints on energy and resource requirements. In this thesis, we develop methods to address this challenge by proposing new constraint-aware system design methods for DSP systems, and energy- and resource-optimized designs of key DSP subsystems that are relevant across various application areas. In addition to general methods for optimizing energy consumption and resource utilization, we present streamlined designs that are specialized to efficiently address platform-dependent constraints. We focus on two specific aspects in the development of energy- and resource-optimized design techniques: (1) Application-specific systems and architectures for energy- and resource-efficient design. First, we address challenges in the efficient implementation of wireless sensor network building energy monitoring systems (WSNBEMSs). We develop new energy management schemes in order to maximize system lifetime for WSNBEMSs, and demonstrate that system lifetime can be improved significantly without affecting monitoring accuracy. We also present a resource-efficient field-programmable gate array (FPGA) architecture for the implementation of orthogonal frequency division multiplexing (OFDM) systems, and demonstrate that our design provides at least an 8.8% improvement in resource efficiency compared to Xilinx FFT v7.1 in the same OFDM configuration. (2) Dataflow-based methods for structured design and implementation of energy- and resource-efficient DSP systems. First, we introduce a dataflow-based design approach that integrates interrupt-based signal acquisition in the context of parameterized synchronous dataflow (PSDF) modeling. We demonstrate that by applying our approach, energy- and resource-efficient embedded software can be derived systematically from high-level models of the functional structure of dynamic, data-driven application systems (DDDAS). Also, we present an in-depth development of lightweight dataflow-Verilog (LWDF-V), an integration of the LWDF programming model with the Verilog hardware description language (HDL), and demonstrate the utility of LWDF-V for the design and implementation of digital systems for signal processing. We emphasize the efficient integration of LWDF with HDLs, and the application of LWDF-V to the design of DSP systems with dynamic parameters on FPGA platforms.
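
    The enable/invoke actor interface at the heart of lightweight dataflow can be pictured with a small Python sketch (illustrative only: the thesis targets C and Verilog, and the FIR actor and run-to-completion scheduler below are invented for the example):

```python
# Sketch: a dataflow actor exposes enable() (enough tokens available?) and
# invoke() (one firing that consumes and produces tokens).
from collections import deque

class FirActor:
    """2-tap FIR filter as an actor: consumes 1 token, produces 1 token."""
    def __init__(self, fifo_in, fifo_out, taps=(0.5, 0.5)):
        self.fin, self.fout, self.taps = fifo_in, fifo_out, taps
        self.prev = 0.0

    def enable(self):                       # firing condition
        return len(self.fin) >= 1

    def invoke(self):                       # one firing
        x = self.fin.popleft()
        self.fout.append(self.taps[0] * x + self.taps[1] * self.prev)
        self.prev = x

src, dst = deque([1.0, 2.0, 3.0, 4.0]), deque()
fir = FirActor(src, dst)
while fir.enable():                         # simple run-to-completion schedule
    fir.invoke()
print(list(dst))   # [0.5, 1.5, 2.5, 3.5]
```

    Roughly speaking, LWDF-V carries this same enable/invoke contract into Verilog modules so that firing conditions can be evaluated in hardware.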

    Management And Security Of Multi-Cloud Applications

    Single-cloud management platform technology has reached maturity and is quite successful in information technology applications. Enterprises and application service providers are increasingly adopting a multi-cloud strategy to reduce the risk of cloud service provider lock-in and cloud blackouts while at the same time getting benefits such as competitive pricing, flexible resource provisioning, and better points of presence. Another class of applications in which cloud service providers are increasingly interested is carriers' virtualized network services. However, virtualized carrier services require high levels of availability and performance and impose stringent requirements on cloud services. They necessitate the use of multi-cloud management and innovative techniques for placement and performance management. We consider two classes of distributed applications – the virtual network services and the next generation of healthcare – that would benefit immensely from deployment over multiple clouds. This thesis deals with the design and development of new processes and algorithms to enable these classes of applications. We have developed a method for the optimization of multi-cloud platforms that paves the way for obtaining optimized placement for both classes of services. The placement approach we follow is predictive, cost-optimized, latency-controlled virtual resource placement for both types of applications. To improve the availability of virtual network services, we make innovative use of machine and deep learning to develop a framework for fault detection and localization. Finally, to secure patient data flowing through the wide expanse of sensors, the cloud hierarchy, the virtualized network, and the visualization domain, we develop hierarchical autoencoder models for data in motion between the IoT domain and the multi-cloud domain and within the multi-cloud hierarchy.
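
    The detection principle behind the autoencoder models can be sketched in a few lines of Python (a stand-in illustration: a PCA-style linear autoencoder on synthetic telemetry replaces the hierarchical deep models of the thesis, and the threshold choice is arbitrary):

```python
# Sketch: learn a compact model of normal data, flag samples that
# reconstruct poorly.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "normal" telemetry with correlated features.
normal = rng.normal(0, 1, (500, 8)) @ rng.normal(0, 1, (8, 8))

# "Train" a linear autoencoder: keep the top-4 principal directions.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
W = vt[:4]                                    # encoder: 8 -> 4 dimensions

def reconstruction_error(x):
    z = (x - mean) @ W.T                      # encode
    x_hat = z @ W + mean                      # decode
    return float(np.sum((x - x_hat) ** 2))

# Threshold taken from the tail of errors on normal data.
errors = [reconstruction_error(x) for x in normal]
threshold = np.percentile(errors, 99)

anomaly = normal[0] + rng.normal(0, 10, 8)    # injected fault
print(f"threshold        : {threshold:8.2f}")
print(f"error on normal  : {reconstruction_error(normal[0]):8.2f}")
print(f"error on anomaly : {reconstruction_error(anomaly):8.2f}  "
      f"-> flagged: {reconstruction_error(anomaly) > threshold}")
```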

    Data-centric Performance Measurement and Mapping for Highly Parallel Programming Models

    Modern supercomputers have complex features: many hardware threads, deep memory hierarchies, and many co-processors/accelerators. Designing programs that utilize those hardware features productively and effectively is crucial for gaining the best performance, and several highly parallel programming models in active development allow programmers to write efficient code on those architectures. Performance profiling is an important technique for achieving the best performance during development. In this dissertation, I proposed a new performance measurement and mapping technique that associates performance data with program variables instead of code blocks. To validate the applicability of my data-centric profiling idea, I designed and implemented profilers for PGAS and CUDA. For PGAS, I developed ChplBlamer, which handles both single-node and multi-node Chapel programs and provides new features such as data-centric inter-node load imbalance identification. For CUDA, I developed CUDABlamer for GPU-accelerated applications. CUDABlamer also attributes performance data to program variables, a feature not found in any previous CUDA profiler. Directed by the insights from these tools, I optimized several widely studied benchmarks and significantly improved program performance, by a factor of up to 4x for Chapel and 47x for CUDA kernels.
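
    The essence of data-centric attribution can be shown with a toy Python sketch (everything here is fabricated for illustration; real profilers resolve sampled addresses against allocation metadata gathered at runtime):

```python
# Sketch: charge each performance sample to the program variable whose
# allocation contains the sampled data address, not to a code line.
import bisect

# Allocation table: (start_address, size, variable_name), sorted by start.
allocations = [
    (0x1000, 0x400, "grid"),
    (0x2000, 0x100, "halo_buf"),
    (0x3000, 0x800, "particles"),
]
starts = [a[0] for a in allocations]

def variable_for(addr):
    i = bisect.bisect_right(starts, addr) - 1
    if i >= 0:
        start, size, name = allocations[i]
        if addr < start + size:
            return name
    return "<unknown>"

# Fabricated profiler samples: (sampled_data_address, cost_in_cycles).
samples = [(0x1010, 300), (0x3400, 900), (0x2050, 120), (0x1200, 250)]

cost = {}
for addr, cycles in samples:
    var = variable_for(addr)
    cost[var] = cost.get(var, 0) + cycles

for var, c in sorted(cost.items(), key=lambda kv: -kv[1]):
    print(f"{var:10s} {c:6d} cycles")   # e.g. particles leads at 900 cycles
```

    Ranking variables rather than code blocks points the programmer at the data structure to restructure, which is the insight the Blamer tools provide.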

    Worst-case temporal analysis of real-time dynamic streaming applications
