193 research outputs found

    Metamorphic testing: testing the untestable

    What if we could know that a program is buggy, even if we could not tell whether or not its observed output is correct? This is one of the key strengths of metamorphic testing, a technique where failures are not revealed by checking an individual concrete output, but by checking the relations among the inputs and outputs of multiple executions of the program under test. Two decades after its introduction, metamorphic testing has become a fully-fledged testing technique with successful applications in multiple domains, including online search engines, autonomous machinery, compilers, Web APIs, and deep learning programs, among others. This article serves as a hands-on entry point for newcomers to metamorphic testing, describing examples, possible applications, and current limitations, providing readers with the basics for the application of the technique in their own projects.
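    To make the idea concrete, here is a minimal C++ sketch (illustrative, not taken from the article) of a classic metamorphic relation for a sine implementation: sin(x) must equal sin(pi - x), so the relation can expose a bug even though no single output can be labelled correct on its own.

        // Minimal sketch of a metamorphic relation (illustrative, not from the article):
        // for any x, sin(x) must equal sin(pi - x) up to floating-point error.
        // No single output needs to be judged correct in isolation; violating the
        // relation is enough to prove the implementation under test is buggy.
        #include <cmath>
        #include <cstdio>

        bool sine_relation_holds(double (*sine_under_test)(double), double x) {
            const double pi = std::acos(-1.0);
            const double source    = sine_under_test(x);       // source test case
            const double follow_up = sine_under_test(pi - x);  // follow-up test case
            return std::fabs(source - follow_up) < 1e-9;
        }

        int main() {
            for (double x = -10.0; x <= 10.0; x += 0.1) {
                if (!sine_relation_holds(std::sin, x)) {
                    std::printf("metamorphic relation violated at x = %f\n", x);
                    return 1;
                }
            }
            std::printf("relation held on all sampled inputs\n");
            return 0;
        }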

    Metamorphic Testing for Software Libraries and Graphics Compilers

    Metamorphic testing is a technique that mutates existing test cases into semantically equivalent forms by making use of metamorphic relations, thereby avoiding the oracle problem. However, the required relations are not readily available for a given system under test. Defining effective metamorphic relations is difficult, and arguably the main obstacle to adopting metamorphic testing in production-level software development. One example application is testing graphics compilers, where the approximate and under-specified nature of the domain makes it hard to apply more traditional techniques. We propose an approach with a lower barrier of entry for applying metamorphic testing to a software library. The user must still identify relations that hold over their particular library, but can do so within a development-like environment. We apply methods from the domains of metamorphic testing and fuzzing to produce complex test cases. We consider the user interaction a bonus, as the user can control which parts of the target codebase are tested, potentially focusing on less-tested or critical sections. We implement our proposed approach in a tool, MF++, which synthesises C++ test cases for a C++ library, defined by user-provided ingredients. We applied MF++ to 7 libraries in the domains of satisfiability modulo theories and Presburger arithmetic; our evaluation identified 21 bugs in these tools. We additionally provide an automatic reducer for tests generated by MF++, named MF++R. In addition to minimising tests that expose issues, MF++R can also be used to identify incorrect user-provided relations. Additionally, we investigate the combined use of MF++ and MF++R to augment the code coverage of library test suites, and we assess the utility of this application by contributing 21 tests aimed at improving coverage across 3 libraries.
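    The sketch below is not MF++ itself (MF++ synthesises C++ test cases from user-provided ingredients over SMT and Presburger libraries); it only illustrates, against the C++ standard library, the shape of relation a user would supply: mutate an existing test input into a semantically equivalent form and require the library's output to stay the same.

        // Illustrative metamorphic relation over a library API (not MF++'s own workflow):
        // permuting the input of a sort must not change the sorted output.
        #include <algorithm>
        #include <cassert>
        #include <random>
        #include <vector>

        int main() {
            std::vector<int> original{5, 3, 9, 1, 3, 7};

            // Source test case: sort the original input.
            std::vector<int> expected = original;
            std::sort(expected.begin(), expected.end());

            // Follow-up test case: mutate the input into a semantically
            // equivalent form (a permutation) and sort again.
            std::vector<int> mutated = original;
            std::mt19937 rng(42);
            std::shuffle(mutated.begin(), mutated.end(), rng);
            std::sort(mutated.begin(), mutated.end());

            // The metamorphic relation: both executions must agree.
            assert(expected == mutated);
            return 0;
        }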

    Improving graphics programming with shader tests

    This paper presents an automated model and a project, Arrakis, for finding defects in shading algorithms for graphics rendering and compute workloads. A key challenge in testing shading algorithms is the lack of an oracle that can determine the quality and correctness of a custom shading algorithm's output; this matters in graphics workloads because expensive assets are often wasted working around such defects. Arrakis is developed as a broad solution that builds on current advances in graphics technology, namely Vulkan, SPIR-V and SPIRV-X, by leveraging their standardization together with mappings between SPIR-V and C++. Findings show that utilizing the demonstrated technology can improve quality whilst increasing productivity.
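    As a hedged illustration of the oracle problem the paper targets (this is not necessarily Arrakis's mechanism): one common way to check a shader-like computation without a reference image is to compare it against a semantically equivalent variant within a small tolerance, since graphics outputs are approximate.

        // Illustrative only: checking a shader-like routine without a reference image
        // by comparing it to a semantically equivalent variant of itself.
        #include <cmath>
        #include <cstdio>

        // Toy per-pixel "shader": brightness falloff from the image origin.
        float shade(float u, float v) {
            float d = std::sqrt(u * u + v * v);
            return 1.0f / (1.0f + d);
        }

        // Equivalent variant: adds a term that is always zero, so its output
        // must match the original up to floating-point error.
        float shade_variant(float u, float v) {
            float zero = std::sin(0.0f) * (u + v);
            float d = std::sqrt(u * u + v * v) + zero;
            return 1.0f / (1.0f + d);
        }

        int main() {
            int mismatches = 0;
            for (int y = 0; y < 64; ++y)
                for (int x = 0; x < 64; ++x) {
                    float u = x / 63.0f, v = y / 63.0f;
                    if (std::fabs(shade(u, v) - shade_variant(u, v)) > 1e-5f) ++mismatches;
                }
            std::printf("%d mismatching pixels\n", mismatches);
            return mismatches == 0 ? 0 : 1;
        }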

    Putting Randomized Compiler Testing into Production (Artifact)

    This artifact accompanies our experience report for our compiler testing technology transfer project: taking the GraphicsFuzz research project on randomized metamorphic testing of graphics shader compilers, and building the necessary tooling around it to provide a highly automated process for improving the Khronos Vulkan Conformance Test Suite (CTS) with test cases that expose fuzzer-found compiler bugs, or that plug gaps in test coverage. The artifact consists of two Dockerfiles and associated files that can be used to build two Docker containers. The containers include our main tool for performing fuzzing: gfauto. The containers allow the user to fuzz SwiftShader, a software Vulkan implementation, finding 4 bugs. The user will also perform some line coverage analysis of SwiftShader using our tools to synthesize a small test that increases line coverage. Ubuntu, gfauto, SwiftShader, and other dependencies inside the Docker containers are fixed at specific versions, and all random seeds are set to specific values. Thus, all examples should reproduce faithfully on any machine.

    The Potential for a GPU-Like Overlay Architecture for FPGAs

    We propose a soft processor programming model and architecture inspired by graphics processing units (GPUs) that are well-matched to the strengths of FPGAs, namely, highly parallel and pipelinable computation. In particular, our soft processor architecture exploits multithreading, vector operations, and predication to supply a floating-point pipeline of 64 stages via hardware support for up to 256 concurrent thread contexts. The key new contributions of our architecture are mechanisms for managing threads and register files that maximize data-level and instruction-level parallelism while overcoming the challenges of port limitations of FPGA block memories as well as memory and pipeline latency. Through simulation of a system that (i) is programmable via NVIDIA's high-level Cg language, (ii) supports AMD's CTM r5xx GPU ISA, and (iii) is realizable on an XtremeData XD1000 FPGA-based accelerator system, we demonstrate the potential for such a system to achieve 100% utilization of a deeply pipelined floating-point datapath.
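    A small software sketch of the predication idea mentioned above (purely illustrative, not the soft processor's hardware): the branch condition is evaluated into a per-thread predicate, and both sides of the branch flow through a single pipeline while the predicate decides which lanes commit their result, which is what keeps many thread contexts busy on one deep floating-point datapath.

        // Illustrative software model of predicated, multithreaded execution.
        #include <array>
        #include <cstdio>

        constexpr int kThreads = 8;  // the described design supports up to 256 contexts

        int main() {
            std::array<float, kThreads> x{-2, -1, 0, 1, 2, 3, -4, 5};
            std::array<float, kThreads> y{};
            std::array<bool,  kThreads> pred{};

            // Step 1: evaluate the branch condition into a predicate per thread.
            for (int t = 0; t < kThreads; ++t) pred[t] = (x[t] >= 0.0f);

            // Step 2: both branch sides share the same pipeline; the predicate
            // masks which lanes commit the "then" result versus the "else" result.
            for (int t = 0; t < kThreads; ++t)
                y[t] = pred[t] ? x[t] * 2.0f   // "then" side
                               : -x[t];        // "else" side

            for (int t = 0; t < kThreads; ++t) std::printf("%g ", y[t]);
            std::printf("\n");
            return 0;
        }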

    scenery: Flexible Virtual Reality Visualization on the Java VM

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data from analysis of such data or simulations. Visualization is often the first step in making sense of data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualizations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, are becoming invaluable tools. In this work we introduce scenery, a flexible VR/AR visualization framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and example applications, such as its use in VR for microscopy, in the biomedical image analysis software Fiji, or for visualizing agent-based simulations.

    Generating renderers

    Most production renderers developed for the film industry are huge pieces of software that are able to render extremely complex scenes. Unfortunately, they are implemented using the currently available programming models, which are not well suited to modern computing hardware like CPUs with vector units or GPUs. Thus, they have to deal with the added complexity of expressing parallelism and using hardware features in those models. Since compilers alone cannot optimize and generate efficient programs for any type of hardware, because of the large optimization spaces and the complexity of the underlying compiler problems, programmers have to rely on compiler-specific hardware intrinsics or write non-portable code. The consequence of these limitations is that programmers resort to writing the same code twice when they need to port their algorithm to a different architecture, and that the code itself becomes difficult to maintain, as algorithmic details are buried under hardware details. Thankfully, there are solutions to this problem, taking the form of Domain-Specific Languages (DSLs). As their name suggests, these languages are tailored for one domain, and compilers can therefore use domain-specific knowledge to optimize algorithms and choose the best execution policy for a given target hardware. In this thesis, we opt for another way of encoding domain-specific knowledge: we implement a generic, high-level, and declarative rendering and traversal library in a functional language, and later refine it for a target machine by providing partial evaluation annotations. The partial evaluator then specializes the entire renderer according to the available knowledge of the scene: shaders are specialized when their inputs are known, and in general, all redundant computations are eliminated. Our results show that the generated renderers are faster and more portable than renderers written with state-of-the-art competing libraries, and that, in comparison, our rendering library requires less implementation effort. This work was supported by the Federal Ministry of Education and Research (BMBF) as part of the Metacca and ProThOS projects, as well as by the Intel Visual Computing Institute (IVCI) and the Cluster of Excellence on Multimodal Computing and Interaction (MMCI) at Saarland University. Parts of it were also co-funded by the European Union (EU) as part of the Dreamspace project.
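    The following sketch conveys the flavour of that specialization using C++ templates rather than the functional language and partial-evaluation annotations the thesis actually uses: when a shader parameter is known ahead of time, the compiler folds it away, much as a partial evaluator eliminates redundant computation.

        // Illustrative analogy only, not the thesis's system: specializing a toy
        // shading routine when one of its inputs is known at compile time.
        #include <cstdio>

        struct Material { float roughness; float metalness; };

        // Generic shading routine: every parameter is a runtime value.
        float shade_generic(const Material& m, float n_dot_l) {
            float diffuse  = (1.0f - m.metalness) * n_dot_l;
            float specular = m.metalness * n_dot_l / (m.roughness + 0.05f);
            return diffuse + specular;
        }

        // Specialized version: the metalness is known in advance, so for a purely
        // diffuse material the specular term disappears entirely.
        template <int MetalnessPercent>
        float shade_specialized(float roughness, float n_dot_l) {
            if constexpr (MetalnessPercent == 0) {
                return n_dot_l;  // specular term folded away
            } else {
                constexpr float metalness = MetalnessPercent / 100.0f;
                return (1.0f - metalness) * n_dot_l + metalness * n_dot_l / (roughness + 0.05f);
            }
        }

        int main() {
            Material plastic{0.8f, 0.0f};
            std::printf("generic:     %f\n", shade_generic(plastic, 0.7f));
            std::printf("specialized: %f\n", shade_specialized<0>(0.8f, 0.7f));
            return 0;
        }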

    A novel distributed architecture for IoT image processing using low-cost devices and open internet standards

    Industry 4.0 can be defined as the integration of computers and automation into current industrial processes, with the addition of smart and autonomous systems leveraged by machine learning techniques. In this scenario, a compact, dependable and fast controller is desired, featuring low energy consumption, easy programming and maintenance, and no moving parts. Nowadays, the computing power of single board computers, e.g. the Raspberry Pi among others, has increased at a remarkable rate. In just three generations, Pi computers offer almost a two-fold speed gain compared to the first models. Their design, which combines an underlying video driver with the general capabilities of regular OSes, makes them quite suitable for building image processing systems at very low cost, with no moving parts and low energy consumption. However, designing such a system for industrial image processing is a tough challenge, since it implies integrating cameras, image processing libraries, database servers and application software with a graphical user interface in an already resource-constrained device. This work presents a new architecture for this kind of system based on open internet standards, using a self-contained, high-performance web server to publish a RESTful API and a set of web pages that use the latest HTML5 capabilities to manage USB webcams and system data. The proposal also integrates OpenCV on the client side as a compiled script using the new WASM paradigm, with optimized storage for images using an industry-standard RDBMS and a modular design that can target Windows as well as Linux.
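    As a hypothetical sketch of the client-side piece described above (the function name and interface are illustrative, not from the paper): a small OpenCV routine written in C++ that could be compiled to WebAssembly (for example with Emscripten) and called from the HTML5 page that grabs USB-webcam frames.

        // Hypothetical client-side processing step: a C++/OpenCV routine intended
        // to be compiled to WASM and invoked from the browser with a raw RGBA frame.
        #include <opencv2/core.hpp>
        #include <opencv2/imgproc.hpp>
        #include <cstdint>
        #include <cstring>

        extern "C" int detect_edges(std::uint8_t* rgba, int width, int height,
                                    std::uint8_t* out_gray) {
            // Wrap the raw canvas buffer without copying.
            cv::Mat frame(height, width, CV_8UC4, rgba);
            cv::Mat gray, edges;
            cv::cvtColor(frame, gray, cv::COLOR_RGBA2GRAY);
            cv::Canny(gray, edges, 50.0, 150.0);  // simple edge detector as a placeholder

            // Copy the single-channel result into the caller-provided buffer.
            std::memcpy(out_gray, edges.data, static_cast<std::size_t>(width) * height);

            // Return a simple scalar metric (e.g. for logging or thresholding upstream).
            return cv::countNonZero(edges);
        }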