
    3D application debugging

    Master's dissertation in Informatics Engineering (specialization in Computer Graphics). It is rare for a program to be free of bugs, and this includes 3D applications and their shaders. Shaders in particular are harder to debug than common applications, since they are loaded onto the GPU and executed simultaneously in thousands of small threads. It is not easy to obtain variable values or the application state, and it is hard to detect what causes errors, which is why debugging environments for these applications need to be studied and developed. OpenGL in particular has many open source debuggers. A study of the features and usability of Bugle, Apitrace, GLIntercept, glslDevil and VOGL is documented, hopefully helping the reader select the best tool for their needs. An analysis of the inner workings of each of these tools was also performed, and the appendices allow the reader to use this document as a user manual. Being a debugger for an API that is constantly evolving is not an easy task, so the issue of upgradeability is highly relevant. This study examines how each tool copes with OpenGL's evolution, in particular how each tool deals with new extensions and OpenGL versions. A study of commercial debuggers from well-known companies such as AMD and NVIDIA was also performed. While these debuggers are expected to be more capable in general than their open source counterparts, their potential can only be fully exploited on the respective vendor's graphics hardware, and they cannot be modified or integrated into another application. The goal of that study is not only to analyse the potential of these proprietary tools, but also to understand the real value of the open source debuggers. The Nau 3D engine, developed at Universidade do Minho, renders projects written in XML. Being capable of combining rasterization (OpenGL) and ray tracing (NVIDIA's OptiX) in multipass projects, it is a complex application that greatly benefits from having as many debugging features as possible. Adding debugging features helps everyone who works with the engine: the engine developers in discovering bugs in the source code, and the engine users in finding bugs in their own projects. With the knowledge gathered by studying OpenGL debuggers, several debugging features were implemented in the Nau 3D engine. Some of these features are configurable and, maintaining the spirit of the original project, this configuration is also written in XML.
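
    Most of the surveyed tools work by intercepting OpenGL calls, but modern OpenGL also exposes a built-in diagnostic channel that both debuggers and engines such as Nau can tap into. As a minimal sketch (not code from the dissertation), the following C++ fragment registers a KHR_debug message callback that logs driver-reported errors; it assumes an OpenGL 4.3+ context and a function loader such as GLAD.

        #include <cstdio>
        #include <glad/glad.h>   // assumption: GLAD (or another loader) provides the GL 4.3+ entry points

        // Invoked by the driver whenever it has something to report
        // (API errors, performance warnings, deprecated usage, ...).
        static void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                           GLenum severity, GLsizei /*length*/,
                                           const GLchar *message, const void * /*userParam*/) {
            std::fprintf(stderr, "[GL debug] source=0x%x type=0x%x id=%u severity=0x%x: %s\n",
                         source, type, id, severity, message);
        }

        // Call once after the OpenGL context has been created and the loader initialised.
        void enableGLDebugOutput() {
            glEnable(GL_DEBUG_OUTPUT);
            glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // deliver messages at the offending call site
            glDebugMessageCallback(debugCallback, nullptr);
            // Request every message; a real engine would filter by severity or message id.
            glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, nullptr, GL_TRUE);
        }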

    State-Based Techniques For Designing, Verifying And Debugging Message Passing Systems

    Message passing systems support applications of concurrent events, where independent or semi-independent events occur simultaneously in a nondeterministic fashion. This independence, random interaction and concurrency make the development of such applications complicated and error-prone. Conventional code development environments or IDEs, such as Microsoft Visual Studio, provide little programming support in this regard. Furthermore, ensuring the correctness of a message passing system is a challenge: typically, it is important to guarantee that a system meets its desired specifications throughout its construction. Model checking is one of the techniques used in software verification and has proven effective in discovering hidden design and implementation errors. The advanced knowledge of formal methods and temporal languages it requires is one of the impediments to software developers adopting model checking. To integrate model checking environments and conventional IDEs, this dissertation proposes a multi-phase development framework that facilitates designing, verifying, implementing and debugging state-based message passing systems. The techniques and design principles of the proposed framework focus on improving and easing the software development experience. In the first phase, a two-level design methodology is proposed, using abstract high-level communication blocks and hierarchical state-behavioral descriptions developed in this research. In the second phase, a new method based on choosing from a pre-determined set of patterns of concurrent communication properties is proposed to facilitate collecting the essential specifications of the system, with the atomic propositions linked to the system design. A complex property can be obtained by hierarchically nesting some of these patterns. A procedure to automatically generate formal models in a model checker (MC) language is proposed. Once the model containing both the design and the properties of the system is generated, a model checker is used to verify the correctness of the proposed system and ensure its compliance with the specifications. To help locate the source of a violated specification, if any, a procedure to map a counterexample generated by the MC back to the original design is presented. In the third phase, a skeleton code of the design specification is generated in a general programming language such as Microsoft C# or Java. Moreover, the ability to debug the generated code in a conventional IDE while tracing the debugging process back to the original design was established. Finally, a graphical software tool that supports the proposed framework was developed, using the SPIN MC as the verifier. The tool was used to develop and verify several case studies. The proposed framework and the developed software tool can be considered a key solution for designing and verifying message passing systems
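
    The property patterns in the second phase follow the spirit of the well-known specification patterns for temporal logic: the developer selects a pattern (absence, universality, response, ...), fills in atomic propositions tied to the design, and the framework emits the corresponding formula for the model checker. As a generic illustration in LTL (the proposition names below are placeholders, not taken from the dissertation):

        % Response pattern: every sent message is eventually received
        \square\bigl(\mathit{send} \rightarrow \lozenge\,\mathit{receive}\bigr)
        % Absence pattern: the deadlock state is never reached
        \square\,\neg\,\mathit{deadlock}
        % Patterns can be nested to express more complex requirements, e.g.
        \square\bigl(\mathit{request} \rightarrow \lozenge(\mathit{grant} \wedge \lozenge\,\mathit{release})\bigr)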

    A Development Support Method Using Image Representations for Programs with Real-World Input and Output

    Degree type: doctorate by coursework. University of Tokyo (東京大学)

    Towards the Humanisation of Programming Tool Interactions

    Program analysis tools, from simple static semantic analysis by a compiler to complex dynamic analyses of data flow and security, have become commonplace in modern-day programming. Many of the simpler analyses, such as the aforementioned compiler checks or linters designed to enforce code style, may even go unnoticed or unconsidered by most users, ubiquitous as they are. Despite this, and despite the obvious utility that such programming tools can provide, many of the warnings they produce go unheeded by programmers most of the time. There are several reasons for this phenomenon: the propensity to produce false positives undermines confidence in the validity of warnings, the tools do not integrate well into the normal workflow of the developer, sometimes the warning message is simply too esoteric for most users to understand, and so on. A common theme can be drawn from these reasons for ignoring the often very useful information given by a programming tool: the tool itself is difficult to use. In this thesis, we consider ways in which we can bridge this gap between users and tools. To do this, we draw on observations about the way in which we interact with each other in the most basic human-to-human context. Applying these lessons to human-tool interaction allows us to examine ways in which tools may be deficient, and to investigate methods for making the interaction more natural and human-like. We explore this issue by framing the interaction as a "conversation" between a human and their development environment. We then present a new programming tool, Progger, built using design principles driven by the "conversational lens" through which we look at these interactions. After this, we present a user study using a novel low-cost methodology, aimed at evaluating the efficacy of the Progger tool. From the results of this user study, we present a new, more streamlined version of Progger, and finally investigate the way in which it can be used to direct the user's attention when conducting a code comprehension exercise

    Contribution Barriers to Open Source Projects

    Contribution barriers are properties of Free/Libre and Open Source Software (FLOSS) projects that may prevent newcomers from contributing. Contribution barriers can be seen as forces that oppose the motivations of newcomers. While there is extensive research on the motivation of FLOSS developers, little is known about contribution barriers, even though a steady influx of new developers is connected to the success of a FLOSS project. The first part of this thesis adds two surveys to the existing research that target the contribution barriers and motivations of newcomers. The first, exploratory survey provides the indications needed to formulate research hypotheses for the second, main survey, with 117 responses from newcomers in the two FLOSS projects Mozilla and GNOME. The results lead to an assessment of the importance of the identified contribution barriers and to a new model of the joining process that allows the identification of subgroups of newcomers affected by specific contribution barriers. The second part of the thesis uses the pattern concept to externalize knowledge about techniques for lowering contribution barriers. This includes a complete categorization of the existing work on FLOSS patterns and the first empirical evaluation of these FLOSS patterns and their relationships. The thesis contains six FLOSS patterns that lower specific important contribution barriers identified in the surveys. Wikis are web-based systems that allow their users to modify the wiki's contents; they are founded on wiki principles with which they minimize contribution barriers. The last part of the thesis explores whether a wiki, whose content is usually natural text, can also be used for software development. Such a Wiki Development Environment (WikiDE) must fulfill the requirements of both an Integrated Development Environment (IDE) and a wiki. Simultaneously complying with both sets of requirements imposes special challenges. The thesis describes an adapted contribution process, supported by an architecture concept, that solves these challenges. Two components of a WikiDE are discussed in detail; each of them helps to lower a contribution barrier. A Proof of Concept (PoC) realization demonstrates the feasibility of the concept

    Trace-based Performance Analysis for Hardware Accelerators

    This thesis presents how performance data from hardware accelerators can be included in event logs. It extends the capabilities of trace-based performance analysis to also monitor and record data from this novel parallelization layer. The increasing awareness of the power consumption of computing devices has also led to an interest in hybrid computing architectures. High-end computers, workstations, and mobile devices are starting to employ hardware accelerators to offload computationally intensive and parallel tasks, while at the same time retaining a highly efficient scalar compute unit for non-parallel tasks. This execution pattern is typically asynchronous, so that the scalar unit can resume other work while the hardware accelerator is busy. Performance analysis tools provided by the hardware accelerator vendors cover the situation of one host using one device very well, yet they do not address the needs of the high performance computing community. This thesis investigates ways to extend existing methods for recording events from highly parallel applications to also cover scenarios in which hardware accelerators aid these applications. After introducing a generic approach that is suitable for any API-based acceleration paradigm, the thesis derives a suggestion for a generic performance API for hardware accelerators and its implementation with NVIDIA CUPTI. In a next step, the visualization of event logs containing data from execution streams on different levels of parallelism is discussed. To overcome the limitations of classic performance profiles and timeline displays, a graph-based visualization using Parallel Performance Flow Graphs (PPFGs) is introduced. This novel approach uses program states to display similarities and differences between the potentially very large number of event streams and thus enables a fast way to spot load imbalances. The thesis concludes with the in-depth analysis of a case study of PIConGPU, a highly parallel, multi-hybrid plasma physics simulation that benefited greatly from the developed performance analysis methods.
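
    To give a feel for the data source involved, the following C++ sketch uses NVIDIA's public CUPTI activity API to collect records about kernel executions and memory copies on the device; a tracing tool would convert such records into timestamped events in the trace log. This is a minimal illustration of CUPTI's buffer/record mechanism, not the measurement infrastructure developed in the thesis.

        #include <cupti.h>
        #include <cstdio>
        #include <cstdlib>

        // CUPTI asks for a buffer it can fill with activity records.
        static void CUPTIAPI bufferRequested(uint8_t **buffer, size_t *size, size_t *maxNumRecords) {
            *size = 8 * 1024 * 1024;                               // 8 MiB per buffer
            *buffer = static_cast<uint8_t *>(malloc(*size));
            *maxNumRecords = 0;                                    // 0 = as many records as fit
        }

        // CUPTI hands back a filled buffer; iterate over the records it contains.
        static void CUPTIAPI bufferCompleted(CUcontext, uint32_t, uint8_t *buffer,
                                             size_t /*size*/, size_t validSize) {
            CUpti_Activity *record = nullptr;
            while (cuptiActivityGetNextRecord(buffer, validSize, &record) == CUPTI_SUCCESS) {
                // A real tool would translate each record (kernel launch, memcpy, ...)
                // into a trace event with timestamps; here we only report its kind.
                std::printf("activity record of kind %d\n", static_cast<int>(record->kind));
            }
            free(buffer);
        }

        void startAcceleratorTracing() {
            cuptiActivityEnable(CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL);  // kernel executions
            cuptiActivityEnable(CUPTI_ACTIVITY_KIND_MEMCPY);             // host<->device copies
            cuptiActivityRegisterCallbacks(bufferRequested, bufferCompleted);
        }

        void stopAcceleratorTracing() {
            cuptiActivityFlushAll(0);   // force delivery of any pending buffers
        }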

    Applications Development for the Computational Grid


    Exploration of RVC applications using an ARM multicore processor = Exploración de aplicaciones RVC empleando un procesador ARM multinúcleo

    Single-core processors have reached their practical maximum clock speed; new multicore architectures provide an alternative way to continue scaling performance. The design of decoding applications running on top of these multicore platforms, and their optimization to exploit all of the system's computational power, is crucial to obtain the best results. Since further gains at the integration level of printed circuit boards are increasingly difficult to achieve due to physical constraints and the inherent increase in power consumption, the development of multiprocessor architectures is becoming the new Holy Grail. In this sense, it is crucial to develop applications that can run on the new multi-core architectures and to find distributions that maximize the potential use of the system. Today most commercial electronic devices available on the market are built around embedded systems, which have recently begun to incorporate multi-core processors. Task management across multiple cores/processors is not a trivial issue, and good task/actor scheduling can yield significant improvements in efficiency and in processor power consumption. Scheduling the data flows between the actors that implement an application aims to make multi-core architectures accessible to more types of applications by expressing parallelism explicitly in the application. On the other hand, the recent development of the MPEG Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with MPEG-developed codecs, making it the ideal tool to integrate into new multimedia terminals for decoding video sequences. With the new versions of the Open RVC-CAL Compiler (Orcc), a static mapping of the actors that implement the functionality of the application can be defined once the application executable has been generated. This static mapping must be done for each of the cores available on the working platform. An embedded system with a dual-core ARMv7 processor was chosen; this platform allows us to run the desired tests, measure the improvement over execution on a single core, and contrast both with a PC-based multiprocessor system.
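
    The static mapping amounts to pinning each actor, or partition of actors, to a fixed core of the target. As a rough C++ illustration of that idea on a dual-core ARM Linux system (this is not Orcc-generated code, and the actor functions are invented placeholders), each actor thread can be bound to one core with a CPU affinity mask; compile with g++ -pthread on Linux, where glibc provides the affinity API.

        #include <pthread.h>
        #include <sched.h>
        #include <cstdio>

        // Hypothetical actor bodies: in an Orcc-generated decoder these would be the
        // schedulers of the actor partitions assigned to each core.
        static void *parserAndDecoderActors(void *) { /* run partition 0 */ return nullptr; }
        static void *filterAndDisplayActors(void *) { /* run partition 1 */ return nullptr; }

        // Pin an actor thread to one core of the dual-core ARMv7 target.
        static void pinToCore(pthread_t thread, int core) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            if (pthread_setaffinity_np(thread, sizeof(set), &set) != 0)
                std::perror("pthread_setaffinity_np");
        }

        int main() {
            pthread_t t0, t1;
            pthread_create(&t0, nullptr, parserAndDecoderActors, nullptr);
            pthread_create(&t1, nullptr, filterAndDisplayActors, nullptr);
            pinToCore(t0, 0);   // first actor partition on core 0
            pinToCore(t1, 1);   // second actor partition on core 1
            pthread_join(t0, nullptr);
            pthread_join(t1, nullptr);
            return 0;
        }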

    Chatterbox

    There are not many options or tools on the market for people who once could speak but, due to an accident or stroke, have lost their voice and even some of their movement, so that sign language is not an option either. The communication alternatives that are available, such as Google voice tools, notepads and general writing aids, are either too expensive or too complicated for their actual situation, as each of them requires some effort to manage. We therefore set out to ease speech-impaired people's lives by designing and developing an application that speaks for them with a single touch. It is a user-friendly application, with different interfaces divided by category: Emergency, Fun, Greetings, Feelings and Daily Chat. Each interface displays its own set of buttons with different sentences, and each button plays its corresponding sentence as soon as it is tapped. We recorded the sentences ourselves with different intonations, giving the application a more humanized feel. We hope to make their lives easier, give them back the possibility of expressing themselves in an easier way, and let them keep up a steady conversation
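
    As a rough sketch of the category/button structure described above (the category names follow the abstract, but the sentences, file paths and playback are invented placeholders; a real app would hand the clip to the platform's audio player), the mapping from categories to recorded sentences could look like this in C++:

        #include <cstdio>
        #include <map>
        #include <string>
        #include <vector>

        // One button: the sentence shown on screen and the pre-recorded clip it plays.
        struct Phrase {
            std::string text;
            std::string audioFile;   // hypothetical path to the recorded sentence
        };

        // Each category's interface displays its own set of buttons.
        static const std::map<std::string, std::vector<Phrase>> kCategories = {
            {"Emergency", {{"I need help", "emergency_help.wav"},
                           {"Call a doctor", "emergency_doctor.wav"}}},
            {"Greetings", {{"Hello, how are you?", "greet_hello.wav"}}},
        };

        // Stand-in for tapping a button: prints the clip that would be played.
        void onButtonTapped(const std::string &category, std::size_t index) {
            const Phrase &p = kCategories.at(category).at(index);
            std::printf("Playing \"%s\" (%s)\n", p.text.c_str(), p.audioFile.c_str());
        }

        int main() {
            onButtonTapped("Emergency", 0);
            return 0;
        }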