
    The Medium of Visualization for Software Comprehension

    Although abundant studies have shown how visualization can help software developers understand software systems, visualization is still not a common practice, since developers (i) have little support for finding a proper visualization for their needs and, once they find a suitable visualization tool, (ii) are unsure of its effectiveness. We aim to offer support for identifying proper visualizations and to increase the effectiveness of visualization techniques. In this dissertation, we characterize proposed software visualizations. To fill the gap between proposed visualizations and their practical application, we encapsulate these characteristics in an ontology and propose a meta-visualization approach for finding suitable visualizations. Amongst other characteristics of software visualizations, we identify that the medium used to display them can be a means to increase the effectiveness of visualization techniques for particular comprehension tasks. We implement visualization prototypes and validate our thesis via experiments. We found that even though developers using a physical 3D model medium required the least time for tasks that involve identifying outliers, they perceived the least difficulty when visualizing systems on a standard computer screen. Moreover, developers using immersive virtual reality obtained the highest recollection. We conclude that the effectiveness of software visualizations that use the city metaphor to support comprehension tasks can be increased when city visualizations are rendered in an appropriate medium, and, furthermore, that visualization of software visualizations can be a suitable means for exploring their multiple characteristics, which can be properly encapsulated in an ontology.
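
    As a purely illustrative sketch of how visualization characteristics might be encapsulated and queried by comprehension task (the catalogue entries, field names, and matching logic below are hypothetical, not the dissertation's actual ontology):

```python
# Toy catalogue of software visualizations characterized by metaphor,
# medium, and supported tasks, queried to suggest a suitable technique.
# All entries and field names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Visualization:
    name: str
    metaphor: str          # e.g. "city", "graph"
    medium: str            # e.g. "screen", "immersive VR", "physical 3D"
    tasks: set = field(default_factory=set)

CATALOGUE = [
    Visualization("City on screen", "city", "screen", {"structure overview"}),
    Visualization("City in VR", "city", "immersive VR", {"recollection"}),
    Visualization("Printed city", "city", "physical 3D", {"outlier detection"}),
]

def suggest(task: str) -> list[Visualization]:
    """Return catalogued visualizations whose characteristics match a task."""
    return [v for v in CATALOGUE if task in v.tasks]

for v in suggest("outlier detection"):
    print(f"{v.name}: {v.metaphor} metaphor on {v.medium}")
```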

    Development of a Powerwall-based solution for the manual flagging of radio astronomy data from eMerlin

    This project was created with the intention of establishing an optimisation method for the manual flagging of interferometric data from the eMerlin radio astronomy array, using a Powerwall as a visualisation tool. The complexity of this process, due to the number of variables and parameters involved, demands a deep understanding of the data treatment. Once the data are acquired by the antennas, the signals are correlated. This process generates undesired signals, mostly coming from radio frequency interference; in addition, when the calibration is performed, some values can mislead the expected outcome. Although the flagging is supported by algorithms, the method is not one hundred percent accurate, which is why visual inspection is still required. Using a Powerwall as a visualisation system allows different, new dynamics in the analyst's interaction with the information required to perform the flagging.
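
    For illustration only, here is a toy analogue of the kind of automated amplitude-threshold flagging that the visual inspection complements. It is a generic robust-statistics heuristic, not the eMerlin pipeline, and all names are hypothetical:

```python
# Toy threshold-based RFI flagging on visibility amplitudes: samples
# far above a robust baseline are flagged for review. Generic
# heuristic, not the eMerlin flagging algorithms.
import numpy as np

def flag_rfi(amplitudes: np.ndarray, n_sigma: float = 5.0) -> np.ndarray:
    """Return a boolean mask marking samples that deviate from the
    median by more than n_sigma robust standard deviations."""
    median = np.median(amplitudes)
    # 1.4826 * MAD approximates the standard deviation for Gaussian noise
    mad = 1.4826 * np.median(np.abs(amplitudes - median))
    return np.abs(amplitudes - median) > n_sigma * mad

rng = np.random.default_rng(0)
amps = rng.normal(1.0, 0.1, 1000)
amps[::97] += 5.0                      # inject synthetic interference spikes
mask = flag_rfi(amps)
print(f"flagged {mask.sum()} of {amps.size} samples")
```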

    Advances in Intelligent Robotics and Collaborative Automation

    This book provides an overview of a series of advanced research lines in robotics, as well as of design and development methodologies for intelligent robots and their intelligent components. It represents a selection of extended versions of the best papers related to these topics presented at the Seventh IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2013). Its contents range from state-of-the-art computational-intelligence techniques for automatic robot control to novel distributed sensing and data integration methodologies that can be applied to intelligent robotics and automation systems. The objective of the text is to provide an overview of some of the problems in the field of robotic systems and intelligent automation, and of the approaches and techniques that relevant research groups within this area are employing to try to solve them. The contributions of the different authors have been grouped into four main sections:
    • Robots
    • Control and Intelligence
    • Sensing
    • Collaborative Automation
    The chapters have been structured to provide an easy-to-follow introduction to the topics that are addressed, including the most relevant references, so that anyone interested in this field can get started in the area.

    The effect of interior bezel presence and width on magnitude judgement

    First published in Wallace, J. R., Vogel, D., & Lank, E. (2014). The effect of interior bezel presence and width on magnitude judgement. In Proceedings of Graphics Interface 2014 (pp. 175–182). Montreal, Quebec, Canada: Canadian Information Processing Society. Large displays are often constructed by tiling multiple small displays, creating visual discontinuities from inner bezels that may affect human perception of data. Our work investigates how bezels impact magnitude judgement, a fundamental aspect of perception. We describe two studies that control for bezel presence, bezel width, and user-to-display distance. Our findings yield three implications for the design of tiled displays. Bezels wider than 0.5 cm introduce a 4–7% increase in judgement error when viewed from a distance, which we simplify to a 5% rule of thumb for assessing display hardware. Length judgements made at arm's length are most affected by wider bezels and are an important use case to consider. At arm's length, bezel compensation techniques provide only a limited benefit in terms of judgement accuracy.
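
    The reported rule of thumb is easy to apply when assessing display hardware; the helper below is a hypothetical illustration of that guideline (the function name and interface are invented, not code from the study):

```python
# Hypothetical helper applying the paper's reported guideline: interior
# bezels wider than 0.5 cm introduced roughly a 4-7% increase in
# magnitude-judgement error viewed from a distance, simplified by the
# authors to a 5% rule of thumb.
def expected_judgement_error_increase(bezel_width_cm: float) -> float:
    """Rule-of-thumb extra judgement error (as a fraction) for a tiled
    display with interior bezels of the given width, viewed at a distance."""
    return 0.05 if bezel_width_cm > 0.5 else 0.0

for width in (0.3, 0.5, 0.8, 2.0):
    print(f"{width:.1f} cm bezel -> ~{expected_judgement_error_increase(width):.0%} extra error")
```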

    Quality-Aware Tooling

    Programming is a fascinating activity that can yield results capable of changing people's lives by automating daily tasks or even completely reimagining how we perform certain activities. Such great power comes with a handful of challenges, software maintainability being one of them. Maintainability cannot be validated by executing the program; it has to be assessed by analyzing the codebase. This tedious task can itself be automated by software: programs called static analyzers process source code and try to detect suspicious patterns. While these programs have proven useful, there is also evidence that they are not used in practice. In this dissertation we discuss the concept of quality-aware tooling, an approach that seeks to promote static analysis by seamlessly integrating it into development tools. We describe our experience of applying quality-aware tooling to the core distribution of a development environment. Our main focus is to provide live quality feedback in the code editor, but we also integrate static analysis into other tools based on our code quality model. We analyzed the attitude of developers towards the integrated static analysis and assessed the impact of the integration on the development ecosystem. As a result, 90% of software developers find the live feedback useful, the quality rules received an overhaul to better match contemporary development practices, and some developers even experimented with custom analysis implementations. We discovered that live feedback helped developers avoid dangerous mistakes, saved time, and taught valuable concepts. Most importantly, we changed developers' attitude towards static analysis from viewing it as just another tool to seeing it as an integral part of their toolset.
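
    As a minimal, generic illustration of the kind of rule such static analyzers check without executing the program (this sketch uses Python's ast module and is not the dissertation's actual tooling or rule set):

```python
# Minimal static analysis rule: walk the syntax tree and report a
# suspicious pattern (an exception handler that silently does nothing).
# Generic example, not the dissertation's quality rules.
import ast

SOURCE = """
def load(path):
    try:
        return open(path).read()
    except Exception:
        pass
"""

class SilentExceptChecker(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_ExceptHandler(self, node):
        # Flag handlers whose body consists only of `pass`.
        if all(isinstance(stmt, ast.Pass) for stmt in node.body):
            self.findings.append(f"line {node.lineno}: exception silently ignored")
        self.generic_visit(node)

checker = SilentExceptChecker()
checker.visit(ast.parse(SOURCE))
for finding in checker.findings:
    print(finding)
```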

    Scene Reconstruction from Multi-Scale Input Data

    Geometry acquisition of real-world objects by means of 3D scanning or stereo reconstruction constitutes a very important and challenging problem in computer vision. 3D scanners and stereo algorithms usually provide geometry from one viewpoint only, and several of these scans need to be merged into one consistent representation. Scanner data generally has lower noise levels than data from stereo methods, and the scanning scenario is more controlled. In image-based stereo approaches, the aim is to reconstruct the 3D surface of an object solely from multiple photos of the object. In many cases, the stereo geometry is contaminated with noise and outliers and exhibits large variations in scale. Approaches that fuse such data into one consistent surface must be resilient to such imperfections.
    In this thesis, we take a closer look at geometry reconstruction using both scanner data and the more challenging image-based scene reconstruction approaches. In particular, this work focuses on the uncontrolled setting where the input images are not constrained: they may be taken with different camera models, under different lighting and weather conditions, and from vastly different points of view. A typical dataset contains many views that observe the scene from an overview perspective and relatively few views that capture small details of the geometry. What results from these datasets are surface samples of the scene with vastly different resolution. As we show in this thesis, the multi-resolution, or "multi-scale", nature of the input is a relevant aspect for surface reconstruction that has rarely been considered in the literature. Integrating scale as additional information in the reconstruction process can make a substantial difference in surface quality.
    We develop and study two different approaches for surface reconstruction that are able to cope with the challenges resulting from uncontrolled images. The first approach implements surface reconstruction by fusing depth maps using a multi-scale hierarchical signed distance function. The hierarchical representation allows fusion of multi-resolution depth maps without mixing geometric information at incompatible scales, which preserves detail in high-resolution regions. An incomplete octree is constructed by incrementally adding triangulated depth maps to the hierarchy, which leads to scattered samples of the multi-resolution signed distance function. A continuous representation of the scattered data is defined by constructing a tetrahedral complex, and a final, highly adaptive surface is extracted by applying the Marching Tetrahedra algorithm. The second, point-based approach rests on a more abstract, multi-scale implicit function defined as a sum of basis functions. Each input sample contributes a single basis function which is parameterized solely by the sample's attributes, effectively yielding a parameter-free method. Because the scale of each sample controls the size of its basis function, the method automatically adapts to data redundancy for noise reduction and is highly resilient to the quality-degrading effects of low-resolution samples, thus favoring high-resolution surfaces.
    Furthermore, we present a robust, image-based reconstruction system for surface modeling: MVE, the Multi-View Environment. The implementation provides all steps of the pipeline: calibration and registration of the input images, dense geometry reconstruction by means of stereo, a surface reconstruction step, and post-processing such as remeshing and texturing. In contrast to other software solutions for image-based reconstruction, MVE handles large, uncontrolled, multi-scale datasets as well as input from more controlled capture scenarios. The reason lies in the particular choice of the multi-view stereo and surface reconstruction algorithms. The resulting surfaces are represented as triangular meshes, which are piecewise linear approximations to the real surface. The individual triangles are often so small that they barely contribute any geometric information, and they can be ill-shaped, which causes numerical problems. We therefore introduce a surface remeshing approach that changes the surface discretization so that more favorable triangles are created. It distributes the vertices of the mesh according to a density function derived from the curvature of the geometry. Such a mesh is better suited for further processing and has reduced storage requirements. We thoroughly compare the developed methods against the state of the art and also perform a qualitative evaluation of the two surface reconstruction methods on a wide range of datasets with different properties. The usefulness of the remeshing approach is demonstrated on both scanner and multi-view stereo data.
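
    A schematic sketch of the point-based idea, simplified relative to the thesis: each oriented sample contributes a Gaussian-weighted basis function whose support is set by the sample's scale, and the surface is the zero level set of the resulting field. This is a 2D toy example; the thesis's actual basis functions and sample attributes differ:

```python
# Schematic multi-scale implicit function from oriented point samples:
# a weighted average of per-sample signed plane distances, where each
# sample's scale controls the support of its Gaussian weight.
# Illustration only; simplified relative to the thesis.
import numpy as np

def implicit(query, points, normals, scales):
    """Evaluate the weighted signed-distance field at query positions.

    query:   (M, 2) evaluation positions
    points:  (N, 2) oriented surface samples
    normals: (N, 2) unit normals per sample
    scales:  (N,)   per-sample scale controlling basis-function support
    """
    diff = query[:, None, :] - points[None, :, :]          # (M, N, 2)
    signed = np.einsum("mnd,nd->mn", diff, normals)        # point-plane distances
    sq_dist = np.einsum("mnd,mnd->mn", diff, diff)
    weights = np.exp(-sq_dist / (2.0 * scales[None, :] ** 2))
    return (weights * signed).sum(axis=1) / (weights.sum(axis=1) + 1e-12)

# Samples on the unit circle; the field should vanish near radius 1.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
nrm = pts.copy()                       # outward normals of a circle
scl = np.full(len(pts), 0.2)
q = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(implicit(q, pts, nrm, scl))      # negative inside, ~0 on, positive outside
```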

    HiReD: a high-resolution multi-window visualisation environment for cluster-driven displays

    High-resolution, wall-size displays often rely on bespoke software for interactive data visualisation, leading to interface designs with little or no consistency between displays. This makes adoption difficult for novice users migrating from desktop environments. However, desktop interface techniques (such as task- and menu-bars) do not scale well, and so cannot be relied on to drive the design of large-display interfaces. In this paper we present HiReD, a multi-window environment for cluster-driven displays. As well as describing the technical details of the system, we present a suite of low-precision interface techniques that aim to provide a familiar desktop environment to the user while overcoming the scalability issues of high-resolution displays. We hope that these techniques, as well as the implementation of HiReD itself, can encourage good practice in the design and development of future interfaces for high-resolution, wall-size displays.

    The Effect of Several Tradeoffs in the Implementation of Large Displays on the Performance of the Users of the Displays

    A large display can be constructed in two different ways: (1) as a rectangular grid, or tiling, of many small screens, with seams, or bezels, at the boundaries between the screens, or (2) as one large screen with no interior bezels. The first way costs significantly less than the second, but it creates discontinuities in the image because of the bezels, and these discontinuities may impact a user's performance. There are likewise two ways to implement the first, tiled construction: (1) tiled multiple projections onto one large screen, or (2) a tiling of actual LC displays. With the first, bezels are avoidable, but continuous, precise coordination of multiple projectors is necessary. With the second, once the displays are mounted, no coordination is necessary, but bezels are unavoidable. While it might seem preferable to avoid bezels and incur higher construction or coordination costs, if no user's performance is negatively affected by bezels, then there is no reason not to use the cheaper construction. Therefore, the aim of this study is to determine how bezels affect a user's task performance. We conducted two controlled experiments in order to determine
    - how varying the width of bezels affects a user's performance;
    - how varying the number of bezels affects a user's performance; and
    - how the choice between tiled multiple projections and a tiling of actual LC displays affects a user's performance.
    In each experiment, the participants solved a puzzle within a given time. The findings are that user performance is affected neither by variation in the width of bezels nor by variation in their number. However, a tiling of actual LC displays is better for user performance than tiled multiple projections. It is therefore acceptable to implement a large display as a rectangular grid of actual LC displays.

    A study, exploration and development of the interaction of music production techniques in a contemporary desktop setting

    As with all computer-based technologies, music production is advancing at a rate comparable to 'Moore's law'. Developments within the discipline are gathering momentum exponentially: stretching the boundaries of the field, deepening the levels to which mediation can be applied, concatenating previously discrete hardware technologies into the desktop domain, demanding greater insight from practitioners to master these technologies, and even defining new genres of music through the increasing potential for sonic creativity to evolve. This DMus project will draw on the implications of the above developments and study the application of technologies currently available in the desktop environment, from emulations of what was traditionally hardware to the latest spectrally based audio-manipulation tools. It will investigate the interaction of these technologies and explore creative possibilities that were unattainable only a few years ago, all as exemplified through the production of two contrasting albums of music. In addition, new software will be developed to actively contribute to the evolution of music production as we know it. The focus will be on extended production technique and innovation, through both development and context. The commentary will frame the practical work: it will offer a research context with a number of foci in preference to literal questions, qualify the methodology, and then present a literature and practice review. It will then present a series of frameworks that analyse music production contexts and technologies in historical perspective. By setting such a trajectory, the current state of the art can be best placed, and a number of the progressive production techniques associated with the submitted artefacts can then be contextualised. It will conclude with a discussion of the work that moves from the specific to the general.