417 research outputs found

    A new cohesion metric and restructuring technique for object oriented paradigm

    When software systems grow large during maintenance, they may lose quality and become hard to read, understand, and maintain. Developing a software system usually requires teams of developers working in concert to deliver a finished product in a reasonable amount of time. This means many people may read each component of the software system, such as a class in an object-oriented programming environment.
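
    The abstract does not define the proposed metric; as a point of reference only, a minimal sketch of a classic class-cohesion measure (an LCOM-style method-pair count, not the paper's new metric) might look like this:

```python
from itertools import combinations

def lcom(method_attrs):
    """LCOM-style cohesion: count method pairs sharing no attribute (P) minus
    pairs sharing at least one (Q), floored at 0. Lower values mean a more
    cohesive class.

    method_attrs: dict mapping method name -> set of attribute names it uses.
    """
    p = q = 0
    for (_, a), (_, b) in combinations(method_attrs.items(), 2):
        if a & b:
            q += 1   # pair shares at least one attribute
        else:
            p += 1   # pair shares nothing
    return max(p - q, 0)

# Methods touching mostly disjoint attributes signal low cohesion.
print(lcom({"open": {"path"}, "read": {"path", "buf"}, "log": {"logger"}}))  # -> 1
```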

    Design of neurophysiological signal analysis software

    Recent technological developments allow increasingly accurate recordings of neuronal activity. Several laboratories have developed programs to analyse such data, but there is still no software, shared by the whole scientific community, that provides complete support for the analysis and processing of the recorded signals. To this end, such an application was recently developed at the NeuroChip Laboratory of the Università di Padova. The application package was built following a functional programming methodology, but to ease its reuse and possibly provide multi-platform support, the software needs to be redesigned following object-oriented design methodologies.

    Compiler-Directed Transformation for Higher-Order Stencils

    As the cost of data movement increasingly dominates performance, developers of finite-volume and finite-difference solutions for partial differential equations (PDEs) are exploring novel higher-order stencils that increase numerical accuracy and computational intensity. This paper describes a new compiler reordering transformation applied to stencil operators that performs partial sums in buffers and reuses the partial sums in computing multiple results. This optimization has multiple effects on improving stencil performance that are particularly important to higher-order stencils: it exploits data reuse, reduces floating-point operations, and exposes efficient SIMD parallelism to backend compilers. We study the benefit of this optimization in the context of Geometric Multigrid (GMG), a widely used method to solve PDEs, using four different Jacobi smoothers built from 7-, 13-, 27-, and 125-point stencils. We quantify performance, speedup, and numerical accuracy, and use the Roofline model to qualify our results. Ultimately, we obtain over 4× speedup on the smoothers themselves and up to a 3× speedup on the multigrid solver. Finally, we demonstrate that high-order multigrid solvers have the potential to reduce total data movement and energy by several orders of magnitude.
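
    The partial-sums idea can be illustrated with a small sketch (my own Python/NumPy illustration of the general technique, not the paper's compiler transformation, which targets compiled stencil code): for a 3×3 box stencil, horizontal partial sums are computed once into a buffer and then reused by the three outputs that need them, cutting floating-point adds and improving reuse.

```python
import numpy as np

def box3x3_naive(a):
    """9-point box stencil: each interior output is the sum of its 3x3 neighborhood."""
    out = np.zeros_like(a)
    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            out[i, j] = a[i-1:i+2, j-1:j+2].sum()   # 8 adds per output point
    return out

def box3x3_partial_sums(a):
    """Same stencil, but horizontal partial sums are buffered once and reused by
    the three outputs above/below them."""
    rs = a[:, :-2] + a[:, 1:-1] + a[:, 2:]                   # row sums, computed once
    out = np.zeros_like(a)
    out[1:-1, 1:-1] = rs[:-2, :] + rs[1:-1, :] + rs[2:, :]   # 2 adds per output point
    return out

a = np.random.rand(64, 64)
assert np.allclose(box3x3_naive(a), box3x3_partial_sums(a))
```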

    Doctor of Philosophy

    Computer programs have complex interactions with their underlying hardware, exhibiting complex behaviors as a result. It is critical to understand these programs, as they serve an important role: researchers use them to express new ideas in computer science, while many others derive production value from them. In both cases, program understanding leads to mastery over these functions, adding value to human endeavors. Memory behavior is one of the hallmarks of general program behavior: it represents the critical function of retrieving data for the program to work on; it often reflects the overall actions taken by the program, providing a signature of program behavior; and it is often an important performance bottleneck, as the memory subsystem is typically much slower than the processor. These reasons justify an investigation into the memory behavior of programs. A memory reference trace is a list of memory transactions performed by a program at runtime, a rich data source capturing the whole of a program's interaction with the memory subsystem, and a clear starting point for investigating program memory behavior. However, such a trace is extremely difficult to interpret by mere inspection, as it consists solely of many, many addresses and operation codes, without any further structure or context. This dissertation proposes to use visualization to construct images and animations of the data within a reference trace, thereby visually conveying the structures and events encoded in the trace. These visualization approaches are designed with different focuses, meant to expose various aspects of the trace. For instance, the time dimension of the reference trace can be handled either with animation, showing events as they occur, or by laying time out in a spatial dimension, giving a view of the entire history of the trace at once. The approaches also vary in their level of abstraction from the hardware: some are concretely connected to representations of the memory itself, while others are more free-form, using more abstract metaphors to highlight general behaviors and patterns, which in turn characterize the program behavior. Each approach delivers its own set of insights, as demonstrated in this dissertation.
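
    As a toy illustration of the "time laid out in a spatial dimension" view mentioned above (a minimal sketch of the general idea, not the dissertation's actual visualizations; the trace format is an assumption), each reference can be plotted at its trace position versus its address:

```python
import matplotlib.pyplot as plt

def plot_trace(trace):
    """trace: list of (op, address) pairs in program order, e.g. ('R', 0x1000).
    Time runs along the x-axis; each reference becomes one point at its address."""
    reads = [(t, a) for t, (op, a) in enumerate(trace) if op == 'R']
    writes = [(t, a) for t, (op, a) in enumerate(trace) if op == 'W']
    for pts, marker, label in [(reads, '.', 'read'), (writes, 'x', 'write')]:
        if pts:
            xs, ys = zip(*pts)
            plt.scatter(xs, ys, marker=marker, s=4, label=label)
    plt.xlabel('trace position (time)')
    plt.ylabel('address')
    plt.legend()
    plt.show()

# Strided sweeps over an array show up as diagonal bands in this view.
plot_trace([('R', 0x1000 + 8 * i) for i in range(256)] +
           [('W', 0x1000 + 8 * i) for i in range(256)])
```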

    Service Abstractions for Scalable Deep Learning Inference at the Edge

    Deep learning driven intelligent edge has already become a reality, where millions of mobile, wearable, and IoT devices analyze real-time data and transform those into actionable insights on-device. Typical approaches for optimizing deep learning inference mostly focus on accelerating the execution of individual inference tasks, without considering the contextual correlation unique to edge environments and the statistical nature of learning-based computation. Specifically, they treat inference workloads as individual black boxes and apply canonical system optimization techniques, developed over the last few decades, to handle them as yet another type of computation-intensive application. As a result, deep learning inference on edge devices still faces the ever-increasing challenges of customization to edge device heterogeneity, fuzzy computation redundancy between inference tasks, and end-to-end deployment at scale. In this thesis, we propose the first framework that automates and scales the end-to-end process of deploying efficient deep learning inference from the cloud to heterogeneous edge devices. The framework consists of a series of service abstractions that handle DNN model tailoring, model indexing and query, and computation reuse for runtime inference, respectively. Together, these services bridge the gap between deep learning training and inference, eliminate computation redundancy during inference execution, and further lower the barrier for deep learning algorithm and system co-optimization. To build efficient and scalable services, we take a unique algorithmic approach of harnessing the semantic correlation between learning-based computations. Rather than viewing individual tasks as isolated black boxes, we optimize them collectively in a white-box approach, proposing primitives to formulate the semantics of deep learning workloads, algorithms to assess their hidden correlation (in terms of the input data, the neural network models, and the deployment trials), and merging of common processing steps to minimize redundancy.
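
    The computation-reuse idea can be sketched schematically (hypothetical names and interfaces; this is not the thesis's actual service API): inference requests that share the same input reuse one cached run of the expensive shared backbone, so only the lightweight task heads execute per request.

```python
import hashlib

def _key(frame: bytes) -> str:
    # Reuse is keyed on exact input content here; a real system might match on
    # perceptual similarity instead (assumption made for this sketch).
    return hashlib.sha256(frame).hexdigest()

class ReusableBackbone:
    """Caches the expensive shared feature-extraction step across inference tasks,
    so multiple task heads (detection, classification, ...) reuse one computation."""
    def __init__(self, backbone, heads):
        self.backbone = backbone      # callable: raw input -> features
        self.heads = heads            # dict: task name -> callable(features)
        self._cache = {}

    def infer(self, task, frame: bytes):
        k = _key(frame)
        if k not in self._cache:                  # cache miss: run the backbone once
            self._cache[k] = self.backbone(frame)
        return self.heads[task](self._cache[k])   # every head reuses cached features

# Toy usage: the backbone runs once even though two requests arrive for the same frame.
svc = ReusableBackbone(backbone=lambda f: len(f), heads={"count": lambda feat: feat * 2})
frame = b"same camera frame"
print(svc.infer("count", frame), svc.infer("count", frame))
```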

    Proceedings of the FEniCS Conference 2017

    Proceedings of the FEniCS Conference 2017, which took place 12-14 June 2017 at the University of Luxembourg, Luxembourg.

    Progressive Network Deployment, Performance, and Control with Software-defined Networking

    The inflexible nature of traditional computer networks has led to tightly-integrated systems that are inherently difficult to manage and secure. New designs move low-level network control into software, creating software-defined networks (SDN). Augmenting an existing network with these enhancements can be expensive and complex. This research investigates solutions to these problems. It is hypothesized that an add-on device, or shim, could be used to make a traditional switch behave as an OpenFlow SDN switch while maintaining reasonable performance. A design prototype is found to cause approximately a 1.5% reduction in throughput for one flow and a less-than-twofold increase in latency, showing that such a solution may be feasible. It is hypothesized that a new design built on event-loop and reactive programming may yield a controller that is higher-performing and easier to program. The library node-openflow is found to have performance approaching that of professional controllers; however, it exhibits higher variability in response rate. The framework rxdn is found to exceed the performance of two comparable controllers by at least 33% with statistical significance in latency mode with 16 simulated switches, but is slower than the library node-openflow or professional controllers (e.g., Libfluid, ONOS, and NOX). Collectively, this work enhances the tools available to researchers, enabling experimentation and development toward more sustainable and secure infrastructure.
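
    To give a flavor of the event-loop style these controller designs build on (an illustrative Python sketch only; node-openflow and rxdn are JavaScript libraries whose actual APIs are not reproduced here), packet-in events can be queued and consumed by a single non-blocking handler:

```python
import asyncio

async def packet_in_source(queue):
    """Stand-in for the southbound OpenFlow connection: emits fake packet-in events."""
    for n in range(5):
        await queue.put({"buffer_id": n, "in_port": n % 2})
        await asyncio.sleep(0.01)
    await queue.put(None)   # sentinel: no more events

async def controller(queue):
    """Single event loop: react to each packet-in without blocking on any one switch."""
    while (event := await queue.get()) is not None:
        # A real controller would compute and send a flow-mod here; we just log it.
        print(f"packet-in {event['buffer_id']}: install flow out of port {event['in_port'] ^ 1}")

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(packet_in_source(queue), controller(queue))

asyncio.run(main())
```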

    Building a Systematic Legacy System Modernization Approach

    A systematic legacy system modernization approach is a new approach to modernizing legacy systems in which software reuse is an integral part of modernization. We have developed a modernization approach which uses software architecture reconstruction to find reusable components within the legacy system. The practice of software development and modernization continues to shift towards the reuse of components from legacy systems to handle the complexities of software development. Modernization of a legacy system requires reuse of software artefacts from the legacy system to preserve the business rules and improve the system's quality attributes. Software should be considered an asset, and reuse of these assets is essential to increase the return on development costs. Software reuse ranges from the reuse of ideas and algorithms to any documents created during the software development life cycle. It has many potential benefits, including increased software quality and decreased software development cost and time. Demands for lower software production and maintenance costs, faster delivery of systems, and increased quality can only be met by widespread and systematic software reuse. In spite of all these benefits, software reuse adoption is not widespread in software development communities, and it cannot become an engineering discipline as long as its issues and concerns are not clearly understood and dealt with. We have conducted two surveys to understand the issues and concerns of software reuse in the Conventional Software Engineering (CSE) Community and the Software Product Line (SPL) Community, where reuse is an integral part of product development. The quantitative and qualitative analysis of our surveys identified the critical factors which affect and inhibit software engineers and developers adopting software reuse. Software reuse has been discussed in generic terms in software product lines; though it is a core concept in SPL, it has failed to become a standardized practice. The survey conducted on the SPL Community investigates how software reuse is adopted in SPL, so as to provide the necessary degree of support for engineering software product line applications and to identify some of the issues and concerns in software reuse. The identified issues and concerns have helped us to understand the difference between software reuse in the CSE and SPL Communities, and have given us an indication of how both communities can learn good software reuse practices from each other in order to develop a common software reuse process. Based on the outcome of our surveys we have developed a systematic software reuse process, called the Knowledge Based Software Reuse (KBSR) Process, which incorporates a Repository of reusable software assets to build a systematic legacy system modernization approach. Being able to reuse software artefacts, be they software requirement specifications, designs, or code, would greatly enhance software productivity and reliability. All of these software artefacts can go into the Knowledge Based Software Reuse Repository and become candidates for reuse.
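
    As a toy sketch of how a repository of reusable assets might be queried (hypothetical names and structure; the KBSR Process itself does not prescribe this code), artefacts tagged with keywords can be ranked against a new requirement:

```python
from dataclasses import dataclass, field

@dataclass
class ReusableAsset:
    """Any artefact from the life cycle: requirement spec, design document, or code."""
    name: str
    kind: str                                  # e.g. "requirement", "design", "code"
    keywords: set = field(default_factory=set)

class ReuseRepository:
    def __init__(self):
        self._assets = []

    def add(self, asset: ReusableAsset):
        self._assets.append(asset)

    def candidates(self, requirement_keywords):
        """Rank stored assets by keyword overlap with the new requirement."""
        scored = [(len(a.keywords & set(requirement_keywords)), a) for a in self._assets]
        return [a for score, a in sorted(scored, key=lambda s: -s[0]) if score > 0]

repo = ReuseRepository()
repo.add(ReusableAsset("invoice-validation", "code", {"billing", "validation"}))
repo.add(ReusableAsset("audit-log-design", "design", {"logging", "audit"}))
print([a.name for a in repo.candidates({"billing", "validation", "tax"})])
```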