
    NiMoToons: a totally graphic workbench for program tuning and experimentation

    NiMo (Nets In Motion) is a Graphic-Functional-Data Flow language designed to visualize algorithms and their execution in an understandable way. Programs are process networks that evolve showing the full state at each execution step. Processes are polymorphic, higher order, and have multiple outputs. The language has a set of primitive processes well suited for stream programming and supports open programs and interactive debugging. The new version of the environment, NiMo Toons, includes: a type inference system that is likewise graphic and incremental, multiple-output processes as higher-order parameters, symbolic execution, five evaluation modes that can be set globally or locally for each process and changed dynamically, and facilities to measure the resources used (parallelism level, number of steps, number of processes, etc.). Postprint (author's final draft).
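    NiMo itself is graphical, so it has no direct textual form; as a rough, hypothetical analogue of the stream-programming style the abstract describes (processes over lazy streams, some with multiple outputs, wired into a demand-driven network), a generator-based sketch in Python might look like the following. All names are illustrative and are not part of NiMo.

```python
# Hypothetical, minimal analogue of a stream-processing network:
# processes consume and produce lazy streams (Python generators),
# and a process may have multiple outputs (returned as a tuple of streams).
import itertools

def naturals():
    """Source process: the infinite stream 0, 1, 2, ..."""
    n = 0
    while True:
        yield n
        n += 1

def split_even_odd(stream):
    """A two-output process: routes items to an 'even' and an 'odd' stream."""
    s1, s2 = itertools.tee(stream)
    evens = (x for x in s1 if x % 2 == 0)
    odds = (x for x in s2 if x % 2 == 1)
    return evens, odds

def scale(stream, k):
    """Process parameterized by a constant factor."""
    return (k * x for x in stream)

# Wire the network and observe a few execution steps (demand-driven).
evens, odds = split_even_odd(naturals())
doubled_odds = scale(odds, 2)
print(list(itertools.islice(evens, 5)))         # [0, 2, 4, 6, 8]
print(list(itertools.islice(doubled_odds, 5)))  # [2, 6, 10, 14, 18]
```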

    RHmalloc: a very large, highly concurrent dynamic memory manager

    University of Technology, Sydney, Dept. of Software Engineering.
    Dynamic memory management (DMM) is a fundamental aspect of computing, directly affecting the capability, performance and reliability of virtually every system in existence today. Yet oddly, fifty years of research into DMM has not taken memory capacity into account, and has fallen significantly behind hardware trends. Comparatively little research on scalable DMM has been conducted – of the order of ten papers exist on this topic – all of which focus on CPU scalability only; the largest heap reported in the literature to date is 600 MB. By contrast, symmetric multiprocessor (SMP) machines with terabytes of memory are now commercially available. The contribution of our research is the formal exploration, design, construction and proof of a general-purpose, high-performance dynamic memory manager that scales indefinitely with respect to both CPU and memory – one that can predictably manage a heap of arbitrary size, on any SMP machine with an arbitrary number of CPUs, without a priori knowledge. We begin by recognizing the scattered and inconsistent state of the literature surrounding this topic. Firstly, to ensure clarity, we present a simplified introduction, followed by a catalog of the fundamental techniques. We discuss the melting pot of engineering tradeoffs, so as to establish a sound basis for tackling the issue at hand – large-scale DMM. We review both the history and the state of the art, from which significant insight into this topic can be gained. We then explore the problem space and suggest a workable solution. Our proposal, known as RHmalloc, is based on the novel perspective that a highly scalable heap can be viewed as an unbounded set of finite-sized sub-heaps, where each sub-heap may be concurrently shared by any number of threads, such that a suitable sub-heap can be found in O(1) time and an allocation from a suitable sub-heap is also O(1). Testing the design properties of RHmalloc, we show by extrapolation that RHmalloc will scale to at least 1,024 CPUs and 1 PB, and we theoretically prove that DMM scales indefinitely with respect to both CPU and memory. Most importantly, the approach used in the scalability proof alludes to a general analysis and design technique for systems of this nature.
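    The central idea in the abstract (a heap viewed as an unbounded set of finite-sized sub-heaps, each shareable by many threads, with O(1) sub-heap selection and O(1) allocation) can be illustrated with a toy sketch. This is not RHmalloc's implementation; the structure, sizes, and names below are assumptions made only for illustration.

```python
# Toy illustration of the "unbounded set of finite-sized sub-heaps" idea.
# Not RHmalloc: just a sketch of O(1) sub-heap selection plus O(1)
# allocation from a per-sub-heap free list.
import threading

SUBHEAP_SIZE = 1 << 20   # 1 MiB per sub-heap (illustrative)
BLOCK_SIZE = 64          # fixed block size (illustrative)

class SubHeap:
    def __init__(self):
        self.lock = threading.Lock()  # shareable by any number of threads
        self.free = list(range(0, SUBHEAP_SIZE, BLOCK_SIZE))  # free-block offsets

    def alloc(self):
        """O(1): pop a free block offset, or None if this sub-heap is full."""
        with self.lock:
            return self.free.pop() if self.free else None

class ToyHeap:
    def __init__(self):
        self.subheaps = [SubHeap()]    # the set grows without bound as needed
        self.hint = threading.local()  # per-thread hint gives O(1) selection

    def alloc(self):
        i = getattr(self.hint, "index", 0)
        block = self.subheaps[i].alloc()
        if block is None:                    # current sub-heap exhausted:
            self.subheaps.append(SubHeap())  # grow the set and retry there
            i = len(self.subheaps) - 1
            self.hint.index = i
            block = self.subheaps[i].alloc()
        return (i, block)

heap = ToyHeap()
print(heap.alloc())  # e.g. (0, 1048512): (sub-heap index, block offset)
```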

    Learning Human Pose Estimation Features with Convolutional Networks

    This paper introduces a new architecture for human pose estimation using a multi-layer convolutional network and a modified learning technique that learns low-level features and higher-level weak spatial models. Unconstrained human pose estimation is one of the hardest problems in computer vision, and our new architecture and learning schema show significant improvement over the current state-of-the-art results. The main contribution of this paper is showing, for the first time, that a specific variation of deep learning is able to outperform all existing traditional architectures on this task. The paper also discusses several lessons learned while researching alternatives, most notably that it is possible to learn strong low-level feature detectors on features that may cover only a few pixels in the image. Higher-level spatial models improve the overall result somewhat, but to a much lesser extent than expected. Many researchers previously argued that kinematic structure and top-down information are crucial for this domain, but with our purely bottom-up approach and weak spatial model we could outperform other, more complicated architectures that currently produce the best results. This mirrors the experience of many other researchers in speech recognition, object recognition, and other domains.
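    As a hedged sketch of the kind of architecture the abstract describes (a multi-layer convolutional network that learns low-level features and produces per-joint outputs, with only a weak spatial model), a minimal PyTorch model might look like the following. The layer sizes, joint count, and output form are assumptions, not the paper's configuration.

```python
# Minimal sketch (not the paper's model): a small convolutional network that
# maps an RGB image to one heatmap per body joint; spatial reasoning is only
# what the convolutions themselves capture (a "weak" spatial model).
import torch
import torch.nn as nn

NUM_JOINTS = 14  # assumed joint count for a typical full-body keypoint set

class TinyPoseNet(nn.Module):
    def __init__(self, num_joints=NUM_JOINTS):
        super().__init__()
        self.features = nn.Sequential(          # low-level feature detectors
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution acts as a per-location classifier for each joint.
        self.heatmaps = nn.Conv2d(128, num_joints, kernel_size=1)

    def forward(self, x):
        return self.heatmaps(self.features(x))  # (N, num_joints, H/4, W/4)

model = TinyPoseNet()
out = model(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 14, 32, 32])
```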

    Utilization of timed automata as a verification tool for real-time security protocols

    Thesis (Master), Izmir Institute of Technology, Computer Engineering, Izmir, 2010. Includes bibliographical references (leaves 85-92). Text in English; abstract in Turkish and English. xi, 92 leaves.
    Timed automata are an extension of the automata-theoretic approach to the modeling of real-time systems that introduces time into classical automata. Since they were first proposed by Alur and Dill in the early nineties, they have become an important research area and have been widely studied both in the context of formal languages and in the modeling and verification of real-time systems. Timed automata use dense time modeling, allowing efficient model checking of time-sensitive systems whose correct functioning depends on timing properties. One such application area is the verification of security protocols. This thesis studies the timed automata model and utilizes it as a verification tool for security protocols. As a case study, the Neuman-Stubblebine Repeated Authentication Protocol is modeled and verified, exploiting the time-sensitive properties in the model. The flaws of the protocol are analyzed, and the benefits and challenges of the model are discussed.
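    To illustrate what a timed automaton looks like in this setting, here is a small, hypothetical sketch: locations, one clock, and transitions with clock guards and resets, replayed over a timed trace. It is not the thesis's Neuman-Stubblebine model; the location names, actions, and timeout bound are assumptions.

```python
# Illustrative sketch of a timed automaton: locations, one clock, and
# transitions guarded by clock constraints (not the thesis's actual model).
TIMEOUT = 5.0  # assumed freshness bound, in abstract time units

# Each transition: (source, action, guard on clock x, reset x?, target)
transitions = [
    ("Idle",    "send_request", lambda x: True,         True,  "Waiting"),
    ("Waiting", "recv_ticket",  lambda x: x <= TIMEOUT, False, "Authenticated"),
    ("Waiting", "timeout",      lambda x: x > TIMEOUT,  False, "Failed"),
]

def run(trace):
    """Replay a timed trace of (delay, action) pairs through the automaton."""
    location, clock = "Idle", 0.0
    for delay, action in trace:
        clock += delay                       # time elapses in the location
        for (src, act, guard, reset, dst) in transitions:
            if src == location and act == action and guard(clock):
                location = dst
                if reset:
                    clock = 0.0
                break
        else:
            return "stuck"                   # no enabled transition: reject
    return location

print(run([(1.0, "send_request"), (3.0, "recv_ticket")]))  # Authenticated
print(run([(1.0, "send_request"), (7.0, "timeout")]))      # Failed
```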