
    Digital Signal Processing

    Contains an introduction and reports on twenty research projects. Supported by: National Science Foundation (Grant ECS 84-07285); U.S. Navy - Office of Naval Research (Contract N00014-81-K-0742); National Science Foundation Fellowship; Sanders Associates, Inc.; U.S. Air Force - Office of Scientific Research (Contract F19628-85-K-0028); Canada, Bell Northern Research Scholarship; Canada, Fonds pour la Formation de Chercheurs et l'Aide a la Recherche Postgraduate Fellowship; Canada, Natural Science and Engineering Research Council Postgraduate Fellowship; U.S. Navy - Office of Naval Research (Contract N00014-81-K-0472); Fanny and John Hertz Foundation Fellowship; Center for Advanced Television Studies; Amoco Foundation Fellowship

    Automated Archaeological Survey of Ancient Irrigation Canals

    From the Washington University Senior Honors Thesis Abstracts (WUSHTA), Volume 1, Spring 2009. Published by the Office of Undergraduate Research. Henry Biggs, Director, Office of Undergraduate Research and Associate Dean, College of Arts & Sciences; E. Holly Tasker, Editor. Mentor: Robert Ples

    Fast Dual Ring Queues

    In this paper, we present two new FIFO dual queues. Like all dual queues, they arrange for dequeue operations to block when the queue is empty, and to complete in the original order when data becomes available. Compared to alternatives in which dequeues on an empty queue return an error code and force the caller to retry, dual queues provide a valuable guarantee of fairness. Our algorithms, based on Morrison and Afek's LCRQ from PPoPP'13, outperform existing dual queues - notably the one in java.util.concurrent - by a factor of four to six. For both of our algorithms, we present extensions that guarantee lock freedom, albeit at some cost in performance.
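    The dual-queue contract described above can be sketched in a few lines of Python. This is a toy, lock-based illustration of the interface only (blocking dequeues satisfied in FIFO arrival order), not the paper's nonblocking, LCRQ-based algorithms; all names here are illustrative.

    ```python
    import threading
    from collections import deque

    class DualQueue:
        """Blocking FIFO queue: empty dequeues wait and complete in arrival order."""
        def __init__(self):
            self._lock = threading.Lock()
            self._data = deque()      # enqueued items
            self._waiters = deque()   # one [slot, event] per blocked dequeuer, FIFO

        def enqueue(self, item):
            with self._lock:
                if self._waiters:
                    # Hand the item directly to the oldest waiting dequeuer.
                    slot = self._waiters.popleft()
                    slot[0] = item
                    slot[1].set()
                else:
                    self._data.append(item)

        def dequeue(self):
            with self._lock:
                if self._data:
                    return self._data.popleft()
                slot = [None, threading.Event()]
                self._waiters.append(slot)
            slot[1].wait()  # block outside the lock until an enqueuer fills the slot
            return slot[0]
    ```

    The real algorithms replace the single lock with fetch-and-add over ring buffers; the point of the sketch is only the semantics that distinguish dual queues from error-code-and-retry designs.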

    Failure-Atomic Persistent Memory Updates via JUSTDO Logging


    Concurrency implications of nonvolatile byte-addressable memory

    Thesis (Ph. D.) -- University of Rochester, Department of Computer Science, 2018. In the near future, storage technology advances are expected to provide nonvolatile byte-addressable memory (NVM) for general-purpose computing. These new technologies provide high-density storage and speeds only slightly slower than DRAM, and are consequently presumed by industry to be used as main memory storage. We believe that the common availability of fast NVM storage will have a significant impact on all levels of the computing hierarchy. Such a technology can be leveraged by an assortment of common applications, and will require significant changes to both operating systems and systems library code. Existing software for durable storage is a poor match for NVM, as it assumes both a larger granularity of access and a higher latency overhead. Our thesis is that exploiting this new byte-addressable and nonvolatile technology requires a significant redesign of current systems, and that by designing systems tailored specifically to NVM we can realize performance gains. This thesis extends existing system software for understanding and using nonvolatile main memory. In particular, we propose to understand durability as a shared memory construct, rather than an I/O construct, and consequently focus particularly on concurrent applications. The work covered here builds theoretical and practical infrastructure for using nonvolatile main memory. At the theory level, we explore what it means for a concurrent data structure to be "correct" when its state can reside in nonvolatile memory, propose novel designs and design philosophies for data structures that meet these correctness criteria, and demonstrate that all nonblocking data structures can be easily transformed into persistent, correct versions of themselves. At the practical level, we explore how to give programmers systems for manipulating persistent memory in a consistent manner, thereby avoiding inconsistencies after a crash. Combining these two ideas, we also explore how to compose data structure operations into larger, consistent operations on persistent memory.
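    One standard way to "manipulate persistent memory in a consistent manner," as the abstract puts it, is write-ahead (redo) logging over explicit flush points. The following Python sketch is not from the thesis; it simulates NVM with a dict whose "persisted" half only reflects explicitly flushed writes, and all names (ToyNVM, atomic_update) are hypothetical.

    ```python
    class ToyNVM:
        """Simulated NVM: only flushed writes survive a 'crash'."""
        def __init__(self):
            self.cache = {}       # volatile state (CPU cache)
            self.persisted = {}   # durable state (survives a crash)

        def store(self, key, val):
            self.cache[key] = val

        def flush(self, key):     # stands in for a write-back + fence on one line
            self.persisted[key] = self.cache[key]

    def atomic_update(nvm, updates):
        """Redo-log protocol: the log is fully durable before any in-place write."""
        nvm.store("log", dict(updates)); nvm.flush("log")   # 1. persist intent
        for k, v in updates.items():
            nvm.store(k, v); nvm.flush(k)                   # 2. apply in place
        nvm.store("log", None); nvm.flush("log")            # 3. retire the log

    def recover(nvm):
        """After a crash, replay any surviving log; partial writes are redone."""
        log = nvm.persisted.get("log")
        if log:
            for k, v in log.items():
                nvm.persisted[k] = v
            nvm.persisted["log"] = None
    ```

    A crash between steps 2 and 3 leaves the log durable, so recovery replays every update; a crash before step 1 completes leaves no log and no in-place writes, so the update simply never happened. Either way the durable state is consistent.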

    Linearizability of Persistent Memory Objects under a Full-System-Crash Failure Model

    This paper provides a theoretical and practical framework for crash-resilient data structures on a machine with persistent (nonvolatile) memory but transient registers and cache. In contrast to certain prior work, but in keeping with "real world" systems, we assume a full-system failure model, in which all transient state (of all processes) is lost on a crash. We introduce the notion of durable linearizability to govern the safety of concurrent objects under this failure model, and a corresponding relaxed, buffered variant which ensures that the persistent state in the event of a crash is consistent but not necessarily up to date. At the implementation level, we present a new "memory persistency model," explicit epoch persistency, that builds upon and generalizes prior work. Our model captures both hardware buffering and fully relaxed consistency, and subsumes both existing and proposed instruction set architectures. Using the persistency model, we present an automated transform to convert any linearizable, nonblocking concurrent object into one that is also durably linearizable. We also present a design pattern, analogous to linearization points, for the construction of other, more optimized objects. Finally, we discuss generic optimizations that may improve performance while preserving both safety and liveness.
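    The automated transform's core discipline can be illustrated with a toy Python sketch: write back every store to persistent state and synchronize before the operation returns, so the operation is durable by the time its effects are visible. Here pwb/pfence/psync are simulated stubs standing in for persist primitives of the kind the persistency-model literature describes, not real ISA instructions, and DurableCounter is a hypothetical example object.

    ```python
    flushed = []  # records write-backs, standing in for cache-line flushes

    def pwb(addr):
        """Persistent write-back of one location (think clwb)."""
        flushed.append(addr)

    def pfence():
        """Order earlier write-backs before later stores (no-op in simulation)."""

    def psync():
        """Wait until earlier write-backs are durable (no-op in simulation)."""

    class DurableCounter:
        """Transformed linearizable counter: persists its state before returning."""
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1   # volatile store to persistent state
            pwb("value")      # transform: write back every persistent store
            pfence()
            psync()           # durably linearized before the result escapes
            return self.value
    ```

    On real hardware the stubs would map to write-back and fence instructions; the buffered variant in the paper relaxes exactly the final psync, allowing durability to lag behind linearization.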

    iDO: Compiler-Directed Failure Atomicity for Nonvolatile Memory

    This paper presents iDO, a compiler-directed approach to failure atomicity with nonvolatile memory. Unlike most prior work, which instruments each store of persistent data for redo or undo logging, the iDO compiler identifies idempotent instruction sequences, whose re-execution is guaranteed to be side-effect-free, thereby eliminating the need to log every persistent store. Using an extension of prior work on JUSTDO logging, the compiler then arranges, during recovery from failure, to back up each thread to the beginning of the current idempotent region and re-execute to the end of the current failure-atomic section. This extension transforms JUSTDO logging from a technique of value only on hypothetical future machines with nonvolatile caches into a technique that also significantly outperforms state-of-the-art lock-based persistence mechanisms on current hardware during normal execution, while preserving very fast recovery times.
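    The recovery idea above - log only which idempotent region a thread is in, then re-execute that region from its start - can be sketched in Python. This is a hand-written illustration of the principle, not compiler-generated iDO code; the region functions and names are hypothetical.

    ```python
    # An idempotent region is safe to re-execute: it reads only durable inputs
    # and overwrites (rather than accumulates into) its outputs.
    def region0(state, log):
        log["region"] = 0                       # cheap log: just the region index
        state["sum"] = state["a"] + state["b"]  # pure overwrite: idempotent

    def region1(state, log):
        log["region"] = 1
        state["double"] = 2 * state["sum"]      # pure overwrite: idempotent

    REGIONS = [region0, region1]

    def run_fa_section(state, log, start=0):
        """Run a failure-atomic section; `start` lets recovery resume mid-way."""
        for i in range(start, len(REGIONS)):
            REGIONS[i](state, log)

    def resume_after_crash(state, log):
        """Back up to the start of the interrupted region and run to the end."""
        run_fa_section(state, log, start=log.get("region", 0))
    ```

    Because each region is idempotent, re-running the interrupted region from its beginning is harmless, so the log never needs to record individual store values - only the region index - which is what makes the approach so much cheaper than per-store logging.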