
    Simple, safe, and efficient memory management using linear pointers

    Efficient and safe memory management is a hard problem. Garbage collection promises automatic memory management but comes at the cost of an increased memory footprint, reduced parallelism in multi-threaded programs, unpredictable pause times, and intricate tuning parameters that must balance the program's workload against its designated memory usage for an application to perform reasonably well. Existing research mitigates these problems to some extent, but programmer error can still cause memory leaks by erroneously retaining references to memory that is no longer needed. We need a methodology that makes programmers resource aware, so that efficient, scalable, predictable, high-performance programs can be written without fear of resource leaks. Linear logic has been recognized as the formalism of choice for resource tracking: it requires explicit introduction and elimination of resources and guarantees that a resource cannot be implicitly shared or abandoned, hence must be used linearly. Early languages based on linear logic focused on the Curry-Howard correspondence; they began by limiting the expressive power of the language and then reintroduced it through controlled sharing, which is necessary for recursive functions. Only by deviating from the Curry-Howard correspondence, however, could later developments actually address programming errors in resource usage. The contribution of this dissertation is a simple, safe, and efficient approach that introduces linear resource ownership semantics into C++ (still a widely used language more than 30 years after its inception) through the linear pointer, a smart pointer inspired by linear logic. By implementing various linear data structures and a parallel, multi-threaded memory allocator built on them, this work shows that the linear pointer is practical and efficient in the real world, and that it is possible to build a memory management stack that is entirely leak free. The dissertation closes with remarks on the difficulties a formal system would encounter when reasoning about a concurrent linear data structure algorithm, and on what might be done to address them.
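    The abstract does not show the dissertation's actual linear pointer interface, so the following is only a minimal sketch, assuming a design in the spirit it describes: a move-only owning pointer whose resource must be consumed explicitly rather than copied or silently dropped. The class name linear_ptr, the consume() method, and the runtime asserts (standing in for static linearity checking) are illustrative assumptions, not the dissertation's API.

    // Minimal sketch of a linear, move-only owning pointer (illustrative only).
    #include <cassert>

    template <typename T>
    class linear_ptr {
    public:
        explicit linear_ptr(T* p = nullptr) : p_(p) {}

        // Linearity: the resource may be moved (consumed) but never copied.
        linear_ptr(const linear_ptr&) = delete;
        linear_ptr& operator=(const linear_ptr&) = delete;

        linear_ptr(linear_ptr&& other) noexcept : p_(other.p_) { other.p_ = nullptr; }
        linear_ptr& operator=(linear_ptr&& other) noexcept {
            assert(p_ == nullptr && "overwriting a live resource would abandon it");
            p_ = other.p_;
            other.p_ = nullptr;
            return *this;
        }

        // A linear resource must be eliminated explicitly, not silently dropped;
        // here a runtime assert stands in for a static linearity check.
        ~linear_ptr() { assert(p_ == nullptr && "linear resource leaked"); }

        // Explicit elimination: give up ownership of the raw pointer.
        T* consume() {
            T* p = p_;
            p_ = nullptr;
            return p;
        }

        T& operator*() const { return *p_; }
        T* operator->() const { return p_; }

    private:
        T* p_;
    };

    // Usage sketch: ownership is threaded through calls explicitly.
    int read_and_free(linear_ptr<int> p) {
        int v = *p;
        delete p.consume();  // explicit elimination of the resource
        return v;
    }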

    Concurrent Access Algorithms for Different Data Structures: A Research Review

    Algorithms for concurrent data structures have gained attention in recent years as multi-core processors have become ubiquitous. Several features of shared-memory multiprocessors make concurrent data structures significantly more difficult to design and to verify as correct than their sequential counterparts. The primary source of this additional difficulty is concurrency. This paper provides an overview of some concurrent access algorithms for different data structures.
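    As an illustration of the kind of algorithm such a review covers (not an example taken from the paper itself), the sketch below shows a Treiber-style lock-free stack push built on an atomic compare-and-swap loop; pop is omitted because a correct version must also handle the ABA problem and safe memory reclamation, exactly the sort of difficulty the paper alludes to.

    #include <atomic>
    #include <utility>

    // Treiber-style lock-free stack: push retries a CAS until it succeeds.
    template <typename T>
    class LockFreeStack {
        struct Node {
            T value;
            Node* next;
        };
        std::atomic<Node*> head_{nullptr};

    public:
        void push(T value) {
            Node* n = new Node{std::move(value), nullptr};
            n->next = head_.load(std::memory_order_relaxed);
            // Retry until no other thread has changed head_ since we read it;
            // compare_exchange_weak reloads head_ into n->next on failure.
            while (!head_.compare_exchange_weak(n->next, n,
                                                std::memory_order_release,
                                                std::memory_order_relaxed)) {
            }
        }
    };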

    Efficient pebbling for list traversal synopses

    We show how to support efficient back traversal in a unidirectional list, using small memory and with essentially no slowdown in forward steps. Using $O(\log n)$ memory for a list of size $n$, the $i$'th back-step from the farthest point reached so far takes $O(\log i)$ time in the worst case, while the overhead per forward step is at most $\epsilon$ for an arbitrarily small constant $\epsilon > 0$. An arbitrary sequence of forward and back steps is allowed. A full trade-off between memory usage and time per back-step is presented: $k$ vs. $k n^{1/k}$ and vice versa. Our algorithms are based on a novel pebbling technique which moves pebbles on a virtual binary, or $t$-ary, tree that can only be traversed in a pre-order fashion. The compact data structures used by the pebbling algorithms, called list traversal synopses, extend to general directed graphs, and have other interesting applications, including memory-efficient hash-chain implementation. Perhaps the most surprising application is in showing that for any program, arbitrary rollback steps can be efficiently supported with small overhead in memory, and marginal overhead in its ordinary execution. More concretely: let $P$ be a program that runs for at most $T$ steps, using memory of size $M$. Then, at the cost of recording the input used by the program, and increasing the memory by a factor of $O(\log T)$ to $O(M \log T)$, the program $P$ can be extended to support an arbitrary sequence of forward execution and rollback steps: the $i$'th rollback step takes $O(\log i)$ time in the worst case, while forward steps take $O(1)$ time in the worst case, and $1 + \epsilon$ amortized time per step. Comment: 27 pages
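    The paper's pebbling algorithm itself is intricate, but the memory/time trade-off it refines can be illustrated with a much simpler fixed-interval checkpoint scheme. The sketch below is not the paper's algorithm: it stores a pointer to every interval-th node visited, so a back-step costs O(interval) node hops with O(n/interval) extra pointers, whereas the paper's list traversal synopsis needs only $O(\log n)$ memory and $O(\log i)$ time for the $i$'th back-step. All names and the interval parameter are illustrative.

    #include <cstddef>
    #include <vector>

    struct Node {
        int value;
        Node* next;  // forward-only link
    };

    // Fixed-interval checkpointing over a forward-only list (simplified
    // stand-in for the paper's pebbling technique). Requires interval >= 1.
    class CheckpointTraverser {
    public:
        CheckpointTraverser(Node* head, std::size_t interval)
            : cur_(head), pos_(0), interval_(interval), checkpoints_{head} {}

        // One forward step: O(1), plus recording an occasional checkpoint.
        void forward() {
            if (cur_ == nullptr || cur_->next == nullptr) return;
            cur_ = cur_->next;
            ++pos_;
            if (pos_ % interval_ == 0) checkpoints_.push_back(cur_);
        }

        // One back step: restart from the nearest checkpoint at or before the
        // target position and walk forward; at most interval_ - 1 hops.
        void back() {
            if (pos_ == 0) return;
            std::size_t target = pos_ - 1;
            Node* n = checkpoints_[target / interval_];
            for (std::size_t p = (target / interval_) * interval_; p < target; ++p)
                n = n->next;
            cur_ = n;
            pos_ = target;
            // Drop checkpoints recorded beyond the new position.
            while ((checkpoints_.size() - 1) * interval_ > pos_) checkpoints_.pop_back();
        }

        int value() const { return cur_->value; }

    private:
        Node* cur_;
        std::size_t pos_;
        std::size_t interval_;
        std::vector<Node*> checkpoints_;
    };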

    A systematic analysis of the effects of increasing degrees of serum immunodepletion in terms of depth of coverage and other key aspects in top-down and bottom-up proteomic analyses

    Immunodepletion of clinical fluids to overcome the dominance of a few very abundant proteins has been explored, but studies are few and commonly examine only limited aspects with a single analytical platform. We have systematically compared immunodepletion of 6, 14, or 20 proteins using serum from renal transplant patients, analysing reproducibility, depth of coverage, efficiency, and specificity using 2-D DIGE (‘top-down’) and LC-MS/MS (‘bottom-up’). A progressive increase in protein number (≥2 unique peptides) was found, from 159 in unfractionated serum to 301 following 20-protein depletion using a relatively high-throughput 1-D-LC-MS/MS approach, including known biomarkers and moderate- to lower-abundance proteins such as NGAL and cytokine/growth factor receptors. In contrast, readout by 2-D DIGE demonstrated good reproducibility of immunodepletion, but the additional proteins seen tended to be isoforms of existing proteins. Depletion of 14 or 20 proteins followed by LC-MS/MS showed excellent reproducibility of the proteins detected and a significant overlap between columns. Using label-free analysis, greater run-to-run variability was seen with the Prot20 column than with the MARS14 column (median %CVs of 30.9% versus 18.2%, respectively), with a correspondingly wider precision profile for the Prot20. These results illustrate the potential of immunodepletion followed by 1-D nano-LC-LTQ Orbitrap Velos analysis in a moderate-throughput biomarker discovery process.

    Verbal paired associates and the hippocampus: The role of scenes

    It is widely agreed that patients with bilateral hippocampal damage are impaired at binding pairs of words together. Consequently, the verbal paired associates (VPA) task has become emblematic of hippocampal function. This VPA deficit is not well understood and is particularly difficult for hippocampal theories with a visuospatial bias to explain (e.g., cognitive map and scene construction theories). Resolving the tension among hippocampal theories concerning the VPA could be important for leveraging a fuller understanding of hippocampal function. Notably, VPA tasks typically use high-imagery concrete words and so conflate imagery and binding. To determine why VPA engages the hippocampus, we devised an fMRI encoding task involving closely matched pairs of scene words, pairs of object words, and pairs of very low-imagery abstract words. We found that the anterior hippocampus was engaged during processing of both scene and object word pairs in comparison to abstract word pairs, despite binding occurring in all conditions. This was also the case when just subsequently remembered stimuli were considered. Moreover, for object word pairs, fMRI activity patterns in anterior hippocampus were more similar to those for scene imagery than object imagery. This was especially evident in participants who were high imagery users and not in mid and low imagery users. Overall, our results show that hippocampal engagement during VPA, even when object word pairs are involved, seems to be evoked by scene imagery rather than binding. This may help to resolve the issue that visuospatial hippocampal theories have in accounting for verbal memory.