Extending Coq with Imperative Features and its Application to SAT Verification
Coq contains within its logic a programming language that can be used to replace many deduction steps by a single computation; this is so-called proof by reflection. In this paper, we present two extensions of the evaluation mechanism that preserve its correctness and make it possible to deal with CPU-intensive tasks such as proof checking of SAT traces.
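The reflection idea described above, replacing many deduction steps with a single computation, can be illustrated with a small Lean analogue (a sketch of the general technique, not the paper's Coq development): rather than building a derivation step by step, the proof checker simply evaluates a decision procedure.

```lean
-- Proof by reflection: the statement is established by computation.
-- A step-by-step derivation would require many rewriting steps;
-- `decide` instead asks the kernel to run the decision procedure
-- for the (decidable) proposition and check that it returns true.
example : 2 ^ 10 = 1024 := by decide
example : 100 % 7 = 2 := by decide
```

The extensions the paper describes speed up exactly this evaluation step, which dominates when the decided statement is large (e.g. an entire SAT trace).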
Collaborative Categorization on the Web
Collaborative categorization is an emerging direction for research and innovative
applications. Arguably, collaborative categorization on the Web is an especially
promising form of collaborative Web system because of both the widespread use
of the conventional Web and the emergence of the Semantic Web, which provides
more semantic information about Web data. This paper discusses this issue
and proposes two approaches: collaborative categorization via category merging and
collaborative categorization proper. The main advantage of the first approach is that it
can be realized fairly easily using existing systems such as Web
browsers and mail clients. A prototype system for collaborative Web usage that uses
category merging for collaborative categorization is described, and the results of field
experiments using it are reported. The second approach, collaborative
categorization proper, is more general and scales better. The data structure and
user interface aspects of an approach to collaborative categorization proper are
discussed.
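The category-merging approach can be pictured as combining several users' category-to-bookmark mappings. The sketch below uses a hypothetical data model (category name mapped to a set of URLs), not the prototype's actual one:

```python
def merge_categories(a, b):
    """Merge two users' category maps (name -> set of bookmarked URLs).

    Categories with the same name are unified and their contents
    combined, which is the essence of category merging."""
    merged = {name: set(urls) for name, urls in a.items()}
    for name, urls in b.items():
        merged.setdefault(name, set()).update(urls)
    return merged

# Hypothetical example data for two collaborating users.
alice = {"ai": {"arxiv.org"}, "news": {"bbc.com"}}
bob = {"ai": {"openreview.net"}, "cooking": {"seriouseats.com"}}
shared = merge_categories(alice, bob)
# shared["ai"] now holds both users' AI links.
```

Because merging works on whatever category structures users already maintain, it can piggyback on existing tools (browser bookmarks, mail folders), which is the ease-of-implementation advantage the paper claims for this approach.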
Redesigning Transaction Processing Systems for Non-Volatile Memory
Transaction Processing Systems are widely used because they enable users to manage
their data efficiently. However, they suffer from a performance bottleneck due to the redundant
I/O required to guarantee data consistency. In addition to the redundant I/O, slow storage devices
degrade performance further. Leveraging non-volatile memory is one of the promising
solutions to the performance bottleneck in Transaction Processing Systems. However, since the
I/O granularity of legacy storage devices and of non-volatile memory is not the same, traditional
Transaction Processing Systems cannot fully exploit the performance of persistent memory.
The goal of this dissertation is to fully exploit non-volatile memory to improve the performance
of Transaction Processing Systems.
Write amplification within Transaction Processing Systems is identified as a performance
bottleneck. As a first approach, we redesign Transaction Processing Systems to minimize the
redundant I/O they generate. We present LS-MVBT, which integrates
recovery information into the main database file so that temporary recovery files can be removed.
LS-MVBT also employs five optimizations to reduce the write traffic of individual fsync() calls.
We also exploit persistent memory to relieve the performance bottleneck caused by slow storage
devices. However, since traditional recovery methods target slow block storage, we develop
byte-addressable differential logging, a user-level heap manager, and transaction-aware persistence
to fully exploit persistent memory. To minimize the redundant I/O needed to guarantee data consistency,
we present failure-atomic slotted paging with a persistent buffer cache.
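Byte-addressable differential logging, mentioned above, exploits the fact that persistent memory can be written at byte granularity: only the bytes that actually changed are logged, rather than whole pages. A minimal sketch (the record format here is illustrative, not the dissertation's actual layout):

```python
def diff_log(old: bytes, new: bytes):
    """Produce (offset, changed_bytes) records for bytes that differ.

    On block storage the whole page would be logged; on
    byte-addressable persistent memory only these small records
    need to be persisted, shrinking the redundant log I/O."""
    records, i, n = [], 0, len(old)
    while i < n:
        if old[i] != new[i]:
            j = i
            while j < n and old[j] != new[j]:
                j += 1
            records.append((i, new[i:j]))
            i = j
        else:
            i += 1
    return records

def replay(old: bytes, records):
    """Apply diff records to rebuild the new page image (redo)."""
    buf = bytearray(old)
    for off, data in records:
        buf[off:off + len(data)] = data
    return bytes(buf)

page_old = b"hello world, hello log"
page_new = b"hello earth, hello log"
log = diff_log(page_old, page_new)  # only the changed bytes are logged
```

The logged records total far fewer bytes than the page, which is exactly the write-amplification saving the dissertation targets.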
Redesigning the indexing structure is our second approach to fully exploiting non-volatile
memory. Since the B+-tree was originally designed for block granularity, it generates excessive I/O
traffic on persistent memory. To mitigate this traffic, we develop a cache-line-friendly B+-tree
that aligns its node size to the cache line size, minimizing write traffic. Moreover, with
hardware transactional memory, it can update a single node atomically without any additional
redundant I/O for guaranteeing data consistency. It also adopts Failure-Atomic Shift and
Failure-Atomic In-place Rebalancing to eliminate unnecessary I/O.
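The cache-line-friendly node layout can be illustrated with back-of-the-envelope sizing. The 64-byte cache line, 8-byte keys, and 8-byte header below are assumptions for illustration, not the dissertation's exact parameters:

```python
import struct

CACHE_LINE = 64   # assumed cache line size in bytes
KEY_SIZE = 8      # assumed 8-byte integer keys
HEADER_SIZE = 8   # assumed per-node header (key count, flags, ...)

# Keys that fit in one cache line alongside the header. A node this
# size is flushed as a single cache-line write to persistent memory,
# instead of a whole block as with a disk-oriented B+-tree node.
KEYS_PER_NODE = (CACHE_LINE - HEADER_SIZE) // KEY_SIZE

def pack_node(keys):
    """Pack a node into exactly one cache line (zero-padded)."""
    assert len(keys) <= KEYS_PER_NODE
    payload = struct.pack(f"<Q{KEYS_PER_NODE}Q",
                          len(keys),
                          *(keys + [0] * (KEYS_PER_NODE - len(keys))))
    assert len(payload) == CACHE_LINE
    return payload

node = pack_node([10, 20, 30])
```

Because the whole node fits in one cache line, an in-place update touches a single line, which is also what makes a hardware-transactional-memory update of one node practical.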
Furthermore, we improve the persistent memory manager to use a traditional memory
heap structure with a free list instead of segregated lists for small memory allocations, minimizing
memory allocation overhead.
Our performance evaluation shows that the improved designs, which take the I/O granularity
of non-volatile memory into account, efficiently reduce redundant I/O traffic and improve
performance by a large margin.
Near-Optimal Dispersion on Arbitrary Anonymous Graphs
Given an undirected, anonymous, port-labeled graph of n memory-less nodes, m edges, and degree Δ, we consider the problem of dispersing k ≤ n robots (or tokens) positioned initially arbitrarily on one or more nodes of the graph to exactly k different nodes of the graph, one on each node. The objective is to simultaneously minimize the time to achieve dispersion and the memory requirement at each robot. If all k robots are positioned initially on a single node, depth-first search (DFS) traversal solves this problem in O(min{m, kΔ}) time with Θ(log(k+Δ)) bits at each robot. However, if robots are positioned initially on multiple nodes, the best previously known algorithm solves this problem in O(min{m, kΔ} · log λ) time storing Θ(log(k+Δ)) bits at each robot, where λ ≤ k/2 is the number of multiplicity nodes in the initial configuration. In this paper, we present a novel multi-source DFS traversal algorithm solving this problem in O(min{m, kΔ}) time with Θ(log(k+Δ)) bits at each robot, improving the time bound of the best previously known algorithm by O(log λ) and matching asymptotically the single-source DFS traversal bounds. This is the first algorithm for dispersion that is optimal in both time and memory in arbitrary anonymous graphs of constant degree, Δ = O(1). Furthermore, the result holds in both synchronous and asynchronous settings.
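For intuition, the single-source case mentioned in the abstract can be sketched as a centralized simulation. The real algorithm is distributed, with each robot holding only O(log(k+Δ)) bits and no global view; this sketch is purely illustrative: robots travel together along a DFS traversal, and one robot settles at each newly visited node.

```python
def disperse_single_source(adj, start, k):
    """Simulate DFS-based dispersion of k robots starting at `start`.

    adj: node -> list of neighbors (in port-label order).
    Returns the nodes where robots settle, one robot per node."""
    settled = []
    stack, seen = [start], {start}
    while stack and len(settled) < k:
        v = stack.pop()
        settled.append(v)            # one robot settles at each new node
        for u in reversed(adj[v]):   # explore ports in order
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return settled

# A 5-node ring (hypothetical example graph); 3 robots start at node 0.
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
spots = disperse_single_source(ring, 0, 3)
```

The traversal crosses each edge O(1) times, giving the O(min{m, kΔ}) time bound; the paper's contribution is achieving the same bound when the robots start scattered across multiple nodes.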
Wear Leveling Revisited
Wear leveling - a technology designed to balance write counts among memory cells regardless of the requested accesses - is vital in prolonging the lifetime of certain computer memory devices, especially next-generation non-volatile memories such as phase change memory (PCM). Although researchers have been working extensively on wear leveling, almost all existing studies focus on practical aspects and lack rigorous mathematical analysis. The lack of theory is particularly problematic for security-critical applications. We address this issue by revisiting wear leveling from a theoretical perspective. First, we completely determine the problem parameter regime for which Security Refresh - one of the best-known existing wear leveling schemes for PCM - works effectively, by providing a positive result and a matching negative result. In particular, Security Refresh is not competitive for the practically relevant regime of large-scale memory. Then, we propose a novel scheme that achieves better lifetime, time/space overhead, and wear-free space for the relevant regime not covered by Security Refresh. Unlike existing studies, we give rigorous theoretical lifetime analyses, which are necessary to assess and control the security risk.
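The remapping idea behind schemes such as Security Refresh can be sketched as XOR-based address randomization: physical address = logical address XOR key, with the key changed periodically. The toy model below is illustrative only; the region size, refresh policy, and lack of incremental data migration are simplifications, not the scheme's actual parameters.

```python
import random

class XorRemapper:
    """Toy wear-leveling remapper: physical = logical XOR key.

    XOR with a fixed key over a power-of-two region is a bijection,
    so every logical address maps to a unique physical cell; changing
    the key periodically moves hot logical addresses to fresh cells."""
    def __init__(self, size, seed=0):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.size = size
        self.rng = random.Random(seed)
        self.key = self.rng.randrange(size)
        self.wear = [0] * size       # per-cell write counts

    def write(self, logical):
        self.wear[logical ^ self.key] += 1

    def refresh(self):
        """Pick a new key (a real scheme migrates data incrementally)."""
        self.key = self.rng.randrange(self.size)

wl = XorRemapper(16)
for _ in range(8):                   # 8 refresh rounds
    for _ in range(100):
        wl.write(3)                  # a single hot logical address
    wl.refresh()
# Writes to the hot address are spread across several physical cells.
```

The paper's analysis asks precisely how well such randomized remapping bounds the maximum per-cell wear as memory size grows, which is where Security Refresh turns out to be non-competitive.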
CHERI: A hybrid capability-system architecture for scalable software compartmentalization
CHERI extends a conventional RISC Instruction-
Set Architecture, compiler, and operating system to support
fine-grained, capability-based memory protection to mitigate
memory-related vulnerabilities in C-language TCBs. We describe
how CHERI capabilities can also underpin a hardware-software
object-capability model for application compartmentalization
that can mitigate broader classes of attack. Prototyped as an
extension to the open-source 64-bit BERI RISC FPGA softcore
processor, FreeBSD operating system, and LLVM compiler,
we demonstrate multiple orders-of-magnitude improvement in
scalability, simplified programmability, and resulting tangible
security benefits as compared to compartmentalization based on
pure Memory-Management Unit (MMU) designs. We evaluate
incrementally deployable CHERI-based compartmentalization
using several real-world UNIX libraries and applications.

We thank our colleagues Ross Anderson, Ruslan Bukin,
Gregory Chadwick, Steve Hand, Alexandre Joannou, Chris
Kitching, Wojciech Koszek, Bob Laddaga, Patrick Lincoln,
Ilias Marinos, A. Theodore Markettos, Ed Maste, Andrew W.
Moore, Alan Mujumdar, Prashanth Mundkur, Colin Rothwell,
Philip Paeps, Jeunese Payne, Hassen Saidi, Howie Shrobe, and
Bjoern Zeeb, our anonymous reviewers, and shepherd Frank
Piessens, for their feedback and assistance. This work is part of
the CTSRD and MRC2 projects sponsored by the Defense Advanced
Research Projects Agency (DARPA) and the Air Force
Research Laboratory (AFRL), under contracts FA8750-10-C-
0237 and FA8750-11-C-0249. The views, opinions, and/or
findings contained in this paper are those of the authors and
should not be interpreted as representing the official views
or policies, either expressed or implied, of the Department
of Defense or the U.S. Government. We acknowledge the EPSRC
REMS Programme Grant [EP/K008528/1], Isaac Newton
Trust, UK Higher Education Innovation Fund (HEIF), Thales
E-Security, and Google, Inc.

This is the author accepted manuscript. The final version is available at http://dx.doi.org/10.1109/SP.2015.
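The fine-grained, capability-based memory protection described in the abstract can be approximated in software for intuition: a capability carries bounds and permissions, and every dereference is checked against them. This is a conceptual sketch only; real CHERI capabilities are architectural values with hardware-checked compressed bounds, permissions, and tag bits, not Python objects.

```python
class Capability:
    """A software model of a bounded, permission-carrying pointer."""
    def __init__(self, memory, base, length, perms=("load", "store")):
        self.memory, self.base, self.length = memory, base, length
        self.perms = frozenset(perms)

    def _check(self, offset, perm):
        # Every dereference validates permission and bounds,
        # mirroring the checks CHERI performs in hardware.
        if perm not in self.perms:
            raise PermissionError(f"capability lacks {perm!r} permission")
        if not 0 <= offset < self.length:
            raise IndexError("capability bounds violated")
        return self.base + offset

    def load(self, offset):
        return self.memory[self._check(offset, "load")]

    def store(self, offset, value):
        self.memory[self._check(offset, "store")] = value

mem = bytearray(256)
cap = Capability(mem, base=64, length=16)  # covers bytes 64..79 only
cap.store(0, 0xAB)                          # in bounds: permitted
try:
    cap.load(16)                            # one past the end
except IndexError:
    pass                                    # trapped, as in hardware
```

Compartmentalization then amounts to handing each component only the capabilities it needs: a compartment without a store-capable capability for a region simply cannot corrupt it.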