EbbRT: a framework for building per-application library operating systems
Efficient use of high-speed hardware requires operating system components to be customized to the application workload. Our general-purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable high performance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates that memcached, run within a VM, can outperform memcached run on unvirtualized Linux. The prototype evaluation also demonstrates a 14% performance improvement on a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th-percentile latency compared to running on Linux.
Link-time smart card code hardening
This paper presents a feasibility study to protect smart card software against fault-injection attacks by means of link-time code rewriting. This approach avoids the drawbacks of source code hardening, avoids the need for manual assembly writing, and is applicable in conjunction with closed third-party compilers. We implemented a range of cookbook code hardening recipes in a prototype link-time rewriter and evaluated their coverage and associated overhead, concluding that this approach is promising. We demonstrate that the overhead of the automated link-time approach is not significantly higher than what can be obtained with compile-time hardening or with manual hardening of compiler-generated assembly code.
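As a concrete illustration of the kind of cookbook recipe the paper targets, the C sketch below shows check duplication, a classic countermeasure against fault injection: a security-critical comparison is evaluated twice, with a complementary re-check on the failing path, so that a single glitched instruction cannot silently flip the outcome. The function name and return encoding here are our own illustration, not the paper's recipes.

```c
#include <assert.h>
#include <stdint.h>

#define FAULT_DETECTED (-1)

/* Illustrative hardened PIN comparison. Each branch of the original
   comparison is re-verified; reaching a state where the two checks
   disagree indicates a suspected injected fault. */
int check_pin(uint16_t entered, uint16_t stored) {
    if (entered == stored) {
        /* Redundant re-check: a fault that forced the first branch
           must also corrupt this one to go undetected. */
        if (entered == stored) {
            return 1;               /* authenticated */
        }
        return FAULT_DETECTED;      /* inconsistent state */
    }
    /* Complementary re-check of the failing path. */
    if (entered != stored) {
        return 0;                   /* rejected */
    }
    return FAULT_DETECTED;
}
```

Note that an optimizing compiler may fold the duplicated tests back into one, which is one motivation for applying such recipes at link time, after compilation, as the paper does.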
Trade Promotion Authority (Fast-Track): Labor Issues (Including H.R. 3005 and H.R. 3019)
The Ecosystem Approach to Fisheries: Issues, Terminology, Principles, Institutional Foundations, Implementation and Outlook
Ecosystems are complex and dynamic natural units that produce goods and services beyond those of benefit to fisheries. Because fisheries have a direct impact on the ecosystem, which is also impacted by other human activities, they need to be managed in an ecosystem context. The meanings of the terms 'ecosystem management', 'ecosystem-based management', 'ecosystem approach to fisheries' (EAF), etc. are still not universally defined and are progressively evolving. The justification of EAF is evident in the characteristics of an exploited ecosystem and the impacts resulting from fisheries and other activities. The rich set of international agreements of relevance to EAF contains a large number of principles and conceptual objectives. Both provide fundamental guidance and a significant challenge for the implementation of EAF. The available international instruments also provide the institutional foundations for EAF. The FAO Code of Conduct for Responsible Fisheries is particularly important in this respect and contains provisions for practically all aspects of the approach. One major difficulty in defining EAF lies precisely in turning the available concepts and principles into operational objectives from which an EAF management plan could more easily be developed. The paper discusses these together with the types of action needed to achieve them. Experience in EAF implementation is still limited, but some issues are already apparent, e.g. added complexity, insufficient capacity, slow implementation, and the need for a pragmatic approach. It is argued, in conclusion, that the future of EAF and fisheries depends on the way in which the two fundamental concepts of fisheries management and ecosystem management, and their respective stakeholders, will join efforts or collide.
CUP: Comprehensive User-Space Protection for C/C++
Memory corruption vulnerabilities in C/C++ applications enable attackers to execute code, change data, and leak information. Current memory sanitizers do not provide comprehensive coverage of a program's data. In particular, existing tools focus primarily on heap allocations, with limited support for stack allocations and globals. Additionally, existing tools focus on the main executable, with limited support for system libraries. Further, they suffer from both false positives and false negatives.
We present Comprehensive User-Space Protection for C/C++, CUP, an LLVM sanitizer that provides complete spatial and probabilistic temporal memory safety for C/C++ programs on 64-bit architectures (with a prototype implementation for x86_64). CUP uses a hybrid metadata scheme that supports all program data, including globals, heap, and stack, and maintains the ABI. Evaluated against existing approaches on the NIST Juliet test suite, CUP reduces false negatives by 10x (to 0.1%) compared to state-of-the-art LLVM sanitizers, and produces no false positives. CUP instruments all user-space code, including libc and other system libraries, removing them from the trusted code base.
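To make the spatial-safety side of this concrete, the C sketch below shows the general shape of a metadata-driven bounds check: each tracked object carries (base, end) bounds, and every dereference the sanitizer instruments is preceded by a check against them. The `bounds_t` layout and function name are our own illustration of the technique, not CUP's actual hybrid metadata scheme.

```c
#include <assert.h>
#include <stddef.h>

/* Per-object spatial metadata: [base, end) covers the valid bytes. */
typedef struct {
    const char *base;   /* start of the allocation */
    const char *end;    /* one past the last valid byte */
} bounds_t;

/* Check inserted before an access of `width` bytes through `p`.
   First reject pointers outside the object, then compare the
   requested width against the remaining room (avoids computing
   an out-of-bounds pointer). */
int checked_access_ok(bounds_t b, const char *p, size_t width) {
    if (p < b.base || p >= b.end)
        return 0;
    return width <= (size_t)(b.end - p);
}
```

A real sanitizer emits such checks in the compiler's IR and looks the bounds up from its metadata store, rather than passing them explicitly as done here for clarity.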
Exploiting iteration-level parallelism in dataflow programs
The term "dataflow" generally encompasses three distinct aspects of computation: a data-driven model of computation, a functional/declarative programming language, and a special-purpose multiprocessor architecture. In this paper we decouple the language and architecture issues by demonstrating that declarative programming is a suitable vehicle for programming conventional distributed-memory multiprocessors. This is achieved by applying several transformations to the compiled declarative program to achieve iteration-level (rather than instruction-level) parallelism. The transformations first group individual instructions into sequential light-weight processes, and then insert primitives to: (1) cause array allocation to be distributed over multiple processors, and (2) cause computation to follow the data distribution by inserting an index-filtering mechanism into a given loop and spawning a copy of it on all PEs; the filter causes each instance of that loop to operate on a different subrange of the index variable. The underlying model of computation is a dataflow/von Neumann hybrid in that execution within a process is control-driven while the creation, blocking, and activation of processes is data-driven. The performance of this process-oriented dataflow system (PODS) is demonstrated using the hydrodynamics simulation benchmark SIMPLE, where a 19-fold speedup on a 32-processor architecture has been achieved.
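The index-filtering transformation described above can be sketched in C as follows: the same loop body is spawned on every PE, and an inserted filter makes each instance operate only on its subrange of the index variable. `NUM_PES`, `owner_of`, and the block distribution are our illustrative assumptions, not the paper's exact mechanism.

```c
#define NUM_PES 4
#define N 16

/* Block distribution: index i is owned by PE i / (N / NUM_PES). */
int owner_of(int i) {
    return i / (N / NUM_PES);
}

/* The transformed loop, as it would run on processing element `pe`.
   Every PE iterates over the full index range, but the inserted
   filter skips indices it does not own. */
void filtered_loop(int pe, double *a) {
    for (int i = 0; i < N; i++) {
        if (owner_of(i) != pe)   /* inserted index filter */
            continue;
        a[i] = 2.0 * i;          /* original loop body */
    }
}
```

Because the filter follows the same distribution used for array allocation, each PE's iterations touch only locally allocated elements, which is what lets computation follow the data.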