
    VXA: A Virtual Architecture for Durable Compressed Archives

    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method. Comment: 14 pages, 7 figures, 2 tables
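
    A minimal illustrative sketch (not from the paper) of the amortization idea: each archived file references an executable decoder that is stored only once in the archive, and extraction hands that decoder image plus the compressed payload to a sandboxed virtual machine. All names and the in-memory layout below are hypothetical; the real VXA format and x86 sandbox are not shown.

        # Sketch of the VXA idea: decoders are archived once and shared by every
        # file that uses the same compression method; extraction runs the archived
        # x86 decoder inside an isolating VM. Hypothetical names and layout.
        from dataclasses import dataclass, field

        @dataclass
        class Decoder:
            name: str          # e.g. "vorbis-dec"
            x86_image: bytes   # sandboxed x86 decoder, typically 30-130 KB

        @dataclass
        class ArchivedFile:
            path: str
            decoder: str       # name of the decoder this file needs
            payload: bytes     # compressed content

        @dataclass
        class Archive:
            decoders: dict[str, Decoder] = field(default_factory=dict)
            files: list[ArchivedFile] = field(default_factory=list)

            def add_file(self, path: str, decoder: Decoder, payload: bytes) -> None:
                # Store the decoder only once, however many files reference it.
                self.decoders.setdefault(decoder.name, decoder)
                self.files.append(ArchivedFile(path, decoder.name, payload))

            def extract(self, path: str, run_in_vm) -> bytes:
                # run_in_vm stands in for the VXA virtual machine: it executes the
                # archived decoder with no access to host OS services.
                f = next(f for f in self.files if f.path == path)
                return run_in_vm(self.decoders[f.decoder].x86_image, f.payload)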

    The potential of programmable logic in the middle: cache bleaching

    Consolidating hard real-time systems onto modern multi-core Systems-on-Chip (SoC) is an open challenge. The extensive sharing of hardware resources at the memory hierarchy raises important unpredictability concerns. The problem is exacerbated as more computationally demanding workloads are expected to be handled with real-time guarantees in next-generation Cyber-Physical Systems (CPS). A large body of work has approached the problem by proposing novel hardware re-designs, and by proposing software-only solutions to mitigate performance interference. Building on the observation that unpredictability arises from a lack of fine-grained control over the behavior of shared hardware components, we outline a promising new resource management approach. We demonstrate that it is possible to introduce Programmable Logic In-the-Middle (PLIM) between a traditional multi-core processor and main memory. This provides the unique capability of manipulating individual memory transactions. We propose a proof-of-concept system implementation of PLIM modules on a commercial multi-core SoC. The PLIM approach is then leveraged to solve long-standing issues with cache coloring. Thanks to PLIM, colored sparse addresses can be re-compacted in main memory. This is the base principle behind the technique we call Cache Bleaching. We evaluate our design on real applications and propose hypervisor-level adaptations to showcase the potential of the PLIM approach. Accepted manuscript
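
    The address manipulation behind "cache bleaching" can be illustrated with a small, hedged sketch: with page coloring, a partition only uses pages whose color bits match its assigned color, so its physical footprint is sparse; a PLIM module sitting between the cores and DRAM can rewrite each transaction's address and squeeze the now-constant color bits out, re-compacting the partition in backing memory. The bit positions below are illustrative assumptions, not the SoC's actual address map or the paper's implementation.

        # Illustrative re-compaction of colored, sparse physical addresses.
        PAGE_SHIFT = 12    # 4 KiB pages (assumption)
        COLOR_BITS = 2     # assume 4 page colors taken from the low page-index bits

        def bleach(addr: int, color: int) -> int:
            """Map a colored, sparse physical address to a compact DRAM address."""
            offset = addr & ((1 << PAGE_SHIFT) - 1)
            page = addr >> PAGE_SHIFT
            assert page & ((1 << COLOR_BITS) - 1) == color, "address not in this color"
            compact_page = page >> COLOR_BITS          # squeeze out the color bits
            return (compact_page << PAGE_SHIFT) | offset

        def unbleach(addr: int, color: int) -> int:
            """Inverse mapping, e.g. for translating responses back."""
            offset = addr & ((1 << PAGE_SHIFT) - 1)
            page = ((addr >> PAGE_SHIFT) << COLOR_BITS) | color
            return (page << PAGE_SHIFT) | offset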

    Dynamic and Transparent Analysis of Commodity Production Systems

    We propose a framework that provides a programming interface to perform complex dynamic system-level analyses of deployed production systems. By leveraging hardware support for virtualization, available nowadays on all commodity machines, our framework is completely transparent to the system under analysis and guarantees isolation of the analysis tools running on top of it. Thus, the kernel of the running system need not be modified, and the whole platform runs unaware of the framework. Moreover, errors in the analysis tools do not affect the running system or the framework. This is accomplished by installing a minimalistic virtual machine monitor and migrating the system, as it runs, into a virtual machine. To demonstrate the potential of our framework, we developed an interactive kernel debugger, nicknamed HyperDbg. HyperDbg can be used to debug any critical kernel component, and even to single-step the execution of exception and interrupt handlers. Comment: 10 pages, To appear in the 25th IEEE/ACM International Conference on Automated Software Engineering, Antwerp, Belgium, 20-24 September 2010
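
    As a rough, hypothetical illustration of the kind of programming interface such a framework can expose, the sketch below lets analysis tools register callbacks for guest events that the hypervisor traps; tool failures are contained by the dispatcher rather than reaching the analyzed system. The names and event format are invented for illustration and do not reproduce the paper's actual API or the VMM internals.

        # Conceptual event-dispatch interface for hypervisor-based analysis tools.
        from typing import Callable

        GuestEvent = dict            # e.g. {"kind": "breakpoint", "rip": 0xffff...}
        Handler = Callable[[GuestEvent], None]

        class AnalysisFramework:
            def __init__(self):
                self._handlers: dict[str, list[Handler]] = {}

            def on(self, kind: str, handler: Handler) -> None:
                """Register an analysis callback for a class of guest events."""
                self._handlers.setdefault(kind, []).append(handler)

            def dispatch(self, event: GuestEvent) -> None:
                """Invoked on a trap; errors in tools never reach the guest."""
                for handler in self._handlers.get(event["kind"], []):
                    try:
                        handler(event)
                    except Exception as exc:   # isolate faulty analysis tools
                        print(f"analysis tool failed: {exc}")

        # A toy "debugger" tool in the spirit of HyperDbg: log every breakpoint hit.
        fw = AnalysisFramework()
        fw.on("breakpoint", lambda ev: print(f"bp at {ev['rip']:#x}"))
        fw.dispatch({"kind": "breakpoint", "rip": 0xFFFF800000101234})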

    Effectiveness of segment routing technology in reducing the bandwidth and cloud resources provisioning times in network function virtualization architectures

    Network Function Virtualization is a new technology allowing for elastic cloud and bandwidth resource allocation. The technology requires an orchestrator whose role is service and resource orchestration. It receives service requests, each one characterized by a Service Function Chain (SFC), which is a set of service functions to be executed in a given order. It implements an algorithm for deciding both where to allocate the cloud and bandwidth resources and how to route the SFCs. In a traditional orchestration algorithm, the orchestrator has detailed knowledge of the cloud and network infrastructures, which can lead to high computational complexity of the SFC Routing and Cloud and Bandwidth resource Allocation (SRCBA) algorithm. In this paper, we propose and evaluate the effectiveness of a scalable orchestration architecture inherited from the one proposed within the European Telecommunications Standards Institute (ETSI) and based on the functional separation of an NFV orchestrator into a Resource Orchestrator (RO) and a Network Service Orchestrator (NSO). Each cloud domain is equipped with an RO whose task is to provide a simple and abstract representation of the cloud infrastructure. These representations are notified to the NSO, which can then apply a simplified and less complex SRCBA algorithm. In addition, we show how segment routing technology can help simplify SFC routing by means of effective addressing of the service functions. The scalable orchestration solution has been investigated and compared to a traditional orchestrator in several network scenarios with a varying number of cloud domains. We have verified that the execution time of the SRCBA algorithm can be drastically reduced without degrading the performance in terms of cloud and bandwidth resource costs.
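
    A hedged sketch of the addressing idea: if every service function instance is reachable through a segment identifier (SID), steering a request through an SFC reduces to attaching an ordered segment list to the traffic, and the NSO only needs the abstract per-domain catalogs exposed by the ROs. The catalog, SIDs, and function names below are invented for illustration; the SRCBA algorithm itself is not reproduced.

        # Translate an ordered SFC into a segment list using abstract per-domain
        # catalogs (hypothetical data: function name -> SRv6 SID).
        DOMAIN_CATALOG = {
            "domain-A": {"firewall": "fc00:a::1", "nat": "fc00:a::2"},
            "domain-B": {"dpi": "fc00:b::1", "video-opt": "fc00:b::2"},
        }

        def segment_list_for_sfc(sfc: list[str]) -> list[str]:
            """Build the ordered segment list that steers traffic through the SFC."""
            segments = []
            for fn in sfc:
                sid = next((sids[fn] for sids in DOMAIN_CATALOG.values() if fn in sids), None)
                if sid is None:
                    raise ValueError(f"no domain offers service function {fn!r}")
                segments.append(sid)
            return segments

        # Example: steer traffic firewall -> dpi -> video-opt across two domains.
        print(segment_list_for_sfc(["firewall", "dpi", "video-opt"]))
        # ['fc00:a::1', 'fc00:b::1', 'fc00:b::2']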

    Virtual Laboratories in Cloud Infrastructure of Educational Institutions

    Modern educational institutions make wide use of virtual laboratories and cloud technologies. In practice, they must deal with security, processing speed, and other challenges. The paper describes the experience of constructing an experimental stand for cloud computing and network management. Models and control principles are set forth herein. Comment: 3 pages, Published in: 2014 2nd International Conference on Emission Electronics (ICEE), Saint-Petersburg, Russia

    Lockdown: Dynamic Control-Flow Integrity

    Applications written in low-level languages without type or memory safety are especially prone to memory corruption. Attackers gain code execution capabilities through such applications, despite all currently deployed defenses, by exploiting memory corruption vulnerabilities. Control-Flow Integrity (CFI) is a promising defense mechanism that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, an approach to dynamic CFI that protects legacy, binary-only executables and libraries. Lockdown adaptively learns the control-flow graph of a running process using information from a trusted dynamic loader. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks. Our prototype implementation shows that dynamic CFI results in low performance overhead. Comment: ETH Technical Report
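
    A toy model (not Lockdown's implementation, which enforces these checks on binary code at run time) of the inter-object policy described above: a control-flow transfer that crosses from one shared object to another is allowed only if the target is a function exported by the destination object and imported by the source object. The tables here are invented; the real system learns them from the trusted dynamic loader.

        # Illustrative cross-object CFI check based on import/export tables
        # (hypothetical data; real enforcement happens on machine code).
        EXPORTS = {"libc.so": {"malloc", "free"}, "libcrypto.so": {"SHA256"}}
        IMPORTS = {"app": {"malloc", "free", "SHA256"}, "libcrypto.so": {"malloc"}}

        def cfi_check(src_obj: str, dst_obj: str, target_fn: str) -> bool:
            """Return True if the cross-object control-flow transfer is allowed."""
            if src_obj == dst_obj:
                return True  # intra-object transfers are checked against the learned CFG
            return target_fn in EXPORTS.get(dst_obj, set()) and \
                   target_fn in IMPORTS.get(src_obj, set())

        assert cfi_check("app", "libc.so", "malloc")       # legitimate imported call
        assert not cfi_check("app", "libc.so", "system")   # neither exported nor imported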