2,841 research outputs found
BPM, Agile, and Virtualization Combine to Create Effective Solutions
The rate of change in business and government is accelerating. A number of
techniques for addressing that change have emerged independently to provide for
automated solutions in this environment. This paper will examine three of the
most popular of these technologies (business process management, the agile
software development movement, and infrastructure virtualization) to expose the
commonalities in these approaches and show how, when used together, their
combined effect results in rapidly deployed, more successful solutions.
CVA6 RISC-V Virtualization: Architecture, Microarchitecture, and Design Space Exploration
Virtualization is a key technology used in a wide range of applications, from
cloud computing to embedded systems. Over the last few years, mainstream
computer architectures were extended with hardware virtualization support,
giving rise to a set of virtualization technologies (e.g., Intel VT, Arm VE)
that are now proliferating in modern processors and SoCs. In this article, we
describe our work on hardware virtualization support in the RISC-V CVA6 core.
Our contribution is multifold and encompasses architecture, microarchitecture,
and design space exploration. In particular, we highlight the design of a set
of microarchitectural enhancements (i.e., G-Stage Translation Lookaside Buffer
(GTLB), L2 TLB) to alleviate the virtualization performance overhead. We also
perform a Design Space Exploration (DSE) and accompanying post-layout
simulations (based on 22nm FDX technology) to assess Performance, Power, and
Area (PPA). Further, we map design variants on an FPGA platform (Genesys 2) to
assess the functional performance-area trade-off. Based on the DSE, we select
an optimal design point for the CVA6 with hardware virtualization support. For
this optimal hardware configuration, we collected functional performance
results by running the MiBench benchmark on Linux atop Bao hypervisor for a
single-core configuration. We observed a performance speedup of up to 16%
(approx. 12.5% on average) compared with a virtualization-aware but
non-optimized design, at the minimal cost of 0.78% in area and 0.33% in power.
Finally, all work described in this article is publicly available and
open-sourced for the community to further evaluate additional design
configurations and software stacks.
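The benefit of caching second-stage translations, as the GTLB described above does, can be illustrated with a toy software model. Everything below (page maps, block sizes, the FIFO-style eviction) is invented for illustration; the real GTLB is a hardware structure inside the CVA6 MMU, not a software cache.

```python
# Toy model of two-stage address translation with a G-stage TLB (GTLB):
# VS-stage maps guest-virtual to guest-physical pages, G-stage maps
# guest-physical to host-physical pages. The GTLB caches G-stage results
# so repeated accesses to the same guest-physical page skip the G-stage walk.

PAGE = 4096

class TwoStageMMU:
    def __init__(self, vs_map, g_map, gtlb_entries=8):
        self.vs_map = vs_map          # guest-virtual page -> guest-physical page
        self.g_map = g_map            # guest-physical page -> host-physical page
        self.gtlb = {}                # small cache of G-stage translations
        self.gtlb_entries = gtlb_entries
        self.g_walks = 0              # number of G-stage page-table walks

    def g_translate(self, gpage):
        if gpage in self.gtlb:        # GTLB hit: no G-stage walk needed
            return self.gtlb[gpage]
        self.g_walks += 1             # GTLB miss: walk the G-stage page table
        hpage = self.g_map[gpage]
        if len(self.gtlb) >= self.gtlb_entries:
            self.gtlb.pop(next(iter(self.gtlb)))  # naive eviction for the toy model
        self.gtlb[gpage] = hpage
        return hpage

    def translate(self, gva):
        gpage = self.vs_map[gva // PAGE]   # VS-stage translation
        hpage = self.g_translate(gpage)    # G-stage translation (cached)
        return hpage * PAGE + gva % PAGE

mmu = TwoStageMMU(vs_map={0: 5, 1: 6}, g_map={5: 40, 6: 41})
a = mmu.translate(0x10)   # first access to the page: one G-stage walk
b = mmu.translate(0x20)   # same page: GTLB hit, no additional walk
print(hex(a), hex(b), mmu.g_walks)
```

The point mirrored from the abstract is only that caching G-stage results removes redundant second-stage walks; the actual CVA6 enhancements operate at the microarchitectural level.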
TrustShadow: Secure Execution of Unmodified Applications with ARM TrustZone
The rapid evolution of Internet-of-Things (IoT) technologies has led to an
emerging need to make IoT devices smarter. A variety of applications now run
simultaneously on an ARM-based processor. For example, devices on the edge of
the Internet are provided with higher horsepower to be entrusted with storing,
processing and analyzing data collected from IoT devices. This significantly
improves efficiency and reduces the amount of data that needs to be transported
to the cloud for data processing, analysis and storage. However, commodity OSes
are prone to compromise. Once they are exploited, attackers can access the data
on these devices. Since the data stored and processed on these devices can be
sensitive, this threat, if left unaddressed, is particularly disconcerting.
In this paper, we propose a new system, TrustShadow, that shields legacy
applications from untrusted OSes. TrustShadow takes advantage of ARM TrustZone
technology and partitions resources into the secure and normal worlds. In the
secure world, TrustShadow constructs a trusted execution environment for
security-critical applications. This trusted environment is maintained by a
lightweight runtime system that coordinates the communication between
applications and the ordinary OS running in the normal world. The runtime
system does not provide system services itself. Rather, it forwards requests
for system services to the ordinary OS, and verifies the correctness of the
responses. To demonstrate the efficiency of this design, we prototyped
TrustShadow on a real chip board with ARM TrustZone support, and evaluated its
performance using both microbenchmarks and real-world applications. We showed
TrustShadow introduces only negligible overhead to real-world applications.
Comment: MobiSys 201
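The forward-and-verify pattern in this abstract, where a trusted runtime relays service requests to the untrusted OS and checks its replies, can be sketched as follows. The "untrusted OS", its file store, and the hash-based integrity check are all invented stand-ins; the real TrustShadow verification logic is far more involved.

```python
# Sketch of forward-and-verify: the trusted runtime does not implement file
# service itself. It forwards reads to the (untrusted) normal-world OS and
# verifies the response against known-good digests before handing data to
# the protected application.

import hashlib

class UntrustedOS:
    """Stand-in for the normal-world OS; it may return tampered data."""
    def __init__(self, files):
        self.files = files
    def read(self, path):
        return self.files[path]

class TrustedRuntime:
    """Stand-in for the secure-world runtime system."""
    def __init__(self, os_, integrity):   # integrity: path -> expected SHA-256
        self.os = os_
        self.integrity = integrity
    def read(self, path):
        data = self.os.read(path)         # forward request to the normal world
        digest = hashlib.sha256(data).hexdigest()
        if digest != self.integrity[path]:
            raise ValueError(f"integrity check failed for {path}")
        return data                       # only verified data reaches the app

secret = b"key material"
nw_os = UntrustedOS({"/keys/k1": secret})
rt = TrustedRuntime(nw_os, {"/keys/k1": hashlib.sha256(secret).hexdigest()})
print(rt.read("/keys/k1"))                # verified read succeeds

nw_os.files["/keys/k1"] = b"tampered"     # a compromised OS alters the data
try:
    rt.read("/keys/k1")
except ValueError as e:
    print("blocked:", e)                  # the runtime rejects the response
```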
An Analysis of Storage Virtualization
Investigating technologies and writing expansive documentation on their capabilities is like hitting a moving target. Technology is evolving, growing, and expanding what it can do each and every day. This makes it very difficult to snap a line and investigate competing technologies. Storage virtualization is one of those moving targets. Large corporations develop software and hardware solutions that try to one-up the competition by releasing firmware and patch updates to include their latest developments. Some of their latest innovations include differing RAID levels, virtualized storage, data compression, data deduplication, file deduplication, thin provisioning, new file system types, tiered storage, solid-state disks, and software updates that pair these technologies with their applicable hardware. Even data center environmental considerations, such as reusable energies, data center environmental characteristics, and geographic locations, are being used by companies both small and large to reduce operating costs and limit environmental impacts. Companies are even moving to an entirely cloud-based setup to limit their environmental impact, as it can be cost prohibitive to maintain one's own corporate infrastructure. The trifecta of integrating smart storage architectures that include storage virtualization technologies, reducing footprint to promote energy savings, and migrating to cloud-based services will ensure a long-term sustainable storage subsystem.
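Among the features this abstract lists, data deduplication is easy to illustrate: identical blocks are detected by content hash and stored once. The block size and store layout below are invented for the example and have no relation to any vendor's implementation.

```python
# Minimal sketch of content-addressed block deduplication: each block is
# keyed by its SHA-256 digest, so duplicate blocks across files consume
# storage only once, while files keep per-block references.

import hashlib

BLOCK = 4  # deliberately tiny block size so the dedup effect is visible

class DedupStore:
    def __init__(self):
        self.blocks = {}   # digest -> block bytes (each unique block stored once)
        self.files = {}    # file name -> ordered list of block digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            d = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(d, chunk)   # duplicate chunks are not re-stored
            digests.append(d)
        self.files[name] = digests

    def read(self, name):
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupStore()
store.write("a", b"AAAABBBBAAAA")   # the "AAAA" block repeats within the file
store.write("b", b"AAAACCCC")       # and is shared across files
print(store.read("a"), len(store.blocks))  # 5 logical blocks, 3 stored
```

File deduplication works the same way at whole-file granularity; thin provisioning and tiered storage are orthogonal techniques not shown here.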
File system metadata virtualization
The advance of computing systems has brought new ways to use and access stored data that push the architecture of traditional file systems to its limits, making them inadequate to handle the new needs. Current challenges affect both the performance of high-end computing systems and their usability from the applications' perspective. On one side, high-performance computing equipment is rapidly developing into large-scale aggregations of computing elements in the form of clusters, grids, or clouds. On the other side, there is a widening range of scientific and commercial applications that seek to exploit these new computing facilities. The requirements of such applications are also heterogeneous, leading to dissimilar patterns of use of the underlying file systems. Data centres have tried to compensate for this situation by providing several file systems to fulfil distinct requirements. Typically, the different file systems are mounted on different branches of a directory tree, and the preferred use of each branch is publicised to users. A similar approach is being used in personal computing devices. Typically, in a personal computer, there is a visible and clear distinction between the portion of the file system name space dedicated to local storage, the part corresponding to remote file systems and, recently, the areas linked to cloud services, for example, directories to keep data synchronized across devices, to be shared with other users, or to be remotely backed up. In practice, this approach compromises the usability of the file systems and the possibility of exploiting all their potential benefits. We consider that this burden can be alleviated by determining applicable features on a per-file basis, rather than associating them with a location in a static, rigid name space. Moreover, usability would be further increased by providing multiple dynamic name spaces that could be adapted to specific application needs.
This thesis contributes to this goal by proposing a mechanism to decouple the user view of the storage from its underlying structure. The mechanism consists of the virtualization of file system metadata (including both the name space and the object attributes) and the interposition of a sensible layer that decides where and how files should be stored in order to benefit from the underlying file system features, without incurring usability or performance penalties due to inadequate usage. This technique makes it possible to present multiple, simultaneous virtual views of the name space and the file system object attributes that can be adapted to specific application needs without altering the underlying storage configuration. The first contribution of the thesis introduces the design of a metadata virtualization framework that makes the above-mentioned decoupling possible; the second contribution consists of a method to improve file system performance in large-scale systems by using such a metadata virtualization framework; finally, the third contribution consists of a technique to improve the usability of cloud-based storage systems in personal computing devices.
Postprint (published version)
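The core idea of decoupling the name space from backing storage can be sketched as a metadata layer that maps each virtual path to a (backend, object) pair, so several virtual views can be adapted to different applications over the same underlying objects. The backends, paths, and dict-based stores below are invented purely for illustration.

```python
# Toy illustration of file system metadata virtualization: the user-visible
# name space is a per-view table over backing stores, so moving a file
# between views or backends changes only metadata, not the stored object.

class MetadataView:
    def __init__(self, backends):
        self.backends = backends   # backend name -> dict acting as a backing store
        self.table = {}            # virtual path -> (backend name, physical key)

    def bind(self, vpath, backend, key):
        """Attach a virtual path to a physical object (metadata-only change)."""
        self.table[vpath] = (backend, key)

    def read(self, vpath):
        backend, key = self.table[vpath]
        return self.backends[backend][key]

backends = {
    "fast_scratch": {"obj1": b"results"},
    "cloud": {"obj1": b"shared notes"},
}

# Two simultaneous virtual views over the same backends, each shaped
# to a different application's needs.
hpc_view = MetadataView(backends)
hpc_view.bind("/data/run1.out", "fast_scratch", "obj1")

personal_view = MetadataView(backends)
personal_view.bind("/home/notes.txt", "cloud", "obj1")

print(hpc_view.read("/data/run1.out"), personal_view.read("/home/notes.txt"))
```

The interposed layer in the thesis additionally decides placement and virtualizes object attributes; this sketch shows only the name-space indirection.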