894 research outputs found

    TechNews digests: Jan - Nov 2009

    Get PDF
    TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews digest service ran from September 2004 to May 2010, publishing combined analysis pieces and news items every two to three months.

    Intel TDX Demystified: A Top-Down Approach

    Full text link
    Intel Trust Domain Extensions (TDX) is a new architectural extension in the 4th Generation Intel Xeon Scalable Processor that supports confidential computing. TDX allows the deployment of virtual machines in the Secure-Arbitration Mode (SEAM) with encrypted CPU state and memory, integrity protection, and remote attestation. TDX aims to enforce hardware-assisted isolation for virtual machines and minimize the attack surface exposed to host platforms, which are considered untrustworthy or adversarial in confidential computing's threat model. TDX can be leveraged by regulated industries or sensitive data holders to outsource their computations and data with end-to-end protection in public cloud infrastructure. This paper aims to provide a comprehensive understanding of TDX to potential adopters, domain experts, and security researchers looking to leverage the technology for their own purposes. We adopt a top-down approach, starting with high-level security principles and moving to low-level technical details of TDX. Our analysis is based on publicly available documentation and source code, offering insights from security researchers outside of Intel.
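    To make the attestation-centric threat model more tangible, here is a minimal Python sketch of how a Linux guest could check whether it appears to be running as a TDX trust domain. The "tdx_guest" CPU flag and its /proc/cpuinfo location are assumptions based on upstream Linux TDX guest support, not details taken from this paper, and such a local check is only a hint: the real guarantee comes from TDX remote attestation.

```python
# Minimal sketch: detect whether this Linux guest appears to run as an
# Intel TDX trust domain. Assumes a recent kernel built with TDX guest
# support, which is expected to expose a "tdx_guest" CPU feature flag.
from pathlib import Path


def running_in_tdx_guest(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the kernel reports the tdx_guest CPU feature flag."""
    try:
        text = Path(cpuinfo_path).read_text()
    except OSError:
        return False
    for line in text.splitlines():
        if line.startswith("flags"):
            return "tdx_guest" in line.split()
    return False


if __name__ == "__main__":
    if running_in_tdx_guest():
        print("Guest reports TDX protection; remote attestation can confirm it.")
    else:
        print("No TDX guest flag found; treat the environment as unprotected.")
```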

    Security, Performance and Energy Trade-offs of Hardware-assisted Memory Protection Mechanisms

    Full text link
    The deployment of large-scale distributed systems, e.g., publish-subscribe platforms, that operate over sensitive data using the infrastructure of public cloud providers is nowadays heavily hindered by a growing lack of trust toward cloud operators. Although purely software-based solutions exist to protect the confidentiality of data and of the processing itself, such as homomorphic encryption schemes, their performance is far from practical under real-world workloads. This practical-experience report describes the performance trade-offs of two hardware-assisted memory protection mechanisms currently available on the market to tackle this problem, namely AMD SEV and Intel SGX. Specifically, we implement a publish/subscribe use-case and evaluate the impact of the memory protection mechanisms on the resulting performance. This paper reports on the experience gained while building this system, in particular when having to cope with the technical limitations imposed by SEV and SGX. Several trade-offs that provide valuable insights in terms of latency, throughput, processing time and energy requirements are exhibited by means of micro- and macro-benchmarks. Comment: European Commission Project: LEGaTO - Low Energy Toolset for Heterogeneous Computing (EC-H2020-780681).
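    To illustrate the kind of micro-benchmark the abstract refers to, the sketch below times a trivial in-process publish/subscribe loop and reports throughput and mean per-message latency. It is a hypothetical harness for illustration only and is not the authors' SGX/SEV setup; the idea is that running the same workload natively, inside an SGX enclave, and inside an SEV-encrypted VM would expose the relative overhead of each protection mechanism.

```python
# Hypothetical micro-benchmark sketch: throughput and latency of a minimal
# in-process publish/subscribe loop. Comparing a native run against runs
# under SGX or SEV protection would expose the mechanism's overhead.
import time
from collections import defaultdict


class MiniBroker:
    """A tiny topic-based broker: subscribers are plain callables."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)


def benchmark(n_messages: int = 100_000) -> None:
    broker = MiniBroker()
    received = []
    broker.subscribe("sensor", received.append)

    start = time.perf_counter()
    for i in range(n_messages):
        broker.publish("sensor", f"reading-{i}")
    elapsed = time.perf_counter() - start

    print(f"messages delivered : {len(received)}")
    print(f"throughput         : {n_messages / elapsed:,.0f} msg/s")
    print(f"mean latency       : {elapsed / n_messages * 1e6:.2f} us/msg")


if __name__ == "__main__":
    benchmark()
```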

    SoK: Confidential Quartet - Comparison of Platforms for Virtualization-Based Confidential Computing

    Get PDF
    Confidential computing allows processing sensitive workloads in securely isolated spaces. Following earlier adoption of process-based approaches to isolation, vendors are now enabling hardware and firmware support for virtualization-based confidential computing on several server platforms. Due to variations in the technology stack, threat model, implementation and functionality, the available solutions offer somewhat different capabilities, trade-offs and security guarantees. In this paper we review, compare and contextualize four virtualization-based confidential computing technologies for enterprise server platforms - AMD SEV, ARM CCA, IBM PEF and Intel TDX.

    Performance metrics for consolidated servers

    Get PDF
    In spite of the widespread adoption of virtualization and consolidation, there exists no consensus with respect to how to benchmark consolidated servers that run multiple guest VMs on the same physical hardware. For example, VMware proposes VMmark, which basically computes the geometric mean of normalized throughput values across the VMs; Intel uses vConsolidate, which reports a weighted arithmetic average of normalized throughput values. These benchmarking methodologies focus on total system throughput (i.e., across all VMs in the system), and do not take into account per-VM performance. We argue that a benchmarking methodology for consolidated servers should quantify both total system throughput and per-VM performance in order to provide a meaningful and precise performance characterization. We therefore present two performance metrics, Total Normalized Throughput (TNT) to characterize total system performance, and Average Normalized Reduced Throughput (ANRT) to characterize per-VM performance. We compare TNT and ANRT against VMmark using published performance numbers, and report several cases for which the VMmark score is misleading. That is, VMmark says one platform yields better performance than another, whereas TNT and ANRT show that the two platforms represent different trade-offs in total system throughput versus per-VM performance. Or, even worse, in a couple of cases we observe that VMmark yields opposite conclusions to TNT and ANRT, i.e., VMmark says one system performs better than another, which is contradicted by the TNT/ANRT performance characterization.
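    The abstract spells out only the existing aggregations (VMmark as a geometric mean of per-VM normalized throughputs, vConsolidate as a weighted arithmetic average), so the sketch below implements just those two and shows how a single averaged score can hide very different per-VM behavior, which is exactly the gap TNT and ANRT are meant to close. The paper's own TNT/ANRT formulas are not reproduced here because the abstract does not define them; the numbers are hypothetical.

```python
# Sketch of the aggregate scores described in the abstract, computed over
# per-VM normalized throughputs (each VM's consolidated throughput divided
# by its throughput when running in isolation).
from math import prod


def vmmark_style(norm_tputs):
    """Geometric mean of normalized throughputs (VMmark-like score)."""
    return prod(norm_tputs) ** (1 / len(norm_tputs))


def vconsolidate_style(norm_tputs, weights):
    """Weighted arithmetic average of normalized throughputs (vConsolidate-like)."""
    return sum(w * t for w, t in zip(weights, norm_tputs)) / sum(weights)


# Two hypothetical platforms with the same geometric-mean score, but very
# different per-VM behavior: A treats all VMs evenly, B starves two of them.
platform_a = [0.90, 0.90, 0.90, 0.90]
platform_b = [1.35, 1.35, 0.60, 0.60]
weights = [1, 1, 1, 1]

for name, tputs in (("A", platform_a), ("B", platform_b)):
    print(f"platform {name}: "
          f"geo-mean={vmmark_style(tputs):.3f}  "
          f"weighted-mean={vconsolidate_style(tputs, weights):.3f}  "
          f"worst VM={min(tputs):.2f}")
```

    Both hypothetical platforms score 0.90 on the geometric mean, yet B delivers more aggregate work while degrading two VMs to 60% of their isolated throughput, precisely the total-versus-per-VM trade-off the proposed TNT/ANRT pair is designed to expose.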

    Development of a virtualization systems architecture course for the information sciences and technologies department at the Rochester Institute of Technology (RIT)

    Get PDF
    Virtualization is a revolutionary technology that has changed the way computing is performed in data centers. By converting traditionally siloed computing assets to shared pools of resources, virtualization provides a considerable number of advantages, such as more efficient use of physical server resources, more efficient use of datacenter space, reduced energy consumption, simplified system administration, simplified backup and disaster recovery, and a host of other advantages. Because of these advantages, companies and organizations of various sizes have either migrated their workloads to virtualized environments or are considering virtualization of their workloads. As per the Gartner Magic Quadrant for x86 Server Virtualization Infrastructure 2013, roughly two-thirds of x86 server workloads are virtualized [1]. The need for virtualization solutions has increased the demand for qualified professionals who can plan, design, implement, and maintain virtualized infrastructure of different scales. Although universities are the main source of education for IT professionals, the field of information technology is so dynamic and changes so rapidly that not all universities can keep pace. As a result, reflecting the latest technology used in the information technology industry in university curricula is a significant advantage. Taking into consideration the trend toward virtualization in computing environments and the great demand for virtualization professionals in the industry, the faculty of the Information Sciences and Technologies department at RIT decided to prepare a graduate course in the master's program in Networking and System Administration entitled Virtualization Systems Architecture, which better prepares students to find a career in the field of enterprise computing. This research is composed of five chapters. It starts by briefly going through the history of computer virtualization, exploring when and why it came into existence and how it evolved. The second chapter goes through the challenges in virtualizing the x86 platform architecture and the solutions used to overcome them. The third chapter discusses the various types of hypervisors along with the advantages and disadvantages of each. The fourth chapter explores the architecture and features of the two leading virtualization solutions. Finally, the fifth chapter goes through the contents of the Virtualization Systems Architecture course.

    An innovative approach to performance metrics calculus in cloud computing environments: a guest-to-host oriented perspective

    Get PDF
    In virtualized systems, the task of profiling and resource monitoring is not straightforward. Many datacenters perform CPU overcommitment using hypervisors, running multiple virtual machines on a single computer where the total number of virtual CPUs exceeds the total number of physical CPUs available. From a customer's point of view, it is indeed interesting to know whether the purchased service levels are effectively respected by the cloud provider. The innovative approach to performance profiling described in this work is based on the use of virtual performance counters, only recently made available by some hypervisors to their virtual machines, to implement guest-wide profiling. Although the virtual machine cannot access the Virtual Machine Monitor, with this method it is able to gather enough information to deduce the state of resource overcommitment of the virtualization host where it is executed. Tests have been carried out inside the compute nodes of the FIWARE Genoa Node, an instance of a widely distributed federated community cloud based on OpenStack and KVM. AgiLab-DITEN, the laboratory I belonged to and where I conducted my studies, together with TnT-Lab–DITEN and CNIT-GE-Unit, designed, installed and configured the whole Genoa Node, which was hosted in DITEN-UniGE equipment rooms. All the software measuring instruments, operating systems and programs used in this research are publicly available and free, and can be easily installed in a micro instance of a virtual machine, rapidly deployable also in public clouds.
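    As a simplified illustration of the guest-to-host idea (not the paper's virtual-performance-counter method), the sketch below samples the guest-visible "steal" field of /proc/stat on a Linux KVM guest; a rising steal ratio indicates that the hypervisor is withholding physical CPU time from the virtual CPUs, a typical symptom of overcommitment.

```python
# Simplified sketch: estimate the CPU steal ratio from inside a Linux KVM
# guest by sampling /proc/stat. High steal suggests host overcommitment.
# This uses scheduler accounting rather than the virtual PMU counters the
# work above relies on; it only illustrates the guest-to-host perspective.
import time


def read_cpu_times(path: str = "/proc/stat"):
    """Return the aggregate 'cpu' line of /proc/stat as a list of integers."""
    with open(path) as f:
        for line in f:
            if line.startswith("cpu "):
                return [int(x) for x in line.split()[1:]]
    raise RuntimeError("no aggregate cpu line found in /proc/stat")


def steal_ratio(interval_s: float = 1.0) -> float:
    """Fraction of CPU time stolen by the hypervisor during the interval."""
    before = read_cpu_times()
    time.sleep(interval_s)
    after = read_cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    steal = deltas[7] if len(deltas) > 7 else 0  # field 8 of the cpu line is 'steal'
    return steal / total if total else 0.0


if __name__ == "__main__":
    ratio = steal_ratio()
    print(f"steal time over the last second: {ratio:.1%}")
    if ratio > 0.10:
        print("noticeable steal; the host's vCPUs may be overcommitted")
```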

    A Server Consolidation Solution

    Get PDF
    Advances in server architecture have enabled corporations to strategically redesign their data centers in order to realign the system infrastructure to business needs. The architectural approach of physically and logically consolidating servers into fewer and smaller hardware platforms can reduce data center overhead costs while adding quality of service. In order for the organization to take advantage of this architectural opportunity, a server consolidation project was proposed that utilized blade technology coupled with the virtualization of servers. Physical consolidation reduced the data center facility requirements, while server virtualization reduced the number of required hardware platforms. With the constant threat of outsourcing, coupled with the explosive growth of the organization, the IT managers were challenged to provide increased system services and functionality to a larger user community while maintaining the same head count. One means of reducing overhead costs associated with the in-house data center was to reduce the required facility and hardware resources. The reduction in the data center footprint required less real estate, electricity, fire suppression infrastructure, and HVAC utilities. In addition, since the numerous stand-alone servers were consolidated onto a standard platform, system administration became more agile in responding to business opportunities.