
    Freecursive ORAM: [Nearly] Free Recursion and Integrity Verification for Position-based Oblivious RAM

    Oblivious RAM (ORAM) is a cryptographic primitive that hides memory access patterns as seen by untrusted storage. Recently, ORAM has been architected into secure processors. A big challenge for hardware ORAM schemes is how to efficiently manage the Position Map (PosMap), a central component in modern ORAM algorithms. Implemented naively, the PosMap causes ORAM to be fundamentally unscalable in terms of on-chip area. On the other hand, a technique called Recursive ORAM fixes the area problem yet significantly increases ORAM's performance overhead. To address this challenge, we propose three new mechanisms. We propose a new ORAM structure called the PosMap Lookaside Buffer (PLB) and PosMap compression techniques to reduce the performance overhead from Recursive ORAM empirically (the latter also improves the construction asymptotically). Through simulation, we show that these techniques reduce the memory bandwidth overhead needed to support recursion by 95%, reduce overall ORAM bandwidth by 37%, and improve overall SPEC benchmark performance by 1.27x. We then show how our PosMap compression techniques further facilitate an extremely efficient integrity verification scheme for ORAM, which we call PosMap MAC (PMMAC). For a practical parameterization, PMMAC reduces the amount of hashing needed for integrity checking by >= 68x relative to prior schemes and introduces only 7% performance overhead. We prototype our mechanisms in hardware and report area and clock frequency for a complete ORAM design post-synthesis and post-layout using an ASIC flow in a 32 nm commercial process. With 2 DRAM channels, the design post-layout runs at 1 GHz and has a total area of 0.47 mm2. Depending on PLB-specific parameters, the PLB accounts for 10% to 26% of total area. PMMAC costs 12% of total design area. Our work is the first to prototype Recursive ORAM or ORAM with any integrity scheme in hardware. (Funding: Qatar Computing Research Institute (QCRI-CSAIL Partnership); National Science Foundation (U.S.); American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship.)
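
    To make the role of the recursion and the PLB concrete, the following is a minimal Python sketch of a recursive position-map lookup with a small lookaside cache. All names, the toy addressing scheme, and the eviction policy are illustrative assumptions for exposition, not the paper's actual design; the point is only that a PLB hit skips the extra ORAM accesses that recursion would otherwise require.

    import random

    BLOCKS_PER_POSMAP_BLOCK = 4  # toy fan-out; real designs pack many entries per block

    class ToyRecursivePosMap:
        """Toy model of Recursive ORAM position lookups with a PLB-style cache.

        Each level i maps a block address to a leaf label; the mapping blocks of
        level i are themselves stored obliviously and located via level i+1.
        """

        def __init__(self, num_levels, plb_capacity):
            self.num_levels = num_levels
            self.levels = [dict() for _ in range(num_levels)]  # level -> {posmap block: labels}
            self.plb = {}           # (level, posmap_block_addr) -> [leaf labels]
            self.plb_capacity = plb_capacity
            self.oram_accesses = 0  # number of (expensive) ORAM tree accesses

        def _fetch_posmap_block(self, level, block_addr):
            """Fetch a PosMap block, consulting the PLB first."""
            key = (level, block_addr)
            if key in self.plb:
                return self.plb[key]            # PLB hit: no extra ORAM access
            self.oram_accesses += 1             # PLB miss: pay one ORAM access
            block = self.levels[level].setdefault(
                block_addr,
                [random.randrange(1 << 16) for _ in range(BLOCKS_PER_POSMAP_BLOCK)])
            if len(self.plb) >= self.plb_capacity:
                self.plb.pop(next(iter(self.plb)))  # trivial eviction policy
            self.plb[key] = block
            return block

        def lookup(self, addr):
            """Resolve the leaf label of a data block address."""
            leaf = None
            # Walk from the innermost PosMap level down to the data block's entry.
            for level in range(self.num_levels - 1, -1, -1):
                block_addr = addr // (BLOCKS_PER_POSMAP_BLOCK ** (level + 1))
                entry = addr // (BLOCKS_PER_POSMAP_BLOCK ** level) % BLOCKS_PER_POSMAP_BLOCK
                leaf = self._fetch_posmap_block(level, block_addr)[entry]
            return leaf

    posmap = ToyRecursivePosMap(num_levels=3, plb_capacity=64)
    for a in [5, 6, 5, 7, 5]:          # locality means repeated PosMap blocks
        posmap.lookup(a)
    print("ORAM accesses spent on recursion:", posmap.oram_accesses)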

    Foundations and Technological Landscape of Cloud Computing

    The cloud computing paradigm has brought the benefits of utility computing to a global scale. It has gained paramount attention in recent years. Companies are seriously considering adopting this new paradigm and expect to receive significant benefits. In fact, the concept of cloud computing is not a revolution in terms of technology; it has been established on the solid ground of virtualization, distributed systems, and web services. To comprehend cloud computing, its foundations and technological landscape need to be adequately understood. This paper provides a comprehensive review of the building blocks of cloud computing and relevant technological aspects. It focuses on four key areas: architecture, virtualization, data management, and security issues.

    Systems Support for Trusted Execution Environments

    Cloud computing has become a default choice for data processing by both large corporations and individuals due to its economy of scale and ease of system management. However, the question of trust and trustworthy computing inside cloud environments has long been neglected in practice and is further exacerbated by the proliferation of AI and its use for processing of sensitive user data. Attempts to implement the mechanisms for trustworthy computing in the cloud had previously remained theoretical due to a lack of hardware primitives in commodity CPUs, while a combination of Secure Boot, TPMs, and virtualization has seen only limited adoption. The situation changed in 2016, when Intel introduced the Software Guard Extensions (SGX) and its enclaves to x86 CPUs: for the first time, it became possible to build trustworthy applications relying on a commonly available technology. However, Intel SGX posed challenges to the practitioners who discovered the limitations of this technology, from the limited support of legacy applications and the integration of SGX enclaves into existing systems, to the performance bottlenecks on communication, startup, and memory utilization. In this thesis, our goal is to enable trustworthy computing in the cloud by relying on the imperfect SGX primitives. To this end, we develop and evaluate solutions to issues stemming from the limited systems support of Intel SGX: we investigate the mechanisms for runtime support of POSIX applications with SCONE, an efficient SGX runtime library developed with the performance limitations of SGX in mind. We further develop this topic with FFQ, a concurrent queue for SCONE's asynchronous system call interface. ShieldBox is our study of the interplay of kernel bypass and trusted execution technologies for NFV, which also tackles the problem of low-latency clocks inside the enclave. The last two systems, Clemmys and T-Lease, are built on the more recent SGXv2 ISA extension. In Clemmys, SGXv2 allows us to significantly reduce the startup time of SGX-enabled functions inside a Function-as-a-Service platform. Finally, in T-Lease we solve the problem of trusted time by introducing a trusted lease primitive for distributed systems. We evaluate all of these systems and show that they can be practically utilized in existing systems with minimal overhead, and can be combined with both legacy systems and other SGX-based solutions. In the course of the thesis, we enable trusted computing for individual applications, high-performance network functions, and a distributed computing framework, making the vision of trusted cloud computing a reality.
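
    The asynchronous system-call design behind SCONE and FFQ can be illustrated with a minimal sketch. The Python below is a toy model with invented names (queue objects stand in for the shared-memory queues a real SGX runtime places in untrusted memory): the "enclave" side enqueues a request instead of exiting the enclave, and an untrusted worker thread executes the call and posts the result.

    import os
    import queue
    import threading

    # Toy stand-in for the shared-memory request/response queues that an SGX
    # runtime would place in untrusted memory; names here are illustrative only.
    requests = queue.Queue()
    responses = {}
    done = threading.Event()

    def untrusted_syscall_worker():
        """Runs outside the 'enclave': drains requests and issues real syscalls."""
        while not done.is_set() or not requests.empty():
            try:
                req_id, func, args = requests.get(timeout=0.1)
            except queue.Empty:
                continue
            responses[req_id] = func(*args)   # e.g. os.write on the host side

    def enclave_write(req_id, fd, data):
        """'In-enclave' side: enqueue the call instead of exiting the enclave."""
        requests.put((req_id, os.write, (fd, data)))
        # The enclave thread could continue with other work here, then poll later.
        while req_id not in responses:
            pass                              # busy-wait stands in for a futex/poll
        return responses.pop(req_id)

    worker = threading.Thread(target=untrusted_syscall_worker, daemon=True)
    worker.start()
    r, w = os.pipe()
    written = enclave_write(req_id=1, fd=w, data=b"hello from the enclave\n")
    print("bytes written asynchronously:", written)
    done.set()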

    A flexible fine-grained adaptive framework for parallel mobile hybrid cloud applications

    Mobile devices have become ubiquitous and provide ever richer content and functionality. At the same time, applications are becoming more complex and require an ever-increasing amount of computational power and energy. With cloud computing providing unlimited, elastic, on-demand resources, supporting mobile devices with the cloud allows overcoming the limitations of mobile devices. This is generally known as Mobile Cloud Computing (MCC) and can be achieved through code offloading, which selects computationally or data-intensive parts of an application, outsources them to more-resourceful spaces, and brings back the final results. While code offloading has been widely studied in the past within the context of distributed systems and grid computing, applying it to current mobile applications requires a significant amount of manual changes to existing application code. An alternative is to outsource the entire application process, or the whole virtual machine in which the application is running. This solution assumes that running the same code on a more-resourceful system is more efficient, but it is coarse-grained and requires a significant amount of data to be transferred. Furthermore, requirements and expectations for mobile applications vary considerably across users, who use a wide range of mobile devices in various environmental conditions. This diversity in requirements and expectations creates a wide range of target offloading goals, ranging from maximizing application performance to minimizing mobile energy consumption. The increased dynamicity and complexity of mobile cloud applications requires open systems that interact with the environment while addressing application-specific constraints, user expectations, and hardware limitations. Our goal is to facilitate mobile cloud application development by masking all the complexity of mobile-to-cloud code offloading without requiring application developers to rewrite their code or perform additional manual work. Our focus is on separating the application logic, to be developed by programmers, from the application component configuration and distribution, to be adjusted transparently and dynamically at run-time. Our framework is fine-grained, supporting mobile application configuration and distribution at the granularity of individual components; it is flexible, allowing organizations, application developers, or end-users to easily adjust the target offloading goal or define policy-driven restrictions on offloading budget, execution quality, privacy, and movement of components without modifying existing application code; and it is adaptive, addressing the dynamicity in run-time conditions and end-user contexts. It further supports component distribution in a hybrid cloud environment consisting of multiple public and private cloud spaces. Finally, it provides a new code offloading model that supports fully parallel program execution, where application components located at the mobile device and multiple cloud spaces are executed independently but concurrently.

    The proposed solution can be divided into three main parts. First, a light-weight monitoring system, called Monitor, captures dynamic environmental parameters and end-user context, profiles application resource usage and communications, and monitors the availability and performance of cloud resources. Profiling energy consumption of specific application components is of primary importance and requires the design and development of a fine-grained automatic energy consumption model, as most mobile devices do not provide any tool for direct measurement of consumed energy, and different applications with an arbitrary number of components might be running at any time. Second, we design and implement two independent performance-based and energy-based models to enable transparent automatic configuration and distribution of application code and data components that address specific organization, application, and end-user requirements. These models leverage dynamic information from the Monitor on run-time parameters, energy and resource usage of different components, and application characteristics to optimize application performance or mobile energy consumption with respect to a predefined policy. Finally, we design and develop a proof-of-concept framework called IMCM, Illinois Mobile Cloud Management, that embodies the described components to enable fine-grained adaptive application component configuration and distribution, while providing flexibility in terms of adjusting the desired optimization goal or defining additional policy-driven constraints on offloading budget, quality of service per resource, and privacy. Evaluations are carried out using a suite of benchmark applications, including computationally-intensive, I/O-intensive, communication-intensive, and combined multi-purpose applications. Compared to sequential execution on a mobile device, these empirical benchmarks using the IMCM framework result in speedups or energy-savings factors of over 50 times.
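
    The kind of per-component decision such performance-based and energy-based models make can be sketched as follows. The Python below uses a deliberately simplified linear cost model with invented parameter names; it is not IMCM's actual policy, only an illustration of comparing local execution against offloading under the currently selected optimization goal.

    from dataclasses import dataclass

    @dataclass
    class ComponentProfile:
        local_cpu_seconds: float     # measured by the monitor on the device
        cloud_cpu_seconds: float     # estimated on the (faster) cloud instance
        transfer_bytes: int          # state/data that must cross the network

    @dataclass
    class Environment:
        uplink_bytes_per_sec: float
        active_power_watts: float    # device power while computing locally
        radio_power_watts: float     # device power while transmitting

    def should_offload(profile, env, goal="energy"):
        """Return True if offloading is cheaper under the chosen goal.

        Deliberately simplified linear model; a real framework would also account
        for policy constraints (privacy, budget) and cloud availability.
        """
        transfer_seconds = profile.transfer_bytes / env.uplink_bytes_per_sec
        if goal == "performance":
            local_cost = profile.local_cpu_seconds
            remote_cost = transfer_seconds + profile.cloud_cpu_seconds
        else:  # "energy": only energy spent on the mobile device matters
            local_cost = profile.local_cpu_seconds * env.active_power_watts
            remote_cost = transfer_seconds * env.radio_power_watts
        return remote_cost < local_cost

    env = Environment(uplink_bytes_per_sec=1e6, active_power_watts=2.0,
                      radio_power_watts=1.2)
    heavy = ComponentProfile(local_cpu_seconds=8.0, cloud_cpu_seconds=0.5,
                             transfer_bytes=2_000_000)
    print("offload heavy component:", should_offload(heavy, env, goal="energy"))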

    Design and evaluation of information flow signature for secure computation of applications

    This thesis presents an architectural solution that provides secure and reliable execution of an application that computes critical data, in spite of potential hardware and software vulnerabilities. The technique does not require source code of, or specifications about, the malicious library function(s) called during execution of an application. The solution is based on the concept of Information Flow Signatures (IFS). The technique uses both a model-checker-based symbolic fault injection analysis tool called SymPLFIED, to generate an IFS for an application or operating system, and runtime signature checking at the level of hardware, to protect the integrity of critical data. The runtime checking is implemented in the IFS module. Reliable computation of data is ensured by the critical value re-computation (CVR) module. A prototype implementation of the signature checking and reliability module on a soft processor within an FPGA incurs no performance overhead and about 12% chip area overhead. The security module itself incurs about 7.5% chip area overhead. Performance evaluations indicate that the IFS module incurs as little as 3-4% overhead, compared to 88-100% overhead when the runtime checking is implemented as part of software. Preliminary testing indicates that the technique can provide 100% coverage for insider attacks that manifest as memory corruption and change the architectural state of the processor. Hence the IFS and CVR implementation offers a flexible, low-overhead, high-coverage method for ensuring reliable and secure computing.
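
    The runtime check can be illustrated with a small sketch. The Python below uses hypothetical names and a string-valued producer identifier; in the actual design the signature is generated by SymPLFIED and the check runs in the hardware IFS module, not in software. The signature lists which program locations are allowed to define each critical variable, and any write from an unlisted producer is rejected.

    # Toy model of Information Flow Signature checking: the signature enumerates
    # the program locations (producers) allowed to define each critical variable.
    # Names and the string-based "program counter" are illustrative only.

    class IFSViolation(Exception):
        pass

    class IFSChecker:
        def __init__(self, signatures):
            # critical variable name -> set of allowed producer locations
            self.signatures = signatures
            self.values = {}

        def store(self, producer, variable, value):
            """Model a store instruction; reject writes from unlisted producers."""
            allowed = self.signatures.get(variable)
            if allowed is not None and producer not in allowed:
                raise IFSViolation(
                    f"{producer} is not allowed to define critical variable {variable}")
            self.values[variable] = value

    # Signature extracted offline (e.g. by static/symbolic analysis of the program).
    checker = IFSChecker({"authenticated": {"check_password:ret"}})

    checker.store("check_password:ret", "authenticated", True)   # legitimate flow
    try:
        # A compromised library function tries to overwrite the critical flag.
        checker.store("evil_libc_hook", "authenticated", True)
    except IFSViolation as err:
        print("blocked:", err)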

    Data Oblivious ISA Extensions for Side Channel-Resistant and High Performance Computing

    Blocking microarchitectural (digital) side channels is one of the most pressing challenges in hardware security today. Recently, there has been a surge of effort that attempts to block these leakages by writing programs data obliviously. In this model, programs are written to avoid placing sensitive data-dependent pressure on shared resources. Despite recent efforts, however, running data oblivious programs on modern machines today is insecure and low performance. First, writing programs obliviously assumes certain instructions in today's ISAs will not leak privacy, whereas today's ISAs and hardware provide no such guarantees. Second, writing programs to avoid data-dependent behavior inherently incurs high performance overhead. This paper tackles both the security and performance aspects of this problem by proposing a Data Oblivious ISA extension (OISA). On the security side, we present ISA design principles to block microarchitectural side channels, and embody these ideas in a concrete ISA capable of safely executing existing data oblivious programs. On the performance side, we design the OISA with support for efficient memory oblivious computation, and with safety features that allow modern hardware optimizations, e.g., out-of-order speculative execution, to remain enabled in the common case. We provide a complete hardware prototype of our ideas, built on top of the RISC-V out-of-order, speculative BOOM processor, and prove that the OISA can provide the advertised security through a formal analysis of an abstract BOOM-style machine. We evaluate the area overhead of the hardware mechanisms needed to support our prototype, and provide performance experiments showing how the OISA speeds up a variety of existing data oblivious codes (including "constant time" cryptography and memory oblivious data structures), in addition to improving their security and portability.
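
    What writing code data obliviously means in practice is easiest to see in a small sketch. The Python below is for exposition only (the paper's point is precisely that such guarantees must ultimately come from the ISA and hardware, which a high-level language cannot provide): secret values never select a branch or a memory address, so a table lookup is rewritten as a masked scan over every entry.

    def oblivious_select(pred_bit, a, b):
        """Branch-free select: returns a if pred_bit == 1 else b.

        Works on Python ints standing in for machine words; the point is that
        no control flow depends on pred_bit.
        """
        mask = -pred_bit            # 1 -> all-ones, 0 -> all-zeros
        return (a & mask) | (b & ~mask)

    def oblivious_table_lookup(table, secret_index):
        """Touch every entry so the accessed addresses do not depend on the index."""
        result = 0
        for i, entry in enumerate(table):
            hit = 1 if i == secret_index else 0   # stands in for a constant-time compare
            result = oblivious_select(hit, entry, result)
        return result

    sbox = [0x63, 0x7c, 0x77, 0x7b]
    print(hex(oblivious_table_lookup(sbox, secret_index=2)))   # 0x77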

    Cross-VM network attacks & their countermeasures within cloud computing environments

    Cloud computing is a contemporary model in which computing resources are dynamically scaled up and down for customers, hosted within large-scale multi-tenant systems. These resources are delivered to customers in an improved, cost-effective manner and are available upon request. As one of the main trends of the IT industry in modern times, cloud computing has gained momentum and started to transform the way enterprises build and offer IT solutions. The primary motivation for using the cloud computing model is cost-effectiveness. This motivation can compel Information and Communication Technologies (ICT) organizations to shift their sensitive data and critical infrastructure to cloud environments. Because of the complex nature of the underlying cloud infrastructure, cloud environments face a large number of challenges, such as misconfigurations, cyber-attacks, rootkits, and malware instances, which manifest themselves as serious threats. These threats noticeably reduce the general trustworthiness, reliability, and accessibility of the cloud. Security is the primary concern of a cloud service model. However, a number of significant challenges have revealed that cloud environments are not as secure as one would expect. There is also a limited understanding regarding the offering of secure services in a cloud model that can counter such challenges. This underlines the importance of understanding what constitutes a threat in the cloud model. One of the main threats in a cloud model stems from cost-effectiveness: cloud providers normally reduce cost by sharing infrastructure between multiple untrusted VMs. This sharing has also led to several problems, including co-location attacks. Cloud providers mitigate co-location attacks by introducing the concept of isolation. Due to this, a guest VM cannot interfere with its host machine or with other guest VMs running on the same system. Such isolation is one of the prime foundations of cloud security for major public providers. However, such logical boundaries are not impenetrable. A myriad of previous studies have demonstrated how co-resident VMs could be vulnerable to attacks through shared file systems, cache side channels, or through compromise of the hypervisor layer using rootkits. Thus, cross-VM attacks remain possible because an attacker can use one VM to control or access other VMs on the same hypervisor. Hence, multiple methods have been devised for strategic VM placement in order to exploit co-residency. Despite the clear potential of co-location attacks for abusing shared memory and disk, fine-grained cross-VM network-channel attacks have not yet been demonstrated. Current network-based attacks exploit existing vulnerabilities in networking technologies, such as ARP spoofing and DNS poisoning, which are difficult to use for VM-targeted attacks. The most commonly discussed network-based challenges focus on the fact that cloud providers place more layers of isolation between co-resident VMs than in non-virtualized settings, because the attacker and victim are often assigned to separate segments of virtual networks. However, it has been demonstrated that this is not necessarily sufficient to prevent manipulation of a victim VM’s traffic. This thesis presents a comprehensive method and empirical analysis of the advancement of co-location attacks, in which a malicious VM can negatively affect the security and privacy of other co-located VMs as it breaches the security perimeter of the cloud model.

    In such a scenario, it is imperative for a cloud provider to be able to appropriately secure access to the data such that it reaches the appropriate destination. The primary contribution of the work presented in this thesis is to introduce two innovative attack models in leading cloud models, impersonation and privilege escalation, that successfully breach the security perimeter of cloud models, and to propose countermeasures that block such attacks. The attack model revealed in this thesis is a combination of impersonation and mirroring. This experimental setting exploits the network channel of the cloud model and successfully redirects the network traffic of other co-located VMs. The main contribution of this attack model is to find a gap in the contemporary cloud network architecture that an attacker can exploit. Prior research has also exploited the network channel using ARP poisoning and spoofing, but all such attack schemes have been countered, as modern cloud providers place more layers of security features than in preceding settings. Impersonation relies on already existing regular network devices in order to mislead the security perimeter of the cloud model. The other contribution of this thesis is a ‘privilege escalation’ attack in which a non-root user can escalate their privilege level by using a RoP technique on the network channel and take control of the management domain, through which the attacker can control other co-located VMs that they are not authorized to access. Finally, a countermeasure solution is proposed that directly modifies the open-source code of the cloud model and can inhibit all such attacks.
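
    One family of countermeasures against such traffic-redirection attacks is to pin IP-to-MAC bindings at the virtualization layer rather than trusting ARP updates from guests. The Python below is a toy illustration with made-up addresses, not the thesis's actual modification of the cloud platform's source code: it simply flags ARP replies that disagree with the bindings the management layer already knows.

    # Toy ARP-spoofing detector: the cloud management layer knows which MAC was
    # assigned to each tenant VM's IP, so any ARP reply advertising a different
    # MAC for that IP is treated as a potential impersonation/mirroring attempt.
    KNOWN_BINDINGS = {
        "10.0.0.5": "fa:16:3e:aa:bb:01",   # victim VM (illustrative values)
        "10.0.0.6": "fa:16:3e:aa:bb:02",
    }

    def check_arp_reply(sender_ip, sender_mac, log=print):
        """Return True if the reply is consistent with the pinned binding."""
        expected = KNOWN_BINDINGS.get(sender_ip)
        if expected is None:
            log(f"unknown IP {sender_ip}: reply ignored")
            return False
        if sender_mac.lower() != expected:
            log(f"ALERT: {sender_ip} claimed by {sender_mac}, expected {expected}")
            return False
        return True

    check_arp_reply("10.0.0.5", "fa:16:3e:aa:bb:01")   # legitimate reply
    check_arp_reply("10.0.0.5", "fa:16:3e:de:ad:99")   # spoofed reply -> alert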

    Execution Environments for Running Legacy Applications in Multi-Party Trust Settings

    Applications often assume that the same party owns all of the application’s resources, and that these resources require the same level of privacy. This assumption no longer holds when organizations outsource applications to a third-party cloud, or when the application requires access to not only public content, but private configuration, such as authentication and keying material. The result of this broken assumption is that applications either must be re-written to accommodate each new security posture, or used as-is, accepting that one party exposes private data to another. In this dissertation, I argue the following thesis: it is possible to run legacy application binaries with confidentiality and integrity guarantees that reflect a multi-party trust setting. I support this thesis through the design, implementation, and evaluation of two distinct application-level virtualization layers that handle trust concerns on behalf of the application: conclaves and SecureMigration. Conclaves assume the availability of Intel SGX secure hardware enclaves and extend prior work in developing runtimes that execute legacy applications within an enclave. In contrast, SecureMigration does not use secure hardware, but rather composes information flow control with process migration to execute a process across multiple physical machines owned and operated by distinct principals, while shielding each principal’s sensitive portion of the process from its peers
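
    The information-flow-control half of SecureMigration can be illustrated with a minimal label model. The Python below implements only a standard confidentiality rule (data may flow to a machine only if that machine is cleared for the data's secrecy label), with invented labels and hosts; it is a sketch of the idea, not SecureMigration's concrete mechanism.

    from enum import IntEnum

    class Secrecy(IntEnum):
        PUBLIC = 0
        TENANT_PRIVATE = 1    # e.g. keying material owned by one principal

    # Which machine (principal) is cleared to observe which labels; illustrative.
    CLEARANCE = {
        "cloud-host": Secrecy.PUBLIC,
        "tenant-host": Secrecy.TENANT_PRIVATE,
    }

    def may_flow(data_label, destination):
        """Allow a flow only if the destination is cleared for the data's label."""
        return CLEARANCE[destination] >= data_label

    def migrate_slice(data_label, destination):
        if not may_flow(data_label, destination):
            raise PermissionError(
                f"label {data_label.name} may not flow to {destination}")
        return f"migrated {data_label.name} slice to {destination}"

    print(migrate_slice(Secrecy.PUBLIC, "cloud-host"))           # allowed
    print(migrate_slice(Secrecy.TENANT_PRIVATE, "tenant-host"))  # allowed
    try:
        migrate_slice(Secrecy.TENANT_PRIVATE, "cloud-host")      # blocked
    except PermissionError as err:
        print("blocked:", err)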

    The implementation of Robotic Process Automation (RPA) technology and its impact on efficiencies of organizational business processes

    I was tasked with setting up and managing a new Robotic Process Automation (RPA) and Intelligent Automation team that would implement Robotic Process Automation in the Credit Cards business unit of my organization, one of the major banks in the United States. The goal was to use RPA and the infusion of Artificial Intelligence to automate the multitude of operations processes that were currently worked manually by employees, to reduce cost, eliminate or minimize risks, and improve the effectiveness and efficiency of the processes and their impact on organizational performance. Robotic Process Automation is a technology that uses software to build robots that emulate human activity in interacting with digital systems and computer applications. To help me better understand how RPA can impact the efficiency of processes, I decided to focus my research on the research question: Has the implementation of RPA and AI improved efficiencies in the business processes in my organization?

    Research Approach: Upon completing a literature review and gaining a good understanding of existing knowledge on my research topic, I conducted qualitative research and utilized a Participatory Action Research (PAR) approach with participants within the organization who were native to the process and case study. I collaborated with the participants to address the research question; numerous iterations resulted in the findings of determinants and measurements that impacted the efficiency of the process in the case study and research.

    Findings: The research found numerous determinants and measurements that conclusively demonstrated that Robotic Process Automation and the infusion of Artificial Intelligence improved organizational processes. Several determinants and measurements found in the case study were consistent with the literature, including FTE (full-time effort), cost reduction, faster processing time, risk reduction, and improved quality of the process. New determinants and measurements were also found in the study, such as bot maintenance and bot availability and fixes.

    Implications: As the determinants and measurements showed that RPA improved the efficiency of processes, there were implications for the organization in that RPA brought a change enabled by RPA and AI information technology (Brannick & Coghlan, 2007), which changes the way it operates its organizational processes, including companywide RPA expansion and bots and humans collaborating, which is also a change to organizational culture. There were also implications for organizational practice from the knowledge gained from the Participatory Action Research going forward. The PAR also impacted my professional practice in that I gained knowledge and experience that positively changed my mindset as a scholar-practitioner in my daily decision making and practice.