
    Poster: Enabling Flexible Edge-assisted XR

    Extended reality (XR) is touted as the next frontier of the digital future. XR encompasses the immersive technologies of augmented reality (AR), virtual reality (VR), and mixed reality (MR). XR applications obtain the user's real-world context from an underlying system and provide rich, immersive, and interactive virtual experiences based on that context in real time. XR systems process streams of data from device sensors and provide functionalities, including perception and graphics, required by the applications. These processing steps are computationally intensive, and the challenge is that they must be performed within the strict latency requirements of XR. This limits the XR experiences that can be supported on mobile devices with limited computing resources. In this context, edge computing is an effective approach to address this problem for mobile users. The edge is located closer to end users and enables processing and storing data near them. In addition, the development of high-bandwidth, low-latency network technologies such as 5G facilitates the application of edge computing to latency-critical use cases [4, 11]. This work presents an XR system for enabling flexible edge-assisted XR. Comment: extended abstract of 2 pages, 1 figure, 2 tables

    FleXR: A System Enabling Flexibly Distributed Extended Reality

    Extended reality (XR) applications require computationally demanding functionalities with low end-to-end latency and high throughput. To enable XR on commodity devices, a number of distributed-systems solutions offload XR workloads to remote servers. However, they make a priori decisions about which functionalities to offload based on assumptions about operating factors, so their benefits are restricted to specific deployment contexts. To realize the benefits of offloading in varied distributed environments, we present FleXR, a distributed stream processing system specialized for real-time, interactive workloads that enables flexible distribution of XR functionalities. In building FleXR, we identified and resolved several issues in expressing XR functionalities as distributed pipelines. FleXR provides a framework for flexibly distributing XR pipelines while streamlining the development and deployment phases. We evaluate FleXR with three XR use cases in four different distribution scenarios. In the best-case distribution scenario, FleXR shows up to 50% lower end-to-end latency and 3.9x higher pipeline throughput compared to alternatives. Comment: 11 pages, 11 figures, conference paper
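    A distributed XR pipeline of the kind FleXR targets can be pictured as a chain of stages connected by queues, where any stage boundary is a potential device/server split point. The sketch below is a minimal, hypothetical illustration of that idea in Python; the stage functions and queue wiring are assumptions for illustration, not FleXR's actual API:

```python
import queue
import threading

SENTINEL = None  # marks end-of-stream

def run_stage(fn, inq, outq):
    """Pull frames from inq, apply fn, push results to outq."""
    def loop():
        while True:
            item = inq.get()
            if item is SENTINEL:
                outq.put(SENTINEL)
                return
            outq.put(fn(item))
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

def build_pipeline(stages):
    """Connect stage functions with queues; return (input, output) queues.
    In a distributed deployment, any queue here could be replaced by a
    network channel, moving the downstream stages to an edge server."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    for fn, inq, outq in zip(stages, qs, qs[1:]):
        run_stage(fn, inq, outq)
    return qs[0], qs[-1]

# Toy three-stage pipeline: decode -> perceive -> render.
inq, outq = build_pipeline([
    lambda frame: frame * 2,   # stand-in for sensor decoding
    lambda x: x + 1,           # stand-in for perception
    lambda x: f"render({x})",  # stand-in for rendering
])
for frame in (1, 2, 3):
    inq.put(frame)
inq.put(SENTINEL)

results = []
while (out := outq.get()) is not SENTINEL:
    results.append(out)
```

    Because every stage communicates only through its queues, repartitioning the pipeline across machines changes the transport, not the stage code, which is the flexibility the abstract describes.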

    TGh: A TEE/GC Hybrid Enabling Confidential FaaS Platforms

    Trusted Execution Environments (TEEs) suffer from performance issues when executing certain management instructions, such as creating an enclave, context switching in and out of protected mode, and swapping cached pages. This is especially problematic for short-running, interactive functions in Function-as-a-Service (FaaS) platforms, where existing techniques to address enclave overheads are insufficient. We find that FaaS functions can spend more time managing the enclave than executing application instructions. In this work, we propose a TEE/GC hybrid (TGh) protocol to enable confidential FaaS platforms. TGh moves computation out of the enclave onto the untrusted host using garbled circuits (GC), a cryptographic construction for secure function evaluation. Our approach retains the security guarantees of enclaves while avoiding the performance issues associated with enclave management instructions.
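    As a rough illustration of the garbled-circuit building block TGh relies on, the toy sketch below garbles a single AND gate: the garbler encrypts each output-wire label under a pair of input-wire labels, and an evaluator holding exactly one label per input wire can decrypt exactly one row, learning the output label and nothing else. This is a didactic sketch (no oblivious transfer, no point-and-permute optimization), not TGh's protocol:

```python
import hashlib
import os
import random

LABEL = 16  # bytes per wire label

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(la, lb):
    # 32-byte one-time pad derived from the two input labels
    return hashlib.sha256(la + lb).digest()

def garble_and():
    """Return (garbled table, wire labels) for out = a AND b."""
    wires = {w: (os.urandom(LABEL), os.urandom(LABEL))
             for w in ("a", "b", "out")}  # index 0 encodes bit 0, index 1 bit 1
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out_label = wires["out"][bit_a & bit_b]
            # Encrypt the output label (plus an all-zero tag so the
            # evaluator can recognize the one row it can decrypt).
            row = xor(out_label + b"\x00" * 16,
                      pad(wires["a"][bit_a], wires["b"][bit_b]))
            table.append(row)
    random.shuffle(table)  # hide which row corresponds to which inputs
    return table, wires

def evaluate(table, la, lb):
    """Given one label per input wire, recover only the matching output label."""
    for row in table:
        pt = xor(row, pad(la, lb))
        if pt[LABEL:] == b"\x00" * 16:
            return pt[:LABEL]
    raise ValueError("no decryptable row")

table, wires = garble_and()
out = evaluate(table, wires["a"][1], wires["b"][1])
assert out == wires["out"][1]  # 1 AND 1 == 1
```

    The point of the construction for TGh is that evaluation touches no enclave state, so the untrusted host can run it without incurring enclave management costs.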

    Poster: Making Edge-assisted LiDAR Perceptions Robust to Lossy Point Cloud Compression

    Real-time light detection and ranging (LiDAR) perceptions, e.g., 3D object detection and simultaneous localization and mapping, are computationally intensive for mobile devices with limited resources and are often offloaded to the edge. Offloading LiDAR perceptions requires compressing the raw sensor data, and lossy compression is used to reduce the data volume efficiently. Lossy compression degrades the quality of LiDAR point clouds, and perception performance decreases as a consequence. In this work, we present an interpolation algorithm that improves the quality of a LiDAR point cloud to mitigate the perception performance loss due to lossy compression. The algorithm targets the range image (RI) representation of a point cloud and interpolates points in the RI based on depth gradients. Compared to existing image interpolation algorithms, our algorithm shows better qualitative results when the point cloud is reconstructed from the interpolated RI. Along with the preliminary results, we also describe the next steps of this work. Comment: extended abstract of 2 pages, 2 figures, 1 table
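    To sketch the core idea in hypothetical code (not the paper's implementation): when two adjacent range-image pixels have a small depth gradient they likely lie on the same surface, so a midpoint depth can be interpolated between them; across a large gradient (an object boundary) nothing is inserted, which is exactly where naive image interpolation produces spurious points.

```python
def interpolate_ri_row(depths, grad_thresh=0.5):
    """Upsample one row of a range image, inserting a midpoint depth
    only where the depth gradient suggests a continuous surface."""
    out = [depths[0]]
    for prev, cur in zip(depths, depths[1:]):
        if abs(cur - prev) < grad_thresh:
            out.append((prev + cur) / 2.0)  # same surface: interpolate
        # large gradient: likely an object boundary, insert nothing
        out.append(cur)
    return out

# Two points on a wall (~1 m) followed by a far background point (5 m):
row = interpolate_ri_row([1.0, 1.1, 5.0])
# row == [1.0, 1.05, 1.1, 5.0] -- a point is interpolated within the
# wall, but no spurious point bridges the wall and the background
```

    The threshold and the 1D midpoint rule are illustrative assumptions; the paper's algorithm operates on the full 2D RI.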

    Interactive Use of Cloud Services: Amazon SQS and S3

    Interactive use of cloud services is of keen interest to science end users, including for storing and accessing shared data sets. This paper evaluates the viability of interactively using two important cloud services offered by Amazon: SQS (Simple Queue Service) and S3 (Simple Storage Service). Specifically, we first measure the send-to-receive message latencies of SQS and then devise rate controls to obtain suitable latencies and latency variations. Second, for S3, when transferring data into the cloud, we determine that increased parallelism in TransferManager can significantly improve upload performance, achieving up to 4x improvement with careful elimination of upload bottlenecks.
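    The S3 result rests on a general pattern: split an object into parts and upload the parts concurrently, as TransferManager does internally. A hypothetical Python sketch of that pattern follows; the `put_part` callable is an assumption standing in for a real per-part upload call (e.g., an S3 multipart UploadPart request):

```python
from concurrent.futures import ThreadPoolExecutor

def split_parts(data, part_size):
    """Split a byte string into fixed-size parts (the last may be short)."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def parallel_upload(data, part_size, workers, put_part):
    """Upload parts concurrently; results are returned in part order.
    Raising `workers` is the parallelism knob behind the reported
    speedup, until some other bottleneck (bandwidth, CPU) dominates."""
    parts = split_parts(data, part_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(put_part, parts))

# Stand-in for a network call: just report each part's size.
sizes = parallel_upload(b"x" * 10, part_size=4, workers=3, put_part=len)
# sizes == [4, 4, 2]
```

    `ThreadPoolExecutor.map` preserves input order, which matters for multipart uploads since parts must be reassembled by part number on completion.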

    Cellule: Lightweight Execution Environment for Accelerator-based Systems

    The increasing prevalence of accelerators is changing the high performance computing (HPC) landscape to one in which future platforms will consist of heterogeneous multi-core chips composed of both general-purpose and specialized cores. Coupled with this trend is increased support for virtualization, which can abstract the underlying hardware to aid in dynamically managing its use by HPC applications while at the same time providing lightweight, efficient, and specialized execution environments (SEEs) for applications to maximally exploit the hardware. This paper describes the Cellule architecture, which uses virtualization to create high-performance, low-noise SEEs for accelerators. The paper describes important properties of Cellule and illustrates its advantages with an implementation on the IBM Cell processor. With compute-intensive workloads, performance improvements of up to 60% are attained when using Cellule's SEE vs. the current Linux-based runtime, resulting in a system architecture that is suitable for future accelerators and specialized cores irrespective of whether they are on-chip or off-chip. A key principle, coordinated resource management for accelerator and general-purpose resources, is shown to extend beyond Cell, using experimental results obtained on a different accelerator platform.