
    Re-designing Dynamic Content Delivery in the Light of a Virtualized Infrastructure

    We explore the opportunities and design options enabled by novel SDN and NFV technologies by re-designing a dynamic Content Delivery Network (CDN) service. Our system, named MOSTO, provides performance levels comparable to those of a regular CDN, but does not require the deployment of a large distributed infrastructure. In the process of designing the system, we identify relevant functions that could be integrated into the future Internet infrastructure. Such functions greatly simplify the design and effectiveness of services such as MOSTO. We demonstrate our system using a mixture of simulation, emulation, and testbed experiments, and by realizing a proof-of-concept deployment in a planet-wide commercial cloud system. Comment: Extended version of the paper accepted for publication in JSAC special issue on Emerging Technologies in Software-Driven Communication - November 201

    Public Key Encryption Supporting Plaintext Equality Test and User-Specified Authorization

    In this paper we investigate a category of public key encryption schemes that support plaintext equality test and user-specified authorization. With this new primitive, two users, who possess their own public/private key pairs, can issue token(s) to a proxy to authorize it to perform plaintext equality tests on their ciphertexts. We provide a formal formulation for this primitive and present a construction with provable security in our security model. To mitigate the risks posed by semi-trusted proxies, we enhance the proposed cryptosystem by integrating the concept of computational client puzzles. As a showcase, we construct a secure personal health record application based on this primitive.
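
    The workflow of this primitive is easiest to see as an API. Below is a minimal, deliberately insecure Python sketch using hypothetical names (KeyPair, encrypt, authorize, Proxy.test); it illustrates only the moving parts described in the abstract (independent key pairs, per-user authorization tokens, proxy-side equality testing), not the paper's provably secure construction.

```python
# Toy sketch of the PKEET-with-authorization workflow described above.
# NOT the paper's construction and NOT secure: the comparison tag is an
# unblinded hash of the message, so it leaks equality to anyone holding
# the ciphertexts. All names here are illustrative assumptions.
import hashlib
import os
import secrets

class KeyPair:
    """Stand-in for a user's public/private key pair."""
    def __init__(self):
        self.sk = secrets.token_bytes(32)               # private key (placeholder)
        self.pk = hashlib.sha256(self.sk).digest()      # public key (placeholder)

def encrypt(pk: bytes, message: bytes) -> dict:
    """Return a 'ciphertext': an opaque payload plus a comparison tag."""
    nonce = os.urandom(16)
    payload = hashlib.sha256(pk + nonce + message).digest()  # placeholder, not real encryption
    tag = hashlib.sha256(b"tag|" + message).digest()         # toy equality tag
    return {"payload": payload, "nonce": nonce, "tag": tag}

def authorize(sk: bytes) -> bytes:
    """Token a user hands to the proxy; here just a capability value."""
    return hashlib.sha256(b"token|" + sk).digest()

class Proxy:
    """Semi-trusted proxy: may test equality only once both users authorize."""
    def __init__(self):
        self.tokens = set()

    def grant(self, token: bytes):
        self.tokens.add(token)

    def test(self, ct_a: dict, ct_b: dict, tok_a: bytes, tok_b: bytes) -> bool:
        if tok_a not in self.tokens or tok_b not in self.tokens:
            raise PermissionError("both users must authorize the proxy")
        return ct_a["tag"] == ct_b["tag"]

# Usage: Alice and Bob encrypt under their own keys, then authorize the proxy.
alice, bob = KeyPair(), KeyPair()
proxy = Proxy()
ta, tb = authorize(alice.sk), authorize(bob.sk)
proxy.grant(ta)
proxy.grant(tb)
ca = encrypt(alice.pk, b"blood type: O+")
cb = encrypt(bob.pk, b"blood type: O+")
assert proxy.test(ca, cb, ta, tb)  # equal plaintexts -> True
```

    In the personal health record showcase the abstract mentions, the point of the tokens is exactly this gate: the proxy can compare records only for the pairs of users who have both opted in.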

    A Framework for Developing Real-Time OLAP algorithm using Multi-core processing and GPU: Heterogeneous Computing

    The ever-increasing amount of stored data has spurred researchers to seek methods that take optimal advantage of it, most of which face a response-time problem caused by the sheer volume of data. Most solutions propose materialization as the favoured approach; however, materialization alone cannot attain real-time answers. In this paper we propose a framework that identifies the barriers, and suggests solutions, on the way to achieving the real-time OLAP answers widely used in decision support systems and data warehouses.
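
    To make the materialization trade-off concrete, here is a minimal Python sketch (all names hypothetical, unrelated to the paper's framework) contrasting an on-the-fly aggregation with a pre-materialized aggregate that goes stale as new rows arrive; the paper's multi-core/GPU angle is not attempted here.

```python
# Minimal sketch of the materialization trade-off behind real-time OLAP.
# All names are illustrative; this is not the paper's framework.
from collections import defaultdict

rows = [  # (region, product, sales)
    ("EU", "phone", 120), ("EU", "laptop", 300),
    ("US", "phone", 200), ("US", "laptop", 450),
]

def query_on_the_fly(rows, region):
    """Always up to date, but scans every row on every query."""
    return sum(s for r, _, s in rows if r == region)

# Materialization: pre-aggregate once, then answer queries in O(1)...
cube = defaultdict(int)
for r, _, s in rows:
    cube[r] += s

print(query_on_the_fly(rows, "EU"))  # 420
print(cube["EU"])                    # 420, instant

# ...but new data makes the materialized view stale until it is refreshed,
# which is why materialization alone cannot deliver real-time answers.
rows.append(("EU", "tablet", 80))
print(query_on_the_fly(rows, "EU"))  # 500 (fresh)
print(cube["EU"])                    # 420 (stale)
```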

    Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist

    Apache Spark is a popular system aimed at the analysis of large data sets, but recent studies have shown that certain computations---in particular, many linear algebra computations that are the basis for solving common machine learning problems---are significantly slower in Spark than when done using libraries written in a high-performance computing framework such as the Message-Passing Interface (MPI). To remedy this, we introduce Alchemist, a system designed to call MPI-based libraries from Apache Spark. Using Alchemist with Spark helps accelerate linear algebra, machine learning, and related computations, while still retaining the benefits of working within the Spark environment. We discuss the motivation behind the development of Alchemist, and we provide a brief overview of its design and implementation. We also compare the performance of pure Spark implementations with that of Spark implementations that leverage MPI-based codes via Alchemist. To do so, we use two data science case studies: a large-scale application of the conjugate gradient method to solve very large linear systems arising in a speech classification problem, where we see an improvement of an order of magnitude; and the truncated singular value decomposition (SVD) of a 400GB three-dimensional ocean temperature data set, where we see a speedup of up to 7.9x. We also illustrate that the truncated SVD computation is easily scalable to terabyte-sized data by applying it to data sets of sizes up to 17.6TB. Comment: Accepted for publication in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 201
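
    For readers unfamiliar with the first case study's algorithm, here is a plain-NumPy sketch of the conjugate gradient iteration itself. It is not Alchemist's MPI-backed implementation or its Spark interface (the abstract does not specify either); at scale, the matrix-vector product inside the loop is the operation an MPI-based library would distribute.

```python
# Plain-NumPy sketch of the conjugate gradient method mentioned above.
# Illustrates the algorithm only, not Alchemist or its Spark bindings.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

# Usage on a small SPD system.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)   # symmetric positive definite
b = rng.standard_normal(100)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm, ~1e-8 or smaller
```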

    Digital Twins

    Progression of the field depends on the convergence of information technology, operational technology and protocol-agnostic telecommunications. Making sense of the data, and the ability to curate it and perform analytics at the edge (in the mist, rather than in the fog or cloud), is key to value. Delivering engines to the edge is crucial for analytics when latency is critical. The confluence of these and other factors may chart the future path for Digital Twins. The number of unknown unknowns and known unknowns in this process makes it imperative to create global infrastructures and organize groups to pursue the development of fundamental building blocks and new ideas through research.

    Multiple forms of digital transformation are imminent, and Digital Twins represent one such concept. It is gaining momentum because it may offer real-time transparency. Rapid diffusion of digital duplicates, however, faces hurdles due to the lack of semantic interoperability between architectures, standards and ontologies, and the technologies necessary for automated discovery are in short supply.