
    ElfStore: A Resilient Data Storage Service for Federated Edge and Fog Resources

    Edge and fog computing have grown popular as IoT deployments become widespread. While application composition and scheduling on such resources are being explored, there is a gap: no distributed data storage service exists at the edge and fog layers, and applications instead depend solely on the cloud for data persistence. Such a service should reliably store and manage data on fog and edge devices, even in the presence of failures, and offer transparent discovery and access to data for use by edge computing applications. Here, we present ElfStore, a first-of-its-kind edge-local federated store for streams of data blocks. It uses reliable fog devices as a super-peer overlay to monitor the edge resources, offers federated metadata indexing using Bloom filters, locates data within two hops, and maintains approximate global statistics about the reliability and storage capacity of edges. Edges host the actual data blocks, and we use a unique differential replication scheme to select the edges on which to replicate blocks, guaranteeing a minimum reliability and balancing storage utilization. Our experiments on two IoT virtual deployments with 20 and 272 devices show that ElfStore has low overheads, is bound only by the network bandwidth, has scalable performance, and offers tunable resilience. Comment: 24 pages, 14 figures, to appear in the IEEE International Conference on Web Services (ICWS), Milan, Italy, 201
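
    As a rough illustration of two ideas named in the abstract, the sketch below shows how a fog super-peer might index the blocks held by its child edges with per-edge Bloom filters and answer location queries from them. It is a minimal sketch with invented class and method names, not ElfStore's actual design.

    ```python
    import hashlib

    class BloomFilter:
        """Fixed-size Bloom filter over block identifiers."""

        def __init__(self, size_bits: int = 4096, num_hashes: int = 3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = 0  # an int used as a bit vector

        def _positions(self, key: str):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, key: str) -> None:
            for pos in self._positions(key):
                self.bits |= 1 << pos

        def might_contain(self, key: str) -> bool:
            # May return false positives, but never false negatives.
            return all(self.bits & (1 << pos) for pos in self._positions(key))

    class FogPeer:
        """A fog super-peer that indexes the blocks stored on its child edges."""

        def __init__(self):
            self.edge_filters = {}  # edge_id -> BloomFilter

        def register_block(self, edge_id: str, block_id: str) -> None:
            self.edge_filters.setdefault(edge_id, BloomFilter()).add(block_id)

        def locate(self, block_id: str) -> list:
            # Candidate edges to query; a second hop confirms actual presence.
            return [e for e, f in self.edge_filters.items() if f.might_contain(block_id)]

    fog = FogPeer()
    fog.register_block("edge-7", "stream42/block-003")
    print(fog.locate("stream42/block-003"))  # ['edge-7'] plus any false positives
    ```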

    Trusted data path protecting shared data in virtualized distributed systems

    When sharing data across multiple sites, service applications should not be trusted automatically. Services that are suspected of faulty, erroneous, or malicious behavior, or that run on systems that may be compromised, should not be able to gain access to protected data, nor be entrusted with the same data access rights as others. This thesis proposes a context flow model that controls the information flow in a distributed system. Each service application, along with its surrounding context in a distributed system, is treated as a controllable principal. This thesis defines a trust-based access control model that controls the information exchange between these principals. An online monitoring framework is used to evaluate the trustworthiness of the service applications and the underlying systems. An external communication interception runtime framework enforces trust-based access control transparently for the entire system. Ph.D. Committee Chair: Karsten Schwan; Committee Member: Douglas M. Blough; Committee Member: Greg Eisenhauer; Committee Member: Mustaque Ahamad; Committee Member: Wenke Le
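
    The sketch below illustrates the general shape of a trust-based access control check, where a monitoring framework assigns each principal a trust score and a data flow is allowed only if the receiver's score clears the threshold attached to the data's label. The names, score range, and policy format are assumptions made for illustration, not the model defined in the thesis.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Principal:
        """A service application together with its surrounding execution context."""
        name: str
        trust_score: float  # assumed to come from an online monitoring framework, in [0, 1]

    def may_flow(data_label: str, receiver: Principal, policy: dict) -> bool:
        """Allow a labelled data flow only if the receiver clears the label's trust threshold."""
        threshold = policy.get(data_label, 1.0)  # unknown labels default to most restrictive
        return receiver.trust_score >= threshold

    policy = {"public": 0.2, "protected": 0.8}
    plugin = Principal("untrusted-plugin", trust_score=0.4)
    print(may_flow("protected", plugin, policy))  # False: the flow is blocked
    ```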

    Efficient Data Collection in Multimedia Vehicular Sensing Platforms

    Vehicles provide an ideal platform for urban sensing applications, as they can be equipped with all kinds of sensing devices that continuously monitor the environment around the travelling vehicle. In this work, we are particularly concerned with the use of vehicles as building blocks of a multimedia mobile sensor system able to capture camera snapshots of the streets to support traffic monitoring and urban surveillance tasks. However, cameras are high data-rate sensors, while the wireless infrastructures used for vehicular communications may face performance constraints. Thus, data redundancy mitigation is of paramount importance in such systems. To address this issue, in this paper we exploit sub-modular optimisation techniques to design efficient and robust data collection schemes for multimedia vehicular sensor networks. We also explore an alternative approach for data collection that operates on longer time scales and relies only on localised decisions rather than centralised computations. We use network simulations with realistic vehicular mobility patterns to verify the performance gains of our proposed schemes compared to a baseline solution that ignores data redundancy. Simulation results show that our data collection techniques ensure more accurate coverage of the road network while significantly reducing the amount of transferred data.
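
    For readers unfamiliar with sub-modular optimisation, the sketch below shows the standard greedy heuristic for coverage-style objectives, the family of techniques the abstract refers to; because coverage is monotone submodular, greedy selection achieves a (1 - 1/e) approximation under a cardinality budget. The snapshot and road-segment names are illustrative, and the paper's actual formulation and constraints differ.

    ```python
    def greedy_select(snapshots, covers, budget):
        """Greedily pick up to `budget` snapshots that maximize covered road segments.

        `covers` maps a snapshot id to the set of road segments it captures.
        Coverage is monotone submodular, so greedy gives a (1 - 1/e) approximation.
        """
        chosen, covered = [], set()
        candidates = set(snapshots)
        for _ in range(budget):
            best, best_gain = None, 0
            for s in candidates:
                gain = len(covers[s] - covered)
                if gain > best_gain:
                    best, best_gain = s, gain
            if best is None:  # nothing left adds new coverage
                break
            chosen.append(best)
            covered |= covers[best]
            candidates.remove(best)
        return chosen, covered

    covers = {"v1_t0": {"seg1", "seg2"}, "v2_t0": {"seg2", "seg3"}, "v3_t0": {"seg2"}}
    print(greedy_select(list(covers), covers, budget=2))  # two snapshots cover all three segments
    ```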

    SoC-Cluster as an Edge Server: an Application-driven Measurement Study

    Huge electricity consumption is a severe issue for edge data centers. To this end, we propose a new form of edge server, namely SoC-Cluster, that orchestrates many low-power mobile system-on-chips (SoCs) through an on-chip network. For the first time, we have developed a concrete SoC-Cluster server that consists of 60 Qualcomm Snapdragon 865 SoCs in a 2U rack. Such a server has been commercialized successfully and deployed at large scale on edge clouds. The current dominant workload on those deployed SoC-Clusters is cloud gaming, as mobile SoCs can seamlessly run native mobile games. The primary goal of this work is to demystify whether SoC-Cluster can efficiently serve more general-purpose, edge-typical workloads. Therefore, we built a benchmark suite that leverages state-of-the-art libraries for two killer edge workloads, i.e., video transcoding and deep learning inference. The benchmark comprehensively reports the performance, power consumption, and other application-specific metrics. We then performed a thorough measurement study and directly compared SoC-Cluster with traditional edge servers (with Intel CPU and NVIDIA GPU) with respect to physical size, electricity, and billing. The results reveal the advantages of SoC-Cluster, especially its high energy efficiency and its ability to scale energy consumption proportionally with various incoming loads, as well as its limitations. The results also provide insightful implications and valuable guidance to further improve SoC-Cluster and land it in broader edge scenarios.
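
    The sketch below shows the two headline metrics such a comparison typically reduces to: throughput per watt and energy proportionality (how far power drops when load does). All numbers are placeholders invented for illustration, not measurements from the paper.

    ```python
    def perf_per_watt(throughput: float, power_watts: float) -> float:
        """Throughput delivered per watt of wall power (e.g. inferences/s per W)."""
        return throughput / power_watts

    # Hypothetical placeholder measurements, not results from the paper.
    servers = {
        "soc-cluster":    {"throughput": 900.0,  "idle_w": 120.0, "peak_w": 300.0},
        "cpu+gpu server": {"throughput": 1200.0, "idle_w": 250.0, "peak_w": 600.0},
    }
    for name, m in servers.items():
        efficiency = perf_per_watt(m["throughput"], m["peak_w"])
        # A lower idle/peak ratio means power tracks load more proportionally.
        proportionality = m["idle_w"] / m["peak_w"]
        print(f"{name}: {efficiency:.2f} units/s per W, idle/peak = {proportionality:.2f}")
    ```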

    System Abstractions for Scalable Application Development at the Edge

    Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and the remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling, and joint optimization. Typical approaches focus on performance optimization for individual applications. This not only requires domain knowledge of the applications but also leads to application-specific solutions. Application development and deployment over diverse scenarios thus incur repetitive manual effort. There are two overarching challenges in providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level. The execution settings may range from a small cluster acting as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environments. Further, application performance requirements vary significantly, making it even more difficult to map different applications to already heterogeneous hardware. Second, there are trends towards incorporating both edge and cloud and multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum from edge to cloud. Our framework provides key connections between different dimensions of design considerations, corresponding to the application abstraction, data abstraction, and resource management abstraction respectively. First, our framework masks hardware heterogeneity with abstract resource types through containerization, and abstracts away the application processing pipelines into generic flow graphs. It further supports a notion of degradable computing for application scenarios at the edge that are driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, with a userspace scheduler service that wraps over the operating system scheduler.
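
    As a rough illustration of the "generic flow graph" abstraction mentioned above, the sketch below wires named processing stages into a small DAG and pushes an item through it. The class, methods, and stage names are invented for the example and are not the thesis framework's API.

    ```python
    class FlowGraph:
        """A generic processing pipeline: named stages wired into a small DAG."""

        def __init__(self):
            self.stages = {}      # name -> callable
            self.downstream = {}  # name -> names of next stages

        def add_stage(self, name, fn, downstream=()):
            self.stages[name] = fn
            self.downstream[name] = list(downstream)
            return self

        def run(self, source, item):
            """Push one item through the graph starting at the `source` stage."""
            outputs, frontier = {}, [(source, item)]
            while frontier:
                name, value = frontier.pop()
                result = self.stages[name](value)
                outputs[name] = result
                frontier.extend((nxt, result) for nxt in self.downstream[name])
            return outputs

    g = (FlowGraph()
         .add_stage("decode", lambda frame: f"decoded({frame})", downstream=["detect"])
         .add_stage("detect", lambda x: f"objects({x})"))
    print(g.run("decode", "frame-001"))
    ```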

    Development of efficient energy storage and power management for autonomous aircraft structural health monitoring systems

    This thesis investigates the development of an efficient energy storage and power management system for aircraft structural health monitoring (ASHM) applications, powered by thermoelectric energy harvesting. For efficient power management, commercially available DC-DC converters are critically analysed. A novel switched energy storage system is proposed that can be used to extend the run-time of a typical wireless ASHM sensor. Experimental characterisation of a range of different batteries and supercapacitors has been performed, followed by a critical performance analysis of a simple parallel combination of a battery and a supercapacitor for run-time extension of a wireless sensor node (WSN). The problems associated with such a simple parallel hybrid system are identified, and a new switched architecture is presented that overcomes these disadvantages.
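
    The sketch below captures the intuition behind a switched battery/supercapacitor hybrid: serve high-current bursts (such as radio transmissions) from the supercapacitor and leave the quiescent load to the battery. The thresholds and the decision rule are illustrative assumptions, not the architecture developed in the thesis.

    ```python
    def choose_source(load_current_ma: float, supercap_voltage_v: float,
                      burst_threshold_ma: float = 50.0,
                      supercap_floor_v: float = 2.5) -> str:
        """Pick which store supplies the wireless sensor node at this instant.

        High, bursty loads (e.g. radio transmissions) are served by the
        supercapacitor, which tolerates large discharge currents; the battery
        covers the quiescent load and recharges the supercapacitor over time.
        """
        if load_current_ma >= burst_threshold_ma and supercap_voltage_v > supercap_floor_v:
            return "supercapacitor"
        return "battery"

    print(choose_source(load_current_ma=120.0, supercap_voltage_v=3.1))  # supercapacitor
    print(choose_source(load_current_ma=5.0, supercap_voltage_v=3.1))    # battery
    ```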

    Perpetual Sensing: Experiences with Energy-Harvesting Sensor Systems

    Industry forecasts project that the number of connected devices will outpace the global population by orders of magnitude in the next decade or two. These projections are application driven: smart cities, implantable health monitors, responsive buildings, autonomous robots, driverless cars, and instrumented infrastructure are all expected to drive the growth of networked devices. Achieving this immense scale (potentially trillions of smart and connected sensors and computers, popularly called the "Internet of Things") raises a host of challenges, including operating system design, networking protocols, and orchestration methodologies. However, another critical issue may be the most fundamental: if embedded computers outnumber people by a factor of a thousand, how are we going to keep all of these devices powered? In this dissertation, we show that energy-harvesting operation, by which devices scavenge energy from their surroundings to power themselves after they are deployed, is a viable answer to this question. In particular, we examine a range of energy-harvesting sensor node designs for a specific application: smart buildings. In this application setting, the devices must be small and sleek to be unobtrusively and widely deployed, yet shrinking the devices also reduces their energy budgets, as energy storage often dominates their volume. Additionally, energy harvesting introduces new challenges for these devices due to the intermittent access to power that stems from relying on unpredictable ambient energy sources. To address these challenges, we present several techniques for realizing effective sensors despite the size and energy constraints. The first is Monjolo, an energy metering system that exploits, rather than attempts to mask, the variability in energy harvesting by using the energy harvester itself as the sensor. Building on Monjolo, we show how simple time synchronization and an application-specific sensor can enable accurate, building-scale submetering while still operating solely on harvested energy. We also show how energy harvesting can be the foundation for highly deployable power metering, as well as indoor monitoring and event detection. With these sensors as a guide, we present an architecture for energy-harvesting systems that provides layered abstractions and enables modular component reuse. We also couple these sensors with a generic and reusable gateway platform and an application-layer cloud service to form an easy-to-deploy building sensing toolkit, and demonstrate its effectiveness by performing and analyzing several modest-scale deployments. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/138686/1/bradjc_1.pd
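
    The sketch below illustrates the harvester-as-sensor idea described above: if the node wakes and transmits each time its harvesting front end accumulates a fixed quantum of energy, the wake-up rate is itself a proxy for the harvested (and hence primary) power. The constants and the conversion function are invented for illustration, not Monjolo's calibration.

    ```python
    def estimate_power_watts(wakeups: int, window_s: float,
                             energy_per_wakeup_j: float,
                             coupling_efficiency: float) -> float:
        """Estimate primary-side power from energy-harvesting wake-up counts.

        Each wake-up means the harvesting front end accumulated roughly
        `energy_per_wakeup_j` joules, so the wake-up rate tracks input power.
        """
        harvested_w = wakeups * energy_per_wakeup_j / window_s
        return harvested_w / coupling_efficiency

    # Hypothetical values: 30 wake-ups in 10 s, ~5 mJ per activation, 2% coupling.
    print(round(estimate_power_watts(30, 10.0, 0.005, 0.02), 2), "W on the metered load")
    ```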

    A Scalable Edge-Centric System Design for Camera Networks to aid Situation Awareness Applications

    The ubiquity of cameras in our environment, coupled with advances in computer vision and machine learning, has enabled several novel applications combining sensing, processing, and actuation. Often referred to as situation awareness applications, they span a variety of domains, including safety (e.g., surveillance), retail (e.g., drone delivery), and transportation (e.g., assisted/autonomous driving). A perfect storm of technology enablers has come together, making it a ripe time to realize a smart camera system at the edge of the network to aid situation awareness applications. There are two types of smart camera systems: live processing at ingestion time and post-mortem video analysis. Live processing offers a more timely response when the queries are known ahead of time, while post-mortem analysis fits exploratory analysis where the queries (or the parameters of queries) are not known in advance. Various situation awareness applications can benefit from either type of smart camera system, or even both. There is prior art, but it consists mostly of standalone techniques for facilitating camera processing. For example, efficient live camera processing frameworks feature the partitioning of video analysis tasks and the placement of these tasks across the edge and the cloud. Databases for building efficient query processing systems on archived videos feature modern techniques (e.g., filters) for accelerating video analytics. This dissertation research investigates both types of smart camera systems (i.e., live processing at ingestion time and post-mortem exploratory video analysis) for various situation awareness applications. Specifically, this dissertation seeks to fill the void left by prior art by asking these questions: 1. What are the necessary system components for a geo-distributed camera system, and how best to architect them for scalability? 2. Given the limited resource capacity of the edge, how best to orchestrate the resources for live camera processing at video ingestion time? 3. How best to leverage traditional database management optimization techniques for post-mortem video analysis? To aid various situation awareness applications, this dissertation proposes a “Scalable-by-Design” approach to designing edge-centric systems for camera networks, efficient resource orchestration for live camera processing at ingestion time, and a post-mortem video engine featuring reuse for exploratory video analytics in a scalable edge-centric system for camera networks. Ph.D.
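
    The sketch below illustrates the filter-before-detector pattern that post-mortem video query engines of this kind rely on, with a memoized detector standing in for cross-query reuse of expensive results. The filter, the detector, and the frame identifiers are stand-ins invented for the example, not the dissertation's components.

    ```python
    from functools import lru_cache

    def cheap_filter(frame_id: int) -> bool:
        """Cheap per-frame test (e.g. a motion/difference score) to skip empty frames."""
        return frame_id % 3 == 0  # stand-in: pretend every third frame has motion

    @lru_cache(maxsize=None)
    def expensive_detector(frame_id: int) -> tuple:
        """Stand-in for a heavy DNN detector; cached so repeated queries reuse results."""
        return (("car", 0.9),) if frame_id % 6 == 0 else ()

    def query(frames):
        """Run the cascade: cheap filter first, heavy detector only on survivors."""
        hits = {}
        for f in frames:
            if cheap_filter(f):
                detections = expensive_detector(f)
                if detections:
                    hits[f] = detections
        return hits

    print(query(range(12)))  # detections only for frames 0 and 6
    ```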

    The theory and implementation of a secure system

    Computer viruses pose a very real threat to this technological age. As our dependence on computers increases, so does the incidence of computer virus infection. As with their biological counterparts, complete eradication is virtually impossible; thus all computer viruses that have been released into the public domain still exist. This, coupled with the fact that new viruses are discovered every day, is resulting in a massive escalation of computer virus incidence. Computer viruses covertly enter the system and systematically take control, corrupt, and destroy. New viruses appear each day that circumvent current means of detection, entering the most secure of systems. Anti-Virus software writers find themselves fighting a battle they cannot win: for every hole that is plugged, another leak appears. Presented in this thesis are both a method and an apparatus for an Anti-Virus System that provides a solution to this serious problem. It prevents the corruption or destruction of data within a computer system by a computer virus or other hostile program. The Anti-Virus System explained in this thesis will guarantee system integrity and virus containment for any given system. Unlike other anti-virus techniques, security can be guaranteed, as at no point can a virus circumvent or corrupt the action of the Anti-Virus System presented. It requires no hardware modification of the computer or the hard disk, nor software modification of the computer's operating system. Whilst being largely transparent to the user, the System guarantees total protection against the spread of current and future viruses.
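
    The sketch below shows one common building block for guaranteeing data integrity against tampering: record a cryptographic digest of each protected file and flag any file whose current contents no longer match. It is a generic illustration of integrity checking, not the method or apparatus presented in the thesis.

    ```python
    import hashlib
    from pathlib import Path

    def fingerprint(path: Path) -> str:
        """SHA-256 digest of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def build_baseline(paths) -> dict:
        """Record known-good digests for a set of protected files."""
        return {str(p): fingerprint(p) for p in paths}

    def verify(baseline: dict) -> list:
        """Return the files whose current contents no longer match the baseline."""
        return [p for p, digest in baseline.items() if fingerprint(Path(p)) != digest]

    # Usage sketch: baseline = build_baseline(Path(".").glob("*.py")); tampered = verify(baseline)
    ```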