
    Towards High Precision End-to-End Video Streaming from Drones using Packet Trimming

    The emergence of a number of network communication facilities, such as Network Function Virtualization (NFV), Software Defined Networking (SDN), the Internet of Things (IoT), Unmanned Aerial Vehicles (UAV), and in-network packet processing, holds the potential to meet the low-latency, high-precision requirements of various future multimedia applications. However, this raises the question of how all of these elements can be used together in future networking environments, including newly developed protocols and techniques. This paper describes the architecture of an end-to-end video streaming platform for video surveillance, consisting of a UAV network domain, an edge server that performs in-network packet trimming using the Big Packet Protocol (BPP) on Scalable Video Coding (SVC) streams, and multiple video clients that connect to a network managed by an SDN controller. A Virtualized Edge Function at the drone edge applies SVC and communicates with the Drone Control Unit to manage the transmitted video quality. Experimental results show that future multimedia applications can achieve the required high precision with the use of future network components and the consideration of their interactions.
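    To make the trimming idea concrete, the sketch below drops SVC enhancement-layer packets at an edge node when the measured link budget cannot carry them, while always forwarding the base layer. This is a minimal illustration under assumed names (SvcPacket, trim, the per-layer rate table), not the paper's actual BPP implementation.

```python
# Minimal sketch of SVC-aware packet trimming at an edge node.
# All names (SvcPacket, trim, layer_kbps) are hypothetical; this is
# not the paper's BPP implementation.
from dataclasses import dataclass

@dataclass
class SvcPacket:
    layer: int      # 0 = base layer, 1+ = enhancement layers
    payload: bytes

def trim(packets, available_kbps, layer_kbps):
    """Keep the base layer plus as many enhancement layers as the
    link budget allows; trim (drop) the remaining packets in-network."""
    budget = available_kbps
    kept = set()
    for layer in sorted(layer_kbps):            # base layer first
        if layer == 0 or layer_kbps[layer] <= budget:
            budget -= layer_kbps[layer]         # base layer is always kept
            kept.add(layer)
        else:
            break                               # higher layers depend on lower ones
    return [p for p in packets if p.layer in kept]

# Example: a 900 kbps budget with layers costing 500/300/300 kbps
# keeps layers 0 and 1 and trims layer 2.
stream = [SvcPacket(0, b"I"), SvcPacket(1, b"E1"), SvcPacket(2, b"E2")]
print([p.layer for p in trim(stream, 900, {0: 500, 1: 300, 2: 300})])  # [0, 1]
```

    Because each SVC enhancement layer depends on the layers below it, the loop stops at the first layer that no longer fits rather than skipping ahead.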

    MadEye: Boosting Live Video Analytics Accuracy with Adaptive Camera Configurations

    Camera orientations (i.e., rotation and zoom) govern the content that a camera captures in a given scene, which in turn heavily influences the accuracy of live video analytics pipelines. However, existing analytics approaches leave this crucial adaptation knob untouched, instead opting only to alter the way that captured images from fixed orientations are encoded, streamed, and analyzed. We present MadEye, a camera-server system that automatically and continually adapts orientations to maximize accuracy for the workload and resource constraints at hand. To realize this using commodity pan-tilt-zoom (PTZ) cameras, MadEye embeds (1) a search algorithm that rapidly explores the massive space of orientations to identify a fruitful subset at each point in time, and (2) a novel knowledge distillation strategy to efficiently (with only camera resources) select the ones that maximize workload accuracy. Experiments on diverse workloads show that MadEye boosts accuracy by 2.9-25.7% for the same resource usage, or achieves the same accuracy with 2-3.7x lower resource costs.
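    As a rough illustration of the exploration step, the sketch below ranks a coarse grid of pan-tilt-zoom orientations with a cheap proxy score, standing in for the distilled on-camera model, and keeps the top-k for full processing. The Orientation tuple, the candidate grid, and the toy scoring function are assumptions for illustration, not MadEye's actual search or API.

```python
# Hedged sketch of the exploration idea: rank candidate pan-tilt-zoom
# orientations with a cheap proxy score and keep the top-k. The
# Orientation tuple, grid, and toy score are illustrative assumptions,
# not MadEye's actual search or distilled model.
import itertools
from typing import Callable, NamedTuple

class Orientation(NamedTuple):
    pan: int    # degrees
    tilt: int   # degrees
    zoom: int   # zoom level

def candidate_orientations():
    # Coarse grid over the PTZ space; a real system would prune this.
    return [Orientation(p, t, z)
            for p, t, z in itertools.product(range(0, 360, 45),
                                             range(-30, 31, 30),
                                             (1, 2))]

def select_top_k(score: Callable[[Orientation], float], k: int = 3):
    """Keep the k orientations the proxy model scores highest
    for the current workload."""
    return sorted(candidate_orientations(), key=score, reverse=True)[:k]

# Toy proxy: pretend accuracy peaks at pan=90, tilt=0, zoom=1.
toy_score = lambda o: -abs(o.pan - 90) - abs(o.tilt) + 5 * (o.zoom == 1)
print(select_top_k(toy_score))  # Orientation(pan=90, tilt=0, zoom=1) first
```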

    On realistic target coverage by autonomous drones

    Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. To achieve the full potential of such drones, it is necessary to develop new, enhanced formulations of both common and emerging sensing scenarios. In particular, several fundamental challenges in visual sensing are yet to be solved, including (1) fitting sizable targets in camera frames; (2) positioning cameras at effective viewpoints matching target poses; and (3) accounting for occlusion by elements in the environment, including other targets. In this article, we introduce Argus, an autonomous system that utilizes drones to collect target information incrementally through a two-tier architecture. To tackle the stated challenges, Argus employs a novel geometric model that captures both target shapes and coverage constraints. Recognizing drones as the scarcest resource, Argus aims to minimize the number of drones required to cover a set of targets. We prove this problem is NP-hard, and even hard to approximate, before deriving a best-possible approximation algorithm along with a competitive sampling heuristic that runs up to 100× faster according to large-scale simulations. To test Argus in action, we demonstrate and analyze its performance on a prototype implementation. Finally, we present a number of extensions to accommodate more application requirements and highlight some open problems.
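    Viewed abstractly, minimizing the number of drones is a set-cover-style problem, which is consistent with the NP-hardness result. The sketch below applies the classic greedy set-cover approximation to made-up viewpoint coverage sets; it illustrates the problem shape only and is not Argus's geometric model or its approximation algorithm.

```python
# Illustrative sketch: drone-target coverage cast as set cover.
# Greedy set cover gives the classic logarithmic approximation; the
# viewpoint data is made up and this is not Argus's geometric model.
def min_viewpoints(targets, viewpoints):
    """targets: set of target ids; viewpoints: dict mapping a
    candidate drone viewpoint to the set of targets it covers.
    Returns a small list of viewpoints covering all targets."""
    uncovered, chosen = set(targets), []
    while uncovered:
        # Pick the viewpoint covering the most still-uncovered targets.
        best = max(viewpoints, key=lambda v: len(viewpoints[v] & uncovered))
        if not viewpoints[best] & uncovered:
            raise ValueError("some targets cannot be covered")
        chosen.append(best)
        uncovered -= viewpoints[best]
    return chosen

vps = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4, 5}}
print(min_viewpoints({1, 2, 3, 4, 5}, vps))  # ['C', 'A']
```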

    Motion tracking problems in Internet of Things (IoT) and wireless networking

    The dissertation focuses on inferring various motion patterns of Internet of Things (IoT) devices by leveraging inertial sensors embedded in these objects, as well as wireless signals emitted (or reflected) from them. For instance, we use a combination of GPS signals and inertial sensors on drones to precisely track their 3D orientation over time, ultimately improving safety against failures and crashes. In another application, in sports analytics, we embed sensors and radios inside baseballs and cricket balls and compute their 3D trajectories and spin patterns, even when they move at extremely high speeds. In a third application, for wireless networks, we explore the possibility of physically moving wireless infrastructure such as access points and base stations on robots and drones to enhance network performance. While these are diverse applications in drones, sports analytics, and wireless networks, the common theme underlying the research is the development of the core motion-related building blocks. Specifically, we emphasize the philosophy of "fusion of multi-modal sensor data with an application-specific model" as the design principle for building the next generation of diverse IoT applications. To this end, we draw on theoretical techniques in wireless communication, signal processing, and statistics, but translate them into completely functional systems on real-world platforms.
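    As a small illustration of the "fuse multi-modal sensor data with a model" building block, the sketch below runs a textbook one-axis complementary filter: the gyroscope tracks fast rotation while the accelerometer's gravity reading corrects the gyro's slow drift. This is a standard example under assumed values, not the dissertation's actual GPS/IMU fusion pipeline.

```python
# Textbook one-axis complementary filter, standing in for the
# "fuse multi-modal sensors with a model" building block. Values are
# assumed; this is not the dissertation's GPS/IMU pipeline.
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """angle: previous estimate (rad); gyro_rate: measured angular
    rate (rad/s); accel_angle: tilt implied by gravity (rad)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulate 10 s of a drone holding a 0.2 rad tilt with a biased gyro.
angle, dt = 0.0, 0.01
for _ in range(1000):
    gyro = 0.05        # gyro bias: it reports rotation that isn't happening
    accel = 0.2        # gravity says the true tilt is 0.2 rad
    angle = complementary_filter(angle, gyro, accel, dt)
# Pure gyro integration would drift by 0.5 rad over 10 s; the filter
# stays near the true tilt (about 0.22 rad here).
print(f"estimate after 10 s: {angle:.3f} rad")
```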