12 research outputs found

    The Design and Implementation of a Wireless Video Surveillance System.

    Get PDF
    Internet-enabled cameras pervade daily life, generating a huge amount of data, but most of the video they generate is transmitted over wires and analyzed offline with a human in the loop. The ubiquity of cameras limits the amount of video that can be sent to the cloud, especially on wireless networks where capacity is at a premium. In this paper, we present Vigil, a real-time distributed wireless surveillance system that leverages edge computing to support real-time tracking and surveillance in enterprise campuses, retail stores, and across smart cities. Vigil intelligently partitions video processing between edge computing nodes co-located with cameras and the cloud to save wireless capacity, which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame prioritization and traffic scheduling algorithms further optimize Vigil's bandwidth utilization. We have deployed Vigil across three sites in both whitespace and Wi-Fi networks. Depending on the level of activity in the scene, experimental results show that Vigil allows a video surveillance system to support a geographical area of coverage between five and 200 times greater than an approach that simply streams video over the wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the default equal throughput allocation strategy of Wi-Fi by delivering up to 25% more objects relevant to a user's query.
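    The abstract above describes edge-side frame prioritization under a wireless bandwidth budget. The sketch below is a hypothetical illustration of that idea, not Vigil's actual algorithm: each edge node scores frames by the number of query-relevant objects it detects locally, then greedily uploads the highest-utility frames that fit within the uplink budget. The Frame class, the select_frames function and all numbers are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        camera_id: str
        size_bytes: int        # encoded size of the frame on the uplink
        relevant_objects: int  # objects matching the user's query, counted at the edge

    def select_frames(frames, budget_bytes):
        """Greedy prioritization: most query-relevant objects per byte first."""
        ranked = sorted(frames,
                        key=lambda f: f.relevant_objects / max(f.size_bytes, 1),
                        reverse=True)
        chosen, used = [], 0
        for f in ranked:
            if f.relevant_objects == 0:
                continue                      # frames with no hits stay at the edge
            if used + f.size_bytes <= budget_bytes:
                chosen.append(f)
                used += f.size_bytes
        return chosen

    frames = [Frame("cam-1", 120_000, 3), Frame("cam-1", 118_000, 0),
              Frame("cam-2", 95_000, 1)]
    print([f.camera_id for f in select_frames(frames, budget_bytes=200_000)])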

    A Game-theoretic Framework for Revenue Sharing in Edge-Cloud Computing System

    Full text link
    We introduce a game-theoretic framework to explore revenue sharing in an Edge-Cloud computing system, in which computing service providers at the edge of the Internet (edge providers) and computing service providers at the cloud (cloud providers) co-exist and collectively provide computing resources to clients (e.g., end users or applications) at the edge. Different from traditional cloud computing, the providers in an Edge-Cloud system are independent and self-interested. To achieve high system-level efficiency, the manager of the system adopts a task distribution mechanism to maximize the total revenue received from clients and also adopts a revenue sharing mechanism to split the received revenue among computing servers (and hence service providers). Under those system-level mechanisms, service providers attempt to game with the system in order to maximize their own utilities, by strategically allocating their resources (e.g., computing servers). Our framework models the competition among the providers in an Edge-Cloud system as a non-cooperative game. Our simulations and experiments on an emulation system have shown the existence of Nash equilibrium in such a game. We find that revenue sharing mechanisms have a significant impact on the system-level efficiency at Nash equilibria, and surprisingly the revenue sharing mechanism based directly on actual contributions can result in significantly worse system efficiency than the Shapley value sharing mechanism and the Ortmann proportional sharing mechanism. Our framework provides an effective economics approach to understanding and designing efficient Edge-Cloud computing systems.
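    As a self-contained illustration of the revenue sharing rules compared above, and not the paper's implementation, the following sketch splits a coalition's revenue two ways: in proportion to each provider's stand-alone contribution, and by the Shapley value. The characteristic function v and its revenue numbers are toy values chosen only for the example.

    from itertools import permutations

    providers = ["edge_A", "edge_B", "cloud"]

    def v(coalition):
        """Toy characteristic function: revenue earned by a coalition of providers."""
        table = {
            frozenset(): 0, frozenset({"edge_A"}): 4, frozenset({"edge_B"}): 4,
            frozenset({"cloud"}): 6,
            frozenset({"edge_A", "edge_B"}): 9,
            frozenset({"edge_A", "cloud"}): 12, frozenset({"edge_B", "cloud"}): 12,
            frozenset({"edge_A", "edge_B", "cloud"}): 18,
        }
        return table[frozenset(coalition)]

    def proportional_shares(total):
        """Split revenue in proportion to each provider's stand-alone revenue."""
        solo = {p: v({p}) for p in providers}
        s = sum(solo.values())
        return {p: total * solo[p] / s for p in providers}

    def shapley_shares():
        """Average each provider's marginal contribution over all join orders."""
        shares = {p: 0.0 for p in providers}
        orders = list(permutations(providers))
        for order in orders:
            seen = set()
            for p in order:
                shares[p] += v(seen | {p}) - v(seen)
                seen.add(p)
        return {p: shares[p] / len(orders) for p in providers}

    total = v(providers)
    print("proportional:", proportional_shares(total))
    print("Shapley:     ", shapley_shares())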

    ViewMap: Sharing Private In-Vehicle Dashcam Videos

    Get PDF
    Today, the search for dashcam video evidence is conducted manually, and the procedure does not guarantee privacy. In this paper, we motivate, design, and implement ViewMap, an automated public service system that enables sharing of private dashcam videos under anonymity. ViewMap takes a profile-based approach where each video is represented in a compact form called a view profile (VP), and the anonymized VPs are treated as entities for search, verification, and reward instead of their owners. ViewMap exploits the line-of-sight (LOS) properties of dedicated short-range communications (DSRC) such that each vehicle makes VP links with nearby ones that share the same sight while driving. ViewMap uses such LOS-based VP links to build a map of visibility around a given incident, and identifies VPs whose videos are worth reviewing. Original videos are never transmitted unless they are verified to be taken near the incident and anonymously solicited. ViewMap offers untraceable rewards for the provision of videos whose owners remain anonymous. We demonstrate the feasibility of ViewMap via field experiments on real roads using our DSRC testbeds and trace-driven simulations. We sincerely thank our shepherd Dr. Ranveer Chandra and the anonymous reviewers for their valuable feedback. This work was supported by the Samsung Research Funding Center for Future Technology under Project Number SRFC-IT1402-01.
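    A hypothetical sketch of the visibility-map idea described above: each anonymized view profile (VP) is a graph node, every LOS DSRC contact between two vehicles becomes an undirected link, and a breadth-first search from a VP recorded at the incident returns the VPs whose videos are worth soliciting, without ever touching the raw videos. The function name, the two-hop limit and the sample links are assumptions for illustration, not ViewMap's actual parameters.

    from collections import deque

    def visibility_map(links, incident_vp, max_hops=2):
        """Return VPs reachable from the incident VP within max_hops LOS links."""
        graph = {}
        for a, b in links:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)

        candidates, frontier = {incident_vp}, deque([(incident_vp, 0)])
        while frontier:
            vp, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for nbr in graph.get(vp, ()):
                if nbr not in candidates:
                    candidates.add(nbr)
                    frontier.append((nbr, hops + 1))
        return candidates

    links = [("vp1", "vp2"), ("vp2", "vp3"), ("vp3", "vp4"), ("vp5", "vp6")]
    print(visibility_map(links, "vp2"))   # vp5/vp6 are excluded: no shared sight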

    Software defined radio testbed of television white space for video transmission

    Get PDF
    Recently, television white space (TVWS) has attracted considerable attention from researchers in the Cognitive Radio (CR) area. This underutilized spectrum is one possible solution to the spectrum scarcity problem in wireless communication. Thus, many research works have been carried out to find a suitable method to utilize this spectrum efficiently. Nevertheless, actual hardware implementations utilizing this spectrum are still lacking. Therefore, in this research, an Orthogonal Frequency Division Multiplexing (OFDM) real-time video transmission system is proposed using a software defined radio (SDR) platform. Two modulation schemes are used: phase-shift keying (PSK), in its binary (BPSK) and quadrature (QPSK) forms, and quadrature amplitude modulation (QAM), in 16QAM and 64QAM modes. The free channel used in this work is selected in the ultra high frequency (UHF) band based on energy detection, and is either channel 54 or channel 56. The proposed system is developed with the physical (PHY) layer design of the transmitter and receiver in GNU Radio and the integration of medium access control (MAC) layer functionality. Video capture and display programs are designed based on OpenCV modules. The performance of this design is evaluated in two environments, indoor and outdoor, with packet delivery ratio (PDR) and end-to-end delay (EED) as the performance metrics. Three types of video motion are used in the experiments: fast (mobile), medium (foreman) and slow (akiyo). Under an allocated bandwidth of 1.0 MHz, the best PDR and EED results for both scenarios are as follows. In the indoor scenario, QPSK½ exhibits the best performance for akiyo, with a PDR of 0.92 and an EED of 24.7 seconds; for foreman and mobile, BPSK¾ achieves the best performance, with PDRs of 0.96 and 0.95 and EEDs of 33.2 seconds and 35.0 seconds, respectively. In the outdoor scenario, 16QAM½ achieves the best performance for akiyo, with a PDR of 0.9 and an EED of 23.5 seconds; for foreman and mobile, QPSK½ exhibits the best performance, with PDRs of 0.94 and 0.9 and EEDs of 31.2 seconds and 32.5 seconds, respectively. In conclusion, the proposed design offers a promising solution for OFDM real-time video transmission over TVWS.
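    The two metrics reported above, packet delivery ratio (PDR) and end-to-end delay (EED), can be computed from per-packet timestamp logs. The sketch below assumes hypothetical sequence-number-to-timestamp logs at the transmitter and receiver; the sample values are illustrative, not measurements from the testbed.

    def pdr(sent, received):
        """Fraction of transmitted packets (by sequence number) that arrived."""
        return len(set(sent) & set(received)) / len(sent) if sent else 0.0

    def eed(sent, received):
        """Mean end-to-end delay in seconds over the delivered packets."""
        delays = [received[seq] - sent[seq] for seq in sent if seq in received]
        return sum(delays) / len(delays) if delays else float("nan")

    sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}    # seq -> tx timestamp (s)
    received = {1: 24.5, 2: 24.8, 4: 25.1}         # seq -> rx timestamp (s)
    print(f"PDR = {pdr(sent, received):.2f}, EED = {eed(sent, received):.1f} s")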

    Glimpse: Continuous, Real-Time Object Recognition on Mobile Devices

    Get PDF
    Glimpse is a continuous, real-time object recognition system for camera-equipped mobile devices. Glimpse captures full-motion video, locates objects of interest, recognizes and labels them, and tracks them from frame to frame for the user. Because the algorithms for object recognition entail significant computation, Glimpse runs them on server machines. When the latency between the server and mobile device is higher than a frame-time, this approach lowers object recognition accuracy. To regain accuracy, Glimpse uses an active cache of video frames on the mobile device. A subset of the frames in the active cache is used to track objects on the mobile device, using (stale) hints about objects that arrive from the server from time to time. To reduce network bandwidth usage, Glimpse computes trigger frames to send to the server for recognizing and labeling. Experiments with Android smartphones and Google Glass over Verizon, AT&T, and a campus Wi-Fi network show that with hardware face detection support (available on many mobile devices), Glimpse achieves precision between 96.4% and 99.8% for continuous face recognition, improving by 1.8-2.5× over a scheme that performs hardware face detection and server-side recognition without Glimpse's techniques. The improvement in precision for face recognition without hardware detection is between 1.6× and 5.5×. For road sign recognition, which has no hardware detector, Glimpse achieves precision between 75% and 80%; without Glimpse, continuous detection is non-functional (0.2%-1.9% precision).
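    A minimal sketch of the trigger-frame idea described above, under assumed details rather than Glimpse's actual heuristics: the device keeps tracking with stale server hints and ships a frame to the server only when the scene has changed enough that those hints are likely outdated. The TriggerSelector class, the mean-absolute-pixel-change test and the threshold value are invented for illustration.

    import numpy as np

    class TriggerSelector:
        def __init__(self, change_threshold=20.0):
            self.change_threshold = change_threshold  # mean absolute pixel change
            self.last_sent = None                     # last frame shipped to the server

        def should_send(self, frame):
            if self.last_sent is None:
                self.last_sent = frame
                return True                       # always send the first frame
            change = np.abs(frame.astype(int) - self.last_sent.astype(int)).mean()
            if change > self.change_threshold:
                self.last_sent = frame
                return True                       # scene changed: refresh server hints
            return False                          # keep tracking with stale hints

    rng = np.random.default_rng(0)
    selector = TriggerSelector()
    still = rng.integers(0, 255, (120, 160), dtype=np.uint8)
    moved = np.clip(still.astype(int) + 60, 0, 255).astype(np.uint8)
    print(selector.should_send(still), selector.should_send(still), selector.should_send(moved))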

    Ambient Intelligence for Next-Generation AR

    Full text link
    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds, and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experiences are impacted by properties of the real environment, motivates the use of ambient IoT devices (wireless sensors and actuators placed in the surrounding environment) for the measurement and optimization of environment properties. In this book chapter, we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
    Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.