Cloud Resource Optimization for Processing Multiple Streams of Visual Data
Hundreds of millions of network cameras have been installed throughout the world. Each is capable of providing a vast amount of real-time data. Analyzing the massive data generated by these cameras requires significant computational resources, and the demands may vary over time. Cloud computing shows the most promise to provide the needed resources on demand. In this article, we investigate how to allocate cloud resources when analyzing real-time data streams from network cameras. A resource manager considers many factors that affect its decisions, including the types of analysis, the number of data streams, and the locations of the cameras. The manager then selects the most cost-efficient types of cloud instances (e.g., CPU vs. GPGPU) to meet the computational demands for analyzing the streams. We evaluate the effectiveness of our approach using Amazon Web Services. Experiments demonstrate more than 50% cost reduction for real workloads.
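The cost-efficiency comparison between instance types can be sketched as a cost-per-stream calculation. The sketch below is a hypothetical illustration of the idea, not the paper's actual algorithm; the instance names, prices, and per-instance throughputs are invented for illustration:

```python
import math

# Hypothetical sketch: choose the instance type with the lowest cost
# per stream, then allocate enough instances to cover the demand.
# Names, prices, and throughputs are illustrative, not AWS figures.
def cheapest_allocation(streams_needed, instance_types):
    best = min(instance_types,
               key=lambda t: t["hourly_cost"] / t["streams_per_instance"])
    count = math.ceil(streams_needed / best["streams_per_instance"])
    return best["name"], count, count * best["hourly_cost"]

types = [
    {"name": "cpu.large",  "hourly_cost": 0.10, "streams_per_instance": 4},
    {"name": "gpu.xlarge", "hourly_cost": 0.90, "streams_per_instance": 40},
]
print(cheapest_allocation(100, types))  # GPGPU wins here: $0.0225 vs $0.025 per stream
```

For a compute-heavy analysis with enough streams to fill a batch, the GPGPU instance can be cheaper per stream despite its higher hourly price, which is the trade-off the abstract describes.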
Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting intersection, edge video analytics (EVA), has begun to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic, and the basic concepts of EVA (e.g., definition, architectures) have not been fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Comment: 31 pages, 13 figures
A Cost-Effective Cloud-Based System for Analyzing Big Real-Time Visual Data From Thousands of Network Cameras
Thousands of network cameras stream public real-time visual data from different environments, such as streets, shopping malls, and natural scenes. The big visual data from these cameras can be useful for many applications, but analyzing this data presents many challenges, such as (i) retrieving data from heterogeneous cameras (e.g., different brands and data formats), (ii) providing a software environment for users to simultaneously analyze the large amounts of data from the cameras, and (iii) allocating and managing significant amounts of computing resources. This dissertation presents a web-based system designed to address these challenges. The system enables users to execute analysis programs on the data from more than 120,000 cameras. It handles the heterogeneity of the cameras and provides an Application Programming Interface (API) that requires only slight changes to existing analysis programs that read data from files. The system includes a resource manager that allocates cloud resources to meet the analysis requirements. Cloud vendors offer different cloud instance types with different capabilities and hourly costs; the manager reduces the overall cost of the allocated instances while meeting the performance requirements. The resource manager monitors the allocated instances, allocating more instances if needed and deallocating existing instances to reduce cost when possible. The manager makes decisions based on many factors, such as the analysis programs, frame rates, cameras, and instance types.
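The idea of an API that needs only slight changes to file-reading programs can be illustrated with a file-like wrapper over a camera stream. The class and method names below are hypothetical stand-ins, not the system's actual API:

```python
# Hypothetical sketch of a file-like camera API: analysis code written
# against a file interface can consume a camera feed by swapping only
# the source object.
class CameraStream:
    """Mimics a file-like read() interface over a camera feed."""
    def __init__(self, frames):
        self._frames = list(frames)   # stand-in for fetched JPEG frames
        self._i = 0

    def read(self):
        if self._i >= len(self._frames):
            return None               # end of stream, analogous to EOF
        frame = self._frames[self._i]
        self._i += 1
        return frame

def analyze(source):
    """Existing analysis loop: unchanged whether source is a file-like
    object over a local file or over a network camera."""
    count = 0
    while (frame := source.read()) is not None:
        count += 1                    # real code would run detection here
    return count

print(analyze(CameraStream([b"f1", b"f2", b"f3"])))  # 3
```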
An Efficient Privacy-Preserving Framework for Video Analytics
With the proliferation of video content from surveillance cameras, social media, and live streaming services, the need for efficient video analytics has grown immensely. In recent years, machine-learning-based computer vision algorithms have shown great success in various video analytics tasks. Specifically, neural network models have dominated visual tasks such as image and video classification, object recognition, object detection, and object tracking. However, compared with classic computer vision algorithms, machine-learning-based methods are usually much more compute-intensive, and many state-of-the-art models require powerful servers. With the development of cloud computing infrastructures, people can use machine learning techniques everywhere through the Internet: an end user just needs to upload their data to a cloud server and enjoy the advances in machine learning without owning a powerful device to perform the corresponding computation; the heavy workload is offloaded to cloud servers. There are two major challenges in cloud-based video analytics. First, video analytics requires a huge amount of compute resources and can be slow even on powerful servers, which limits the application of neural-network-based solutions to real-time video analytics. Second, uploading user videos to the cloud reveals private information about users, and existing privacy-preserving inference methods rely heavily on cryptographic operations that are compute- and communication-intensive. In this dissertation, we first address the workload problem of video analytics. Compared with analytics on individual images, nearby frames in a video are usually highly correlated; in other words, there is information redundancy across video frames. We exploit this redundancy and design a system, PFad, for live video analytics that adaptively adjusts the video configuration for neural network processing, such as the frame rate and resolution.
In this work, we propose to perform configuration adaptation without offline profiling and design a corresponding configuration prediction mechanism: we select configurations with a prediction model based on object-movement features. In addition, we reduce latency through resource orchestration on video analytics servers. The key idea of resource orchestration is to batch inference tasks that use the same CNN model and to schedule tasks based on a priority value that estimates their impact on the total latency. We evaluate our system with two video analytics applications, road traffic monitoring and pose detection. The experimental results show that our profiling-free adaptation reduces the workload by 80% compared with state-of-the-art adaptation without lowering accuracy, and the average serving latency is reduced by up to 95% compared with profiling-based adaptation. This dissertation addresses the privacy issue in two steps. First, we propose PIPO, which protects the privacy of frame-level information. The key idea of PIPO is to accelerate the operations in neural network models by avoiding expensive cryptographic operations as much as possible. In particular, the client preprocesses the inference by performing convolution on a secret share of the input through homomorphic encryption; during the online inference, the user only needs to provide the remaining secret shares of the input to the server, which can then perform convolution with plaintext operations. In addition, PIPO performs non-linear layers on the client side to protect users' data. To prevent model parameters from being revealed to the client directly, the server performs two reversible operations: multiplying each entry of the convolution results by scale factors and shuffling them. We prove that PIPO ensures the privacy of users' data with a simulation-based argument.
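The batch-and-prioritize idea can be sketched as grouping pending tasks by model and serving batches in order of a priority score. The score used here (waiting tasks times per-task latency) is a simplified stand-in for the dissertation's latency-impact estimate, and all names and numbers are hypothetical:

```python
import heapq
from collections import defaultdict

# Hypothetical sketch: batch inference tasks that share a CNN model,
# then serve batches highest-priority first. Priority is approximated
# as (number of waiting tasks) x (per-task latency), a stand-in for a
# real estimate of each batch's impact on total latency.
def schedule(tasks, latency_per_task):
    batches = defaultdict(list)
    for task_id, model in tasks:
        batches[model].append(task_id)
    # negate priority: heapq is a min-heap, we want the largest first
    heap = [(-len(ids) * latency_per_task[m], m, ids)
            for m, ids in batches.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, model, ids = heapq.heappop(heap)
        order.append((model, ids))
    return order

tasks = [(1, "yolo"), (2, "pose"), (3, "yolo"), (4, "yolo")]
print(schedule(tasks, {"yolo": 30, "pose": 50}))
```

The three queued "yolo" tasks form one batch whose aggregate latency impact (3 × 30) outranks the single "pose" task (1 × 50), so the batch is served first.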
Further, we show that the resources needed to steal the server's model parameters in PIPO are within the same order of magnitude as those of the prediction-API attack, an attack that the client can perform on any inference service where both the input and the inference results are known to the client. Our experiments on well-known neural network architectures show that PIPO improves the inference latency and communication volume by up to 78x and 26x, respectively, compared to Delphi. Based on PIPO, this dissertation proposes Pevas, which supports efficient privacy-preserving video analytics. Pevas exploits the causality among consecutive frames for both performance and privacy: we propose a privacy-preserving differential CNN inference protocol based on PIPO that transmits and computes only on the changed part of each frame. Pevas not only applies the privacy-preserving protocol to the changed parts but also hides their positions. In addition, we design a privacy-parameter mechanism for privacy-preserving video analytics. Our experiments with Pevas using ResNet-50 on real-world videos show that it improves the inference latency and communication volume by three to four orders of magnitude compared to protocols based on Delphi, CrypTFlow, LLAMA, and Cheetah.
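The secret-share preprocessing that this line of work builds on can be illustrated with plain additive secret sharing. This is a generic textbook sketch, not PIPO's actual protocol (which additionally uses homomorphic encryption, scale factors, and shuffling); the point shown is only that a linear layer applied to each share separately recombines to the true result:

```python
import random

# Additive secret sharing over a modulus P: x is split into two
# random-looking shares that sum to x mod P. Because convolution is
# linear, applying a linear op to each share and adding the results
# recovers the op applied to x -- without either party seeing both shares.
P = 2**31 - 1

def share(x):
    r = random.randrange(P)
    return r, (x - r) % P          # two shares summing to x mod P

def linear(w, x):
    return (w * x) % P             # stand-in for any linear layer

w, x = 7, 1234
x0, x1 = share(x)                  # share(x) hides x in either piece alone
y = (linear(w, x0) + linear(w, x1)) % P
assert y == (w * x) % P            # shares recombine to the true result
print(y)  # 8638
```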
Mission-Critical Communications from LMR to 5G: a Technology Assessment approach for Smart City scenarios
Radiocommunication networks are one of the main support tools of the agencies that carry out Public Protection & Disaster Relief (PPDR) actions, and these communication technologies must be updated from narrowband to broadband and integrated with information technologies for the agencies to act effectively for society. Understanding that this problem includes, besides technical aspects, issues related to the social context in which these systems are inserted, this study aims to construct scenarios, using several sources of information, that help the managers of PPDR agencies in the technological decision-making process of the digital transformation of mission-critical communications considering Smart City scenarios, guided by the methods and approaches of Technology Assessment (TA).
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
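For reference, the de-facto standard formulation referred to above is maximum a posteriori (MAP) estimation over a factor graph. With $X$ the variables to estimate (robot trajectory and map) and $Z = \{z_k\}$ the measurements with models $z_k = h_k(X_k) + \varepsilon_k$, it is commonly written as:

```latex
X^{\star} \;=\; \operatorname*{arg\,max}_{X} \, p(X \mid Z)
          \;=\; \operatorname*{arg\,max}_{X} \, p(X) \prod_{k=1}^{m} p(z_k \mid X_k)
% assuming Gaussian measurement noise with information matrices \Omega_k,
% this reduces to the nonlinear least-squares problem
X^{\star} \;=\; \operatorname*{arg\,min}_{X} \, \sum_{k=1}^{m} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```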
Edge computing platforms for Internet of Things
Internet of Things (IoT) has the potential to transform many domains of human activity, enabled by the collection of data from the physical world at a massive scale. As the projected growth of IoT data exceeds that of available network capacity, transferring it to centralized cloud data centers is infeasible. Edge computing aims to solve this problem by processing data at the edge of the network, enabling applications with specialized requirements that cloud computing cannot meet.
The current market of platforms that support building IoT applications is very fragmented, with offerings available from hundreds of companies with no common architecture. This threatens the realization of IoT's potential: with more interoperability, a new class of applications that combine the collected data and use it in new ways could emerge.
In this thesis, promising IoT platforms for edge computing are surveyed. First, an understanding of current challenges in the field is gained through studying the available literature on the topic. Second, IoT edge platforms having the most potential to meet these challenges are chosen and reviewed for their capabilities. Finally, the platforms are compared against each other, with a focus on their potential to meet the challenges learned in the first part.
The work shows that AWS IoT for the edge and Microsoft Azure IoT Edge have mature feature sets. However, these platforms are tied to their respective cloud platforms, limiting interoperability and the possibility of switching providers. On the other hand, the open-source EdgeX Foundry and KubeEdge have the potential for more standardization and interoperability in IoT but are limited in functionality for building practical IoT applications.