
    Approximate Query Service on Autonomous IoT Cameras

    Full text link
    Elf is a runtime for an energy-constrained camera to continuously summarize video scenes as approximate object counts. Elf's novelty centers on planning the camera's count actions under an energy constraint. (1) Elf explores the rich action space spanned by the number of sampled image frames and the choice of per-frame object counters; it unifies errors from both sources into a single bounded error. (2) To decide count actions at run time, Elf employs a learning-based planner, jointly optimizing for past and future videos without delaying result materialization. Tested with more than 1,000 hours of video and under realistic energy constraints, Elf continuously generates object counts within only 11% of the true counts on average. Alongside the counts, Elf presents narrow errors, shown to be bounded and up to 3.4x smaller than those of competitive baselines. At a higher level, Elf makes a case for advancing the geographic frontier of video analytics.
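
    The interplay between frame sampling and per-frame counter choice under an energy budget can be pictured with a toy planner. The sketch below is not Elf's learning-based planner; the counter profiles, error model and budget are hypothetical, and it simply enumerates the action space the abstract describes and picks the feasible action with the lowest combined error estimate.

```python
# A minimal sketch (not Elf's actual planner): pick a count action -- how many
# frames to sample and which per-frame counter to run -- that minimizes a
# simple combined error estimate under an energy budget. All numbers are
# hypothetical profiles used only for illustration.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ObjectCounter:
    name: str
    energy_per_frame: float  # joules per frame (hypothetical profile)
    per_frame_error: float   # expected counting error of this counter

COUNTERS = [ObjectCounter("tiny_cnn", 0.2, 0.15),
            ObjectCounter("full_detector", 1.0, 0.05)]
FRAME_OPTIONS = [1, 2, 4, 8, 16]

def sampling_error(num_frames: int) -> float:
    # Toy model: error from sparse temporal sampling shrinks with more frames.
    return 0.5 / num_frames

def plan_action(energy_budget: float):
    """Return the (frames, counter) pair with the lowest combined error
    whose energy cost fits the budget, or None if nothing fits."""
    feasible = []
    for n, c in product(FRAME_OPTIONS, COUNTERS):
        cost = n * c.energy_per_frame
        if cost <= energy_budget:
            feasible.append((sampling_error(n) + c.per_frame_error, cost, n, c))
    if not feasible:
        return None
    err, cost, n, c = min(feasible, key=lambda t: (t[0], t[1]))
    return {"frames": n, "counter": c.name, "energy": cost, "est_error": err}

print(plan_action(energy_budget=3.0))
```

    A real planner would additionally learn these error and energy profiles from past video and adapt the chosen action over time, which is where Elf's learning-based component comes in.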

    Emerging privacy challenges and approaches in CAV systems

    Get PDF
    The growth of Internet-connected devices, Internet-enabled services and Internet of Things systems continues at a rapid pace, and their application to transport systems is heralded as game-changing. Numerous developing CAV (Connected and Autonomous Vehicle) functions, such as traffic planning, optimisation, management, safety-critical and cooperative autonomous driving applications, rely on data from various sources. The efficacy of these functions is highly dependent on the dimensionality, amount and accuracy of the data being shared. It holds, in general, that the greater the amount of data available, the greater the efficacy of the function. However, much of this data is privacy-sensitive, including personal, commercial and research data. Location data and its correlation with identity and temporal data can help infer other personal information, such as home/work locations, age, job, behavioural features, habits and social relationships. This work categorises the emerging privacy challenges and solutions for CAV systems and identifies the knowledge gaps for future research, which will minimise and mitigate privacy concerns without hampering the efficacy of the functions.

    How Far Can We Go in Compute-less Networking: Computation Correctness and Accuracy

    Full text link
    Emerging applications such as augmented reality and the tactile Internet are compute-intensive and latency-sensitive, which hampers running them on constrained end devices alone or in the distant cloud. The stringent requirements of such applications drove the realization of edge computing, in which computation is offloaded close to users. Compute-less networking is an extension of edge computing that aims at reducing computation and abridging communication by adopting in-network computing and computation reuse. Computation reuse caches the results of computations and uses them to perform similar tasks in the future, thereby avoiding redundant calculations and optimizing the use of resources. In this paper, we focus on the correctness of the final output produced by computation reuse. Since inputs may be similar but not identical, the reuse of previous computations raises questions about the accuracy of the final results. To this end, we implement a proof of concept to study and gauge the effectiveness and efficiency of computation reuse. We are able to reduce task completion time by up to 80% while ensuring high correctness. We further discuss open challenges and highlight future research directions. Comment: Accepted for publication by the IEEE Network Magazine.
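
    The reuse-versus-correctness trade-off studied in the paper can be pictured with a small cache that returns a stored result whenever a new input lies within a similarity threshold of a previously processed one. This is only an illustrative sketch under assumed choices (a numeric feature vector, Euclidean distance and a fixed threshold), not the paper's mechanism; tightening the threshold trades reuse opportunities for output correctness.

```python
# A minimal sketch of computation reuse: cache results keyed by a coarse
# feature vector and reuse the cached result when a new input is close enough.
# The feature representation, distance metric and threshold are assumptions.
import math

class ReuseStore:
    def __init__(self, threshold: float):
        self.threshold = threshold          # max feature distance for reuse
        self.entries = []                   # list of (features, result)

    @staticmethod
    def _distance(a, b) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def lookup(self, features):
        """Return the cached result of the closest stored input within the threshold."""
        best = None
        for stored, result in self.entries:
            d = self._distance(features, stored)
            if d <= self.threshold and (best is None or d < best[0]):
                best = (d, result)
        return None if best is None else best[1]

    def insert(self, features, result):
        self.entries.append((features, result))

def expensive_task(features):
    # Stand-in for the offloaded computation (e.g., object detection).
    return sum(features)

store = ReuseStore(threshold=0.1)
for x in [(0.10, 0.20), (0.11, 0.21), (0.90, 0.10)]:
    result = store.lookup(x)
    if result is None:                      # cache miss: compute and store
        result = expensive_task(x)
        store.insert(x, result)
    print(x, "->", result)
```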

    System Abstractions for Scalable Application Development at the Edge

    Get PDF
    Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and the remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling and joint optimization. Typical approaches focus on performance optimization for individual applications. This not only requires domain knowledge of the applications but also leads to application-specific solutions. Application development and deployment over diverse scenarios thus incur repetitive manual effort. There are two overarching challenges in providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level. Execution settings range from a small cluster serving as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environment. Further, application performance requirements vary significantly, making it even more difficult to map different applications to already heterogeneous hardware. Second, there are trends towards incorporating both edge and cloud, as well as multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum from edge to cloud. Our framework provides key connections between different dimensions of design consideration, corresponding to the application abstraction, data abstraction and resource management abstraction, respectively. First, our framework masks hardware heterogeneity with abstract resource types through containerization, and abstracts application processing pipelines into generic flow graphs. It further supports a notion of degradable computing for application scenarios at the edge that are driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, via a userspace scheduler service that wraps over the operating system scheduler.
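
    The flow-graph abstraction mentioned above can be illustrated with a minimal, hypothetical sketch: named stages connected by directed edges and executed in topological order. The class names and the example pipeline are assumptions for illustration and do not reflect the thesis's actual API.

```python
# A minimal sketch of a generic processing flow graph (illustrative only):
# stages are named operators connected by directed edges, and a topological
# walk pushes data through the pipeline.
from collections import defaultdict, deque

class FlowGraph:
    def __init__(self):
        self.ops = {}                        # stage name -> callable
        self.edges = defaultdict(list)       # stage name -> downstream stages
        self.indegree = defaultdict(int)

    def add_stage(self, name, fn):
        self.ops[name] = fn
        self.indegree.setdefault(name, 0)

    def connect(self, src, dst):
        self.edges[src].append(dst)
        self.indegree[dst] += 1

    def run(self, source_inputs):
        """Execute stages in topological order; each stage receives the
        outputs of all its upstream stages as a list."""
        inbox = defaultdict(list)
        for name, value in source_inputs.items():
            inbox[name].append(value)
        remaining = dict(self.indegree)
        ready = deque(n for n, d in remaining.items() if d == 0)
        outputs = {}
        while ready:
            name = ready.popleft()
            outputs[name] = self.ops[name](inbox[name])
            for nxt in self.edges[name]:
                inbox[nxt].append(outputs[name])
                remaining[nxt] -= 1
                if remaining[nxt] == 0:
                    ready.append(nxt)
        return outputs

# Hypothetical camera pipeline: decode -> detect -> count.
g = FlowGraph()
g.add_stage("decode", lambda xs: [f"frame:{x}" for x in xs])
g.add_stage("detect", lambda xs: [f"objects({f})" for frames in xs for f in frames])
g.add_stage("count", lambda xs: sum(len(dets) for dets in xs))
g.connect("decode", "detect")
g.connect("detect", "count")
print(g.run({"decode": "clip-0"}))
```

    Mapping each stage to an abstract, containerized resource type is what would let the same graph run on an embedded device or an edge cluster without per-deployment rewrites.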

    An Intelligent Transportation System: the Quito City Case Study

    Get PDF
    Managing traffic in a large city has become a topic of great interest in both politics and science. The costs of poor traffic management have been quantified as losses equal to millions of dollars, not counting the unquantifiable value of the time that a person loses in traffic jams. Intelligent transport systems (ITS) offer a set of innovative solutions specific to the management of different modes of transport. This article focuses on the development of an ITS for the city of Quito that allows smart decision-making to direct heavy haul transporters that want to enter the city via one of its main access routes. Technologies such as Sensor Web Enablement (SWE), in association with the Message Queuing Telemetry Transport (MQTT) communication protocol, facilitate the development of a vehicular management platform/system capable of sending notifications in real time and issuing instructions to drivers regarding traffic delays along routes, average speeds, etc. The system supports a network of heterogeneous sensors accessible through the web and can integrate any device that uses the HTTP protocol. Time interval and location range testing have been undertaken to refine the accuracy of the system and make it adaptable to any geographic situation. The system allows communication with the server through MQTT or through web services, using technologies such as MongoDB and GeoJSON. One of the most relevant results is that the degree of accuracy of the system is within appropriate ranges when compared to commercial applications such as Google Maps and Waze.
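
    Publishing a sensor reading as a GeoJSON feature over MQTT, along the communication path the abstract describes, might look like the hedged sketch below. It assumes the paho-mqtt 1.x Python client; the broker address, topic name and payload fields are hypothetical and not taken from the article.

```python
# A minimal sketch of a sensor publishing a GeoJSON reading over MQTT.
# Assumptions: paho-mqtt 1.x client API; broker, topic and fields are hypothetical.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"       # hypothetical broker address
TOPIC = "quito/its/traffic"         # hypothetical topic

reading = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-78.4678, -0.1807]},  # lon, lat
    "properties": {"avg_speed_kmh": 32.5, "delay_min": 7, "sensor_id": "S-014"},
}

client = mqtt.Client()              # paho-mqtt 1.x constructor
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()                 # background network loop
client.publish(TOPIC, json.dumps(reading), qos=1)
client.loop_stop()
client.disconnect()
```

    On the server side, the same GeoJSON document could be stored directly in MongoDB, which is one reason pairing GeoJSON with a document store is a natural design choice for location-indexed sensor data.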

    Enabling technologies for urban smart mobility: Recent trends, opportunities and challenges

    Get PDF
    The increasing population across the globe makes it essential to link smart and sustainable city planning with the logistics of transporting people and goods, which will significantly contribute to how societies face mobility in the coming years. The concept of smart mobility emerged with the popularity of smart cities and is aligned with the sustainable development goals defined by the United Nations. A reduction in traffic congestion and new route optimizations with a reduced ecological footprint are some of the essential factors of smart mobility; however, other aspects must also be taken into account, such as the promotion of active and inclusive mobility, encouraging the use of other types of environmentally friendly fuels, and engagement with citizens. The Internet of Things (IoT), Artificial Intelligence (AI), Blockchain and Big Data technologies will serve as the main entry points and fundamental pillars to promote the rise of new innovative solutions that will change the current paradigm for cities and their citizens. Mobility-as-a-service, traffic flow optimization, the optimization of logistics and autonomous vehicles are some of the services and applications that will encompass several changes in the coming years with the transition of existing cities into smart cities. This paper provides an extensive review of the current trends and solutions presented in the scope of smart mobility and the enabling technologies that support it. An overview of how smart mobility fits into smart cities is provided by characterizing its main attributes and the key benefits of using smart mobility in a smart city ecosystem. Further, this paper highlights various other opportunities and challenges related to smart mobility. Lastly, the major services and applications that are expected to arise in the coming years within smart mobility are explored, along with prospective future trends and scope.

    Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques

    Full text link
    Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), is beginning to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic, and the basic concepts of EVA (e.g., definition, architectures) have not been fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA. Comment: 31 pages, 13 figures.

    Middleware for Internet of Things: A Survey

    Get PDF

    Towards a cloud‑based automated surveillance system using wireless technologies

    Get PDF
    Cloud Computing can bring multiple benefits for Smart Cities. It permits the easy creation of centralized knowledge bases, thus enabling multiple embedded systems (such as sensor or control devices) to share a collaborative intelligence. In addition, thanks to its vast computing power, complex tasks can be performed on low-spec devices simply by offloading computation to the cloud, with the additional advantage of saving energy. In this work, the cloud's capabilities are exploited to implement and test a cloud-based surveillance system. Using a shared, 3D symbolic world model, different devices have complete knowledge of all the elements, people and intruders in a certain open area or inside a building. The implementation of a volumetric, 3D, object-oriented, cloud-based world model (including semantic information) is novel as far as we know. Very simple devices (Orange Pi) can send RGBD streams (using Kinect cameras) to the cloud, where all the processing is distributed and performed thanks to the cloud's inherent scalability. A proof-of-concept experiment is carried out in this paper in a testing lab with multiple cameras connected to the cloud via 802.11ac wireless technology. Our results show that this kind of surveillance system is feasible today, and trends indicate that it can be improved in the short term to produce high-performance surveillance systems using low-spec devices. In addition, this proof of concept shows that many interesting opportunities and challenges arise, for example, when mobile watch robots and fixed cameras act as a team to carry out complex collaborative surveillance strategies. Funding: Ministerio de Economía y Competitividad TEC2016-77785-P; Junta de Andalucía P12-TIC-130.
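
    The shared world-model idea can be pictured as a small data structure that any connected camera or robot updates and queries. The sketch below is an assumption-laden illustration (hypothetical classes, fields and zone query), not the paper's volumetric, semantic 3D model.

```python
# A minimal sketch of a shared, object-oriented 3D world model with semantic
# labels (illustrative assumptions only, not the paper's actual data model).
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    object_id: str
    label: str                      # semantic class, e.g. "person", "door"
    position: tuple                 # (x, y, z) in metres, shared world frame
    extent: tuple = (0.5, 0.5, 1.8) # bounding-box size (dx, dy, dz)

@dataclass
class WorldModel:
    objects: dict = field(default_factory=dict)

    def update(self, obj: WorldObject):
        # Each camera/robot pushes its detections here; last write wins.
        self.objects[obj.object_id] = obj

    def inside(self, zone_min, zone_max):
        """Return objects whose centre lies inside an axis-aligned 3D zone."""
        return [
            o for o in self.objects.values()
            if all(lo <= p <= hi for p, lo, hi in zip(o.position, zone_min, zone_max))
        ]

# Two sensors reporting into the same model; query a restricted area.
model = WorldModel()
model.update(WorldObject("p1", "person", (2.0, 3.5, 0.0)))
model.update(WorldObject("p2", "person", (12.0, 1.0, 0.0)))
intruders = model.inside(zone_min=(0, 0, -1), zone_max=(5, 5, 3))
print([o.object_id for o in intruders])   # -> ['p1']
```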
