
    The future of camera networks: staying smart in a chaotic world

    Camera networks become smart when they can interpret video data on board, in order to carry out tasks as a collective, such as target tracking and (re-)identification of objects of interest. Unlike today's deployments, which are mainly restricted to lab settings and highly controlled high-value applications, future smart camera networks will be messy and unpredictable. They will operate on a vast scale, drawing on mobile resources connected in networks structured in complex and changing ways. They will comprise heterogeneous and decentralised aggregations of visual sensors, which will come together in temporary alliances, in unforeseen and rapidly unfolding scenarios. The potential to include and harness citizen-contributed mobile streaming, body-worn video, and robot-mounted cameras, alongside more traditional fixed or PTZ cameras, and supported by other non-visual sensors, leads to a number of difficult and important challenges. In this position paper, we discuss a variety of potential uses for such complex smart camera networks, and some of the challenges that arise when staying smart in the presence of such complexity. We present a general discussion on the challenges of heterogeneity, coordination, self-reconfigurability, mobility, and collaboration in camera networks.

    Real-time marker-less multi-person 3D pose estimation in RGB-Depth camera networks

    This paper proposes a novel system to estimate and track the 3D poses of multiple persons in calibrated RGB-Depth camera networks. The multi-view 3D pose of each person is computed by a central node which receives the single-view outcomes from each camera of the network. Each single-view outcome is computed by using a CNN for 2D pose estimation and extending the resulting skeletons to 3D by means of the sensor depth. The proposed system is marker-less, multi-person, independent of background, and does not make any assumption on people's appearance or initial pose. The system provides real-time outcomes, thus being perfectly suited for applications requiring user interaction. Experimental results show the effectiveness of this work with respect to a baseline multi-view approach in different scenarios. To foster research and applications based on this work, we released the source code in OpenPTrack, an open source project for RGB-D people tracking. Comment: Submitted to the 2018 IEEE International Conference on Robotics and Automation.
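The per-camera step described in the abstract, extending 2D CNN keypoints to 3D by means of the sensor depth, can be sketched as back-projection through the pinhole camera model. This is a minimal illustration, not the paper's implementation; the function name and the intrinsics values are assumptions.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_image, fx, fy, cx, cy):
    """Back-project 2D skeleton keypoints to 3D camera coordinates
    using a depth image and the pinhole camera model.

    keypoints_2d: iterable of (u, v) pixel coordinates
    depth_image:  (H, W) array of depth values in metres
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point)
    """
    points_3d = []
    for u, v in keypoints_2d:
        z = depth_image[int(round(v)), int(round(u))]  # depth at the keypoint
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.array(points_3d)

# Example: one keypoint at the principal point, 2 m from the camera
depth = np.full((480, 640), 2.0)
pts = lift_keypoints_to_3d([(320.0, 240.0)], depth,
                           fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

In a real system the depth sample would typically be taken over a small neighbourhood (to survive missing depth pixels), and each camera's 3D skeleton would then be fused at the central node.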

    Towards a cloud‑based automated surveillance system using wireless technologies

    Cloud computing can bring multiple benefits to Smart Cities. It permits the easy creation of centralized knowledge bases, straightforwardly enabling multiple embedded systems (such as sensor or control devices) to share a collaborative intelligence. In addition, thanks to its vast computing power, complex tasks can be run from low-spec devices simply by offloading computation to the cloud, with the additional advantage of saving energy. In this work, the cloud's capabilities are exploited to implement and test a cloud-based surveillance system. Using a shared, 3D symbolic world model, different devices have complete knowledge of all the elements, people, and intruders in a given open area or inside a building. To the best of our knowledge, the implementation of a volumetric, 3D, object-oriented, cloud-based world model (including semantic information) is novel. Very simple devices (Orange Pi boards) can send RGB-D streams (from Kinect cameras) to the cloud, where all the processing is distributed thanks to its inherent scalability. A proof-of-concept experiment is carried out in a testing lab with multiple cameras connected to the cloud over 802.11ac wireless technology. Our results show that this kind of surveillance system is feasible today, and trends indicate that it can be improved in the short term to produce a high-performance surveillance system using low-spec devices. In addition, this proof of concept shows that many interesting opportunities and challenges arise, for example, when mobile watch robots and fixed cameras act as a team to carry out complex collaborative surveillance strategies. Ministerio de Economía y Competitividad TEC2016-77785-P; Junta de Andalucía P12-TIC-130
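The shared symbolic world model the abstract describes, where detections from several cameras are fused into one object-level view, can be sketched as follows. This is a minimal illustration under stated assumptions (each camera reports a label plus a world-frame position; the `WorldModel` class, its `merge_radius` parameter, and the averaging rule are hypothetical, not taken from the paper).

```python
class WorldModel:
    """Toy shared world model: detections from different cameras that are
    close enough and share a label are merged into one symbolic object."""

    def __init__(self, merge_radius=0.5):
        self.objects = []               # list of {"label": str, "position": tuple}
        self.merge_radius = merge_radius  # metres; closer detections merge

    def update(self, label, position):
        """Merge a detection into an existing object, or add a new one."""
        for obj in self.objects:
            if (obj["label"] == label
                    and self._dist(obj["position"], position) < self.merge_radius):
                # Average positions so repeated sightings refine the estimate
                obj["position"] = tuple(
                    (a + b) / 2 for a, b in zip(obj["position"], position))
                return
        self.objects.append({"label": label, "position": position})

    @staticmethod
    def _dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

model = WorldModel()
model.update("person", (1.0, 0.0, 2.0))  # seen by camera A
model.update("person", (1.1, 0.0, 2.0))  # same person, camera B -> merged
model.update("person", (5.0, 0.0, 1.0))  # a different person -> new object
```

In the cloud setting, each edge device would push its detections to this model over the network, and the merging (here a simple nearest-neighbour average) would run in the distributed back end.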

    Emerging technologies for learning (volume 1)

    Collection of 5 articles on emerging technologies and trends.

    Interoperability in IoT through the semantic profiling of objects

    The emergence of smarter and broader people-oriented IoT applications and services requires interoperability at both the data and knowledge levels. However, although some semantic IoT architectures have been proposed, achieving a high degree of interoperability requires dealing with a sea of non-integrated data scattered across vertical silos. These architectures also do not meet machine-to-machine requirements, as data annotation has no knowledge of the object interactions behind arriving data. This paper presents a vision of how to overcome these issues. More specifically, the semantic profiling of objects, through CoRE-related standards, is envisaged as the key to data integration, allowing more powerful data annotation, validation, and reasoning. These are the key building blocks for the development of intelligent applications. Portuguese Science and Technology Foundation (FCT) [UID/MULTI/00631/2013]
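The CoRE-related standards the abstract refers to include the CoRE Link Format (RFC 6690), in which a device advertises its resources and their semantic attributes at `/.well-known/core`. A minimal illustration of such a description and a simplified parser is sketched below; the resource names are hypothetical examples, not from the paper, and the parser ignores edge cases such as commas inside quoted values.

```python
# A CoRE Link Format string such as a device might expose at /.well-known/core.
link_format = '</sensors/temp>;rt="temperature-c";if="sensor",</cams/1>;rt="rgbd-stream"'

def parse_core_links(text):
    """Parse a (simplified) RFC 6690 link-format string into dicts of
    resource URI plus attributes such as rt (resource type) and if (interface)."""
    links = []
    for entry in text.split(","):           # one entry per linked resource
        parts = entry.split(";")
        uri = parts[0].strip().strip("<>")  # '</sensors/temp>' -> '/sensors/temp'
        attrs = {}
        for p in parts[1:]:
            key, _, value = p.partition("=")
            attrs[key] = value.strip('"')
        links.append({"uri": uri, **attrs})
    return links

resources = parse_core_links(link_format)
```

Profiles built on attributes like `rt` are what let a consumer discover and annotate incoming data without prior knowledge of the device, which is the interoperability step the paper argues for.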