
    Towards a cloud‑based automated surveillance system using wireless technologies

    Cloud computing can bring multiple benefits to smart cities. It permits the easy creation of centralized knowledge bases, straightforwardly enabling multiple embedded systems (such as sensor or control devices) to share a collaborative intelligence. In addition, thanks to the cloud's vast computing power, complex tasks can be run from low-spec devices simply by offloading computation to the cloud, with the additional advantage of saving energy. In this work, the cloud's capabilities are exploited to implement and test a cloud-based surveillance system. Using a shared, 3D symbolic world model, different devices have complete knowledge of all the elements, people and intruders in a given open area or inside a building. To the best of our knowledge, the implementation of a volumetric, 3D, object-oriented, cloud-based world model (including semantic information) is novel. Very simple devices (Orange Pi boards) can send RGBD streams (captured with Kinect cameras) to the cloud, where all the processing is distributed and performed thanks to the cloud's inherent scalability. A proof-of-concept experiment is carried out in a testing lab with multiple cameras connected to the cloud over 802.11ac wireless technology. Our results show that this kind of surveillance system is feasible today, and trends indicate that it can be improved in the short term to produce high-performance surveillance systems using low-spec devices. This proof of concept also shows that many interesting opportunities and challenges arise, for example when mobile watch robots and fixed cameras act as a team to carry out complex collaborative surveillance strategies. Ministerio de Economía y Competitividad TEC2016-77785-P; Junta de Andalucía P12-TIC-130.
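    The node-side pipeline described here reduces to "capture, compress, ship to the cloud". Below is a minimal Python sketch of such a sender; the ingest URL, node identifier and the capture_rgbd() stub are assumptions for illustration, not the authors' code (a real node would read frames from the Kinect, e.g. via libfreenect).

```python
import time
import zlib

import numpy as np
import requests

CLOUD_ENDPOINT = "http://cloud.example.org/ingest"  # hypothetical ingest URL
NODE_ID = "orangepi-01"                             # hypothetical node id

def capture_rgbd():
    """Stand-in for a real Kinect capture call.
    Returns a synthetic 640x480 RGB frame and a matching depth map."""
    rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    depth = np.random.randint(0, 4096, (480, 640), dtype=np.uint16)
    return rgb, depth

def stream_forever(period_s=0.5):
    while True:
        rgb, depth = capture_rgbd()
        payload = zlib.compress(rgb.tobytes() + depth.tobytes())
        # All heavy processing (detection, world-model updates) is
        # offloaded to the cloud; the node only compresses and ships bytes.
        requests.post(CLOUD_ENDPOINT,
                      data=payload,
                      headers={"X-Node-Id": NODE_ID,
                               "Content-Type": "application/octet-stream"},
                      timeout=5)
        time.sleep(period_s)

if __name__ == "__main__":
    stream_forever()
```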

    Big Data Model Simulation on a Graph Database for Surveillance in Wireless Multimedia Sensor Networks

    Sensors are present in various forms all around the world, such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most sensors are part of some larger system of similar sensors that compose a network. One such network is composed of millions of sensors connected to the Internet, known as the Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components of the IoT. Many studies have already been conducted on wireless multimedia sensor networks in diverse domains such as fire detection, city surveillance and early warning systems. All those applications position sensor nodes and collect their data over long time periods with real-time data flow, which is considered big data. Big data may be structured or unstructured and needs to be stored for further processing and analysis. Analyzing multimedia big data is a challenging task that requires high-level modeling to efficiently extract valuable information and knowledge from the data. In this study, we propose a big database model based on the graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data, and we store and query the big data using the graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our simulator to show which database systems are efficient and scalable for surveillance in wireless multimedia sensor networks.
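    To make the graph approach concrete, here is a minimal sketch using the official Neo4j Python driver: each sensor becomes a node, each multimedia event becomes a node, and a DETECTED relationship links them. The labels, schema and connection details are assumptions for illustration; the paper's actual data model may differ.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Connection details are assumptions; adjust to your deployment.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

def insert_event(tx, sensor_id, ts, object_label):
    # One node per sensor, one node per multimedia event,
    # linked by a DETECTED relationship -- a simple stand-in
    # for the paper's graph data model.
    tx.run(
        "MERGE (s:Sensor {id: $sid}) "
        "CREATE (e:Event {ts: $ts, label: $label}) "
        "CREATE (s)-[:DETECTED]->(e)",
        sid=sensor_id, ts=ts, label=object_label)

def events_per_sensor(tx):
    result = tx.run(
        "MATCH (s:Sensor)-[:DETECTED]->(e:Event) "
        "RETURN s.id AS sensor, count(e) AS n ORDER BY n DESC")
    return [(r["sensor"], r["n"]) for r in result]

with driver.session() as session:
    session.execute_write(insert_event, "cam-17", 1700000000, "person")
    print(session.execute_read(events_per_sensor))
driver.close()
```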

    MusA: Using Indoor Positioning and Navigation to Enhance Cultural Experiences in a Museum

    In recent years there has been growing interest in the use of multimedia mobile guides in museum environments. Mobile devices can detect the user's context and provide information that helps visitors discover and follow the logical and emotional connections that develop during a visit. In this scenario, location-based services (LBS) represent an asset, and the choice of technology to determine users' positions, combined with the definition of methods that can effectively convey information, become key issues in the design process. In this work we present MusA (Museum Assistant), a general framework for the development of multimedia interactive guides for mobile devices. Its main feature is a vision-based indoor positioning system that enables several LBS, from way-finding to the contextualized communication of cultural content, aimed at providing a meaningful exploration of exhibits according to visitors' personal interests and curiosity. Starting from a thorough description of the system architecture, the article presents the implementation of two mobile guides, developed to address adults and children respectively, and discusses the evaluation of the user experience and the visitors' appreciation of these applications.
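    As an illustration of how vision-based indoor positioning can drive location-based services, the Python sketch below maps a detected fiducial marker to a position and then to the nearest piece of cultural content. The marker map, points of interest and distance threshold are invented for the example; MusA's actual positioning pipeline is more sophisticated.

```python
import math

# Hypothetical marker map: fiducial marker id -> (x, y) position in metres,
# standing in for the output of a vision-based positioning system.
MARKER_POSITIONS = {101: (2.0, 3.5), 102: (8.1, 3.5), 103: (8.1, 9.0)}

# Points of interest with the content each one triggers (also invented).
POIS = [((2.5, 4.0), "Room 1: Renaissance portraits"),
        ((8.0, 9.2), "Room 3: children's interactive exhibit")]

def locate(marker_id):
    """Map a detected marker to the visitor's estimated position."""
    return MARKER_POSITIONS.get(marker_id)

def nearest_content(pos, max_dist=1.5):
    """Return the content item closest to the visitor, if within range."""
    best = min(POIS, key=lambda p: math.dist(pos, p[0]))
    return best[1] if math.dist(pos, best[0]) <= max_dist else None

pos = locate(101)
if pos:
    print(nearest_content(pos))  # -> "Room 1: Renaissance portraits"
```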

    Data Aggregation through Web Service Composition in Smart Camera Networks

    Distributed Smart Camera (DSC) networks are power-constrained, real-time distributed embedded systems that perform computer vision using multiple cameras. Providing data aggregation techniques, which are critical for running complex image processing algorithms on DSCs, is a challenging task due to the complexity of video and image data. Providing highly desirable SQL APIs for sophisticated query processing in DSC networks is also challenging for similar reasons. Research on DSCs to date has not addressed these two problems. In this thesis, we develop a novel SOA-based middleware framework on a DSC network that uses Distributed OSGi to expose DSC network services as web services. We also develop a novel web service composition scheme that aids data aggregation, and an SQL query interface for DSC networks that allows sophisticated query processing. We validate our service orchestration concept for data aggregation by providing a query primitive for face detection in a smart camera network.
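    A minimal sketch of the composition idea: a composite "aggregate" service fans out to per-camera face-detection services and sums their results, roughly what a SELECT SUM(...) query could compile to in the proposed SQL interface. The endpoints and JSON response schema are assumptions; the thesis exposes its services via Distributed OSGi rather than the plain HTTP+JSON used here.

```python
import requests

# Hypothetical per-camera web-service endpoints.
CAMERA_SERVICES = ["http://cam1.local:8080/detect_faces",
                   "http://cam2.local:8080/detect_faces"]

def detect_faces(url):
    """Invoke one camera's face-detection service."""
    resp = requests.get(url, timeout=2)
    resp.raise_for_status()
    return resp.json()["faces"]  # assumed response schema

def aggregate_face_count():
    """Composite service: fan out to every camera and sum the results,
    i.e. data aggregation through service composition."""
    return sum(len(detect_faces(url)) for url in CAMERA_SERVICES)

if __name__ == "__main__":
    print("faces in view across the network:", aggregate_face_count())
```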

    Supporting UAVs with Edge Computing: A Review of Opportunities and Challenges

    Over recent years, Unmanned Aerial Vehicles (UAVs) have seen significant advances in sensor capabilities and computational abilities, enabling efficient autonomous navigation and visual tracking applications. However, the demand for computationally complex tasks has grown faster than advances in battery technology. This opens up possibilities for improvement through edge computing. In edge computing, edge servers achieve lower-latency responses than traditional cloud servers through strategic geographic deployment. Furthermore, these servers can sustain higher computational performance than UAVs, as they are not limited by battery constraints. By combining these technologies and aiding UAVs with edge servers, research finds measurable improvements in task completion speed, energy efficiency, and reliability across multiple applications and industries. This systematic literature review analyzes the current state of research and collects, selects, and extracts the key areas in which UAV activities can be supported and improved through edge computing.
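    The core trade-off the surveyed works exploit can be sketched with a simple latency/energy model: offloading costs transmission time and radio energy but buys the edge server's faster CPU. The Python sketch below uses illustrative constants, not values from any surveyed paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles the task needs
    input_bits: float  # data to upload if offloaded

# Illustrative constants (assumptions, not measured values).
F_UAV = 1e9    # onboard CPU speed (cycles/s)
F_EDGE = 20e9  # edge server speed (cycles/s)
UPLINK = 50e6  # radio uplink rate (bits/s)
P_CPU = 4.0    # onboard compute power draw (W)
P_TX = 1.5     # radio transmit power draw (W)

def local_cost(t: Task):
    latency = t.cycles / F_UAV
    return latency, P_CPU * latency                  # (seconds, joules)

def offload_cost(t: Task):
    latency = t.input_bits / UPLINK + t.cycles / F_EDGE
    return latency, P_TX * (t.input_bits / UPLINK)   # UAV pays only for TX

def should_offload(t: Task, weight=0.5):
    """Weighted latency/energy comparison: offload when cheaper."""
    ll, le = local_cost(t)
    ol, oe = offload_cost(t)
    return weight * ol + (1 - weight) * oe < weight * ll + (1 - weight) * le

task = Task(cycles=2e9, input_bits=8e6)
print("offload" if should_offload(task) else "run onboard")
```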

    Contextual and Human Factors in Information Fusion

    Proceedings of: NATO Advanced Research Workshop on Human Systems Integration to Enhance Maritime Domain Awareness for Port/Harbour Security Systems, Opatija (Croatia), December 8-12, 2008. Context and human factors may be essential to improving measurement processes for each sensor, and the particular context of each sensor could be used to obtain a global definition of context in multisensor environments. Reality may be captured by the human sensorial domain based only on machine stimuli, generating feedback that the machine can use at its different processing levels, adapting its algorithms and methods accordingly. Reciprocally, human perception of the environment could also be modelled by context in the machine. In the proposed model, both machine and human take sensory information from the environment and process it cooperatively until a decision or semantic synthesis is produced. In this work, we present a model for context representation and reasoning to be exploited by fusion systems. First, the structure and representation of contextual information must be determined before it can be exploited by a specific application. Under complex circumstances, the use of context information and human interaction can help improve a tracking system's performance (for instance, video-based tracking systems may fail when dealing with object interactions, occlusions, crossings, etc.).
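    One concrete way context can enter a tracker, in the spirit of the closing example above, is by relaxing the data-association gate where context predicts trouble (e.g. a known occlusion zone, or an operator's hint). The following sketch is illustrative only; the zones, thresholds and nearest-neighbour association are assumptions, not the authors' model.

```python
import math

# Contextual knowledge: axis-aligned boxes where occlusions are expected.
OCCLUSION_ZONES = [((10.0, 10.0), (14.0, 13.0))]  # illustrative values

def in_occlusion_zone(pos):
    return any(x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1
               for (x0, y0), (x1, y1) in OCCLUSION_ZONES)

def gate_threshold(predicted_pos, base=1.0, relaxed=3.0):
    """Context reasoning step: widen the gate inside occlusion zones."""
    return relaxed if in_occlusion_zone(predicted_pos) else base

def associate(predicted_pos, detections):
    """Accept the nearest detection if it falls within the context gate."""
    gate = gate_threshold(predicted_pos)
    best = min(detections, key=lambda d: math.dist(predicted_pos, d),
               default=None)
    if best and math.dist(predicted_pos, best) <= gate:
        return best
    return None  # coast the track; context says a miss is expected here

# Inside the zone the relaxed gate keeps the track alive.
print(associate((12.0, 11.0), [(13.8, 12.2)]))
```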