
    Big Data Model Simulation on a Graph Database for Surveillance in Wireless Multimedia Sensor Networks

    Sensors are present in various forms all around the world, such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most sensors are part of a larger system of similar sensors that compose a network. One such network is composed of millions of sensors connected to the Internet, which is called the Internet of Things (IoT). With advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components of the IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains such as fire detection, city surveillance and early warning systems. All of those applications position sensor nodes and collect their data over a long time period with real-time data flow, which is considered big data. Big data may be structured or unstructured and needs to be stored for further processing and analysis. Analyzing multimedia big data is a challenging task that requires high-level modeling to efficiently extract valuable information and knowledge from the data. In this study, we propose a big database model based on the graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and to store and query big data using the graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show which database systems are efficient and scalable for surveillance in wireless multimedia sensor networks.
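
    A minimal sketch of how such a graph model might be queried from Python, assuming a hypothetical schema in which Sensor nodes are linked to Observation nodes by OBSERVED relationships; the node labels, properties, credentials and time values below are illustrative assumptions, not taken from the paper:

        # Hedged illustration: query a Neo4j graph of wireless multimedia sensor data.
        # Assumed schema: (:Sensor {id, type})-[:OBSERVED]->(:Observation {timestamp, size}).
        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

        cypher = """
        MATCH (s:Sensor {type: 'camera'})-[:OBSERVED]->(o:Observation)
        WHERE o.timestamp >= $since
        RETURN s.id AS sensor, count(o) AS observations
        ORDER BY observations DESC
        """
        # A roughly equivalent relational form (MySQL) would need an explicit join:
        #   SELECT s.id, COUNT(*) FROM sensor s JOIN observation o ON o.sensor_id = s.id
        #   WHERE s.type = 'camera' AND o.timestamp >= %s GROUP BY s.id;

        with driver.session() as session:
            for record in session.run(cypher, since=1700000000):
                print(record["sensor"], record["observations"])
        driver.close()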

    Design and application of a multi-modal process tomography system

    This paper presents a design and application study of an integrated multi-modal system designed to support a range of common modalities: electrical resistance, electrical capacitance and ultrasonic tomography. Such a system is designed for use with complex processes that exhibit behaviour changes over time and space, and thus demand equally diverse sensing modalities. A multi-modal process tomography system able to exploit multiple sensor modes must permit the integration of their data, probably centred upon a composite process model. The paper presents an overview of this approach followed by an overview of the systems engineering and integrated design constraints. These include a range of hardware-oriented challenges: the complexity and specificity of the front-end electronics for each modality; the need for front-end data pre-processing and packing; the need to integrate the data to facilitate data fusion; and finally the features to enable successful fusion and interpretation. A range of software aspects are also reviewed: the need to support differing front-end sensors for each modality in a generic fashion; the need to communicate with front-end data pre-processing and packing systems; the need to integrate the data to allow data fusion; and finally the need to enable successful interpretation. The review of the system concepts is illustrated with an application to the study of a complex multi-component process.
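
    A hedged sketch, under assumptions not stated in the paper, of the kind of composite data structure that integrating the three modalities for later fusion might involve; field names, shapes and the toy normalisation are illustrative only:

        # Hedged illustration: a composite frame aligning three tomography modalities in time.
        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class MultiModalFrame:
            timestamp: float        # acquisition time shared by all modalities
            resistance: np.ndarray  # ERT measurement vector (e.g. electrode-pair voltages)
            capacitance: np.ndarray # ECT measurement vector
            ultrasound: np.ndarray  # UT time-of-flight matrix

        def fuse(frame: MultiModalFrame) -> np.ndarray:
            """Toy fusion step: stack normalised modality vectors for a downstream process model."""
            parts = [frame.resistance, frame.capacitance, frame.ultrasound.ravel()]
            return np.concatenate([(p - p.mean()) / (p.std() + 1e-9) for p in parts])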

    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated using the associated detections (tracking by detection) or by sampling the raw likelihood itself (tracking before detection). Utilizing the raw likelihood data has the advantage that lost targets are continuously tracked even if the camera or RADAR signal is below the detection threshold. We show that in single-target, uncluttered environments the proposed method entirely outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
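
    A hedged sketch of the geometry involved in back-projecting a camera detection onto a range-azimuth grid and combining it with the RADAR map; the pinhole camera model, average height, grid parameters and Gaussian widths below are assumptions for illustration, not the authors' calibration:

        # Hedged illustration: back-project a camera bounding box onto a RADAR range-azimuth grid
        # using an assumed average person height, then combine it with the RADAR likelihood map.
        import numpy as np

        AVG_HEIGHT_M = 1.7   # assumed average person height
        FOCAL_PX = 1000.0    # assumed focal length in pixels
        CX = 640.0           # assumed principal point (x) in pixels

        def bbox_to_range_azimuth(bbox):
            """bbox = (x_min, y_min, x_max, y_max) in pixels; returns (range_m, azimuth_rad)."""
            x_min, y_min, x_max, y_max = bbox
            pixel_height = y_max - y_min
            rng = AVG_HEIGHT_M * FOCAL_PX / pixel_height            # similar-triangles range estimate
            azimuth = np.arctan2((x_min + x_max) / 2.0 - CX, FOCAL_PX)
            return rng, azimuth

        def joint_likelihood(radar_ra, bbox, range_bins, azimuth_bins, sigma_r=1.0, sigma_a=0.05):
            """Weight the RADAR range-azimuth map with a Gaussian around the back-projected box."""
            rng, az = bbox_to_range_azimuth(bbox)
            r_grid, a_grid = np.meshgrid(range_bins, azimuth_bins, indexing="ij")
            camera_lik = np.exp(-0.5 * (((r_grid - rng) / sigma_r) ** 2 +
                                        ((a_grid - az) / sigma_a) ** 2))
            return radar_ra * camera_lik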

    LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System

    Collision avoidance is a critical task in many applications, such as ADAS (advanced driver-assistance systems), industrial automation and robotics. In an industrial automation setting, certain areas should be off limits to an automated vehicle for the protection of people and high-valued assets. These areas can be quarantined by mapping (e.g., GPS) or via beacons that delineate a no-entry area. We propose a delineation method in which the industrial vehicle uses a LiDAR (Light Detection and Ranging) sensor and a single color camera to detect passive beacons, and model-predictive control to stop the vehicle from entering a restricted space. The beacons are standard orange traffic cones with a highly reflective vertical pole attached. The LiDAR can readily detect these beacons, but suffers from false positives due to other reflective surfaces such as worker safety vests. Herein, we put forth a method for reducing false positive detections from the LiDAR by projecting the beacons into the camera imagery via a deep learning method and validating the detections using a neural-network-learned projection from the camera to the LiDAR space. Experimental data collected at Mississippi State University's Center for Advanced Vehicular Systems (CAVS) show the effectiveness of the proposed system in keeping the true detections while mitigating false positives.
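
    A minimal sketch of the false-positive filtering idea, assuming a learned (here, placeholder) projection from camera image coordinates onto the LiDAR ground plane and simple nearest-neighbour gating; the function names and gating threshold are illustrative, not the paper's implementation:

        # Hedged illustration: keep LiDAR beacon candidates only if a camera beacon detection,
        # mapped into LiDAR space by a learned projection, falls within a gating distance.
        import numpy as np

        def validate_lidar_candidates(lidar_xy, camera_dets, project_cam_to_lidar, gate_m=0.5):
            """
            lidar_xy:             (N, 2) LiDAR candidate positions on the ground plane.
            camera_dets:          list of image-space beacon detections, e.g. (u, v) pixel centers.
            project_cam_to_lidar: mapping (u, v) -> (x, y); a stand-in for the paper's
                                  neural-network-learned camera-to-LiDAR projection.
            Returns the subset of LiDAR candidates confirmed by the camera.
            """
            if len(camera_dets) == 0:
                return np.empty((0, 2))
            cam_xy = np.array([project_cam_to_lidar(u, v) for (u, v) in camera_dets])
            keep = [p for p in lidar_xy
                    if np.min(np.linalg.norm(cam_xy - p, axis=1)) < gate_m]
            return np.array(keep) if keep else np.empty((0, 2))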

    Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts

    The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on the integration of generic context sensors, in particular for location and visual context. We propose a different approach based on the integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects, and report experience from the development of a number of device prototypes. These include the development of an awareness module for augmenting a mobile phone, of the Mediacup exemplifying context-enabled everyday artefacts, and of the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness, and to develop new perspectives on how embedded context-awareness can be applied in mobile and ubiquitous computing.
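
    A hedged sketch of the general idea of deriving situational context from several simple sensors rather than from location or vision; the sensor readings, thresholds and context labels are invented for illustration and do not describe the Mediacup or Smart-Its prototypes:

        # Hedged illustration: derive a coarse situational context from diverse low-cost sensors.
        def infer_context(accel_var, light_lux, audio_level_db):
            """Map raw multi-sensor readings to a simple situational context label."""
            if accel_var > 2.0:
                return "in_motion"
            if light_lux < 5 and audio_level_db < 30:
                return "stationary_dark_quiet"   # e.g. device resting in a bag or drawer
            if audio_level_db > 70:
                return "noisy_environment"
            return "stationary_ambient"

        print(infer_context(accel_var=0.1, light_lux=2, audio_level_db=25))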

    Stereo and ToF Data Fusion by Learning from Synthetic Data

    Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper we present a novel framework for data fusion where the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused enforcing the local consistency of depth data, taking into account the estimated confidence information. The deep network is trained using a synthetic dataset and we show how the classifier is able to generalize to different data, obtaining reliable estimations not only on synthetic data but also on real-world scenes. Experimental results show that the proposed approach increases the accuracy of the depth estimation on both synthetic and real data and that it is able to outperform state-of-the-art methods.
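
    A minimal sketch of confidence-weighted depth fusion, assuming per-pixel confidence maps such as those a network could estimate; the weighted average with a small smoothing pass is only a stand-in for the paper's locally consistent fusion:

        # Hedged illustration: fuse ToF and stereo depth maps with per-pixel confidence weights.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_depth(depth_tof, depth_stereo, conf_tof, conf_stereo, smooth=3):
            """Per-pixel confidence-weighted average of two depth maps, with local smoothing
            of the confidence weights as a crude proxy for enforcing local consistency."""
            w_tof = uniform_filter(conf_tof, smooth)
            w_stereo = uniform_filter(conf_stereo, smooth)
            return (w_tof * depth_tof + w_stereo * depth_stereo) / (w_tof + w_stereo + 1e-9)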