
    Smart Sensor Technologies for IoT

    Recent developments in wireless networks and devices have led to novel services that will utilize wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent a new trend in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions for position estimation should be investigated and implemented in IoT applications. This Special Issue, "Smart Sensor Technologies for IoT", aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of Smart Sensor Technologies for IoT.

    Simultaneous 3D object tracking and camera parameter estimation by Bayesian methods and transdimensional MCMC sampling

    Multi-camera 3D tracking systems with overlapping cameras represent a powerful means for scene analysis, as they potentially allow greater robustness than monocular systems and provide useful 3D information about object location and movement. However, their performance relies on accurately calibrated camera networks, which is not a realistic assumption in real surveillance environments. Here, we introduce a multi-camera system for tracking the 3D position of a varying number of objects while simultaneously refining the calibration of the network of overlapping cameras. To this end, we introduce a Bayesian framework that combines Particle Filtering for tracking with recursive Bayesian estimation methods by means of adapted transdimensional MCMC sampling. Additionally, the system has been designed to work on simple motion detection masks, making it suitable for camera networks with low transmission capabilities. Tests show that our approach performs successfully even when starting from clearly inaccurate camera calibrations, which would ruin conventional approaches.
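    The particle filtering component referenced above can be illustrated with a small sketch. The following is not the paper's implementation: it is a minimal bootstrap particle-filter step for a single object's 3D position observed as 2D detections in calibrated cameras, with the class name, motion model, and noise levels chosen purely for illustration.

```python
# Minimal illustrative sketch (not the paper's algorithm): one bootstrap
# particle-filter step for a single object's 3D position, weighted by
# reprojection agreement across several calibrated cameras.
import numpy as np

def project(P, x):
    """Project a 3D point x with a 3x4 camera matrix P to pixel coordinates."""
    xh = P @ np.append(x, 1.0)
    return xh[:2] / xh[2]

class PF3DTracker:
    def __init__(self, n_particles=500, init_pos=(0.0, 0.0, 0.0)):
        self.particles = np.random.normal(init_pos, 0.5, size=(n_particles, 3))
        self.weights = np.full(n_particles, 1.0 / n_particles)

    def step(self, cameras, detections, motion_std=0.1, pixel_std=5.0):
        """cameras: list of 3x4 matrices; detections: matching list of 2D points."""
        # Predict: random-walk motion model.
        self.particles += np.random.normal(0.0, motion_std, self.particles.shape)
        # Update: weight each particle by reprojection agreement in every view.
        log_w = np.zeros(len(self.particles))
        for P, z in zip(cameras, detections):
            proj = np.array([project(P, p) for p in self.particles])
            log_w += -0.5 * np.sum((proj - np.asarray(z)) ** 2, axis=1) / pixel_std ** 2
        w = np.exp(log_w - log_w.max())
        self.weights = w / w.sum()
        estimate = self.weights @ self.particles  # posterior-mean 3D position
        # Resample (systematic) to avoid weight degeneracy.
        u = (np.arange(len(w)) + np.random.rand()) / len(w)
        idx = np.minimum(np.searchsorted(np.cumsum(self.weights), u), len(w) - 1)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / len(w))
        return estimate
```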

    A Study on Information-Centric IoT Surveillance Systems

    Waseda University degree certificate number: Shin 8269 (Waseda University)

    Intelligent Navigation for a Solar Powered Unmanned Underwater Vehicle

    In this paper, an intelligent navigation system is proposed for an unmanned underwater vehicle powered by renewable energy and designed for shallow water inspection on missions of long duration. The system is composed of an underwater vehicle that tows a surface vehicle. The surface vehicle is a small boat with photovoltaic panels, a methanol fuel cell and communication equipment, which provides energy and communication to the underwater vehicle. The underwater vehicle has sensors to monitor the underwater environment, such as sidescan sonar and a video camera in a flexible configuration, and sensors to measure the physical and chemical parameters of water quality along predefined paths over long distances. The underwater vehicle implements a biologically inspired neural architecture for autonomous intelligent navigation. Navigation is carried out by integrating a kinematic adaptive neuro-controller for trajectory tracking and an obstacle-avoidance adaptive neuro-controller. The autonomous underwater vehicle is capable of operating during long periods of observation and monitoring. This autonomous vehicle is a good tool for observing large areas of sea, since it operates for long periods of time thanks to the contribution of renewable energy, and it correlates all sensor data with time and geodetic position. This vehicle has been used for monitoring the Mar Menor lagoon. Supported by the Coastal Monitoring System for the Mar Menor (CMS-463.01.08_CLUSTER) project funded by the Regional Government of Murcia, by the SICUVA project (Control and Navigation System for AUV Oceanographic Monitoring Missions, REF: 15357/PI/10) funded by the Seneca Foundation of the Regional Government of Murcia, and by the DIVISAMOS project (Design of an Autonomous Underwater Vehicle for Inspections and Oceanographic Missions, UPCT: DPI-2009-14744-C03-02) funded by the Spanish Ministry of Science and Innovation.
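    As a rough illustration of the trajectory-tracking task handled by the kinematic neuro-controller mentioned above, the sketch below uses a plain proportional law on a planar unicycle-like model. It is an assumption-laden stand-in, not the biologically inspired architecture from the paper; the gains, time step, and vehicle model are illustrative only.

```python
# Illustrative sketch only: a simple proportional kinematic controller that
# steers a planar unicycle-like vehicle toward a waypoint. The paper's system
# uses an adaptive neuro-controller; this just shows the tracking idea.
import math

def kinematic_step(x, y, yaw, wx, wy, k_v=0.5, k_w=1.5, dt=0.1):
    """One control step toward waypoint (wx, wy); returns the updated pose."""
    dx, dy = wx - x, wy - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - yaw
    # Wrap the heading error to [-pi, pi].
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = k_v * distance       # surge speed command
    w = k_w * heading_error  # yaw-rate command
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += w * dt
    return x, y, yaw
```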

    Hierarchical Feature Learning

    The success of many tasks depends on good feature representation, which is often domain-specific and hand-crafted, requiring substantial human effort. Such feature representation is not general, i.e. unsuitable for even the same task across multiple domains, let alone different tasks. To address these issues, a multilayered convergent neural architecture is presented for learning from repeating spatially and temporally coincident patterns in data at multiple levels of abstraction. The bottom-up weights in each layer are learned to encode a hierarchy of overcomplete and sparse feature dictionaries from space- and time-varying sensory data. Two algorithms are investigated for learning feature hierarchies: recursive layer-by-layer spherical clustering and sparse coding. The model scales to full-sized high-dimensional input data and to an arbitrary number of layers, thereby having the capability to capture features at any level of abstraction. The model learns features that correspond to objects in higher layers and to object parts in lower layers. Learning features invariant to arbitrary transformations in the data is a requirement for any effective and efficient representation system, biological or artificial. Each layer in the proposed network is composed of simple and complex sublayers motivated by the layered organization of the primary visual cortex. When exposed to natural videos, the model develops simple and complex cell-like receptive field properties. The model can predict by learning lateral connections among the simple sublayer neurons. A topographic map of their spatial features emerges by minimizing wiring length simultaneously with feature learning. The model is general-purpose, unsupervised and online. Operations in each layer of the model can be implemented in parallelized hardware, making it very efficient for real-world applications.
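    A minimal sketch of the layer-by-layer spherical clustering idea mentioned above is given below, assuming a single layer trained on unit-normalised patches with spherical k-means and encoded by rectified cosine similarities. The function names, dictionary size, and thresholded encoding are illustrative assumptions rather than the paper's exact algorithm.

```python
# Sketch of one layer of dictionary learning by spherical clustering, under
# the stated assumptions (not the paper's full hierarchical architecture).
import numpy as np

def spherical_kmeans(patches, n_atoms=64, n_iter=20):
    """patches: (n_samples, dim) rows; returns a unit-norm dictionary (n_atoms, dim)."""
    X = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    D = X[np.random.choice(len(X), n_atoms, replace=False)]  # random init
    for _ in range(n_iter):
        assign = (X @ D.T).argmax(axis=1)        # nearest atom by cosine similarity
        for k in range(n_atoms):
            members = X[assign == k]
            if len(members):
                c = members.sum(axis=0)
                D[k] = c / (np.linalg.norm(c) + 1e-8)  # renormalise the centroid
    return D

def encode(patches, D, threshold=0.5):
    """Sparse bottom-up code: rectified similarities above a threshold."""
    X = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    return np.maximum(X @ D.T - threshold, 0.0)
```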

    Defect and thickness inspection system for cast thin films using machine vision and full-field transmission densitometry

    Quick mass production of homogeneous thin film material is required in the paper, plastic, fabric, and thin film industries. Due to the high feed rates and small thicknesses, machine vision and other nondestructive evaluation techniques are used to ensure consistent, defect-free material by continuously assessing post-production quality. One of the fastest growing inspection areas is for thin films 0.5-500 micrometers thick, which are used for semiconductor wafers, amorphous photovoltaics, optical films, plastics, and organic and inorganic membranes. As a demonstration application, a prototype roll-feed imaging system has been designed to inspect high-temperature polymer electrolyte membrane (PEM), used for fuel cells, after being die cast onto a moving transparent substrate. The inspection system continuously detects thin film defects and classifies them with a neural network into categories of holes, bubbles, thinning, and gels, with a 1.2% false alarm rate, a 7.1% escape rate, and a classification accuracy of 96.1%. In slot die casting processes, defect types are indicative of an imbalance between the mass flow rate and the web speed; so, based on the classified defects, the inspection system informs the operator of corrective adjustments to these manufacturing parameters. Thickness uniformity is also critical to membrane functionality, so a real-time, full-field transmission densitometer has been created to measure the bi-directional thickness profile of the semi-transparent PEM between 25 and 400 micrometers. The local thickness of the 75 mm x 100 mm imaged area is determined by converting the optical density of the sample to thickness with the Beer-Lambert law. The PEM extinction coefficient is determined to be 1.4 D/mm and the average thickness error is found to be 4.7%. Finally, the defect inspection and thickness profilometry systems are compiled into a specially-designed graphical user interface for intuitive real-time operation and visualization. M.S. Committee Chair: Tequila Harris; Committee Member: Levent Degertekin; Committee Member: Wayne Dale
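    The Beer-Lambert conversion described above can be written compactly: optical density OD = log10(I0/I) equals the extinction coefficient times thickness, so thickness = OD / 1.4 D/mm for this PEM. The short sketch below assumes hypothetical sample and bare-substrate reference images and omits the system's calibration details.

```python
# Sketch of the Beer-Lambert thickness conversion mentioned in the abstract.
# The 1.4 D/mm extinction coefficient comes from the abstract; the image
# handling and function name are illustrative assumptions.
import numpy as np

EXTINCTION_D_PER_MM = 1.4  # reported PEM extinction coefficient

def thickness_map_mm(sample_image, reference_image):
    """Per-pixel thickness in mm from transmission images of sample and bare substrate."""
    I = np.clip(sample_image.astype(float), 1e-6, None)
    I0 = np.clip(reference_image.astype(float), 1e-6, None)
    optical_density = np.log10(I0 / I)            # Beer-Lambert: OD = eps * t
    return optical_density / EXTINCTION_D_PER_MM  # t = OD / eps, in millimetres

# Worked example: an OD of 0.14 corresponds to 0.14 / 1.4 = 0.1 mm (100 micrometers).
```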

    An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

    Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as conventional video and computer vision systems do. In Event-Driven sensors, each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events that represents reality dynamically and continuously, without being constrained to frames. In this paper, we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many of them can be assembled to build modular and hierarchical Convolutional Neural Networks for robust shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability; that is, it selects the convolution kernel depending on the origin of the event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process, and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2,000 revolutions per second, detect symbols on a 52-card deck when browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz. Unión Europea 216777 (NABAB). Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
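    The event-driven, multi-kernel operation described above can be sketched in software (the actual module is a CMOS chip): each incoming address event selects a kernel by its origin and adds that kernel to an integrate-and-fire neuron array centred on the event address; neurons crossing threshold emit output events and reset. The array size, square odd-sized kernels, threshold, and leak-free dynamics below are simplifying assumptions.

```python
# Software sketch of event-driven multi-kernel convolution under the stated
# assumptions; the paper's module is a hardware implementation of this idea.
import numpy as np

class EventConvModule:
    def __init__(self, shape=(128, 128), kernels=None, threshold=1.0):
        self.state = np.zeros(shape)     # integrate-and-fire neuron array
        self.kernels = kernels or {}     # origin id -> square odd-sized 2D kernel
        self.threshold = threshold

    def on_event(self, x, y, origin):
        """Process one input address event; return output events that fired."""
        k = self.kernels[origin]         # multi-kernel: pick kernel by event origin
        r = k.shape[0] // 2
        x0, x1 = max(x - r, 0), min(x + r + 1, self.state.shape[0])
        y0, y1 = max(y - r, 0), min(y + r + 1, self.state.shape[1])
        # Accumulate the kernel patch centred on the event address (clipped at borders).
        self.state[x0:x1, y0:y1] += k[x0 - (x - r):x1 - (x - r),
                                      y0 - (y - r):y1 - (y - r)]
        # Fire and reset any neurons that crossed threshold.
        fired = np.argwhere(self.state >= self.threshold)
        self.state[self.state >= self.threshold] = 0.0
        return [tuple(p) for p in fired]  # output address events
```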

    Activity understanding and unusual event detection in surveillance videos

    PhD. Computer scientists have made ceaseless efforts to replicate the cognitive video understanding abilities of human brains on autonomous vision systems. As video surveillance cameras become ubiquitous, there has been a surge in studies on automated activity understanding and unusual event detection in surveillance videos. Nevertheless, video content analysis in public scenes remains a formidable challenge due to intrinsic difficulties such as severe inter-object occlusion in crowded scenes and the poor quality of recorded surveillance footage. Moreover, it is nontrivial to achieve robust detection of unusual events, which are rare, ambiguous, and easily confused with noise. This thesis proposes solutions for resolving ambiguous visual observations and overcoming the unreliability of conventional activity analysis methods by exploiting multi-camera visual context and human feedback. The thesis first demonstrates the importance of learning visual context for establishing reliable reasoning about observed activity in a camera network. In the proposed approach, a new Cross Canonical Correlation Analysis (xCCA) is formulated to discover and quantify time delayed pairwise correlations of regional activities observed within and across multiple camera views. This thesis shows that learning time delayed pairwise activity correlations offers valuable contextual information for (1) spatial and temporal topology inference of a camera network, (2) robust person re-identification, and (3) accurate activity-based video temporal segmentation. Crucially, in contrast to conventional methods, the proposed approach does not rely on either intra-camera or inter-camera object tracking; it can thus be applied to low-quality surveillance videos featuring severe inter-object occlusions. Second, to detect global unusual events across multiple disjoint cameras, this thesis extends visual context learning from pairwise relationships to global time delayed dependencies between regional activities. Specifically, a Time Delayed Probabilistic Graphical Model (TD-PGM) is proposed to model the multi-camera activities and their dependencies. Subtle global unusual events are detected and localised using the model as context-incoherent patterns across multiple camera views. In the model, different nodes represent activities in different decomposed regions from different camera views, and the directed links between nodes encode time delayed dependencies between activities observed within and across camera views. In order to learn optimised time delayed dependencies in a TD-PGM, a novel two-stage structure learning approach is formulated by combining constraint-based and score-based structure learning methods. Third, to cope with visual context changes over time, this two-stage structure learning approach is extended to permit tractable incremental updates of both the TD-PGM parameters and its structure. As opposed to most existing studies that assume a static model once learned, the proposed incremental learning allows a model to adapt itself to reflect changes in the current visual context, such as subtle behaviour drift over time or the removal or addition of cameras. Importantly, the incremental structure learning is achieved without either exhaustive search in a large graph structure space or storing all past observations in memory, making the proposed solution memory and time efficient. Fourth, an active learning approach is presented to incorporate human feedback for on-line unusual event detection. Contrary to most existing unsupervised methods that perform passive mining for unusual events, the proposed approach automatically requests supervision for critical points to resolve ambiguities of interest, leading to more robust detection of subtle unusual events. The active learning strategy is formulated as a stream-based solution, i.e. it decides on-the-fly whether to request a label for each unlabelled sample observed in sequence. It adaptively selects between two active learning criteria, namely a likelihood criterion and an uncertainty criterion, to achieve (1) discovery of unknown event classes and (2) refinement of the classification boundary. The effectiveness of the proposed approaches is validated using videos captured from busy public scenes such as underground stations and traffic intersections.
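    The stream-based active learning decision outlined above can be illustrated with a small sketch, assuming a generic probabilistic classifier interface: a label is requested when either the likelihood criterion flags a sample as poorly explained by the current model (a candidate unknown event class) or the uncertainty criterion finds the top-two posterior margin too small (a sample near the classification boundary). The thresholds and names are illustrative assumptions, not the thesis's exact formulation.

```python
# Hedged sketch of a stream-based active learning decision combining a
# likelihood criterion and an uncertainty (margin) criterion.
import numpy as np

def should_request_label(log_likelihood, class_posteriors,
                         lik_threshold=-10.0, margin_threshold=0.1):
    """Return True if a human label should be requested for this sample."""
    # Likelihood criterion: candidate for a previously unseen event class.
    if log_likelihood < lik_threshold:
        return True
    # Uncertainty criterion: the top-two posterior margin is small.
    top2 = np.sort(np.asarray(class_posteriors))[-2:]
    return (top2[1] - top2[0]) < margin_threshold
```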