
    Integrating multiple sensor modalities for environmental monitoring of marine locations

    In this paper we present preliminary work on integrating visual sensing with the more traditional sensing modalities used at marine locations. We have deployed visual sensing at one of the Smart Coast WSN sites in Ireland and have built a software platform for gathering and synchronizing all sensed data. We describe how the analysis of a range of different sensor modalities can reinforce readings from a given noisy, unreliable sensor.
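
    The abstract does not give the fusion rule, but the idea of reinforcing a noisy sensor with correlated modalities can be sketched as a simple consensus check. Everything below (the median consensus, the tolerance, the example values) is an illustrative assumption, not the paper's method.

```python
# Hypothetical sketch of cross-modal reinforcement: a noisy sensor reading is
# accepted only if it is consistent with a consensus over correlated modalities.
from statistics import median

def reinforce_reading(noisy_value, correlated_values, tolerance=0.2):
    """noisy_value       -- latest reading from the unreliable sensor
    correlated_values -- synchronized readings from other modalities,
                         already mapped onto the same scale
    tolerance         -- maximum relative deviation from the consensus"""
    consensus = median(correlated_values)
    if consensus and abs(noisy_value - consensus) / abs(consensus) > tolerance:
        return consensus  # reading looks like an outlier; fall back to consensus
    return noisy_value

# e.g. turbidity estimated from a camera vs. two in-water sensors
print(reinforce_reading(9.5, [4.1, 4.3]))   # -> 4.2 (rejected as outlier)
print(reinforce_reading(4.0, [4.1, 4.3]))   # -> 4.0 (accepted)
```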

    Selecting source image sensor nodes based on 2-hop information to improve image transmissions to mobile robot sinks in search & rescue operations

    We consider robot-assisted Search & Rescue operations enhanced with fixed image sensor nodes capable of capturing and sending visual information to a robot sink. To increase the performance of image transfer from the image sensor nodes to the robot sink, we propose a cover set selection based on 2-hop neighborhood information to determine the most relevant image sensor nodes to activate. To stay consistent with this approach, we also propose a multi-path extension of Greedy Perimeter Stateless Routing (called T-GPSR) in which routing decisions are likewise based on 2-hop neighborhood information. Simulation results show that our proposal reduces packet losses, enabling faster packet delivery and higher visual quality of the received images at the robot sink.
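
    The paper's exact selection criterion is not in the abstract; the sketch below shows one plausible greedy cover-set selection in its spirit, preferring nodes that cover many remaining targets and, among equals, nodes with richer 2-hop neighborhoods (more relay options toward the sink). Data structures and tie-breaking are illustrative assumptions.

```python
# Hypothetical greedy cover-set selection guided by 2-hop neighborhood size.
def select_cover_set(coverage, two_hop_neighbors, targets):
    """coverage: node -> set of target ids the node's camera covers
    two_hop_neighbors: node -> set of nodes reachable in <= 2 hops
    targets: set of target ids that must be covered"""
    uncovered, active = set(targets), []
    while uncovered:
        # prefer nodes covering many remaining targets; break ties by
        # 2-hop neighborhood size (more routing choices toward the sink)
        best = max(coverage, key=lambda n: (len(coverage[n] & uncovered),
                                            len(two_hop_neighbors[n])))
        if not coverage[best] & uncovered:
            break  # remaining targets cannot be covered by any node
        active.append(best)
        uncovered -= coverage[best]
    return active

cov = {"A": {1, 2}, "B": {2, 3}, "C": {3}}
nbr = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(select_cover_set(cov, nbr, {1, 2, 3}))  # -> ['B', 'A']
```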

    Modeling Camera Effects to Improve Visual Learning from Synthetic Data

    Recent work has focused on generating synthetic imagery to increase the size and variability of training data for learning visual tasks in urban scenes, for example by increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling variation in the sensor domain. Sensor effects can degrade real images, limiting the generalizability of networks trained on synthetic data and tested in real environments. This paper proposes an efficient, automatic, physically-based augmentation pipeline that varies sensor effects (chromatic aberration, blur, exposure, noise, and color cast) in synthetic imagery. In particular, we show that augmenting synthetic training datasets with the proposed pipeline reduces the domain gap between synthetic and real data for object detection in urban driving scenes.
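
    A rough sketch of the five named sensor effects, written with plain NumPy. The specific parameter ranges and the simplified implementations (channel shifts, a box blur, global gains, additive Gaussian noise) are illustrative assumptions, not the paper's calibrated, physically-based models.

```python
import numpy as np

def augment_sensor_effects(img, rng):
    """img: float32 RGB image in [0, 1] with shape (H, W, 3)."""
    out = img.copy()
    # chromatic aberration: shift red/blue channels in opposite directions
    out[..., 0] = np.roll(out[..., 0], 1, axis=1)
    out[..., 2] = np.roll(out[..., 2], -1, axis=1)
    # blur: 3x3 box filter applied per channel
    k = 1
    padded = np.pad(out, ((k, k), (k, k), (0, 0)), mode="edge")
    out = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))[k:-k, k:-k] / 9.0
    # exposure: random global gain
    out *= rng.uniform(0.7, 1.3)
    # sensor noise: additive Gaussian
    out += rng.normal(0.0, 0.02, out.shape)
    # color cast: small independent per-channel gain
    out *= rng.uniform(0.9, 1.1, size=3)
    return np.clip(out, 0.0, 1.0).astype(np.float32)

rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3), dtype=np.float32)
print(augment_sensor_effects(frame, rng).shape)  # (64, 64, 3)
```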

    Multi-sensor fire detection by fusing visual and non-visual flame features

    This paper proposes a feature-based multi-sensor fire detector operating on ordinary video and long wave infrared (LWIR) thermal images. The detector automatically extracts hot objects from the thermal images by dynamic background subtraction and histogram-based segmentation. Analogously, moving objects are extracted from the ordinary video by intensity-based dynamic background subtraction. These hot and moving objects are then further analyzed using a set of flame features that focus on the distinctive geometric, temporal, and spatial disorder characteristics of flame regions. By combining the probabilities of these fast retrievable visual and thermal features, we are able to detect fire at an early stage. Experiments with video and LWIR sequences of fire and non-fire real-case scenarios show good results and indicate that multi-sensor fire analysis is very promising.
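
    The abstract says per-feature probabilities are combined but not how; one common way to fuse such scores is a naive-Bayes style odds product, sketched below. The feature names, probability values, independence assumption, and threshold are all illustrative, not the paper's exact formulation.

```python
import math

def fuse_fire_probabilities(feature_probs, threshold=0.9):
    """feature_probs: per-feature P(fire | feature), assumed independent.
    Fuses them via a log-odds sum (uniform prior), i.e. naive Bayes."""
    log_odds = sum(math.log(p / (1.0 - p)) for p in feature_probs.values())
    fused = 1.0 / (1.0 + math.exp(-log_odds))
    return fused, fused >= threshold

probs = {
    "thermal_hot_object": 0.80,   # from LWIR background subtraction
    "visual_motion":      0.60,   # from intensity-based background subtraction
    "geometric_disorder": 0.70,   # flame-contour irregularity
    "temporal_disorder":  0.75,   # flicker over time
}
print(fuse_fire_probabilities(probs))  # -> (~0.98, True)
```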

    Human mobility monitoring in very low resolution visual sensor network

    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements, and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics such as the total distance traveled and the average speed derived from the trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
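
    The two mobility statistics named in the abstract are straightforward to compute from a trajectory; a small sketch follows. The trajectory format (timestamped 2-D positions in metres) is an assumption.

```python
import math

def mobility_stats(trajectory):
    """trajectory: list of (t_seconds, x_m, y_m) samples, time-ordered.
    Returns (total distance travelled in m, average speed in m/s)."""
    total = sum(math.hypot(x2 - x1, y2 - y1)
                for (_, x1, y1), (_, x2, y2) in zip(trajectory, trajectory[1:]))
    duration = trajectory[-1][0] - trajectory[0][0]
    return total, (total / duration if duration > 0 else 0.0)

track = [(0.0, 0.0, 0.0), (1.0, 0.6, 0.8), (2.0, 1.2, 1.6)]
distance, speed = mobility_stats(track)
print(f"{distance:.1f} m, {speed:.1f} m/s")  # 2.0 m, 1.0 m/s
```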

    Daylight adaptive optimal lighting system control strategies for energy savings and visual comfort in commercial buildings

    Artificial lighting in commercial buildings in Malaysia consumes 21% of the total electrical energy. Reducing this energy use is therefore required to achieve sustainable buildings (i.e., higher energy efficiency and visual comfort), which can be done by implementing an optimal light sensor placement method and an optimisation-based control strategy. However, recent works on light sensor placement do not consider energy performance or illuminance uniformity (Uo), and their results do not provide the optimal number of sensors to employ. To optimise power consumption (PC) and visual comfort simultaneously through an optimisation-based control strategy, previous work developed a visual comfort model to represent Uo; however, that model did not consider daylight, and its Uo results need further improvement. This research proposes: (1) a new optimal light sensor placement method (OLSPM) combining particle swarm optimisation (PSO) and a fuzzy logic controller (FLC), denoted OLSPM-PSOFLC, and (2) a new visual comfort metric called the illuminance uniformity deviation index (IUDI), incorporated into multi-objective PSO (MOPSO) to solve the joint energy consumption and visual comfort problem. OLSPM-PSOFLC determines the optimal number and positions of light sensors by considering PC while satisfying the average illuminance level (Eav) and Uo. To keep both PC and Uo in the room at optimal levels, the IUDI is used with MOPSO. Before the proposed methods are applied, the lighting system is first retrofitted to determine the best lamp technology in terms of technical and economic metrics. An actual office room is used to carry out the proposed methods. Comparative results showed that OLSPM-PSOFLC significantly reduced the number of sensors, energy consumption, carbon dioxide emission, and life cycle cost by 66%, 23%, 23%, and 30%, respectively, while also shortening the payback period, compared to the multi-sensor configuration. Meanwhile, in a comparative study of the IUDI and the coefficient of variation of the root mean square error (CVRMSE), the IUDI showed superior performance, improving Uo by 6% and energy savings by 27%. Given this superiority, the newly developed methods can potentially be applied to all types of rooms and are useful methodologies towards sustainable commercial buildings.
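
    The abstract builds on the standard uniformity ratio Uo = Emin / Eav; a quick sketch of how Uo and the average illuminance Eav are computed over a grid of workplane measurement points is given below. (The paper's IUDI metric itself is not defined in the abstract, so it is not reproduced here; the sample values are made up.)

```python
def illuminance_metrics(lux_grid):
    """lux_grid: flat list of illuminance samples (lux) over the workplane."""
    e_av = sum(lux_grid) / len(lux_grid)      # average illuminance Eav
    u_o = min(lux_grid) / e_av                # uniformity Uo = Emin / Eav
    return e_av, u_o

samples = [420, 480, 510, 450, 390, 465]
e_av, u_o = illuminance_metrics(samples)
print(f"Eav = {e_av:.0f} lux, Uo = {u_o:.2f}")  # Eav = 452 lux, Uo = 0.86
```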

    Programmable active pixel sensor to investigate neural interactions within the retina

    Detection of the visual scene by the eye and the resultant neural interactions of the retina-brain system give us our perception of sight. We have developed an Active Pixel Sensor (APS) to be used both as a tool for furthering our understanding of these interactions through experimentation with the retina and as a step towards a realisable retinal prosthesis. The sensor consists of 469 pixels in a hexagonal array. The pixels are interconnected by a programmable neural network to mimic the lateral interactions between retinal cells. Outputs from the sensor take the form of biphasic current pulse trains suitable for stimulating retinal cells via a biocompatible array. The APS is described together with initial characterisation and test results.
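
    For intuition about the output waveform: a biphasic, charge-balanced pulse is a cathodic phase followed by an equal-and-opposite anodic (charge-recovery) phase, repeated at the stimulation rate. The sketch below generates such a train; the amplitudes, phase widths, and rates are illustrative assumptions, not the sensor's actual specifications.

```python
import numpy as np

def biphasic_pulse_train(amp_ua=50.0, phase_us=200.0, gap_us=50.0,
                         rate_hz=50.0, duration_s=0.1, fs_hz=1e6):
    """Return (t, i) arrays: time in s, stimulation current in microamps."""
    t = np.arange(0.0, duration_s, 1.0 / fs_hz)
    i = np.zeros_like(t)
    phase_s, gap_s = phase_us * 1e-6, gap_us * 1e-6
    offset = np.mod(t, 1.0 / rate_hz)  # time since the start of each pulse
    i[offset < phase_s] = -amp_ua                       # cathodic phase
    anodic = (offset >= phase_s + gap_s) & (offset < 2 * phase_s + gap_s)
    i[anodic] = amp_ua                                  # charge-recovery phase
    return t, i

t, i = biphasic_pulse_train()
print(f"{(i < 0).sum()} cathodic and {(i > 0).sum()} anodic samples")
```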

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128×64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 µW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to that of RGB image sensors is achieved under nominal lighting conditions, while consuming an average power between 193 µW and 277 µW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, yet achieves 19x lower power consumption than MCU-based cameras with significantly lower on-board computing capabilities.
    Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
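
    The wake-up logic reduces to a threshold on the changed-pixel count the imager reports each frame; a minimal sketch follows. The 2% threshold is an illustrative assumption for the 128×64 binary imager, not a figure from the paper.

```python
# Sketch of the smart-triggering idea: wake the parallel processing unit only
# when enough binary pixels changed between consecutive frames.
FRAME_PIXELS = 128 * 64
WAKE_THRESHOLD = int(0.02 * FRAME_PIXELS)  # assumed: 2% of pixels changed

def should_wake(changed_pixel_count):
    """Decide whether to wake the processing unit for this frame."""
    return changed_pixel_count >= WAKE_THRESHOLD

for count in (12, 90, 400):
    print(count, should_wake(count))  # 12 False, 90 False, 400 True
```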