
    Human mobility monitoring in very low resolution visual sensor network

    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30×30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements, and power consumption. The core of the proposed system is a robust people tracker that operates on the low resolution video provided by the visual sensor network. The distributed processing architecture of the tracking system allows all image processing tasks to be performed on the digital signal controller in each visual sensor. We experimentally show that reliable tracking of people is possible using very low resolution imagery, and that our tracker outperforms a state-of-the-art tracking method. Moreover, mobility statistics such as total distance traveled and average speed derived from the trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to yield useful mobility statistics.
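The mobility statistics named above (total distance traveled and average speed) follow directly from a tracked trajectory. A minimal sketch in Python, assuming trajectories arrive as per-frame (x, y) positions in metres and a hypothetical frame rate — the abstract does not specify either detail:

```python
import math

def mobility_stats(trajectory, fps=10.0):
    """Total distance (m) and average speed (m/s) from a 2-D trajectory.

    trajectory: list of (x, y) positions in metres, one per frame.
    fps: frame rate of the sensor network (hypothetical value).
    """
    # Sum the straight-line distance between consecutive positions.
    total = sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    duration = (len(trajectory) - 1) / fps  # elapsed time in seconds
    avg_speed = total / duration if duration > 0 else 0.0
    return total, avg_speed
```

The same two numbers can be computed from the Ultra-Wide Band ground-truth positions, making the comparison described in the abstract a direct one.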

    Exploring Deep Neural Network Models for Classification of High-resolution Panoramas

    The objective of this thesis is to explore deep learning algorithms for classifying high-resolution images. While most deep learning algorithms focus on relatively low-resolution imagery (under 400×400 pixels), very high-resolution image classification poses unique challenges. Such images occur in pathology and remote sensing, but here we focus on the classification of invasive plant species. We aimed to develop a computer vision system that can provide the geo-coordinates of invasive plants by processing Google Street View images using finite computational resources. We explore and compare six methods for classifying these images. Our results could significantly impact the management of invasive plant species, which pose both economic and ecological threats.
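One common way to handle images far larger than the typical sub-400×400-pixel CNN input is to split them into fixed-size tiles and classify each tile independently. The thesis compares six methods and does not necessarily use this one; the sketch below only illustrates the tiling idea, with hypothetical tile and stride sizes:

```python
def tile_coords(width, height, tile=400, stride=400):
    """Return (left, top, right, bottom) boxes covering a large image.

    tile/stride are hypothetical; an overlapping grid (stride < tile)
    avoids cutting objects at tile boundaries at the cost of more tiles.
    Edge tiles are clipped to the image bounds.
    """
    boxes = []
    for top in range(0, height, stride):
        for left in range(0, width, stride):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes
```

Each box would then be cropped and fed to the classifier, and per-tile predictions aggregated back to a decision (and geo-coordinate) for the whole panorama.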

    Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network

    The detection performance of small objects in remote sensing images is not satisfactory compared to that of large objects, especially in low-resolution and noisy images. A generative adversarial network (GAN)-based model called enhanced super-resolution GAN (ESRGAN) shows remarkable image enhancement performance, but its reconstructed images miss high-frequency edge information, so object detection performance degrades for small objects in recovered noisy and low-resolution remote sensing images. Inspired by the success of edge-enhanced GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN (EESRGAN) to improve the quality of remote sensing images and use different detector networks in an end-to-end manner, where the detector loss is backpropagated into the EESRGAN to improve detection performance. We propose an architecture with three components: ESRGAN, an Edge Enhancement Network (EEN), and a detection network. We use residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN; for the detector network, we use the faster region-based convolutional network (FRCNN, a two-stage detector) and the single-shot multi-box detector (SSD, a one-stage detector). Extensive experiments on a public dataset (cars overhead with context) and a self-assembled satellite dataset (oil and gas storage tanks) show the superior performance of our method compared to standalone state-of-the-art object detectors. (27 pages; accepted for publication in the MDPI journal Remote Sensing. Implementation: https://github.com/Jakaria08/EESRGAN)
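The Edge Enhancement Network targets exactly the high-frequency edge information that plain ESRGAN reconstructions miss. As a rough illustration of what an edge map is (not the EEN itself, which is a learned RRDB network), a 3×3 Laplacian filter extracts that high-frequency structure from an intensity grid:

```python
def laplacian_edges(img):
    """Edge response of a grayscale image via a 3x3 Laplacian kernel.

    img: 2-D list of pixel intensities. The Laplacian responds strongly
    where intensity changes sharply (edges) and is zero in flat regions.
    Border pixels are left at zero for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (4 * img[y][x]
                         - img[y - 1][x] - img[y + 1][x]
                         - img[y][x - 1] - img[y][x + 1])
    return out
```

In the paper's pipeline, preserving this kind of edge signal in the super-resolved image is what lets the downstream FRCNN or SSD detector localize small objects more reliably.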

    Featureless visual processing for SLAM in changing outdoor environments

    Vision-based SLAM is mostly a solved problem, provided clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds, and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenges of a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day and show that it is still able to localize and map at other times. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
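Appearance-based localization of the RatSLAM kind compares whole low-resolution images rather than extracted features. A toy sketch of that idea, assuming scenes are flattened lists of normalized pixel intensities and a hypothetical novelty threshold (neither detail is specified in the abstract):

```python
def scene_difference(a, b):
    """Mean absolute pixel difference between two equal-length scenes."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def best_match(query, templates, threshold=0.1):
    """Index of the best-matching stored scene, or None if the query
    looks novel (difference above the threshold) and should be learned
    as a new scene. Whole-image matching needs no feature extraction,
    which is what makes it robust to blur and poor image quality.
    """
    diffs = [scene_difference(query, t) for t in templates]
    best = min(range(len(diffs)), key=diffs.__getitem__)
    return best if diffs[best] < threshold else None
```

Matching whole downsampled images sidesteps the feature detectors that fail under blur, over/under exposure, and lighting change, at the cost of viewpoint sensitivity.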

    NeMO-Net The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment

    We present NeMO-Net, the first open-source deep convolutional neural network (CNN) and interactive learning and training software aimed at assessing the present and past dynamics of coral reef ecosystems through habitat mapping into 10 biological and physical classes. Shallow marine systems, particularly coral reefs, are under significant pressure from climate change, ocean acidification, and other anthropogenic stressors, leading to rapid, often devastating changes in these fragile and diverse ecosystems. Historically, remote sensing of shallow marine habitats has been limited to meter-scale imagery due to the optical effects of ocean wave distortion, refraction, and optical attenuation. NeMO-Net combines 3D cm-scale distortion-free imagery captured using NASA FluidCam and fluid lensing remote sensing technology with low resolution airborne and spaceborne datasets of varying spatial resolutions, spectral spaces, calibrations, and temporal cadence in a supercomputer-based machine learning framework. NeMO-Net augments and improves the benthic habitat classification accuracy of low-resolution datasets across large geographic and temporal scales using high-resolution training data from FluidCam. NeMO-Net uses fully convolutional networks based upon ResNet and RefineNet to perform semantic segmentation of remote sensing imagery of shallow marine systems captured by drones, aircraft, and satellites, including WorldView and Sentinel.
    Deep Laplacian Pyramid Super-Resolution Networks (LapSRN) alongside Domain Adversarial Neural Networks (DANNs) are used to reconstruct high resolution information from low resolution imagery and to recognize domain-invariant features across datasets from multiple platforms, achieving high classification accuracies and overcoming inter-sensor spatial, spectral, and temporal variations. Finally, we share our online active learning and citizen science platform, which allows users to provide interactive training data for NeMO-Net in 2D and 3D, integrated within a deep learning framework. We present results from the Pacific Islands, including Fiji, Guam, and Peros Banhos, where 24-class classification accuracy exceeds 91%.
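The 24-class accuracy figure above is a pixel-wise measure over the semantic segmentation output. A minimal sketch of how overall and per-class pixel accuracy can be computed from flattened label maps (illustrative only; the paper does not describe its exact evaluation code):

```python
def pixelwise_accuracy(pred, truth, num_classes):
    """Overall and per-class pixel accuracy for flat label arrays.

    pred, truth: equal-length lists of integer class labels, one per
    pixel (e.g. flattened segmentation maps). Classes absent from the
    ground truth are omitted from the per-class breakdown.
    """
    assert len(pred) == len(truth)
    overall = sum(p == t for p, t in zip(pred, truth)) / len(truth)
    per_class = {}
    for c in range(num_classes):
        idx = [i for i, t in enumerate(truth) if t == c]
        if idx:
            per_class[c] = sum(pred[i] == c for i in idx) / len(idx)
    return overall, per_class
```

The per-class breakdown matters for habitat mapping because rare benthic classes can be swamped in the overall figure by abundant ones such as sand.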

    Applications systems verification and transfer project. Volume 2: Operational applications of satellite snow-cover observations and data-collection systems in the Arizona test site

    Ground surveys and aerial observations were used to monitor rapidly changing moisture conditions in the Salt-Verde watershed. Repetitive satellite snow cover observations greatly reduce the necessity for routine aerial snow reconnaissance flights over the mountains. High resolution multispectral imagery provided by the LANDSAT satellite series enabled rapid and accurate mapping of snow-cover distributions for small- to medium-sized subwatersheds; however, the imagery provided only one observation every 9 days of about a third of the watershed. Low resolution imagery acquired by the ITOS and SMS/GOES meteorological satellite series provides the daily synoptic observations necessary to monitor the rapid changes in snow-covered area in the entire watershed. Short-term runoff volumes can be predicted from daily sequential snow cover observations.
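The quantity derived from each daily observation is the snow-covered area of the watershed. A toy sketch, using a simple brightness threshold as a hypothetical stand-in for the report's actual snow-mapping procedure (which it does not detail here):

```python
def classify_snow(image, threshold=0.6):
    """Binary snow map from a single-band reflectance grid.

    A plain brightness threshold is an illustrative stand-in: snow is
    much brighter than snow-free terrain, so pixels at or above the
    (hypothetical) threshold are labeled snow (1), the rest 0.
    """
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def snow_covered_area_fraction(snow_map):
    """Fraction of watershed pixels classified as snow."""
    pixels = [p for row in snow_map for p in row]
    return sum(pixels) / len(pixels)
```

Tracking this fraction day by day gives the depletion curve from which short-term runoff volumes would be predicted.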