461 research outputs found

    A study of smart device-based mobile imaging and implementation for engineering applications

    Title from PDF of title page, viewed on June 12, 2013. Thesis advisor: ZhiQiang Chen. Vita. Includes bibliographic references (pages 76-82). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2013.
    Mobile imaging has become a very active research topic in recent years thanks to the rapid development of the computing and sensing capabilities of mobile devices. The area features multi-disciplinary studies of mobile hardware, imaging sensors, imaging and vision algorithms, wireless networks, and human-machine interfaces. Because of the limited computing capacity of early mobile devices, researchers proposed a client-server model that pushes data over a wireless network to more powerful computing platforms, letting cloud or standalone servers carry out all the computing and processing work. This thesis reviews the development of mobile hardware and software platforms and the related research on mobile imaging over the past 20 years. Although there has been considerable research on mobile imaging, little of it aims at building a framework that helps engineers solve problems using mobile imaging. With higher-resolution imaging and high-performance computing built into smart mobile devices, more and more image-processing tasks can be carried out on the device rather than through a client-server model. Based on this fact, a framework for collaborative mobile imaging is introduced for civil infrastructure condition assessment to help engineers address technical challenges. Another contribution of this thesis is the application of mobile imaging to home automation. E-SAVE is a research project focusing on the extensive use of automation to conserve and use energy wisely in the home; with the help of mobile imaging, mobile users can view critical information such as the energy data of appliances. OpenCV is an image processing and computer vision library, and the applications in this thesis use OpenCV functions including camera calibration, template matching, image stitching, and Canny edge detection. One application, aimed at helping field engineers, performs interactive crack detection; the other uses template matching to recognize appliances in the home automation system.
    Introduction -- Background and related work -- Basic image processing methods for mobile applications -- Collaborative and interactive mobile imaging -- Mobile imaging for smart energy -- Conclusion and recommendation
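
    As an illustration of the kind of on-device processing described above, the sketch below highlights crack-like edges with OpenCV's Canny detector in Python; the file names, blur kernel, and threshold values are illustrative assumptions, not parameters taken from the thesis.

```python
# Minimal sketch: on-device crack highlighting with OpenCV's Canny detector.
# File names, blur kernel and thresholds are illustrative, not from the thesis.
import cv2

def detect_cracks(image_path, low_thresh=50, high_thresh=150):
    """Return a binary edge map that highlights crack-like contours."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # A light Gaussian blur suppresses sensor noise before edge extraction.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(blurred, low_thresh, high_thresh)

if __name__ == "__main__":
    edge_map = detect_cracks("pavement_photo.jpg")
    cv2.imwrite("crack_edges.png", edge_map)
```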

    CloudScout: A deep neural network for on-board cloud detection on hyperspectral images

    The increasing demand for high-resolution hyperspectral images from nano- and microsatellites conflicts with the strict bandwidth constraints for downlink transmission. A possible approach to mitigating this problem is to reduce the amount of data to transmit to ground through on-board processing of hyperspectral images. In this paper, we propose CloudScout, a custom Convolutional Neural Network (CNN) deployed as a nanosatellite payload to select images eligible for transmission to ground. CloudScout is installed on Hyperscout-2, within the framework of the ESA Phisat-1 mission, and exploits a hyperspectral camera to distinguish cloud-covered images from clear ones. Only images with less than 70% cloudiness in a frame are transmitted to ground. We train and test the network on a dataset extracted from the Sentinel-2 mission, appropriately pre-processed to emulate the Hyperscout-2 hyperspectral sensor. On the test set we achieve 92% accuracy with 1% false positives (FP). The Phisat-1 mission will start in 2020 and will operate for about 6 months. It represents the first in-orbit demonstration of a Deep Neural Network (DNN) for data processing on the edge. The innovative aspect of our work concerns not only cloud detection but, more generally, low-power, low-latency, embedded applications. Our work should enable a new era of edge applications and enhance remote sensing applications directly on board satellites.
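
    As a purely illustrative sketch (the paper's actual CloudScout architecture is not reproduced here), a small binary cloudy/clear classifier could be assembled in Keras as follows; the layer sizes, the three-band input shape, and the interpretation of the output against the 70% threshold are assumptions.

```python
# Minimal sketch of a binary cloudy/clear CNN classifier in Keras; layer
# sizes, input bands and the 70% rule interpretation are assumptions, not
# the CloudScout architecture from the paper.
import tensorflow as tf
from tensorflow.keras import layers

def build_cloud_classifier(input_shape=(512, 512, 3)):
    return tf.keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        # Single sigmoid output: probability that cloud cover exceeds the
        # transmission threshold (e.g. 70% of the frame).
        layers.Dense(1, activation="sigmoid"),
    ])

model = build_cloud_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```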

    CIRA annual report FY 2011/2012


    DragonflEYE: a passive approach to aerial collision sensing

    "This dissertation describes the design, development and test of a passive wide-field optical aircraft collision sensing instrument titled 'DragonflEYE'. Such a ""sense-and-avoid"" instrument is desired for autonomous unmanned aerial systems operating in civilian airspace. The instrument was configured as a network of smart camera nodes and implemented using commercial, off-the-shelf components. An end-to-end imaging train model was developed and important figures of merit were derived. Transfer functions arising from intermediate mediums were discussed and their impact assessed. Multiple prototypes were developed. The expected performance of the instrument was iteratively evaluated on the prototypes, beginning with modeling activities followed by laboratory tests, ground tests and flight tests. A prototype was mounted on a Bell 205 helicopter for flight tests, with a Bell 206 helicopter acting as the target. Raw imagery was recorded alongside ancillary aircraft data, and stored for the offline assessment of performance. The ""range at first detection"" (R0), is presented as a robust measure of sensor performance, based on a suitably defined signal-to-noise ratio. The analysis treats target radiance fluctuations, ground clutter, atmospheric effects, platform motion and random noise elements. Under the measurement conditions, R0 exceeded flight crew acquisition ranges. Secondary figures of merit are also discussed, including time to impact, target size and growth, and the impact of resolution on detection range. The hardware was structured to facilitate a real-time hierarchical image-processing pipeline, with selected image processing techniques introduced. In particular, the height of an observed event above the horizon compensates for angular motion of the helicopter platform.

    Synthetic Aperture Radar Image Formation and Processing on an MPSoC

    Satellite remote sensing acquisitions are usually processed after downlink to a ground station. The satellite's travel time to the ground station adds to the total latency, increasing the time until a user can obtain the processing results. Performing the processing and information extraction on board the satellite can significantly reduce this time. In this study, synthetic aperture radar (SAR) image formation as well as ship detection and extreme weather detection were implemented on a multiprocessor system on a chip (MPSoC). Processing steps with high computational complexity were ported to run on the programmable logic (PL), achieving significant speed-up through a high degree of parallelization and pipelining as well as efficient memory accesses. Steps with lower complexity run on the processing system (PS), allowing for higher flexibility and reducing the need for resources in the PL. The achieved processing times for an area covering 375 km² were approximately 4 s for image formation, 16 s for ship detection, and 31 s for extreme weather detection. These developments, combined with new downlink concepts for low-rate information data streams, show that the provision of satellite remote sensing results to end users in less than 5 min after acquisition is possible using an adequately equipped satellite.
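
    As a ground-side illustration of the ship-detection step, the following is a minimal cell-averaging CFAR sketch over a SAR intensity image in NumPy; the window sizes and scaling factor are assumptions and do not reflect the parameters or PL/PS partitioning of the MPSoC implementation described above.

```python
# Minimal cell-averaging CFAR sketch for ship detection on a SAR intensity
# image; window sizes and the scale factor are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, bg_window=41, guard_window=9, scale=5.0):
    """Flag pixels brighter than `scale` times the local background mean,
    estimated over a ring between the guard and background windows."""
    bg_sum = uniform_filter(intensity, bg_window) * bg_window**2
    guard_sum = uniform_filter(intensity, guard_window) * guard_window**2
    ring_mean = (bg_sum - guard_sum) / (bg_window**2 - guard_window**2)
    return intensity > scale * ring_mean
```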

    Earth Observation Open Science and Innovation

    geospatial analytics; social observatory; big earth data; open data; citizen science; open innovation; earth system science; crowdsourced geospatial data; science in society; data science

    CIRA annual report FY 2013/2014


    CIRA annual report FY 2016/2017

    Reporting period April 1, 2016-March 31, 2017

    Deep learning for internet of underwater things and ocean data analytics

    The Internet of Underwater Things (IoUT) is an emerging technological ecosystem developed for connecting objects in maritime and underwater environments. IoUT technologies are empowered by a vast number of deployed sensors and actuators. In this thesis, multiple streams of IoUT sensory data are augmented with machine intelligence for forecasting purposes.
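
    As a minimal sketch of the kind of sensor forecasting the thesis describes, the snippet below trains a one-step-ahead LSTM on a single sensor series in Keras; the window length, layer size, and single-sensor input are illustrative assumptions, not the thesis's models.

```python
# Minimal sketch of one-step-ahead forecasting of a single IoUT sensor
# series with an LSTM; window length and layer size are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_windows(series, window=24):
    """Split a 1-D sensor series into (window -> next value) pairs."""
    x = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return x[..., np.newaxis], series[window:]

model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(24, 1)),
    layers.Dense(1),  # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")

# Example usage, assuming `sensor_readings` is a 1-D NumPy float array:
# x, y = make_windows(sensor_readings)
# model.fit(x, y, epochs=10, batch_size=32)
```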