
    The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    (ABRIDGED) In previous work, two platforms have been developed for testing computer-vision algorithms for robotic planetary exploration (McGuire et al. 2004b, 2005; Bartolo et al. 2007). The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone-camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon color, (ii) integrate a field-capable digital microscope on the wearable-computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone-camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain, consisting of various rock types and colors, to test this algorithm. The algorithm robustly recognized previously observed units by their color, while requiring only a single image or a few images to learn colors as familiar, demonstrating its fast learning capability. Comment: 28 pages, 12 figures, accepted for publication in the International Journal of Astrobiology.
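    The abstract does not reproduce the Hopfield network itself; as a minimal illustrative sketch of the behavior it describes (color-based novelty detection that learns a color as familiar from a single exposure), one could compare coarse color histograms against a memory of previously seen images. All names and the threshold value here are hypothetical, not taken from the paper.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Coarse RGB histogram as a color signature (img: HxWx3 uint8)."""
    hist, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

class ColorNoveltyDetector:
    """Flags an image as novel when its color distribution is far from
    every distribution seen so far; one exposure makes colors familiar."""
    def __init__(self, threshold=0.5):
        self.memory = []          # histograms of familiar images
        self.threshold = threshold

    def observe(self, img):
        h = color_histogram(img)
        # L1 distance to the closest remembered histogram
        dist = min((float(np.abs(h - m).sum()) for m in self.memory),
                   default=float("inf"))
        self.memory.append(h)     # learn the new colors immediately
        return dist > self.threshold
```

    In this sketch, a red outcrop image seen once is familiar on the second viewing, while a differently colored target (such as a lichen patch) still triggers a novelty flag.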

    Change Detection of Marine Environments Using Machine Learning

    NPS NRP Technical Report. Change Detection of Marine Environments Using Machine Learning. HQMC Intelligence Department (I). This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp. Chief of Naval Operations (CNO). Approved for public release. Distribution is unlimited.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Applicability of Artificial Neural Network for Automatic Crop Type Classification on UAV-Based Images

    Recent advances in optical remote sensing, especially the development of machine learning models, have made it possible to automatically classify different crop types based on their unique spectral characteristics. In this article, a simple feed-forward artificial neural network (ANN) was implemented for the automatic classification of various crop types. A DJI Mavic Air drone was used to collect 549 images of a mixed-crop farmland belonging to the Federal University of Technology Minna, Nigeria. The images were annotated, and the ANN algorithm was implemented using custom-designed Python scripts with libraries such as NumPy, Labelbox, and Segmentation Mask for the classification. The algorithm was designed to automatically classify maize, rice, soya beans, groundnut, yam and a non-crop feature into different land spectral classes. The model training performance, using 70% of the dataset, shows that the loss curve flattened down with minimal over-fitting, showing that the model was improving as it trained. Finally, the accuracy of the automatic crop-type classification was evaluated with the aid of the recorded loss function and confusion matrix, and the result shows that the implemented ANN gave an overall training classification accuracy of 87.7% from the model and an overall accuracy of 0.9393 as computed from the confusion matrix, which attests to the robustness of ANN when implemented on high-resolution image data for automatic classification of crop types in a mixed farmland. The overall accuracy, including the user accuracy, proved that only a few images were incorrectly classified, which demonstrated that the errors of omission and commission were minimal.
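    The article's confusion matrix is not reproduced in the abstract; as a minimal sketch of how overall accuracy, user accuracy (omission/commission view), and producer accuracy are derived from such a matrix, using entirely hypothetical counts for the six classes named above:

```python
import numpy as np

# Hypothetical confusion matrix for illustration only; rows = actual
# class, columns = predicted class (maize, rice, soya beans, groundnut,
# yam, non-crop). The article's real counts are not reproduced here.
cm = np.array([
    [50,  1,  0,  0,  0,  1],
    [ 2, 47,  1,  0,  0,  0],
    [ 0,  1, 44,  2,  0,  0],
    [ 0,  0,  1, 40,  1,  0],
    [ 0,  0,  0,  1, 38,  1],
    [ 1,  0,  0,  0,  0, 45],
])

overall_accuracy  = np.trace(cm) / cm.sum()       # correct / total
user_accuracy     = np.diag(cm) / cm.sum(axis=0)  # per-class, column-wise
producer_accuracy = np.diag(cm) / cm.sum(axis=1)  # per-class, row-wise
```

    Off-diagonal entries in a column are errors of commission for that class, and off-diagonal entries in a row are errors of omission, which is why a near-diagonal matrix implies both error types are minimal.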

    Multispectral tracing in densely labeled mouse brain with nTracer

    Summary: This note describes nTracer, an ImageJ plug-in for user-guided, semi-automated tracing of multispectral fluorescent tissue samples. This approach allows for rapid and accurate reconstruction of whole-cell morphology of large neuronal populations in densely labeled brains. Availability: nTracer was written as a plugin for the open-source image processing software ImageJ. The software, instructional documentation, tutorial videos, sample images and sample tracing results are available at https://www.cai-lab.org/ntracer-tutorial. Supplementary information: Supplementary data are available at Bioinformatics online.

    Cybergis-enabled remote sensing data analytics for deep learning of landscape patterns and dynamics

    Mapping landscape patterns and dynamics is essential to various scientific domains and many practical applications. The availability of large-scale and high-resolution light detection and ranging (LiDAR) remote sensing data provides tremendous opportunities to unveil complex landscape patterns and better understand landscape dynamics from a 3D perspective. LiDAR data have been applied to diverse remote sensing applications, among which large-scale landscape mapping is one of the most important topics. While researchers have used LiDAR to understand landscape patterns and dynamics in many fields, fully reaping the benefits and potential of LiDAR increasingly depends on advanced cyberGIS and deep learning approaches. In this context, the central goal of this dissertation is to develop a suite of innovative cyberGIS-enabled deep-learning frameworks for combining LiDAR and optical remote sensing data to analyze landscape patterns and dynamics, through four interrelated studies. The first study demonstrates a high-accuracy land-cover mapping method by integrating 3D information from LiDAR with multi-temporal remote sensing data using a 3D deep-learning model. The second study combines a point-based classification algorithm and an object-oriented change detection strategy for urban building change detection using deep learning. The third study develops a deep learning model for accurate hydrological streamline detection using LiDAR, which has paved a new way of harnessing LiDAR data to map landscape patterns and dynamics at unprecedented computational and spatiotemporal scales. The fourth study resolves computational challenges in handling remote sensing big data and deep learning of landscape feature extraction and classification through a cutting-edge cyberGIS approach.

    3D Convolutional Neural Networks for Solving Complex Digital Agriculture and Medical Imaging Problems

    3D signals have become widely popular in view of the advantage they provide via 3D representations of data, employing a third spatial or temporal dimension to extend 2D signals. Predominantly, 3D signals contain details nonexistent in their 2D counterparts, such as the depth of an image, which is inherent to point clouds (PC), or the temporal evolution of an image, which is inherent to time-series data such as videos. Despite this advantage, 3D models are still underexploited in machine learning (ML) compared to 2D signals, mainly due to data scarcity. In this thesis, we exploit and determine the efficiency and influence of using both multispectral PCs and time-series data with 3D convolutional neural networks (CNNs). We evaluate the performance and utility of these networks and data in the context of two applications from the areas of digital agriculture and medical imaging. In particular, multispectral PCs are investigated for the problem of fusarium-head-blight (FHB) detection and total number of spikelets estimation, while time-series echocardiograms are investigated for the problem of myocardial infarction (MI) detection. In the context of the digital agriculture application, two state-of-the-art datasets were created, namely the UW-MRDC WHEAT-PLANT PC dataset, consisting of 216 multispectral PCs of wheat plants, and the UW-MRDC WHEAT-HEAD PC dataset, consisting of 80 multispectral PCs of wheat heads. Samples in both datasets were acquired using a multispectral 3D scanner. Moreover, a real-time parallel GPU-enabled preprocessing method that runs 1065 times faster than its CPU counterpart was proposed to convert multispectral PCs into multispectral 3D images compatible with CNNs. Also, the UW-MRDC WHEAT-PLANT PC dataset was used to develop novel and efficient 3D CNNs for disease detection to automatically identify wheat infected with FHB from multispectral 3D images of wheat plants.
In addition, the influence of the multispectral information on the detection performance was evaluated, and our results showed the dominance of the red, green, and blue (RGB) colour channels over both the near-infra-red (NIR) channel and the RGB and NIR channels combined. Our best model for FHB detection in wheat plants achieved 100% accuracy. Furthermore, the UW-MRDC WHEAT-HEAD PC dataset was used to develop unique and efficient 3D CNNs for total number of spikelets estimation in multispectral 3D images of wheat heads, in addition to adapting three benchmark 2D CNN architectures to 3D images for the same purpose. Our best model for total number of spikelets estimation in wheat heads achieved a mean absolute error of 1.13, meaning that, on average, the difference between the estimated number of spikelets and the actual value is 1.13. Our 3D CNN for FHB detection in wheat achieved the highest accuracy amongst existing FHB detection models, and our 3D CNN for total number of spikelets estimation in wheat is a unique and pioneering application. These results suggest that replacing arduous tasks that require the input of field experts and significant temporal resources with automated ML models in the context of digital agriculture is feasible and promising. In the context of the medical imaging application, an innovative, real-time, and fully automated pipeline based on 2D and 3D CNNs was proposed for early detection of MI, a deadly cardiac disorder, from a patient's echocardiogram. The developed pipeline consists of a 2D CNN that performs data preprocessing by segmenting the left ventricle (LV) chamber from the apical 4-chamber (A4C) view of an echocardiogram, followed by a 3D CNN that performs MI detection in real-time. The pipeline was trained and tested on the HMC-QU dataset, consisting of 162 echocardiograms.
The 2D CNN achieved 97.18% accuracy on data segmentation, and the 3D CNN achieved 90.9% accuracy, 100% precision, 95% recall, and a 97.2% F1 score. Our detection results outperformed existing state-of-the-art models that were tested on the HMC-QU dataset for MI detection. Moreover, our results demonstrate that developing a fully automated system for LV segmentation and MI detection is efficient and propitious and could enable the creation of a tool that reliably suggests the presence of MI in a given echocardiogram on the fly. All the empirical results achieved in our thesis indicate the efficiency and reliability of 3D signals, namely multispectral PCs and videos, in developing detection and regression 3D CNN models that can achieve accurate and reliable results. Mitacs, EMILI, NSERC, Western Diversification Canada, The Faculty of Graduate Studies. Master of Science in Applied Computer Science.
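    The abstract reports accuracy, precision, recall, and F1 together without showing how they relate; as a minimal sketch of the standard definitions from binary confusion counts, with hypothetical counts chosen so that precision and recall match the reported 100% and 95% (the thesis' actual counts are not given here):

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard binary detection metrics from confusion counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)               # how many flagged cases are real
    recall = tp / (tp + fn)                  # how many real cases are caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```

    For example, with tp=19, fp=0, fn=1, tn=2, precision is 1.0 and recall is 0.95; note that F1 is fully determined by precision and recall rather than reported independently.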