
    Automated identification of river hydromorphological features using UAV high resolution aerial imagery

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have demonstrated the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANNs) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV-based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km reach of the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results support the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.
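    As a rough illustration of the classification step, here is a minimal sketch of training an ANN on per-patch RGB features, assuming scikit-learn; the feature layout, class labels, and network size are placeholders, not the authors' configuration.

```python
# Sketch: ANN classification of hydromorphological features from RGB imagery.
# Assumes scikit-learn; features, class labels, and network size are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per image patch, columns = mean R, G, B plus simple texture stats.
# y: hydromorphological class labels (e.g. 0=water, 1=riffle, 2=vegetation, ...).
rng = np.random.default_rng(0)
X = rng.random((1000, 6))          # placeholder for real patch features
y = rng.integers(0, 4, 1000)       # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
ann.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```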

    Algorithmic commonalities in the parallel environment

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamics simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated; they are the most readily available and produce the most visible results. These include simple polygon patch rendering, ray casting against a constructive solid geometry model, and stochastic or fractal-based textured surface algorithms. Only the simplest conversion strategy, mapping a major loop onto the processor array, has been investigated so far, and it is not entirely satisfactory.
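    The "major loop to the array" strategy is essentially data-parallel execution of independent loop iterations. A rough modern analogue, assuming Python's multiprocessing and a placeholder per-pixel kernel standing in for the rendering procedures above:

```python
# Sketch: mapping a major rendering loop onto parallel workers, the simplest
# conversion strategy described above. The per-pixel kernel is illustrative.
from multiprocessing import Pool

def shade(pixel):
    """Stand-in for a per-pixel kernel (patch rendering, ray casting, ...)."""
    x, y = pixel
    return (x * 31 + y * 17) % 256  # placeholder computation

if __name__ == "__main__":
    pixels = [(x, y) for y in range(64) for x in range(64)]
    with Pool() as pool:               # each iteration maps to one worker
        image = pool.map(shade, pixels)
    print(len(image), "pixels shaded")
```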

    Land and cryosphere products from Suomi NPP VIIRS: overview and status

    The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was launched in October 2011 as part of the Suomi National Polar-orbiting Partnership (S-NPP). The VIIRS instrument was designed to improve upon the capabilities of the operational Advanced Very High Resolution Radiometer (AVHRR) and provide observation continuity with NASA's Earth Observing System Moderate Resolution Imaging Spectroradiometer (MODIS). Since the VIIRS first-light images were received in November 2011, NASA- and NOAA-funded scientists have been working to evaluate the instrument performance and generate land and cryosphere products to meet the needs of NOAA operational users and the NASA science community. NOAA's focus has been on refining a suite of operational products known as Environmental Data Records (EDRs), which were developed according to project specifications under the National Polar-orbiting Operational Environmental Satellite System (NPOESS). The NASA S-NPP Science Team has focused on evaluating the EDRs for science use, developing and testing additional products to meet science data needs, and providing MODIS data product continuity. This paper presents the findings to date of the NASA Science Team's evaluation of the VIIRS land and cryosphere EDRs, specifically Surface Reflectance, Land Surface Temperature, Surface Albedo, Vegetation Indices, Surface Type, Active Fires, Snow Cover, Ice Surface Temperature, and Sea Ice Characterization. The study concludes that, for MODIS data product continuity and earth system science, an enhanced suite of land and cryosphere products and associated data system capabilities are needed beyond the EDRs currently available from VIIRS.

    A low cost mobile mapping system (LCMMS) for field data acquisition: a potential use to validate aerial/satellite building damage assessment

    Among the major natural disasters that occurred in 2010, the Haiti earthquake was a real turning point concerning the availability, dissemination and licensing of a huge quantity of geospatial data. Within a few days, several map products based on the analysis of remotely sensed datasets were delivered to users. This demonstrated the need for reliable methods to validate the increasing variety of open source data and remote sensing-derived products for crisis management, with the aim of correctly spatially referencing and interconnecting these data with other global digital archives. As far as building damage assessment is concerned, the need for accurate field data to overcome the limitations of both vertical and oblique view satellite and aerial images was evident. To cope with this need, a newly developed Low-Cost Mobile Mapping System (LCMMS) was deployed in Port-au-Prince (Haiti) and tested during a five-day survey in February-March 2010. The system allows for acquisition of movies and single georeferenced frames by means of a transportable device easily installed on (or adapted to) any type of vehicle. It is composed of four webcams with a total field of view of about 180 degrees and one Global Positioning System (GPS) receiver, with the main aim of rapidly covering large areas for effective use in emergency situations. The main technical features of the LCMMS, its operational use in the field (and related issues) and a potential approach for validating satellite/aerial building damage assessments are thoroughly described in the article.
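    The core of the georeferencing step, pairing each captured frame with the nearest GPS fix in time, can be sketched as follows; the record layout and the matching tolerance are assumptions for illustration, not the LCMMS design.

```python
# Sketch: georeferencing webcam frames against a GPS track by timestamp.
# Data layout and tolerance are illustrative, not the LCMMS specification.
from bisect import bisect_left

gps_track = [  # (unix_time, lat, lon) fixes from the GPS receiver
    (1000.0, 18.5392, -72.3364),
    (1001.0, 18.5393, -72.3362),
    (1002.0, 18.5395, -72.3360),
]

def georeference(frame_time, track, tol=1.5):
    """Attach the nearest-in-time GPS fix to a frame, or None if too far."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, frame_time)
    best = min(track[max(i - 1, 0):i + 1], key=lambda f: abs(f[0] - frame_time))
    return best if abs(best[0] - frame_time) <= tol else None

print(georeference(1001.4, gps_track))  # -> (1001.0, 18.5393, -72.3362)
```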

    Topographic mappings and feed-forward neural networks

    This thesis is a study of the generation of topographic mappings - dimension reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling, from which the concept of a 'subjective metric' is defined, permitting the exploitation of additional prior knowledge about the data in the mapping process. This in turn enables the generation of more appropriate feature spaces for the purposes of enhanced visualisation or subsequent classification. A comparison with established methods for feature extraction is given for data taken from the 1992 Research Assessment Exercise for higher educational institutions in the United Kingdom. This is a difficult high-dimensional dataset that illustrates well the benefit of the new topographic technique. A generalisation of the proposed model is considered for implementation of the classical multidimensional scaling (MDS) routine. This is related to Oja's principal subspace neural network, whose learning rule is shown to descend the error surface of the proposed MDS model. Some of the technical issues concerning the design and training of topographic neural networks are investigated. It is shown that neural network models can be less sensitive to entrapment in the sub-optimal local minima that badly affect the standard Sammon algorithm, and tend to exhibit good generalisation as a result of implicit weight decay in the training process. It is further argued that for ideal structure retention, the network transformation should be perfectly smooth for all inter-data directions in input space. Finally, there is a critique of optimisation techniques for topographic mappings, and a new training algorithm is proposed. A convergence proof is given, and the method is shown to produce lower-error mappings more rapidly than previous algorithms.
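    For reference, a minimal sketch of the Sammon stress being minimised, with a generic optimiser over map coordinates standing in for the radial basis function network used in the thesis:

```python
# Sketch: Sammon stress between input-space and map-space distances.
# The thesis realises the projection with an RBF network; here a plain
# minimiser over 2-D point coordinates stands in for that network.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.optimize import minimize

X = np.random.default_rng(0).random((30, 5))   # high-dimensional data
D = pdist(X)                                   # pairwise input-space distances

def sammon_stress(flat_y):
    d = pdist(flat_y.reshape(-1, 2))           # distances in the 2-D map
    return np.sum((D - d) ** 2 / D) / np.sum(D)

y0 = np.random.default_rng(1).random(30 * 2)   # random initial map positions
res = minimize(sammon_stress, y0, method="L-BFGS-B")
print("final stress:", res.fun)
```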

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
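    A toy fuzzy appraisal rule in the spirit of FLAME's emotional component; the membership functions, rule, and variables are invented for illustration and are not El-Nasr's actual model.

```python
# Sketch: one fuzzy appraisal rule, FLAME-style: the desirability of a game
# event and its likelihood jointly drive a 'joy' intensity. All shapes and
# parameters are illustrative, not El-Nasr's actual membership functions.
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def joy(desirability, likelihood):
    # Rule: IF event is desirable AND likely THEN joy is high (min as AND).
    desirable = tri(desirability, 0.3, 1.0, 1.7)
    likely = tri(likelihood, 0.3, 1.0, 1.7)
    return min(desirable, likely)

print(joy(desirability=0.8, likelihood=0.9))  # joy intensity in [0, 1]
```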

    Online Learning of Quantum States

    Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_1$, then other copies using a measurement $E_2$, and so on. At each stage $t$, we generate a current hypothesis $\sigma_t$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_i \sigma_t) - \operatorname{Tr}(E_i \rho)|$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $O(n/\varepsilon^2)$ times. Even in the "non-realizable" setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states at most $O(\sqrt{Tn})$ times on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results, using convex optimization, quantum postselection, and sequential fat-shattering dimension, which have different advantages in terms of parameters and portability.
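    In symbols, the two guarantees above can be restated as follows (notation taken from the abstract; the set-counting form is a paraphrase):

```latex
% Realizable setting: the number of rounds with prediction error at least
% \varepsilon is bounded independently of the total number of measurements.
\[
  \#\left\{\, t \;:\; \bigl|\operatorname{Tr}(E_t \sigma_t) - \operatorname{Tr}(E_t \rho)\bigr| \ge \varepsilon \,\right\}
  \;\le\; O\!\left(\frac{n}{\varepsilon^{2}}\right).
\]
% Non-realizable setting (arbitrary noise in the outcomes): over the first
% T measurements, the hypotheses \sigma_t do significantly worse than the
% best fixed hypothesis state on at most
\[
  O\!\left(\sqrt{Tn}\right) \text{ rounds.}
\]
```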

    A Hybrid Machine Learning Technique For Feature Optimization In Object-Based Classification of Debris-Covered Glaciers

    Object-based features such as spectral, topographic, and textural attributes help determine debris-covered glacier classes. The original feature space includes both relevant and irrelevant features; including all of them increases complexity and degrades the classifier's performance. Feature space optimization is therefore a prerequisite for the classification process. Previous studies manually selected the best combination of features to define the target class, a rigorous exercise that proved to be time consuming. The present study proposes a hybrid feature selection technique to automate the selection of the most suitable features, with the aim of reducing the classifier's complexity and enhancing the performance of the classification model. The Relief-F and Pearson correlation filter-based feature selection methods ranked features according to their relevance and filtered out irrelevant or less important features based on a defined condition. The hybrid model then selected the features common to both to obtain an optimal feature set. The proposed hybrid model was tested on Landsat 8 images of debris-covered glaciers in the Central Karakoram Range and validated against existing glacier inventories. The results showed that the classification accuracy of the proposed hybrid feature selection model with a Decision Tree classifier is 99.82%, better than the classification results obtained using other mapping techniques. In addition, the hybrid feature selection technique sped up the classification process by reducing the number of features by 77% without compromising classification accuracy.
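    A minimal sketch of the hybrid filter idea, intersecting a simplified Relief-style ranking with a Pearson correlation filter; the thresholds, data, and scoring details are assumptions, and the paper's exact ranking rules may differ.

```python
# Sketch: hybrid feature selection = intersection of a Relief-style ranking
# and a Pearson-correlation filter. Thresholds and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 10))                        # object-based features
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)  # toy glacier/non-glacier labels

def relief_scores(X, y):
    """Simplified Relief: reward features that differ more across classes
    than within a class, using each sample's nearest hit and nearest miss."""
    n, m = X.shape
    scores = np.zeros(m)
    for i in range(n):
        d = np.abs(X - X[i]).sum(axis=1)   # Manhattan distance to all samples
        d[i] = np.inf
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        scores += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return scores / n

pearson = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
relief = relief_scores(X, y)

# Keep features that pass both filters (the intersection = hybrid selection).
selected = np.flatnonzero((relief > np.median(relief)) & (pearson > 0.1))
print("selected features:", selected)
```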