
    Error processes in the integration of digital cartographic data in geographic information systems.

    Errors within a Geographic Information System (GIS) arise from several factors. In the first instance, receiving data from a variety of different sources results in a degree of incompatibility between such information. Secondly, the very processes used to bring the information into the GIS may themselves degrade the quality of the data. If geometric overlay (the very raison d'être of many GISs) is to be performed, such inconsistencies need to be carefully examined and dealt with. A variety of techniques exist for the user to eliminate such problems, but all of these tend to rely on the geometry of the information rather than on its meaning or nature. This thesis explores the introduction of error into GISs and the consequences this has for any subsequent data analysis. Techniques for error removal at the overlay stage are also examined and improved solutions are offered. Furthermore, the thesis looks at the role of the data model and the potentially detrimental effects it can have in forcing the data to be organised into a pre-defined structure.
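    To make the overlay problem concrete: when two sources digitise the same boundary with a small registration offset, geometric overlay produces thin "sliver" polygons that are pure artefacts. A minimal sketch using the Shapely library (the coordinates and area tolerance are illustrative, not from the thesis):

        from shapely.geometry import Polygon

        # Two digitisations of the "same" parcel from different sources,
        # offset by a small registration error.
        a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
        b = Polygon([(0.05, 0.02), (10.05, 0.02), (10.05, 10.02), (0.05, 10.02)])

        # The symmetric difference of the overlay is a ring of slivers that
        # exists only because of the source mismatch.
        slivers = a.symmetric_difference(b)
        print(slivers.area)  # small but entirely spurious area

        # A purely geometric clean-up of the kind the thesis critiques:
        # discard artefacts below an area tolerance, regardless of meaning.
        MIN_AREA = 1.0
        print(slivers.area < MIN_AREA)  # True: flagged as an artefact

    Note that the tolerance test uses only geometry; nothing in it knows whether the sliver is an error or a genuinely narrow feature, which is precisely the limitation identified above.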

    Network-state dependent effects in naming and learning

    An Agent-Based Variogram Modeller: Investigating Intelligent, Distributed-Component Geographical Information Systems

    Geo-Information Science (GIScience) is the field of study that addresses substantive questions concerning the handling, analysis and visualisation of spatial data. Geo-Information Systems (GIS), including software, data acquisition and organisational arrangements, are the key technologies underpinning GIScience. A GIS is normally tailored to the service it is supposed to perform. However, there is often the need to perform a function that is not supported by the GIS tool being used. The usual solution in these circumstances is to look for another tool that can provide the service, and often for an expert to use that tool. This is expensive, time consuming and certainly stressful for the geographical data analyst. On the other hand, GIS is often used in conjunction with other technologies to form a geocomputational environment. One of the complex tools in geocomputation is geostatistics, one of whose functions is to provide the means to determine the extent of spatial dependencies within geographical data and processes. Spatial datasets are often large and complex. Currently, agent systems are being integrated into GIS to offer flexibility and allow better data analysis. The thesis looks into the current application of agents within the GIS community and determines whether they are used to represent data or processes, or to act as a service. The thesis then seeks to prove the applicability of an agent-oriented paradigm as a service-based GIS, with the possibility of providing greater interoperability and reducing resource requirements (human and tools). In particular, analysis was undertaken to determine the need to introduce enhanced features to agents in order to maximise their effectiveness in GIS. This was achieved by addressing the complexity of software agent design and implementation for the GIS environment and by suggesting possible solutions to the problems encountered. The software agent characteristics and features (which include the dynamic binding of plans to software agents in order to tackle the levels of complexity and range of contexts) were examined, as well as current GIScience, the applications of agent technology to GIS, and agents as entities, objects and processes. These concepts and their functionalities in GIS are then analysed and discussed. The extent of agent functionality, an analysis of the gaps, and the use of these technologies to express a distributed service providing an agent-based GIS framework are then presented. Thus, a general agent-based framework for GIS and a novel agent-based architecture for a specific part of GIS, the variogram, were devised to examine the applicability of the agent-oriented paradigm to GIS. An examination of the current mechanisms for constructing variograms and their underlying processes and functions was undertaken, and these processes were then embedded into a novel agent architecture for GIS. Once a successful software agent implementation had been achieved, the corresponding tool was tested and validated - internally for code errors and externally to determine its functional requirements and whether it enhances the GIS process of dealing with data. Thereafter, it is compared with other known service-based GIS agents and its advantages and disadvantages analysed.
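    For context, the core computation such a variogram agent wraps is the classical empirical semivariogram, estimated by the Matheron formula γ(h) = 1/(2|N(h)|) Σ (z(xᵢ) − z(xⱼ))² over the N(h) point pairs separated by approximately h. A minimal NumPy sketch (function names and the lag tolerance are illustrative, not from the thesis):

        import numpy as np

        def empirical_variogram(coords, values, lags, tol):
            """Matheron estimator: gamma(h) = sum((z_i - z_j)^2) / (2 * N(h))
            over all point pairs whose separation lies within tol of lag h."""
            dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sqdiff = (values[:, None] - values[None, :]) ** 2
            upper = np.triu(np.ones_like(dists, dtype=bool), k=1)  # count each pair once
            gamma = []
            for h in lags:
                pairs = upper & (np.abs(dists - h) <= tol)
                gamma.append(sqdiff[pairs].mean() / 2.0 if pairs.any() else np.nan)
            return np.array(gamma)

        # Example: 200 random points with a smooth spatial trend plus noise.
        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 100, size=(200, 2))
        z = np.sin(xy[:, 0] / 20.0) + 0.1 * rng.standard_normal(200)
        print(empirical_variogram(xy, z, lags=np.arange(5, 50, 5), tol=2.5))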

    Practical deep learning

    Deep learning is experiencing a revolution with tremendous progress because of the availability of large datasets and computing resources. The development of deeper and larger neural network models has recently made significant progress in boosting the accuracy of many applications, such as image classification, image captioning, object detection, and language translation. However, despite the opportunities they offer, existing deep learning approaches are impractical for many applications due to the following challenges. Many applications have only limited amounts of annotated training data, or the labelled training data is too expensive to collect. Such scenarios impose significant drawbacks on deep learning methods, which are not designed for limited data and suffer from performance decay. This is especially true for generative tasks, because the data for many generative tasks is difficult to obtain from the real world and the results they generate are difficult to control. As deep learning algorithms become more complicated, increasing the workload for researchers to train neural network models and manage the life-cycle of deep learning workflows (the model, dataset, and training pipeline), the demand for efficient deep learning development is rising. Practical deep learning should achieve adequate performance from limited training data and be based on efficient deep learning development processes. In this thesis, we propose several novel methods to improve the practicability of deep generative models and development processes, leading to four contributions. First, we improve the visual quality of synthesising images conditioned on text descriptions without requiring more manually labelled data, which provides controllable generated results using object attribute information from text descriptions. Second, we achieve unsupervised image-to-image translation that synthesises images conditioned on input images without requiring paired images to supervise the training, which provides controllable generated results using semantic visual information from input images. Third, we deliver semantic image synthesis that synthesises images conditioned on both images and text descriptions without requiring ground truth images to supervise the training, which provides controllable generated results using both semantic visual and object attribute information. Fourth, we develop a research-oriented deep learning library called TensorLayer to reduce the workload of researchers in defining models, implementing new layers, and managing the deep learning workflow comprised of the dataset, model, and training pipeline. In 2017, this library won the Best Open Source Software Award issued by ACM Multimedia (MM).
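    As an illustration of the library's aim of reducing model-definition boilerplate, a small network in the TensorLayer 2.x functional style might look as follows (a minimal sketch based on the library's public documentation; the layer sizes and names are illustrative, not taken from the thesis):

        import tensorflow as tf
        import tensorlayer as tl

        # A small multilayer perceptron: layers are declared once and
        # composed like function calls.
        ni = tl.layers.Input([None, 784], name="input")
        nn = tl.layers.Dense(n_units=256, act=tf.nn.relu, name="dense1")(ni)
        nn = tl.layers.Dense(n_units=10, act=None, name="output")(nn)
        net = tl.models.Model(inputs=ni, outputs=nn, name="mlp")

        net.eval()                        # switch to inference mode
        logits = net(tf.ones([1, 784]))   # forward pass on a dummy batch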

    Machine learning to generate soil information

    This thesis is concerned with the novel use of machine learning (ML) methods in soil science research. ML adoption in soil science has increased considerably, especially in pedometrics (the use of quantitative methods to study the variation of soils). In parallel, the size of soil datasets has also increased thanks to projects of global impact that aim to rescue legacy data, or to new large-extent surveys that collect new information. While we have big datasets and global projects, modelling is currently based mostly on "traditional" ML approaches which do not take full advantage of these large data compilations. The compilation of these global datasets is severely limited by privacy concerns and, currently, no solution has been implemented to facilitate the process. If we consider the performance differences derived from the generality of global models versus the specificity of local models, there is still a debate on which approach is better. In either global or local digital soil mapping (DSM), most applications are static. Even with the large soil datasets available to date, there is not enough soil data to perform fully empirical space-time modelling. Considering these knowledge gaps, this thesis aims to introduce advanced ML algorithms and training techniques, specifically deep neural networks, for modelling large datasets at a global scale and providing new soil information. The research presented here has been successful at applying the latest advances in ML to improve upon some of the current approaches for soil modelling with large datasets. It has also created opportunities to utilise information, such as descriptive data, that has generally been disregarded. ML methods have been embraced by the soil community and their adoption is increasing. In the particular case of neural networks, their flexibility in terms of structure and training makes them a good candidate to improve on current soil modelling approaches.

    Visual identification of individual Holstein-Friesian cattle via deep metric learning

    Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems. This work takes advantage of these natural markings in order to automate visual detection and biometric identification of individual Holstein-Friesians via convolutional neural networks and deep metric learning techniques. Existing approaches rely on markings, tags or wearables with a variety of maintenance requirements, whereas we present a totally hands-off method for the automated detection, localisation, and identification of individual animals from overhead imaging in an open herd setting, i.e. where new additions to the herd are identified without re-training. We propose the use of SoftMax-based reciprocal triplet loss to address the identification problem and evaluate the techniques in detail against fixed herd paradigms. We find that deep metric learning systems show strong performance even when many cattle unseen during system training are to be identified and re-identified - achieving 98.2% accuracy when trained on just half of the population. This work paves the way for facilitating the non-intrusive monitoring of cattle applicable to precision farming and surveillance for automated productivity, health and welfare monitoring, and to veterinary research such as behavioural analysis, disease outbreak tracing, and more. Key parts of the source code, network weights and underpinning datasets are available publicly.

    Comment: 37 pages, 14 figures, 2 tables; submitted to Computers and Electronics in Agriculture; source code and network weights available at https://github.com/CWOA/MetricLearningIdentification; OpenCows2020 dataset available at https://doi.org/10.5523/bris.10m32xl88x2b61zlkkgz3fml1
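    The reciprocal triplet loss referenced above replaces the usual fixed-margin triplet objective with L_RT = d(a, p) + 1/d(a, n), so no margin hyperparameter is needed, and the "SoftMax-based" variant pairs it with a cross-entropy term over identity logits. A minimal NumPy sketch of the triplet term (the epsilon guard and any weighting against the SoftMax term are assumptions, not taken from the paper):

        import numpy as np

        def reciprocal_triplet_loss(anchor, positive, negative):
            """L_RT = d(a, p) + 1 / d(a, n) on embedding vectors: small when
            the anchor sits close to its positive and far from its negative."""
            d_ap = np.sum((anchor - positive) ** 2)
            d_an = np.sum((anchor - negative) ** 2)
            return d_ap + 1.0 / (d_an + 1e-12)  # epsilon avoids division by zero

        # Toy 128-dimensional embeddings for a single triplet.
        rng = np.random.default_rng(1)
        a, p, n = rng.standard_normal((3, 128))
        print(reciprocal_triplet_loss(a, p, n))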

    Investigating the potential for detecting Oak Decline using Unmanned Aerial Vehicle (UAV) Remote Sensing

    This PhD project develops methods for the assessment of forest condition utilising modern remote sensing technologies, in particular optical imagery from unmanned aerial systems processed with Structure from Motion photogrammetry. The research focuses on health threats to the UK's native oak trees, specifically Chronic Oak Decline (COD) and Acute Oak Decline (AOD). The data requirements and methods to identify these complex diseases are investigated using RGB and multispectral imagery with very high spatial resolution, as well as crown textural information. These image data are produced photogrammetrically from multitemporal unmanned aerial vehicle (UAV) flights, collected during different seasons to assess the influence of phenology on the ability to detect oak decline. Particular attention is given to the identification of declined oak health within the context of semi-natural forests and heterogeneous stands. Semi-natural forest environments pose challenges regarding naturally occurring variability. The studies investigate the potential and practical implications of UAV remote sensing approaches for detection of oak decline under these conditions. COD is studied at Speculation Cannop, a section in the Forest of Dean dominated by 200-year-old oaks, where decline symptoms have been present for the last decade. Monks Wood, a semi-natural woodland in Cambridgeshire, is the study site for AOD, where trees exhibit active decline symptoms. Field surveys at these sites were designed and carried out to produce highly accurate differential GNSS positional information for symptomatic and control oak trees. This allows the UAV data to be related to COD or AOD symptoms and model predictions to be validated. Random Forest modelling is used to determine the explanatory value of remote sensing-derived metrics for distinguishing trees affected by COD or AOD from control trees. Spectral and textural variables are extracted from the remote sensing data using an object-based approach, adopting circular plots around crown centres at the individual tree level. Furthermore, the acquired UAV imagery is applied to generate a species distribution map, improving on the number of detectable species and the spatial resolution of a previous classification that used multispectral data from a piloted aircraft. In the production of the map, parameters relevant for classification accuracy, and for the identification of oak in particular, are assessed. The effects of plot size, sample size and data combinations are studied. With optimised parameters for species classification, the updated species map is subsequently employed to perform a wall-to-wall prediction of individual oak tree condition, evaluating the potential of a full-inventory detection of declined health. UAV-acquired data showed potential for the discrimination of control trees and declined trees, in the cases of both COD and AOD. The greatest potential for detecting declined oak condition was demonstrated with narrowband multispectral imagery. Broadband RGB imagery was determined to be unsuitable for a robust distinction between declined and control trees. The greatest explanatory power was found in remotely-sensed spectra related to photosynthetic activity, indicated by the high feature importance of near-infrared spectra and the vegetation indices NDRE and NDVI. High feature importance was also produced by texture metrics that describe structural variations within the crown.
    The findings indicate that the remotely sensed explanatory variables hold significant information regarding changes in leaf chemistry and crown morphology that relate to the chlorosis, defoliation and dieback occurring in the course of the decline. In the case of COD, a distinction of symptomatic from control trees was achieved with 75% accuracy. Models developed for AOD detection yielded AUC scores up to 0.98 when validated on independent sample data. Classification of oak presence was achieved with a User's accuracy of 97%, and the produced species map generated 95% overall accuracy across the eight species within the study area in the north-east of Monks Wood. Despite these encouraging results, it was shown that the generalisation of the models is unfeasible at this stage and many challenges remain. A wall-to-wall prediction of decline status confirmed this inability to generalise, yielding unrealistic results with a high number of declined trees predicted. Identified weaknesses of the developed models indicate complexity related to the natural variability of heterogeneous forests combined with the diverse symptoms of oak decline. Specific to the presented studies, additional limitations were attributed to the limited ground truth, consequent overfitting, the binary classification of oak health status and uncertainty in UAV-acquired reflectance values. Suggestions for future work are given and involve the extension of field sampling with a non-binary dependent variable to reflect the severity of oak-decline-induced stress. Further technical research on the quality and reliability of UAV remote sensing data is also required.
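    The two vegetation indices flagged as important above are simple band ratios: NDVI = (NIR − Red) / (NIR + Red) and NDRE = (NIR − RedEdge) / (NIR + RedEdge). A minimal sketch over multispectral reflectance arrays (the band arrays, crown mask and epsilon guard are illustrative, not from the thesis):

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """NDVI = (NIR - Red) / (NIR + Red); high values indicate
            photosynthetically active canopy, low values chlorosis/dieback."""
            return (nir - red) / (nir + red + eps)

        def ndre(nir, red_edge, eps=1e-9):
            """NDRE = (NIR - RedEdge) / (NIR + RedEdge); the red-edge band is
            more sensitive to chlorophyll change in dense canopies."""
            return (nir - red_edge) / (nir + red_edge + eps)

        # Per-pixel indices from reflectance tiles, then a per-crown mean
        # as an object-based feature for a Random Forest model.
        nir, red, red_edge = np.random.rand(3, 64, 64)  # stand-in reflectance tiles
        crown_mask = np.ones((64, 64), dtype=bool)      # stand-in circular crown plot
        feature_ndvi = ndvi(nir, red)[crown_mask].mean()
        feature_ndre = ndre(nir, red_edge)[crown_mask].mean()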

    Textural classification of Multiple Sclerosis lesions in multimodal MRI volumes

    Background and objectives: Multiple Sclerosis is a common relapsing demyelinating disease causing significant degradation of cognitive and motor skills, and contributes towards a reduced life expectancy of 5 to 10 years. The identification of Multiple Sclerosis lesions at early stages of a patient's life can play a significant role in the diagnosis, treatment and prognosis for that individual. In recent years the process of disease detection has been aided through the implementation of radiomic pipelines for texture extraction and classification, utilising Computer Vision and Machine Learning techniques. Eight Multiple Sclerosis patient datasets have been supplied, each containing one standard clinical T2 MRI sequence and four diffusion-weighted sequences (T2, FA, ADC, AD, RD). This work proposes a multimodal Multiple Sclerosis lesion segmentation methodology utilising supervised texture analysis, feature selection and classification. Three Machine Learning models were applied to multimodal MRI data and tested using unseen patient datasets to evaluate the classification performance of various extracted features, feature selection algorithms and classifiers on MRI volumes uncommonly applied to MS lesion detection. Method: First Order Statistics, Haralick Texture Features, Gray-Level Run-Lengths, Histogram of Oriented Gradients and Local Binary Patterns were extracted from MRI volumes which were minimally pre-processed using a skull stripping and background removal algorithm. mRMR and LASSO feature selection algorithms were applied to identify a subset of rankings for use in Machine Learning using Support Vector Machine, Random Forests and Extreme Learning Machine classification. Results: ELM achieved a top slice classification accuracy of 85% while SVM achieved 79% and RF 78%. It was found that combining information from all MRI sequences increased the classification performance when analysing unseen T2 scans in almost all cases. LASSO and mRMR feature selection methods failed to increase accuracy, and the highest-scoring group of features were Haralick Texture Features, derived from Grey-Level Co-occurrence matrices.
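    Haralick features are statistics of a grey-level co-occurrence matrix (GLCM). A minimal sketch of extracting them from a single MRI slice with scikit-image (function names follow the scikit-image ≥ 0.19 API; the quantisation to 32 grey levels and the choice of offsets are illustrative, not from the thesis):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def haralick_features(slice_2d, levels=32):
            """Quantise an MRI slice to a small number of grey levels, build
            a GLCM over 1-pixel offsets in four directions, and summarise it
            with Haralick-style statistics."""
            lo, hi = slice_2d.min(), slice_2d.max()
            img = np.clip((slice_2d - lo) / (hi - lo + 1e-12) * levels, 0, levels - 1)
            glcm = graycomatrix(img.astype(np.uint8),
                                distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}

        print(haralick_features(np.random.rand(128, 128)))  # stand-in slice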