
    Quality assessment of 3D building data

    Three-dimensional building models are now often produced from lidar and photogrammetric data. The quality control of these models is a relevant issue from both the scientific and practical points of view. This work presents a method for the quality control of such models. The input model (3D building data) is co-registered to the verification data using a 3D surface matching method, which evaluates the Euclidean distances between the verification and input datasets. These Euclidean distances provide an appropriate metric for 3D model quality, independent of the method of data capture. The proposed method can favourably address reference system accuracy, positional accuracy and completeness. Three practical examples of the method are provided for demonstration. This project was funded by Ordnance Survey Research, the research and development department of the Ordnance Survey of Great Britain, which is gratefully acknowledged. The first author, Devrim Akca, was formerly with the Institute of Geodesy and Photogrammetry of ETH Zurich, Switzerland.
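    The quality metric at the core of the method above is the set of Euclidean distances between the co-registered input model and the verification data. The sketch below is a minimal illustration of that idea, not the authors' surface matching implementation: it approximates both surfaces as point clouds and uses nearest-neighbour distances via scipy.spatial.cKDTree; the array names and summary statistics are assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def surface_distances(model_pts, verification_pts):
        """Euclidean distance from each model point to its nearest
        verification point. Both inputs are (N, 3) XYZ arrays,
        assumed already co-registered."""
        tree = cKDTree(verification_pts)
        distances, _ = tree.query(model_pts)
        return distances

    # Hypothetical usage: summary statistics serve as quality metrics.
    rng = np.random.default_rng(0)
    model = rng.uniform(0, 10, size=(1000, 3))            # 3D building model
    reference = model + rng.normal(0, 0.05, (1000, 3))    # verification data
    d = surface_distances(model, reference)
    print(f"RMS: {np.sqrt(np.mean(d**2)):.3f} m, max: {d.max():.3f} m")
    ```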

    Scale Sequence Joint Deep Learning (SS-JDL) for land use and land cover classification

    Choosing appropriate scales for remotely sensed image classification is extremely important, yet remains an open question for deep convolutional neural networks (CNNs), because spatial scale (i.e., input patch size) affects the recognition of ground objects. Currently, optimal scale selection is cumbersome and time-consuming, requiring repetitive trial-and-error experiments, which significantly reduces the practical utility of the corresponding classification methods. This issue is crucial when trying to classify large-scale land use (LU) and land cover (LC) jointly (Zhang et al., 2019). In this paper, a simple and parsimonious scale sequence joint deep learning (SS-JDL) method is proposed for joint LU and LC classification, in which a sequence of scales is embedded in the iterative process of fitting the joint distribution implicit in the joint deep learning (JDL) method, thus replacing the previous paradigm of scale selection. The sequence of scales, derived autonomously and used to define the CNN input patch sizes, provides consecutive information transmission from small-scale features to large-scale representations, and from simple LC states to complex LU characterisations. The effectiveness of the novel SS-JDL method was tested on aerial digital photography of three complex and heterogeneous landscapes, two in Southern England (Bournemouth and Southampton) and one in North West England (Manchester). Benchmark comparisons were provided by a range of LU and LC methods, including the state-of-the-art joint deep learning (JDL) method. The experimental results demonstrated that SS-JDL consistently outperformed all of the state-of-the-art baselines in terms of both LU and LC classification accuracy, as well as computational efficiency. The proposed SS-JDL method, therefore, represents a fast and effective implementation of the state-of-the-art JDL method. By creating a single, unifying joint distribution framework for classifying higher order feature representations, including LU, the SS-JDL method has the potential to transform the classification paradigm in remote sensing, and in machine learning more generally.
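    The key ingredient described above is a sequence of scales that fixes the CNN input patch size at each JDL iteration, moving from small patches (LC detail) to large patches (LU context). A minimal sketch of that idea, assuming a hand-written scale sequence (the real method derives the sequence autonomously) and a hypothetical `image` array:

    ```python
    import numpy as np

    # Assumed scale sequence: CNN input patch sizes used in order across
    # the JDL iterations, from fine (land cover) to coarse (land use).
    SCALES = [16, 32, 48, 64, 80, 96]

    def patch_sequence(image, row, col, scales=SCALES):
        """Return one square patch per scale, each centred on (row, col).
        Each patch would be the CNN input at that iteration's scale.
        `image` is an (H, W, bands) array."""
        patches = []
        for s in scales:
            half = s // 2
            r0 = np.clip(row - half, 0, image.shape[0] - s)
            c0 = np.clip(col - half, 0, image.shape[1] - s)
            patches.append(image[r0:r0 + s, c0:c0 + s])
        return patches
    ```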

    Classification images for aerial images capture visual expertise for binocular disparity and a prior for lighting from above

    Using a novel approach to classification images (CIs), we investigated the visual expertise of surveyors for luminance and binocular disparity cues simultaneously, after screening for stereoacuity. Stereoscopic aerial images of hedges and ditches were classified in 10,000 trials by six trained remote sensing surveyors and six novices. Images were heavily masked with luminance and disparity noise simultaneously. Hedge and ditch images had reversed disparity on around half the trials, meaning hedges became ditch-like and vice versa. The hedge and ditch images were also flipped vertically on around half the trials, changing the direction of the light source and completing a 2 × 2 × 2 stimulus design. CIs were generated by accumulating the noise textures associated with "hedge" and "ditch" classifications, respectively, and subtracting one from the other. Typical CIs had a central peak with one or two negative side-lobes. We found clear differences in the amplitudes and shapes of perceptual templates across groups and noise types, with experts prioritizing binocular disparity and using it more effectively. Conversely, novices used luminance cues more than experts, meaning that task motivation alone could not explain group differences. Asymmetries in the luminance CIs revealed individual differences in lighting interpretation, with experts less prone to assume lighting from above, consistent with their training on aerial images of UK scenes lit by a southerly sun. Our results show that (i) dual noise in images can be used to produce simultaneous CI pairs, (ii) expertise for disparity cues does not depend on stereoacuity, (iii) CIs reveal the visual strategies developed by experts, (iv) top-down perceptual biases can be overcome by long-term learning, and (v) CIs have practical potential for directing visual training.
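    The CI construction described above reduces to a simple computation: average the noise textures shown on "hedge" trials, average those shown on "ditch" trials, and subtract. A minimal sketch with array names assumed for illustration; in the study, the same computation is run separately on the luminance noise and the disparity noise to yield simultaneous CI pairs.

    ```python
    import numpy as np

    def classification_image(noise, responses):
        """Classic CI estimate. `noise` is an (n_trials, H, W) array of
        the noise texture shown on each trial; `responses` is a length
        n_trials sequence of "hedge"/"ditch" classifications."""
        responses = np.asarray(responses)
        hedge = noise[responses == "hedge"].mean(axis=0)
        ditch = noise[responses == "ditch"].mean(axis=0)
        return hedge - ditch   # peaks where noise pushed responses to "hedge"
    ```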

    VPRS-based regional decision fusion of CNN and MRF classifications for very fine resolution remotely sensed images

    Recent advances in computer vision and pattern recognition have demonstrated the superiority of deep neural networks using spatial feature representations, such as convolutional neural networks (CNNs), for image classification. However, any classifier, regardless of its model structure (deep or shallow), involves prediction uncertainty when classifying spatially and spectrally complicated very fine spatial resolution (VFSR) imagery. We propose here to characterise the uncertainty distribution of CNN classification and integrate it into a regional decision fusion to increase classification accuracy. Specifically, a variable precision rough set (VPRS) model is proposed to quantify the uncertainty within CNN classifications of VFSR imagery, and to partition this uncertainty into positive regions (correct classifications) and non-positive regions (uncertain or incorrect classifications). The "more correct" areas were trusted to the CNN, whereas the uncertain areas were rectified by a multilayer perceptron (MLP)-based Markov random field (MLP-MRF) classifier to provide crisp and accurate boundary delineation. The proposed MRF-CNN fusion strategy exploits the complementary characteristics of the two classifiers based on the VPRS uncertainty description and classification integration. The effectiveness of the MRF-CNN method was tested in both urban and rural areas of southern England, as well as on Semantic Labelling datasets. The MRF-CNN consistently outperformed the benchmark MLP, SVM, MLP-MRF and CNN baselines. This research provides a regional decision fusion framework within which to gain the advantages of model-based CNN, while overcoming the problem of losing effective resolution and uncertain prediction at object boundaries, which is especially pertinent for complex VFSR image classification.
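    A minimal sketch of the fusion logic described above, not the paper's VPRS implementation: pixels where the CNN's confidence clears a precision threshold (standing in for the VPRS positive region) keep the CNN label, and the remainder fall back to the MLP-MRF label. The threshold value and array names are assumptions.

    ```python
    import numpy as np

    def fuse_regions(cnn_conf, cnn_labels, mrf_labels, beta=0.8):
        """cnn_conf: (H, W) maximum class probability from the CNN;
        cnn_labels, mrf_labels: (H, W) integer class maps.
        Pixels in the 'positive region' (confident CNN) keep the CNN
        label; non-positive regions are rectified by the MLP-MRF."""
        positive = cnn_conf >= beta
        return np.where(positive, cnn_labels, mrf_labels)
    ```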

    A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    The contextual-based convolutional neural network (CNN) with a deep architecture and the pixel-based multilayer perceptron (MLP) with a shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily around the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN, based on deep spatial feature representation, and from the MLP, based on spectral discrimination. At the same time, limitations of the CNN arising from its convolutional filters, such as uncertainty in object boundary partition and loss of useful fine spatial resolution detail, were compensated for. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, the spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively addressing the complicated problem of VFSR image classification.
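    The complementarity described above follows from what each classifier sees: the MLP classifies a single pixel's spectral vector, while the CNN classifies a spatial patch around that pixel. A minimal sketch of the two input representations (patch size and array layout are assumptions, not the paper's settings):

    ```python
    def mlp_input(image, row, col):
        """Pixel-based input: the (bands,) spectral vector at one pixel."""
        return image[row, col, :]

    def cnn_input(image, row, col, patch=32):
        """Contextual input: a (patch, patch, bands) window centred on
        the pixel, from which the CNN learns spatial features."""
        half = patch // 2
        return image[row - half:row + half, col - half:col + half, :]
    ```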

    An object-based convolutional neural network (OCNN) for urban land use classification

    Urban land use information is essential for a variety of urban-related applications such as urban planning and regional administration. The extraction of urban land use from very fine spatial resolution (VFSR) remotely sensed imagery has, therefore, drawn much attention in the remote sensing community. Nevertheless, classifying urban land use from VFSR images remains a challenging task, due to the extreme difficulty of differentiating complex spatial patterns to derive high-level semantic labels. Deep convolutional neural networks (CNNs) offer great potential to extract high-level spatial features, thanks to their hierarchical nature with multiple levels of abstraction. However, blurred object boundaries and geometric distortion, as well as huge computational redundancy, severely restrict the potential application of CNNs to the classification of urban land use. In this paper, a novel object-based convolutional neural network (OCNN) is proposed for urban land use classification using VFSR images. Rather than pixel-wise convolutional processes, the OCNN relies on segmented objects as its functional units, and CNNs are used to analyse and label objects so as to partition within-object and between-object variation. Two CNNs with different model structures and window sizes are developed to predict linearly shaped objects (e.g. highways, canals) and general (other, non-linearly shaped) objects. A rule-based decision fusion is then performed to integrate the class-specific classification results. The effectiveness of the proposed OCNN method was tested on aerial photography of two large urban scenes in Southampton and Manchester in Great Britain. The OCNN, combining large and small window sizes, achieved excellent classification accuracy and computational efficiency, consistently outperforming its sub-modules as well as other benchmark comparators, including pixel-wise CNN, contextual-based MRF and object-based OBIA-SVM methods. The proposed method provides the first object-based CNN framework to effectively and efficiently address the complicated problem of urban land use classification from VFSR images.
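    A minimal sketch of the object-based routing described above. The segmentation output, the elongation measure, its threshold and the two trained networks are hypothetical stand-ins; the paper's actual integration is a rule-based decision fusion of the class-specific results.

    ```python
    def classify_objects(objects, elongation, cnn_linear, cnn_general,
                         threshold=3.0):
        """Label each segmented object with one of two CNNs: a network
        tailored to linearly shaped objects (e.g. highways, canals) and
        a large-window network for general objects. `objects` is an
        iterable of segments, each with an `id` and an image `patch`."""
        labels = {}
        for obj in objects:
            net = cnn_linear if elongation(obj) > threshold else cnn_general
            labels[obj.id] = net.predict(obj.patch)   # one label per object
        return labels
    ```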

    Opportunities for machine learning and artificial intelligence in national mapping agencies: enhancing Ordnance Survey workflow

    National Mapping Agencies (NMAs) are frequently tasked with providing highly accurate geospatial data for a range of customers. Traditionally, this challenge has been met by combining the collection of remote sensing data with extensive field work, and the manual interpretation and processing of the combined data. Consequently, this task is a significant logistical undertaking, which benefits the production of high-quality output but is extremely expensive to deliver. Novel approaches that can automate feature extraction and classification from remotely sensed data are therefore of great potential interest to NMAs across the entire sector. Using research undertaken at Great Britain's NMA, Ordnance Survey (OS), as an example, this paper provides an overview of recent advances at an NMA in the use of artificial intelligence (AI), including machine learning (ML) and deep learning (DL) based applications. Examples of these approaches include automating feature extraction and classification from remotely sensed aerial imagery. In addition, recent OS research in applying deep (convolutional) neural network architectures to image classification is also described. This overview is intended to be useful to other NMAs who may be considering the adoption of similar approaches within their workflows.

    Deep multiband surface photometry on star forming galaxies: I. A sample of 24 blue compact galaxies

    [Abridged] We present deep optical and near-infrared UBVRIHKs imaging data for 24 blue compact galaxies (BCGs). The sample contains luminous dwarf and intermediate-mass BCGs which are predominantly metal-poor, although a few have near-solar metallicities. We have analyzed isophotal and elliptical integration surface brightness and color profiles, extremely deep (mu_B < 29 mag arcsec^{-2}) contour maps and RGB images for each galaxy in the sample. The colors are compared to different spectral evolutionary models. We detect extremely extended low surface brightness (LSB) components dominant beyond the Holmberg radius, as well as optical bridges between companion galaxies at the mu_V ~ 28 mag arcsec^{-2} isophotal level. The central surface brightness mu_0 and scale length h_r are derived from two radial ranges typically assumed to be dominated by the underlying host galaxy. We find that mu_0 and h_r of the BCG hosts deviate from those of dwarf ellipticals (dE) and dwarf irregulars (dI) solely due to a strong burst contribution to the surface brightness profile almost down to the Holmberg radius. Structural parameters obtained from a fainter region, mu_B = 26-28 mag arcsec^{-2}, are consistent with those of true LSB galaxies for the starbursting BCGs in our sample, and with dEs and dIs for the BCGs with less vigorous star formation. Comment: 61 pages, 45 figures, submitted
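    The structural parameters above follow from fitting an exponential disk to the outer profile: in magnitude units, mu(r) = mu_0 + 1.0857 r / h_r is linear in radius, so a straight-line fit over the chosen surface brightness window yields both parameters. A minimal sketch, with the selection window taken from the fainter range quoted above (array names are illustrative, not the paper's pipeline):

    ```python
    import numpy as np

    def fit_exponential_disk(r, mu, mu_min=26.0, mu_max=28.0):
        """Fit mu(r) = mu_0 + 1.0857 * r / h_r over the radial range where
        the B-band profile lies between mu_min and mu_max mag arcsec^-2
        (the fainter window attributed to the LSB host).
        Returns (mu_0, h_r)."""
        sel = (mu >= mu_min) & (mu <= mu_max)
        slope, mu_0 = np.polyfit(r[sel], mu[sel], 1)   # line in mu vs r
        h_r = 1.0857 / slope                           # 1.0857 = 2.5 / ln(10)
        return mu_0, h_r
    ```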

    Trends in Silicates in the β Pictoris Disk

    While beta Pic is known to host silicates in ring-like structures, whether the properties of this silicate dust vary with stellocentric distance remains an open question. We re-analyze the beta Pictoris debris disk spectrum from the Spitzer Infrared Spectrograph (IRS) and a new IRTF/SpeX spectrum to investigate trends in Fe/Mg ratio, grain shape, and crystallinity as a function of wavelength, a proxy for stellocentric distance. By analyzing a re-calibrated and re-extracted spectrum, we identify a new 18 micron forsterite emission feature and recover a 23 micron forsterite emission feature with a substantially larger line-to-continuum ratio than previously reported. We find that these prominent spectral features are primarily produced by small submicron-sized grains, which are continuously generated and replenished by planetesimal collisions in the disk and can elucidate their parent bodies' composition. We discover three trends in these small grains: as stellocentric distance increases, (1) small silicate grains become more crystalline (less amorphous), (2) they become more irregular in shape, and (3) for crystalline silicate grains, the Fe/Mg ratio decreases. Applying these trends to beta Pic's planetary architecture, we find that the dust population exterior to the orbits of beta Pic b and c differs substantially in crystallinity and shape. We also find a tentative 3-5 micron dust excess due to spatially unresolved hot dust emission close to the star. From our findings, we infer that the surfaces of large planetesimals are more Fe-rich and collisionally processed closer to the star, but more Fe-poor and primordial farther from the star. Comment: 19 pages, 12 figures, Accepted for Publication in Ap
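    The line-to-continuum ratio quoted above is a simple diagnostic: estimate a continuum under the feature from windows on either side, then measure the feature's peak excess over it. A minimal sketch for a band like the 23 micron forsterite feature; the wavelength windows and array names are assumptions, not the paper's reduction.

    ```python
    import numpy as np

    def line_to_continuum(wave, flux, feature=(22.5, 24.0),
                          anchors=((21.5, 22.5), (24.0, 25.0))):
        """Peak excess of an emission feature over a linear continuum
        interpolated between two flanking anchor windows. `wave` and
        `flux` are 1-D arrays of the calibrated spectrum."""
        points = []
        for lo, hi in anchors:
            sel = (wave >= lo) & (wave <= hi)
            points.append((wave[sel].mean(), np.median(flux[sel])))
        (w1, f1), (w2, f2) = points
        continuum = f1 + (f2 - f1) * (wave - w1) / (w2 - w1)
        in_band = (wave >= feature[0]) & (wave <= feature[1])
        return np.max(flux[in_band] / continuum[in_band]) - 1.0
    ```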

    Functional Desaturase Fads1 (Δ5) and Fads2 (Δ6) Orthologues Evolved before the Origin of Jawed Vertebrates

    Long-chain polyunsaturated fatty acids (LC-PUFAs) such as arachidonic (ARA), eicosapentaenoic (EPA) and docosahexaenoic (DHA) acids are essential components of biomembranes, particularly in neural tissues. Endogenous synthesis of ARA, EPA and DHA occurs from precursor dietary essential fatty acids, such as linoleic and α-linolenic acid, through elongation and Δ5 and Δ6 desaturations. With respect to desaturation activities, some noteworthy differences have been noted among vertebrate classes. In mammals, the Δ5 activity is allocated to the Fads1 gene, while Fads2 is a Δ6 desaturase. In contrast, teleosts show distinct combinations of desaturase activities (e.g. bifunctional or separate Δ5 and Δ6 desaturases), apparently allocated to Fads2-type genes. To determine the timing of Fads1-Δ5 and Fads2-Δ6 evolution in vertebrates, we used a combination of comparative and functional genomics with the analysis of key phylogenetic species. Our data show that Fads1 and Fads2 genes with Δ5 and Δ6 activities, respectively, evolved before the gnathostome radiation, since the catshark Scyliorhinus canicula has functional orthologues of both gene families. Consequently, the loss of Fads1 in teleosts is a secondary episode, while the Δ5 activities in the same group most likely arose through independent mutations in Fads2-type genes. Unexpectedly, we also establish that events of Fads1 gene expansion have taken place in birds and reptiles. Finally, a fourth Fads gene (Fads4) was found to occur exclusively in mammalian genomes. Our findings illuminate the history of a crucially important gene family in vertebrate fatty acid metabolism and physiology, and provide an explanation of how the observed lineage-specific gene duplications, losses and diversifications might be linked to habitat-specific food web structures in different environments and over geological timescales.