
    Extracting Agricultural Fields from Remote Sensing Imagery Using Graph-Based Growing Contours

    Knowledge of the location and extent of agricultural fields is required for many applications, including agricultural statistics, environmental monitoring, and administrative policies. Furthermore, many mapping applications, such as object-based classification, crop type distinction, or large-scale yield prediction, benefit significantly from the accurate delineation of fields. Still, most existing field maps and observation systems rely on historic administrative maps or labor-intensive field campaigns. These are often expensive to maintain and quickly become outdated, especially in regions of frequently changing agricultural patterns. However, exploiting openly available remote sensing imagery (e.g., from the European Union’s Copernicus programme) may allow for frequent and efficient field mapping with minimal human interaction. We present a new approach to extracting agricultural fields at the sub-pixel level. It consists of a boundary detection step and a field polygon extraction step based on a newly developed, modified version of the growing snakes active contours model, which we refer to as graph-based growing contours. This technique is capable of extracting the complex networks of boundaries present in agricultural landscapes and is largely automatic, requiring little supervision. The whole detection and extraction process is designed to work independently of sensor type, resolution, or wavelength. As a test case, we applied the method to two regions of interest in a study area in northern Germany using multi-temporal Sentinel-2 imagery. Extracted fields were compared visually and quantitatively to ground reference data. The technique proved reliable in producing polygons closely matching the reference data, both in terms of boundary location and statistical proxies such as median field size and total acreage.
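
    The graph-based growing contours technique itself is specific to this work, but the two-step structure it describes (boundary detection followed by sub-pixel polygon extraction) can be illustrated with generic building blocks. The sketch below is a rough stand-in using scikit-image primitives, not the authors' algorithm; the input file name and the contour threshold are illustrative assumptions.

    # Rough stand-in for the two-step idea: detect boundaries in a single
    # raster band, then trace sub-pixel contours as candidate field outlines.
    # Not the paper's graph-based growing contours; the file name and the
    # threshold are illustrative assumptions.
    import numpy as np
    from skimage import io, filters, measure

    image = io.imread("sentinel2_band.tif").astype(float)  # hypothetical single band

    # Step 1: boundary detection -- gradient magnitude highlights field edges.
    edges = filters.sobel(image)

    # Step 2: polygon extraction -- trace iso-contours of the edge map with
    # sub-pixel accuracy; each sufficiently long contour approximates a
    # field boundary.
    level = edges.mean() + 2 * edges.std()  # illustrative threshold
    contours = measure.find_contours(edges, level)
    fields = [c for c in contours if len(c) > 50]
    print(f"extracted {len(fields)} candidate field outlines")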

    Human perception-oriented segmentation for triangle meshes

    Mesh segmentation is an important research topic in computer graphics, in particular in geometric modeling. This is so because mesh segmentation techniques find many applications, namely in movie production, computer animation, virtual reality, mesh compression, and digital games. In fact, triangle meshes are widely used in interactive applications, so their segmentation into meaningful parts (also called human-perceptual segmentation, perceptive segmentation, or meaningful segmentation) is often seen as a way of speeding up user interaction, detecting collisions between mesh-defined objects in a 3D scene, and animating one or more meaningful parts (e.g., the head of a character) independently of the other parts of a given object. It happens that there is no known technique capable of correctly segmenting arbitrary meshes into meaningful parts, even when restricted to the freeform and non-freeform domains. Some techniques are more adequate for non-freeform objects (e.g., mechanical parts geometrically defined by quadrics), while others perform better in the domain of freeform objects. Only in the recent literature have a few techniques appeared that apply to the entire universe of freeform and non-freeform objects. Even worse is the fact that most segmentation techniques are not entirely automatic, in the sense that almost all of them require some sort of pre-requisites and user assistance. Summing up, these three challenges, related to perceptual proximity, generality, and automation, are at the core of the work described in this thesis.
    To face these challenges, this thesis introduces the first contour-based mesh segmentation algorithm in the literature, inspired by the edge-based segmentation techniques common in image analysis and processing, as opposed to region-based segmentation techniques. Its leading idea is to first find the contour of each region and then to identify and collect all of its inner triangles. The mesh regions found correspond to saliences and recesses, which need not be strictly convex nor strictly concave, respectively. These regions, called relaxedly convex regions (or saliences) and relaxedly concave regions (or recesses), produce segmentations that are less sensitive to noise and, at the same time, more intuitive from the point of view of human perception; hence the name human perception-oriented (HPO) segmentation. Besides, and unlike the current state of the art in mesh segmentation, the existence of these relaxed regions makes the algorithm suited to both non-freeform and freeform objects.
    This thesis also tackles a fourth challenge, which is related to the fusion of mesh segmentation and multi-resolution. Truly speaking, a plethora of segmentation techniques, as well as a number of multi-resolution techniques, for triangle meshes already exist in the literature. However, it is not so common to find data structures and algorithms that fuse these two concepts, multi-resolution and segmentation, into a single multi-resolution scheme serving applications that deal with both plain and segmented meshes, where a plain mesh is understood as a mesh with a single segment. So, this thesis describes a novel multi-resolution segmentation scheme (i.e., data structures and algorithms), called the extended Ghost Cell (xGC) scheme. This scheme preserves the shape of meshes in both global and local terms; that is, mesh segments and their boundaries, as well as creases and apices, are preserved no matter the level of resolution used during mesh simplification/refinement. Moreover, unlike other segmentation schemes, it makes it possible to have adjacent segments differing by two or more resolution levels. This is particularly useful in computer animation, mesh compression and transmission, geometric modeling operations, scientific visualization, and computer graphics. In short, this thesis presents a generic, automatic, human perception-oriented scheme that symbiotically integrates the concepts of segmentation and multi-resolution for triangle meshes representing 3D objects.
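
    As a toy illustration of the contour-first idea (find each region's boundary, then collect its inner triangles), the sketch below flood-fills triangles across all non-contour edges, assuming the contour edges have already been detected, for instance by a relaxed convexity/concavity test. The mesh representation and function name are illustrative, not the thesis's actual data structures.

    from collections import deque

    def segment_by_contours(triangles, boundary_edges):
        """triangles: list of (v0, v1, v2) vertex-index tuples.
        boundary_edges: set of frozenset({va, vb}) marking contour edges."""
        # Map each non-contour edge to the triangles sharing it.
        edge_to_tris = {}
        for t, (a, b, c) in enumerate(triangles):
            for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
                if e not in boundary_edges:
                    edge_to_tris.setdefault(e, []).append(t)

        labels = [-1] * len(triangles)
        n_segments = 0
        for seed in range(len(triangles)):
            if labels[seed] != -1:
                continue
            # Breadth-first flood fill bounded by the contour edges.
            labels[seed] = n_segments
            queue = deque([seed])
            while queue:
                t = queue.popleft()
                a, b, c = triangles[t]
                for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
                    for nb in edge_to_tris.get(e, []):
                        if labels[nb] == -1:
                            labels[nb] = n_segments
                            queue.append(nb)
            n_segments += 1
        return labels, n_segments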

    Dataset shift in land-use classification for optical remote sensing

    Multimodal dataset shifts consisting of both concept and covariate shifts are addressed in this study to improve texture-based land-use classification accuracy for optical panchromatic and multispectral remote sensing. Multitemporal and multisensor variances between training and test data are caused by atmospheric, phenological, sensor, illumination, and viewing-geometry differences, which cause supervised classification inaccuracies. The first dataset-shift reduction strategy involves input modification through shadow removal before feature extraction with gray-level co-occurrence matrix and local binary pattern features. Components of a Rayleigh quotient-based manifold alignment framework are investigated to reduce multimodal dataset shift at the input level of the classifier through unsupervised classification, followed by manifold matching to transfer classification labels by finding across-domain cluster correspondences. The ability of weighted hierarchical agglomerative clustering to partition poorly separated feature spaces is explored, and weight-generalized internal validation is used for unsupervised cardinality determination. Manifold matching is performed with the Hungarian algorithm on a cost matrix featuring geometric similarity measurements that assume the preservation of intrinsic structure across the dataset shift. Local neighborhood geometric co-occurrence frequency information is recovered, and a novel integration thereof is shown to improve matching accuracy. A final strategy for addressing multimodal dataset shift is multiscale feature learning, which is used within a convolutional neural network to obtain optimal hierarchical feature representations instead of engineered texture features that may be sub-optimal. Feature learning is shown to produce features that are robust against multimodal acquisition differences in a benchmark land-use classification dataset. A novel multiscale input strategy is proposed for an optimized convolutional neural network that improves classification accuracy to a competitive level on the UC Merced benchmark dataset and outperforms single-scale input methods. All the proposed strategies for addressing multimodal dataset shift in land-use image classification have resulted in significant accuracy improvements for various multitemporal and multimodal datasets.
    Thesis (PhD)--University of Pretoria, 2016.
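
    As a minimal sketch of the label-transfer step, the following matches clusters across domains by solving the assignment problem (the Hungarian algorithm, via scipy.optimize.linear_sum_assignment) over a cost matrix; the centroid-distance cost used here is a placeholder for the thesis's richer geometric and co-occurrence similarity measurements.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_clusters(src_centroids, tgt_centroids):
        """Each input: (k, d) array of cluster centroids in feature space."""
        # Placeholder cost: pairwise Euclidean distance between centroids,
        # assuming intrinsic structure is preserved across the dataset shift.
        cost = np.linalg.norm(
            src_centroids[:, None, :] - tgt_centroids[None, :, :], axis=-1)
        src_idx, tgt_idx = linear_sum_assignment(cost)  # Hungarian algorithm
        return dict(zip(tgt_idx, src_idx))  # target cluster -> source label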

    Variational models for color image processing in the RGB space inspired by human vision (Mémoire d'Habilitation à Diriger des Recherches, spécialité Mathématiques)

    The research I have developed so far can be divided into four main categories: variational models for color correction based on human perception, histogram transfer, high dynamic range image processing, and the statistics of natural color images. These topics are closely interconnected, since color is a strongly interdisciplinary subject.

    Surface-guided computing to analyze subcellular morphology and membrane-associated signals in 3D

    Signal transduction and cell function are governed by the spatiotemporal organization of membrane-associated molecules. Despite significant advances in visualizing molecular distributions by 3D light microscopy, cell biologists still have limited quantitative understanding of the processes implicated in the regulation of molecular signals at the whole-cell scale. In particular, complex and transient cell surface morphologies challenge the complete sampling of cell geometry, membrane-associated molecular concentration and activity, and the computing of meaningful parameters such as the cofluctuation between morphology and signals. Here, we introduce u-Unwrap3D, a framework to remap arbitrarily complex 3D cell surfaces and membrane-associated signals into equivalent lower-dimensional representations. The mappings are bidirectional, allowing the application of image processing operations in the data representation best suited for the task and the subsequent presentation of the results in any of the other representations, including the original 3D cell surface. Leveraging this surface-guided computing paradigm, we track segmented surface motifs in 2D to quantify the recruitment of Septin polymers by blebbing events; we quantify actin enrichment in peripheral ruffles; and we measure the speed of ruffle movement along topographically complex cell surfaces. Thus, u-Unwrap3D provides access to spatiotemporal analyses of cell biological parameters on unconstrained 3D surface geometries and signals.
    Comment: 49 pages, 10 figures
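
    u-Unwrap3D's bidirectional mappings handle arbitrarily complex surfaces; purely as a conceptual toy, the sketch below remaps a per-vertex signal from a star-shaped (genus-0) surface into a 2D (theta, phi) image, assuming every surface point is visible from the centroid. Function and parameter names are illustrative, not the framework's API.

    import numpy as np

    def unwrap_star_shaped(vertices, signal, n_theta=180, n_phi=360):
        """vertices: (n, 3) surface points; signal: (n,) per-vertex values."""
        centered = vertices - vertices.mean(axis=0)
        x, y, z = centered.T
        r = np.linalg.norm(centered, axis=1)
        theta = np.arccos(np.clip(z / r, -1.0, 1.0))  # polar angle in [0, pi]
        phi = np.arctan2(y, x) + np.pi                # azimuth in [0, 2*pi)

        # Rasterize: average the signal over each angular bin.
        ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
        pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
        acc = np.zeros((n_theta, n_phi))
        cnt = np.zeros((n_theta, n_phi))
        np.add.at(acc, (ti, pj), signal)
        np.add.at(cnt, (ti, pj), 1)
        return np.divide(acc, cnt, out=np.full_like(acc, np.nan), where=cnt > 0)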

    Modelling Neuron Morphology: Automated Reconstruction from Microscopy Images

    Understanding how the brain works is, beyond a shadow of a doubt, one of the greatest challenges for modern science. Achieving a deep knowledge of the structure, function, and development of the nervous system at the molecular, cellular, and network levels is crucial in this attempt, as processes at all these scales are intrinsically linked with higher-order cognitive functions. Research in the various areas of neuroscience relies on advanced imaging techniques that collect increasing amounts of heterogeneous and complex data at different scales. Computational tools and neuroinformatics solutions are therefore required to integrate and analyze the massive quantity of acquired information. Within this context, the development of automatic methods and tools for the study of neuronal anatomy plays a central role. The morphological properties of the soma and of the axonal and dendritic arborizations constitute a key discriminant for the neuronal phenotype and play a determinant role in network connectivity. Quantitative analysis allows the study of possible factors influencing neuronal development, the neuropathological abnormalities related to specific syndromes, the relationships between neuronal shape and function, signal transmission, and network connectivity. Therefore, three-dimensional digital reconstructions of somata, axons, and dendrites are indispensable for exploring neural networks. This thesis proposes a novel and completely automatic pipeline for neuron reconstruction, with operations ranging from the detection and segmentation of the soma to the tracing of the dendritic arborization. The pipeline can deal with different datasets and acquisitions, both at the network scale and at the single-cell scale, without any user intervention or manual adjustment. We developed an ad hoc approach for the localization and segmentation of neuron bodies. Then, various methods and research lines were investigated for the reconstruction of the whole dendritic arborization of each neuron, which is addressed in both 2D and 3D images.
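
    As a hedged sketch of the first pipeline stage (soma localization), the stand-in below thresholds a 3D microscopy volume and keeps large connected components as candidate somata; the thesis's actual approach is more elaborate, and the threshold and size filter here are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def detect_somata(volume, min_voxels=500):
        """volume: 3D numpy array of image intensities."""
        # Illustrative global threshold: mean + 2 standard deviations.
        mask = volume > volume.mean() + 2 * volume.std()
        labels, n = ndimage.label(mask)  # 3D connected components
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
        # Centroids of the surviving components: candidate soma positions.
        return ndimage.center_of_mass(mask, labels, keep)  # (z, y, x) tuples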

    Three-Dimensional Identification and Reconstruction of Galaxy Systems within Deep Redshift Surveys

    We have developed a new geometrical method for identifying and reconstructing a homogeneous and highly complete set of galaxy groups in the next generation of deep, flux-limited redshift surveys. Our method combines information from the three-dimensional Voronoi diagram and its dual, the Delaunay triangulation, to obtain group and cluster catalogs that are remarkably robust over wide ranges in redshift and degree of density enhancement. Using the mock DEEP2 catalogs, we demonstrate that the VDM algorithm can be used to identify a homogeneous set of groups in a magnitude-limited sample (I_AB ≤ 23.5) throughout the survey redshift window 0.7 < z < 1.2. The actual group membership can be effectively reconstructed, even in the distorted redshift-space environment, for systems with line-of-sight velocity dispersion σ_los greater than ≈ 200 km/s. By comparing the galaxy cluster catalog derived from the mock DEEP2 observations to the underlying distribution of clusters found in real space with much fainter galaxies included (which should more closely trace mass in the cluster), we can assess completeness in velocity dispersion directly. We conclude that the recovered DEEP2 group and cluster sample should be statistically complete for σ_los ≳ 400 km/s. Finally, we argue that the reconstructed bivariate distribution of systems as a function of redshift and velocity dispersion reproduces with high fidelity the underlying real-space distribution and can thus be used robustly to constrain cosmological parameters.
    Comment: LaTeX, 21 pages, submitted to ApJ
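
    The VDM algorithm combines Voronoi volumes with Delaunay connectivity; as a simplified stand-in, the sketch below links galaxies along short Delaunay edges and takes connected components as candidate groups. The linking length and the omission of Voronoi density cuts and redshift-space corrections are illustrative simplifications.

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import connected_components

    def find_groups(positions, max_link=2.0):
        """positions: (n, 3) galaxy coordinates (e.g., comoving Mpc)."""
        tri = Delaunay(positions)
        # Collect the unique edges of all tetrahedra.
        edges = set()
        for s in tri.simplices:
            for i in range(4):
                for j in range(i + 1, 4):
                    edges.add((min(s[i], s[j]), max(s[i], s[j])))
        # Keep only edges shorter than the linking length.
        kept = [(a, b) for a, b in edges
                if np.linalg.norm(positions[a] - positions[b]) < max_link]
        n = len(positions)
        graph = coo_matrix(
            (np.ones(len(kept)), ([a for a, _ in kept], [b for _, b in kept])),
            shape=(n, n))
        _, labels = connected_components(graph, directed=False)
        return labels  # group id per galaxy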

    Data Fusion of Surface Meshes and Volumetric Representations

    The term Data Fusion refers to integrating knowledge from at least two independent sources of information such that the result is more than merely the sum of all inputs. In our project, the knowledge about a given specimen comprises its acquisitions from optical 3D scans and Computed Tomography, with a special focus on limited-angle artifacts. In industrial quality inspection, these imaging techniques are commonly used for non-destructive testing. Additional sources of information are digital descriptions for manufacturing, or tactile measurements of the specimen. Hence, we have several representations comprising the object as a whole, each with certain shortcomings and unique insights. We strive to combine all their strengths and compensate for their weaknesses in order to create an enhanced representation of the acquired object. To achieve this, the identification of correspondences between the representations is the first task. We extract a subset with prominent exterior features from each input, because all acquisitions include these features. To this end, regional queries from random seeds on an enclosing hull are employed. Subsequently, the relative orientation of the original data sets is calculated from their subsets, as those comprise the (potentially defective) areas of overlap. We consider global features such as principal components and barycenters for the alignment, since in this specific case classical point-to-point comparisons are prone to error. Our alignment scheme outperforms traditional approaches and can even be enhanced by considering limited-angle artifacts in the reconstruction process of Computed Tomography. An analysis of local gradients in the resulting volumetric representation allows us to distinguish between reliable observations and defects. Lastly, tactile measurements are extremely accurate but lack a suitable 3D representation; thus, we also present an approach for converting them into a 3D surface that suits our workflow. As a result, the respective inputs are aligned with each other, indicate the quality of the included information, and are in a compatible format to be combined in a subsequent step. The data fusion result permits more accurate metrological tasks and increases the precision of detecting production flaws or indications of wear. The final step of combining the data sets is briefly presented here along with the resulting augmented representation, but in its entirety and details it is the subject of another PhD thesis within our joint project.
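
    As a minimal sketch of alignment from global features, the following estimates a rigid transform from the barycenters and principal components of two point subsets instead of point-to-point matching. Axis-sign disambiguation (and the possible reflection case) is omitted for brevity, and all names are illustrative.

    import numpy as np

    def align_by_global_features(source_pts, target_pts):
        """Both inputs: (n, 3) arrays of points from the extracted subsets."""
        mu_s, mu_t = source_pts.mean(axis=0), target_pts.mean(axis=0)

        def principal_axes(pts, mu):
            # Eigenvectors of the covariance, ordered by decreasing variance.
            w, v = np.linalg.eigh(np.cov((pts - mu).T))
            return v[:, np.argsort(w)[::-1]]

        # Rotation carrying the source principal frame onto the target frame.
        R = principal_axes(target_pts, mu_t) @ principal_axes(source_pts, mu_s).T
        t = mu_t - R @ mu_s
        return R, t  # maps source points as x -> R @ x + t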

    McIDAS-eXplorer: A version of McIDAS for planetary applications

    McIDAS-eXplorer is a set of software tools developed for analysis of planetary data published by the Planetary Data System on CD-ROMs. It is built upon McIDAS-X, an environment that has been in use for nearly two decades for earth weather satellite data applications in research and routine operations. The environment allows convenient access, navigation, analysis, display, and animation of planetary data by utilizing the full calibration data accompanying the planetary data. Support currently exists for Voyager images of the giant planets and their satellites; Magellan radar images (F-MIDRs and C-MIDRs), global map products (GxDRs), and altimetry data (ARCDRs); Galileo SSI images of the earth, moon, and Venus; Viking Mars images and MDIMs; as well as most earth-based telescopic images of solar system objects (FITS). The NAIF/JPL SPICE kernels are used for image navigation when available. For data without SPICE kernels (such as the bulk of the Voyager Jupiter and Saturn imagery and Pioneer Orbiter images of Venus), tools based on the NAIF toolkit allow the user to navigate the images interactively. Multiple navigation types can be attached to a given image (e.g., ring navigation and planet navigation in the same image). Tools are available to perform common image processing tasks such as digital filtering, cartographic mapping, map overlays, and data extraction. It is also possible to have different planetary radii for an object such as Venus, which requires different radii for the surface and for the cloud level. A graphical user interface based on the Tcl-Tk scripting language is provided (UNIX only at present) for using the environment and for on-line help. End users can add their own applications to the environment at any time.