    Distributed texture-based terrain synthesis

    Terrain synthesis is an important field of Computer Graphics that deals with the generation of 3D landscape models for use in virtual environments. The field has evolved to a stage where large and even infinite landscapes can be generated in real time. However, user control of the generation process remains minimal, as does the ability to create virtual landscapes that mimic real terrain. This thesis investigates the use of texture synthesis techniques on real landscapes to improve realism, and the use of sketch-based interfaces to enable intuitive user control.
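
    The core idea behind texture-based terrain synthesis can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the thesis's actual algorithm: it grows a heightmap by copying patches from a real exemplar DEM, picking each patch by how well it agrees with the already-synthesized overlap region. The function name, the candidate count, and the omission of seam blending and sketch-based guidance are all simplifying assumptions.

        import numpy as np

        def synthesize_terrain(exemplar, out_size=256, patch=32, overlap=8, rng=None):
            """Grow a heightmap by quilting patches from a real exemplar DEM (sketch)."""
            rng = np.random.default_rng() if rng is None else rng
            step = patch - overlap
            out = np.zeros((out_size, out_size))
            h, w = exemplar.shape
            for y in range(0, out_size - patch + 1, step):
                for x in range(0, out_size - patch + 1, step):
                    best, best_err = None, np.inf
                    for _ in range(50):  # random candidate patches from the exemplar
                        sy = int(rng.integers(0, h - patch))
                        sx = int(rng.integers(0, w - patch))
                        cand = exemplar[sy:sy + patch, sx:sx + patch]
                        # Score only the strips overlapping already-written output.
                        err = 0.0
                        if x > 0:
                            err += np.sum((cand[:, :overlap] - out[y:y + patch, x:x + overlap]) ** 2)
                        if y > 0:
                            err += np.sum((cand[:overlap, :] - out[y:y + overlap, x:x + patch]) ** 2)
                        if err < best_err:
                            best, best_err = cand, err
                    out[y:y + patch, x:x + patch] = best
            return out

    A sketch-based interface would add a second error term scoring each candidate against a user-drawn guidance map, biasing the patch search toward user intent.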

    Color Image Processing based on Graph Theory

    Computer vision is one of the fastest-growing fields at present and, along with technologies such as Biometrics and Big Data, has become the focus of numerous research projects; it is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. The success of image analysis and other high-level processing tasks, such as 3D imaging or pattern recognition, depends heavily on the quality of the raw images. Nowadays many factors degrade images and hinder the acquisition of optimal image quality, which makes digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening improves or recovers imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal (post-)processing. There are many methods for smoothing the noise in an image, but in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not account for the presence of noise in the image they process: given a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering step may remove information that cannot be recovered in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we show, the proposed model is robust and versatile, potentially able to adapt to a wide variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. The model is studied in depth as a function of the threshold, the key parameter that ensures a correct classification of the image pixels, and its features and possibilities are analyzed so as to exploit it fully in each application. To achieve high-performance smoothing, the model determines whether a pixel belongs to a flat region, taking into account the need for a high-precision classification even in the presence of noise. On this basis we build an adaptive soft-switching filter that uses the pixel classification to combine the output of a filter with high smoothing capability with that of a softer filter for edge and detail regions; the result removes Gaussian noise without blurring edges or losing detail information. A further application of the model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, combining two operations that are opposed by definition and thereby overcoming the drawbacks of the two-stage approach. All the proposed techniques are compared with other state-of-the-art methods and shown to be competitive from both an objective (numerical) and a visual evaluation point of view.
    Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
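
    The adaptive soft-switching filter described above can be illustrated with a short sketch. This is an assumption-laden simplification, not the dissertation's method: the per-pixel graph construction is replaced by a gradient-magnitude proxy, and the threshold plays the role of the classification parameter discussed in the abstract.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def soft_switching_smooth(img, threshold=0.1):
            # Grayscale image with intensities in [0, 1] assumed.
            img = img.astype(float)
            strong = gaussian_filter(img, sigma=2.0)   # high smoothing capability
            weak = gaussian_filter(img, sigma=0.5)     # gentle, detail-preserving
            # Edge strength stands in for the graph-based pixel classification.
            grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
            w = np.clip(grad / threshold, 0.0, 1.0)    # 0 = flat, 1 = edge/detail
            return (1.0 - w) * strong + w * weak

    Flat regions take the heavily smoothed output while edges keep the gently filtered one, which is the soft-switching behaviour the abstract describes.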

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
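
    The claim that algorithms can beat generic software simply by suiting the computer architecture is easy to demonstrate in miniature. The sketch below is a generic illustration, not taken from this thesis: summing a large image in memory order keeps the cache and prefetcher effective, while striding against the layout does not, even though both loops do identical arithmetic.

        import numpy as np, time

        img = np.random.rand(8000, 8000)  # row-major (C-order) array

        def col_major_sum(a):
            # Strides against the memory layout: each access jumps a full row ahead.
            return sum(a[:, x].sum() for x in range(a.shape[1]))

        def row_major_sum(a):
            # Walks memory contiguously; cache- and prefetcher-friendly.
            return sum(a[y, :].sum() for y in range(a.shape[0]))

        for f in (col_major_sum, row_major_sum):
            t0 = time.perf_counter()
            f(img)
            print(f.__name__, round(time.perf_counter() - t0, 3), "s")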

    Paraglide: Interactive Parameter Space Partitioning for Computer Simulations

    In this paper we introduce paraglide, a visualization system designed for interactive exploration of the parameter spaces of multivariate simulation models. To find the right parameter configuration, model developers frequently have to go back and forth between setting parameters and qualitatively judging the outcomes of their model. During this process, they build up a grounded understanding of the parameter effects in order to pick the right setting. Current state-of-the-art tools and practices, however, fail to provide a systematic way of exploring these parameter spaces, making informed decisions about parameter settings a tedious and workload-intensive task. Paraglide endeavors to overcome this shortcoming by assisting in the sampling of the parameter space and the discovery of qualitatively different model outcomes. This results in a decomposition of the model parameter space into regions of distinct behaviour. We developed paraglide in close collaboration with experts from three different domains, all of whom were involved in developing new models for their domain. We first analyzed the current practices of six domain experts and derived a set of design requirements, then engaged in a longitudinal user-centered design process, and finally conducted three in-depth case studies underlining the usefulness of our approach.
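
    The decomposition paraglide aims at can be sketched generically: sample the parameter space, run the simulation at each sample, and cluster the outcomes so that each cluster marks a region of qualitatively similar behaviour. This is a hypothetical sketch, not paraglide's implementation; the uniform sampling, the k-means choice, and the function names are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def partition_parameter_space(model, bounds, n_samples=500, n_regions=4, rng=None):
            """Cluster simulation outcomes into regions of similar behaviour.

            model:  callable mapping a parameter vector to an outcome vector
            bounds: list of (low, high) pairs, one per parameter
            """
            rng = np.random.default_rng() if rng is None else rng
            lo, hi = np.asarray(bounds, dtype=float).T
            params = rng.uniform(lo, hi, size=(n_samples, len(lo)))
            outcomes = np.array([model(p) for p in params])
            labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(outcomes)
            return params, labels

    Plotting params coloured by labels then shows how the regions of distinct behaviour tile the parameter space.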

    Automated Pattern Detection and Generalization of Building Groups

    This dissertation focuses on the generalization of building groups, with particular attention to the detection of building patterns. Generalization is an important research field in cartography; it is part of map production and the basis for deriving multiple representations. As one of the most important features on a map, buildings occupy a large amount of map space and typically have complex shapes and spatial distributions, which makes building generalization a long-standing, important and challenging task. For social, architectural and geographical reasons, buildings were built following special rules that give rise to distinct building patterns. Building patterns are crucial structures that should be carefully considered during graphical representation and generalization. People can effortlessly perceive these patterns, but they are not explicitly described in building datasets. To better support the subsequent generalization process, it is therefore important to recognize building patterns automatically. The objective of this dissertation is to develop effective methods to detect building patterns in building groups and, based on the identified patterns, to propose methods that fulfill the task of building generalization. The main contributions are the following: (1) the terminology and concept of building patterns are clearly explained, and a detailed and relatively complete typology of building patterns is proposed, summarizing previous research and extending it; (2) a stroke-mesh based method is developed to group buildings and detect different patterns within building groups; (3) exploiting the analogy between line simplification and the typification of linear building groups, a typification method based on stroke simplification is developed for generalizing building groups with linear patterns; (4) a mesh-based typification method is developed for generalizing building groups with grid patterns; and (5) a method for extracting hierarchical skeleton structures from discrete buildings is proposed, where the extracted structures represent the global shape of the entire region and are used to control the generalization process. With these methods, building patterns are detected in building groups and the generalization of building groups is executed based on those patterns. The thesis also discusses the drawbacks of the methods and gives potential solutions. (Contents: Introduction; State of the Art; Using stroke and mesh to recognize building group patterns; A typification method for linear building groups based on stroke simplification; A mesh-based typification method for building groups with grid patterns; Hierarchical extraction of skeleton structures from discrete buildings; Discussion; Conclusions.)
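
    The first step of the stroke-mesh pipeline, grouping buildings through a proximity graph, can be sketched as follows. This is a simplified stand-in for contribution (2), with assumed names and thresholds: Delaunay edges between building centroids longer than max_edge are dropped, and each connected component of the remaining graph is taken as one building group; the actual method refines the graph further and derives strokes and meshes from it.

        import numpy as np
        from scipy.spatial import Delaunay
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import connected_components

        def group_buildings(centroids, max_edge=30.0):
            """Label building centroids by proximity-graph connected components."""
            centroids = np.asarray(centroids, dtype=float)
            tri = Delaunay(centroids)
            edges = set()
            for simplex in tri.simplices:  # collect unique triangle edges
                for i in range(3):
                    a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
                    edges.add((a, b))
            rows, cols = [], []
            for a, b in edges:  # keep only short (proximate) edges
                if np.linalg.norm(centroids[a] - centroids[b]) <= max_edge:
                    rows.append(a)
                    cols.append(b)
            n = len(centroids)
            graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
            _, labels = connected_components(graph, directed=False)
            return labels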

    Seventh Biennial Report : June 2003 - March 2005


    Sixth Biennial Report : August 2001 - May 2003
