373 research outputs found

    Error-Bounded and Feature Preserving Surface Remeshing with Minimal Angle Improvement

    The typical goal of surface remeshing is to find a mesh that is (1) geometrically faithful to the original geometry, (2) as coarse as possible, for a low-complexity representation, and (3) free of bad elements that would hamper the desired application. In this paper, we design an algorithm that addresses all three optimization goals simultaneously. The user specifies desired bounds on the approximation error δ, the minimal interior angle θ, and the maximum mesh complexity N (number of vertices). Since such a desired mesh might not even exist, our optimization framework treats only the approximation error bound δ as a hard constraint and the other two criteria as optimization goals. More specifically, we iteratively perform carefully prioritized local operators whenever they do not violate the approximation error bound and otherwise improve the mesh. In this way, our optimization framework greedily searches for the coarsest mesh with minimal interior angle above θ and approximation error bounded by δ. Fast runtime is enabled by a local approximation error estimation, while implicit feature preservation is obtained by specifically designed vertex relocation operators. Experiments show that our approach delivers high-quality meshes with implicitly preserved features and better balances geometric fidelity, mesh complexity, and element quality than the state of the art. Comment: 14 pages, 20 figures. Submitted to IEEE Transactions on Visualization and Computer Graphics.
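The core strategy, coarsen greedily while treating the error bound δ as a hard constraint, can be illustrated on a toy 1D analogue. The sketch below simplifies a polyline by repeatedly removing the interior vertex whose removal deviates least from the original geometry, stopping as soon as any removal would exceed δ. It is a generic illustration of the greedy hard-constraint idea, not the authors' surface remeshing implementation:

```python
import math

def _seg_dist(p, a, b):
    # Distance from point p to the segment from a to b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(poly, delta):
    # Greedily remove the interior vertex whose removal deviates least
    # from the ORIGINAL polyline; stop when any removal would exceed
    # delta.  The error bound is a hard constraint, coarseness a goal.
    kept = list(range(len(poly)))
    while len(kept) > 2:
        best_i, best_err = None, float("inf")
        for i in range(1, len(kept) - 1):
            a, b = poly[kept[i - 1]], poly[kept[i + 1]]
            err = max(_seg_dist(poly[j], a, b)
                      for j in range(kept[i - 1], kept[i + 1] + 1))
            if err < best_err:
                best_i, best_err = i, err
        if best_err > delta:   # hard constraint: never violate delta
            break
        del kept[best_i]
    return [poly[i] for i in kept]
```

Collinear runs collapse to their endpoints, while a genuine corner survives any δ smaller than its deviation.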

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning) and a discussion of the pros and cons of Bayesian optimization based on our experiences.
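The loop described above, prior plus evidence gives a posterior, and a utility (acquisition) function picks the next observation, can be sketched in a few dozen lines. This is a generic textbook sketch with a Gaussian-process surrogate and expected-improvement acquisition over a 1D grid, not the tutorial's own code; the kernel length scale and initial points are arbitrary choices:

```python
import math
import numpy as np

def rbf(a, b, length_scale=0.3):
    # Squared-exponential kernel between two 1D sample sets.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Posterior mean and standard deviation of a zero-mean GP at Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y
    var = 1.0 + noise - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI balances exploitation (mu - best) and exploration (sigma).
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
    pdf = np.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(f, n_iter=15):
    X = np.array([0.1, 0.9])            # two initial observations
    y = np.array([f(x) for x in X])
    grid = np.linspace(0.0, 1.0, 201)   # candidate next observations
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()
```

Each iteration spends one expensive evaluation where the acquisition says it is most useful, which is exactly why the method suits expensive cost functions.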

    Unfolding the dynamical structure of Lisbon’s public space: space syntax and micromobility data

    Space Syntax and the theory of natural movement demonstrated that spatial morphology is a primary factor influencing movement. This paper investigates to what extent spatial morphology at different scales (node, community, and global network) influences the use of public space by micromobility. An axial map and corresponding network for Lisbon’s walkable and open public space, together with data on e-scooter parking locations, are used as a case study. Relevant metrics and their correlations (intelligibility, accessibility, permeability, and local dimension) for the quantitative characterization of spatial morphology properties are described and computed for Lisbon’s axial map. Communities are identified based on the network’s topological structure in order to investigate how these properties are affected at different scales in the case study. The resulting axial-line clustering is compared, via the variation of information metric, with the clustering obtained from e-scooters’ proximity. The results enable us to conclude that the space syntax properties are scale dependent in Lisbon’s pedestrian network. On the other hand, the correlation between these properties and the number of scooters, together with the variation of information between clusterings, indicates that spatial morphology is not the only factor influencing micromobility. Through the comparative analysis between the main properties of Lisbon’s public space network and data collected from e-scooter locations in a given timeframe, centrality becomes a dynamic concept, relying not only on the static topological properties of the urban network but also on other quantitative and qualitative factors, since the flows operating on the network transform the spatial network’s properties through time, uncovering spatiotemporal dynamics.
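The variation of information metric used to compare the axial-line clustering with the e-scooter clustering has a compact definition, VI(A, B) = H(A) + H(B) - 2 I(A, B), and can be computed directly from two parallel label sequences. A minimal sketch (generic information-theory code, not the paper's pipeline):

```python
import math
from collections import Counter

def variation_of_information(labels_a, labels_b):
    # VI(A, B) = H(A) + H(B) - 2 I(A, B), computed from two parallel
    # label sequences over the same n items.  It is 0 for identical
    # clusterings and grows as the clusterings diverge.
    n = len(labels_a)
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    vi = 0.0
    for (a, b), n_ab in joint.items():
        p_ab = n_ab / n
        vi += p_ab * (math.log(count_a[a] / n) + math.log(count_b[b] / n)
                      - 2.0 * math.log(p_ab))
    return vi
```

Because VI is a true metric on clusterings, a small value means the morphological communities and the usage-derived clusters genuinely overlap, independent of label names.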

    Nonlinear approximation with redundant multi-component dictionaries

    The problem of efficiently representing and approximating digital data is an open challenge, and it is of paramount importance for many applications. This dissertation focuses on the approximation of natural signals as an organized combination of mutually connected elements, preserving and at the same time benefiting from their inherent structure. This is done by decomposing a signal onto a multi-component, redundant collection of functions (a dictionary), built as the union of several subdictionaries, each of which is designed to capture a specific behavior of the signal. In this way, instead of representing signals as a superposition of sinusoids or wavelets, many alternatives are available. In addition, since the dictionaries we are interested in are overcomplete, the decomposition is non-unique. This gives us the possibility of adaptation: choosing, among many possible representations, the one which best fits our purposes. On the other hand, it also requires more complex approximation techniques, whose theoretical decomposition capacity and computational load have to be carefully studied. In general, we aim at representing a signal with few and meaningful components. If we are able to represent a piece of information using only a few elements, it means that those elements capture its main characteristics, allowing us to compact the energy carried by a signal into the smallest number of terms. In such a framework, this work also proposes analysis methods which take into account the a priori information available when decomposing a structured signal. Indeed, a natural signal is not only an array of numbers, but an expression of a physical event about which we usually have deep knowledge. Therefore, we claim that it is worth exploiting its structure, since doing so can be advantageous not only in helping the analysis process, but also in making the representation of such information more accessible and meaningful.
The study of an adaptive image representation inspired and gave birth to this work. We often refer to images and visual information throughout the dissertation. However, the proposed approximation setting extends to many different kinds of structured data, and examples are given involving videos and electrocardiogram signals. An important part of this work is constituted by practical applications: first of all, we provide very interesting results for image and video compression. Then, we also face the problem of signal denoising and, finally, promising achievements in the field of source separation are presented.
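Decomposing a signal over a redundant, multi-component dictionary is commonly done with a greedy method such as matching pursuit. The sketch below is a generic illustration of that idea over a toy dictionary built as the union of two subdictionaries (spikes for local detail, cosines for smooth oscillations); it is not the dissertation's algorithm, and the atom choices are arbitrary:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    # Greedy matching pursuit: repeatedly pick the unit-norm atom most
    # correlated with the residual and subtract its contribution.
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual = residual - corr[k] * dictionary[:, k]
    return coeffs, residual

# A redundant multi-component dictionary: the union of a "spike"
# subdictionary (identity) capturing local detail and a "smooth"
# subdictionary (normalized cosines) capturing oscillations.
n = 32
spikes = np.eye(n)
cosines = np.stack([np.cos(np.pi * k * (np.arange(n) + 0.5) / n)
                    for k in range(1, 9)], axis=1)
cosines = cosines / np.linalg.norm(cosines, axis=0)
D = np.hstack([spikes, cosines])
```

A signal that mixes a spike with an oscillation is sparse in this union even though it is sparse in neither subdictionary alone, which is exactly the benefit of multi-component redundancy.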

    Sparse feature learning for image analysis in segmentation, classification, and disease diagnosis.

    The success of machine learning algorithms generally depends on the intermediate data representation, called features, that disentangles the hidden factors of variation in the data. Moreover, machine learning models are required to generalize, in order to reduce specificity or bias toward the training dataset. Unsupervised feature learning is useful for taking advantage of the large amounts of unlabeled data available to capture these variations. However, the learned features are required to capture the variational patterns in the data space. In this dissertation, unsupervised feature learning with sparsity is investigated for sparse and local feature extraction, with applications to lung segmentation, interpretable deep models, and Alzheimer's disease classification. Nonnegative Matrix Factorization, Autoencoder, and 3D Convolutional Autoencoder are used as architectures or models for unsupervised feature learning. They are investigated along with nonnegativity, sparsity, and part-based representation constraints for generalized and transferable feature extraction.
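Nonnegative matrix factorization with a sparsity constraint is the simplest of the listed models to sketch. The following is a textbook Lee-Seung style multiplicative-update NMF with an L1 penalty on the activations, given as an illustration of the nonnegativity-plus-sparsity idea rather than the dissertation's implementation (the `sparsity` weight and update schedule are assumptions):

```python
import numpy as np

def sparse_nmf(V, rank, sparsity=0.1, n_iter=300, seed=0):
    # Nonnegative matrix factorization V ~ W @ H via multiplicative
    # updates; `sparsity` is an L1 penalty on the activations H,
    # pushing the factorization toward sparse, part-based features.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative by design.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Because the updates are purely multiplicative, entries that reach zero stay zero, which is what produces the part-based, sparse representations the abstract refers to.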

    Acta Cybernetica: Volume 15, Number 2.


    Genetic programming applied to morphological image processing

    This thesis presents three approaches to the automatic design of algorithms for the processing of binary images based on the Genetic Programming (GP) paradigm. In the first approach the algorithms are designed using the basic Mathematical Morphology (MM) operators, i.e. erosion and dilation, with a variety of Structuring Elements (SEs). GP is used to design algorithms to convert a binary image into another containing just a particular characteristic of interest. In the study we have tested two similarity fitness functions, training sets with different numbers of elements, and different sizes of the training images over three different objectives. The results of the first approach showed some success in the evolution of MM algorithms but also identified problems with the amount of computational resources the method required. The second approach uses Sub-Machine-Code GP (SMCGP) and bitwise operators in an attempt to speed up the evolution of the algorithms and to make them both feasible and effective. The SMCGP approach was successful in speeding up the computation but not in improving the quality of the obtained algorithms. The third approach combines logical and morphological operators in an attempt to improve the quality of the automatically designed algorithms. The results obtained provide empirical evidence that the evolution of high-quality MM algorithms using GP is possible and that this technique has a broad potential that should be explored further. This thesis includes an analysis of the potential of GP and other Machine Learning techniques for solving the general problem of Signal Understanding by means of exploring Mathematical Morphology.
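The primitives the evolved programs are built from, binary erosion and dilation with a structuring element, are simple to state directly. A minimal NumPy sketch of the two operators (standard definitions with zero padding, not the thesis's GP terminal set):

```python
import numpy as np

def dilate(img, se):
    # Binary dilation: OR together copies of img shifted to every
    # position where the structuring element has a 1.
    H, W = img.shape
    h, w = se.shape
    padded = np.pad(img, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            if se[i, j]:
                out |= padded[i:i + H, j:j + W]
    return out

def erode(img, se):
    # Binary erosion: a pixel survives only if the SE, centred there,
    # fits entirely inside the foreground (borders padded with 0).
    H, W = img.shape
    h, w = se.shape
    padded = np.pad(img, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.ones_like(img)
    for i in range(h):
        for j in range(w):
            if se[i, j]:
                out &= padded[i:i + H, j:j + W]
    return out
```

Compositions of these two operators (e.g. opening, `dilate(erode(img, se), se)`, which removes speckles smaller than the SE) are exactly the kind of pipeline the GP search assembles automatically.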

    Part decomposition of 3D surfaces

    This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts that agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a human vision theory known as the minima rule, which states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo-measures of surface curvature; thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process consisting of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and the Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art.
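The minima rule's notion of a negative-curvature line has a simple discrete counterpart on a triangle mesh: an edge is a part-boundary candidate when the surface folds concavely across it. The sketch below tests that condition for one shared edge; it is an elementary discrete-geometry illustration, not the dissertation's Normal Vector Voting curvature estimator:

```python
import numpy as np

def is_concave_edge(v0, v1, p1, p2):
    # Edge (v0, v1) shared by triangles (v0, v1, p1) and (v1, v0, p2),
    # both wound counter-clockwise as seen from outside the surface.
    # The edge is concave, i.e. a candidate part boundary under the
    # minima rule, when p2 lies on the outward-normal side of the
    # first triangle's plane.
    n = np.cross(v1 - v0, p1 - v0)   # outward normal of face 1
    return float(n @ (p2 - v0)) > 1e-12
```

Sweeping this test over every interior edge marks the concave creases, the discrete lines of negative curvature along which a minima-rule segmentation places its boundary contours.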

    Automatic Reconstruction of Parametric, Volumetric Building Models from 3D Point Clouds

    Planning, construction, modification, and analysis of buildings requires means of representing a building's physical structure and related semantics in a meaningful way. With the rise of novel technologies and increasing requirements in the architecture, engineering, and construction (AEC) domain, two general concepts for representing buildings have gained particular attention in recent years. First, the concept of Building Information Modeling (BIM) is increasingly used as a modern means for digitally representing and managing a building's as-planned state, including not only a geometric model but also various additional semantic properties. Second, point cloud measurements are now widely used for capturing a building's as-built condition by means of laser scanning techniques. A particular challenge and topic of current research are methods for combining the strengths of both point cloud measurements and Building Information Modeling concepts to quickly obtain accurate building models from measured data. In this thesis, we present our recent approaches to tackle the intermeshed challenges of automated indoor point cloud interpretation using targeted segmentation methods, and the automatic reconstruction of high-level, parametric, and volumetric building models as the basis for further usage in BIM scenarios. In contrast to most reconstruction methods available at the time, we fundamentally base our approaches on BIM principles and standards, and overcome critical limitations of previous approaches in order to reconstruct globally plausible, volumetric, and parametric models.
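A common first step when interpreting an indoor point cloud is to segment out dominant planar structures such as walls, floors, and ceilings. The sketch below shows a generic RANSAC plane fit, offered as an illustration of that segmentation step, not as the thesis's targeted segmentation method (iteration count and inlier tolerance are arbitrary):

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.02, seed=0):
    # RANSAC plane fit: repeatedly sample 3 points, build the plane
    # through them, and keep the model with the most inliers (points
    # within `tol` of the plane).
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        normal = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -float(normal @ points[i])
        dist = np.abs(points @ normal + d)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

Removing the inliers and repeating yields one plane per structural surface; turning those planes into globally consistent, volumetric, parametric rooms is the harder reconstruction problem the thesis addresses.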