
    Advances in 3D reconstruction

    The thesis tackles the problem of 3D reconstruction of scenes from unstructured photo collections. The state of the art is advanced on several fronts: the first contribution is a robust formulation of the structure-and-motion problem based on a hierarchical approach, as opposed to the sequential one prevalent in the literature. This methodology reduces the total computational cost by one order of magnitude, is inherently parallelizable, minimizes the error accumulation that causes drift, and eliminates the crucial dependency on the choice of the initial pair of views that is common to all competing approaches. A second contribution is a novel self-calibration procedure, very robust and tailored to the structure-and-motion task. The proposed solution is a closed-form procedure for recovering the plane at infinity given a rough estimate of the intrinsic parameters of at least two cameras. This method is employed in an exhaustive search over the internal parameters, whose space is inherently bounded by the finiteness of acquisition devices. Finally, we investigated how to visualize the obtained reconstructions efficiently and compellingly: to this end, several algorithms for the computation of stereo disparity are presented. Together with procedures for the automatic extraction of support planes, they have been employed to obtain a faithful, compact, and semantically meaningful representation of the scene as a collection of textured planes, optionally augmented by depth information encoded in relief maps. Every result has been verified by a rigorous experimental validation comprising both qualitative and quantitative comparisons.
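    To make the hierarchical scheme concrete, here is a minimal sketch of agglomerative view merging in the spirit of the approach described above, not the author's actual pipeline: partial reconstructions are paired bottom-up by feature-match overlap instead of being grown sequentially from an initial view pair. The `match_count` callback and the merge/refinement step are hypothetical placeholders.

```python
from itertools import combinations

def hierarchical_sfm(views, match_count):
    """Agglomeratively merge views into larger partial reconstructions,
    always pairing the two clusters sharing the most feature matches."""
    clusters = [{v} for v in views]
    while len(clusters) > 1:
        # Choose the pair of partial models with the strongest overlap.
        a, b = max(combinations(clusters, 2),
                   key=lambda pair: match_count(pair[0], pair[1]))
        clusters.remove(a)
        clusters.remove(b)
        # ... align the two partial models and refine jointly here ...
        clusters.append(a | b)
    return clusters[0]
```

    Because merges form a balanced tree rather than a chain, errors accumulate over O(log n) levels instead of O(n) steps, which is the intuition behind the reduced drift claimed above.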

    Field D* pathfinding in weighted simplicial complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra's shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing the region. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high, and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes: specifically, triangulations in 2D and tetrahedral meshes in 3D.
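    As a point of reference for the discussion above, the following sketch runs plain Dijkstra over the edge graph of a weighted triangulation; Field D* additionally interpolates crossing points along edges so that paths may cut across faces, a step omitted here. The data layout and the `weight` callback are assumptions, not the thesis's interface.

```python
import heapq

def dijkstra_on_triangulation(vertices, adjacency, weight, source):
    """adjacency: dict vertex -> iterable of neighbour vertices.
    weight(u, v): cost of traversing edge (u, v), e.g. its Euclidean
    length scaled by the weight of the incident region."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in adjacency[u]:
            nd = d + weight(u, v)
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```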

    A Survey on Automatic Parameter Tuning for Big Data Processing Systems

    Big data processing systems (e.g., Hadoop, Spark, Storm) contain a vast number of configuration parameters controlling parallelism, I/O behavior, memory settings, and compression. Improper parameter settings can cause significant performance degradation and stability issues. However, regular users and even expert administrators grapple with understanding and tuning them to achieve good performance. We investigate existing approaches to parameter tuning for both batch and stream data processing systems and classify them into six categories: rule-based, cost modeling, simulation-based, experiment-driven, machine learning, and adaptive tuning. We summarize the pros and cons of each approach and raise some open research problems for automatic parameter tuning.
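    As one concrete illustration of the taxonomy, a minimal sketch of the experiment-driven category follows: run the job under sampled configurations and keep the fastest. `run_job` and the parameter grid are hypothetical stand-ins for a real system's execution harness and configuration space.

```python
import itertools
import random

def experiment_driven_tune(run_job, grid, budget=20, seed=0):
    """grid: dict parameter name -> list of candidate values.
    run_job(conf): executes one trial and returns its running time."""
    candidates = list(itertools.product(*grid.values()))
    random.Random(seed).shuffle(candidates)
    best_conf, best_time = None, float("inf")
    for values in candidates[:budget]:
        conf = dict(zip(grid.keys(), values))
        elapsed = run_job(conf)  # one measured trial run
        if elapsed < best_time:
            best_conf, best_time = conf, elapsed
    return best_conf, best_time
```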

    Nature of the learning algorithms for feedforward neural networks

    The neural network model (NN), comprised of relatively simple computing elements operating in parallel, offers an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. Due to the amount of research developed in the area, many types of networks have been defined. The one of interest here is the multi-layer perceptron, as it is one of the simplest and is considered a powerful representation tool whose complete potential has not been adequately exploited and whose limitations have yet to be specified in a formal and coherent framework. This dissertation addresses the theory of generalisation performance and architecture selection for the multi-layer perceptron; a subsidiary aim is to compare and integrate this model with existing data analysis techniques and exploit its potential by combining it with certain constructs from computational geometry, creating a reliable, coherent network design process which conforms to the characteristics of a generative learning algorithm, i.e. one including mechanisms for manipulating the connections and/or units that comprise the architecture in addition to the procedure for updating the weights of the connections. This means that it is unnecessary to provide an initial network as input to the complete training process. After discussing in general terms the motivation for this study, the multi-layer perceptron model is introduced and reviewed, along with the relevant supervised training algorithm, i.e. backpropagation. More particularly, it is argued that a network developed employing this model can in general be trained and designed in a much better way by extracting more information about the domains of interest through the application of certain geometric constructs in a preprocessing stage, specifically by generating the Voronoi Diagram and Delaunay Triangulation [Okabe et al. 92] of the set of points comprising the training set. Once a final architecture which performs appropriately on it has been obtained, Principal Component Analysis [Jolliffe 86] is applied to the outputs produced by the units in the network's hidden layer to eliminate the redundant dimensions of this space.
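    A minimal sketch of that final step (applying Principal Component Analysis to the hidden-layer outputs to expose redundant dimensions) might look as follows; the variance threshold is an illustrative assumption, not a value from the dissertation.

```python
import numpy as np

def redundant_hidden_dims(hidden_activations, var_threshold=0.99):
    """hidden_activations: (n_samples, n_hidden) outputs of the hidden layer.
    Returns how many hidden dimensions lie outside the principal components
    needed to explain var_threshold of the activation variance."""
    centred = hidden_activations - hidden_activations.mean(axis=0)
    s = np.linalg.svd(centred, compute_uv=False)   # singular values
    explained = (s ** 2) / np.sum(s ** 2)          # variance ratios
    needed = int(np.searchsorted(np.cumsum(explained), var_threshold)) + 1
    return hidden_activations.shape[1] - needed    # candidate dims to prune
```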

    Near-Optimal Motion Planning Algorithms Via A Topological and Geometric Perspective

    Motion planning is a fundamental problem in robotics, which involves finding a path for an autonomous system, such as a robot, from a given source to a destination while avoiding collisions with obstacles. The properties of the planning space heavily influence the performance of existing motion planning algorithms, which can pose significant challenges in handling complex regions, such as narrow passages or cluttered environments, even for simple objects. The problem of motion planning becomes deterministic if the details of the space are fully known, which is often difficult to achieve in constantly changing environments. Sampling-based algorithms are widely used among motion planning paradigms because they capture the topology of the space in a roadmap. These planners have successfully solved high-dimensional planning problems with a probabilistic completeness guarantee, i.e., they are guaranteed to find a path, if one exists, as the number of vertices goes to infinity. Despite this progress, these methods have failed to exploit the sub-region information of the environment for reuse by other planners. This results in re-planning overhead at each execution, affecting computation time and memory usage. In this research, we address the problem by focusing on the theoretical foundation of an algorithmic approach that leverages the strengths of sampling-based motion planners and Topological Data Analysis methods to extract intricate properties of the environment. The work contributes a novel algorithm that overcomes the performance shortcomings of existing motion planners by capturing and preserving the essential topological and geometric features to generate a homotopy-equivalent roadmap of the environment. This roadmap provides a mathematically rich representation of the environment, including an approximate measure of the collision-free space. In addition, the roadmap graph vertices sampled close to the obstacles exhibit advantages when navigating through narrow passages and cluttered environments, making obstacle-avoidance path planning significantly more efficient. The application of the proposed algorithms solves motion planning problems, such as sub-optimal planning, diverse path planning, and fault-tolerant planning, demonstrating improvements in computational performance and path quality. Furthermore, we explore the potential of these algorithms in solving computational biology problems, particularly in finding optimal binding positions for protein-ligand or protein-protein interactions. Overall, our work contributes a new way to classify routes in higher-dimensional spaces and shows promising results for high-dimensional robots, such as articulated linkage robots. The findings of this research provide a comprehensive solution to motion planning problems and offer a new perspective on solving computational biology problems.
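    For context, the sketch below shows the classical probabilistic-roadmap (PRM) construction underlying the sampling-based paradigm discussed above; the thesis's topological post-processing is not reproduced here, and `sample_free` and `collision_free` are hypothetical callbacks.

```python
import math

def build_roadmap(sample_free, collision_free, n=500, radius=0.2):
    """sample_free(): one random collision-free configuration (a tuple).
    collision_free(a, b): True if the straight segment a-b is obstacle-free."""
    nodes = [sample_free() for _ in range(n)]
    edges = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(nodes[i], nodes[j])
            if d <= radius and collision_free(nodes[i], nodes[j]):
                edges[i].append((j, d))  # undirected weighted edge
                edges[j].append((i, d))
    return nodes, edges  # query with any graph search, e.g. Dijkstra or A*
```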

    Collection of abstracts of the 24th European Workshop on Computational Geometry

    The 24th European Workshop on Computational Geometry (EuroCG'08) was held at INRIA Nancy - Grand Est & LORIA on March 18-20, 2008. The present collection of abstracts contains the 63 scientific contributions as well as three invited talks presented at the workshop.

    Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

    This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans, because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on two key concepts: The first idea is to discard those geometric properties that cannot be preserved and thus lead to the typical artifacts. Topological concepts are used instead to shift the focus away from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions; that is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear, merge, split, or vanish. Especially for high-dimensional data, both tracking (relating features over time) and visualizing the changing structure are difficult problems to solve.
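    To illustrate the point-centered cluster-analysis baseline the thesis moves away from, here is a minimal Lloyd-style k-means sketch on a high-dimensional point cloud; it is not the topological method itself, and all parameters are illustrative.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd iteration under Euclidean distance."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centre.
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):  # move centre to the mean of its members
                centres[c] = points[labels == c].mean(axis=0)
    return labels, centres
```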

    Geometric optimization and querying : exact & approximate

    This thesis has two main parts. The first part deals with the stage illumination problem. Given a stage represented by a line segment L and a set of lightsources represented by a set of points S in the plane, assign powers to the lightsources such that every point on the stage receives a sufficient amount, e.g. one unit, of light while minimizing the overall power consumption. By assuming that the amount of light arriving from a fixed lightsource decreases rapidly with the distance from the lightsource, this becomes an interesting geometric optimization problem. We present different solutions, based on convex optimization, discretization and linear programming, as well as a purely combinatorial approximation algorithm. Some experimental results are also provided.
    In the second part of this thesis, we are concerned with two different geometric problems whose solutions are based on the construction of a data structure that would allow for efficient queries. The central idea of our data structures is the well-separated pair decomposition (WSPD). The first problem we address is the k-hop restricted shortest path under the power-Euclidean distance function. Given a set P of n points in the plane and the distance function |pq|^d + C_p for some constant d > 1, nonnegative offset cost C_p and p, q ∈ P, where |pq| denotes the Euclidean distance between p and q, we consider the problem of finding paths between any pair of points that minimize the length of the path and do not use more than some constant number k of hops. Known exact algorithms for this problem required Ω(n log n) time per query pair (p, q). We relax the exactness requirement and only require approximate (1+ε) solutions, which allows us to derive schemes that guarantee constant query time using linear space and O(n log n) preprocessing time. The dependence on ε is polynomial in 1/ε. We also develop a tool that might be of independent interest: for any pair of points p, q ∈ P, report in constant time the cluster pair (A, B) representing (p, q) in a well-separated pair decomposition of P. The second problem in this part is the so-called cone-restricted nearest neighbor problem. For a given point set in Euclidean space we consider the problem of finding (approximate) nearest neighbors of a query point, restricted to points that lie within a fixed cone with apex at the query point. We investigate the structure of the Voronoi diagram induced by this notion of proximity and present approximate and exact data structures for answering cone-restricted nearest neighbor queries. In particular, we develop an approximate Voronoi diagram of size O((n/ε^d) log(1/ε)) that can be used to answer cone-restricted nearest neighbor queries in O(log(n/ε)) time.
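    To ground the cost function, a naive exact baseline for the k-hop restricted shortest path under the power-Euclidean cost |pq|^d + C_p is sketched below as Bellman-Ford-style dynamic programming limited to k relaxation rounds; the thesis's WSPD-based scheme instead answers (1+ε)-approximate queries in constant time after preprocessing. The data layout is an assumption.

```python
import math

def k_hop_shortest(points, offsets, d, k, src):
    """points: list of (x, y); edge (i, j) costs |p_i p_j|**d + offsets[i].
    Returns the cheapest cost to every point using at most k hops."""
    n = len(points)
    cost = [math.inf] * n
    cost[src] = 0.0
    for _ in range(k):                    # allow one more hop per round
        best = cost[:]
        for i in range(n):
            if cost[i] == math.inf:
                continue
            for j in range(n):
                if j != i:
                    w = math.dist(points[i], points[j]) ** d + offsets[i]
                    best[j] = min(best[j], cost[i] + w)
        cost = best
    return cost
```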

    ONLINE HIERARCHICAL MODELS FOR SURFACE RECONSTRUCTION

    Applications based on three-dimensional object models are today very common and can be found in many fields, such as design, archaeology, medicine, and entertainment. A digital 3D model can be obtained by means of physical object measurements performed by using a 3D scanner. In this approach, an important step of the 3D model building process consists of creating the object's surface representation from a cloud of noisy points sampled on the object itself. This process can be viewed as the estimation of a function from a finite subset of its points. Both in statistics and machine learning this is known as a regression problem. Machine learning views function estimation as a learning problem to be addressed using computational intelligence techniques: the points represent a set of examples, and the surface to be reconstructed represents the law that has generated them. On the other hand, in many applications the cloud of sampled points may become available only progressively during system operation. Conventional approaches to regression are therefore not suited to dealing efficiently with this operating condition. The aim of the thesis is to introduce innovative approaches to the regression problem that achieve high reconstruction accuracy while limiting computational complexity, and that are appropriate for online operation. Two classical computational intelligence paradigms have been considered as basic tools to address the regression problem: Radial Basis Functions and Support Vector Machines. The original and innovative aspect introduced by this thesis is the extension of these tools toward a multi-scale incremental structure, based on hierarchical schemes and suited for online operation. This allows for obtaining modular, scalable, accurate and efficient modeling procedures with training algorithms appropriate for online learning. Radial Basis Function Networks have a fast configuration procedure that, operating locally, does not require iterative algorithms. On the other hand, the computational complexity of the configuration procedure of Support Vector Machines is independent of the number of input variables. These two approaches have been considered in order to analyze the advantages and limits of each, which stem from the differences in their intrinsic nature.
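    For reference, a minimal Gaussian radial-basis-function regression sketch follows, showing the single-scale building block on which a hierarchical, online multi-scale variant such as the one described above could be layered; the kernel width and ridge term are illustrative assumptions.

```python
import numpy as np

def rbf_fit(centres, X, y, sigma=0.1, ridge=1e-8):
    """Regularised least-squares weights w so that Phi @ w ~= y.
    centres: (m, d) basis centres; X: (n, d) samples; y: (n,) targets."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian basis responses
    A = phi.T @ phi + ridge * np.eye(len(centres))
    return np.linalg.solve(A, phi.T @ y)

def rbf_eval(centres, w, X, sigma=0.1):
    """Evaluate the fitted RBF model at new points X."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ w
```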