
    Skeletal representations of orthogonal shapes

    Skeletal representations are important shape descriptors which encode topological and geometrical properties of shapes and reduce their dimension. Skeletons are used in several fields of science and attract the attention of many researchers. In the biocad field, the analysis of structural properties such as the porosity of biomaterials requires the prior computation of a skeleton. As three-dimensional images become larger, efficient and robust algorithms that extract simple skeletal structures are required. The most popular and prominent skeletal representation is the medial axis, defined as the set of shape points which have at least two closest points on the shape boundary. Unfortunately, the medial axis is highly sensitive to noise and perturbations of the shape boundary: a small change of the shape boundary may cause a considerable change of its medial axis. Moreover, the exact computation of the medial axis is only possible for a few classes of shapes. For example, the medial axis of polyhedra is composed of non-planar surfaces, and its accurate and robust computation is difficult. These problems led to the emergence of approximate medial axis representations. There exist two main approximation methods: either the shape is approximated with another shape class, or the Euclidean metric is approximated with another metric. The main contribution of this thesis is the combination of a specific shape and metric simplification. The input shape is approximated with an orthogonal shape, that is, a polygon or polyhedron enclosed by axis-aligned edges or faces, respectively. In the same vein, the Euclidean metric is replaced by the L-infinity, or Chebyshev, metric. Despite the simpler structure of orthogonal shapes, there are few works on skeletal representations applied to them; most of the effort has been devoted to binary images and volumes, which are a subset of orthogonal shapes. Two new skeletal representations based on this paradigm are introduced: the cube skeleton and the scale cube skeleton. The cube skeleton is shown to be composed of straight line segments or planar faces and to be homotopy equivalent to the input shape. The scale cube skeleton builds upon the cube skeleton and introduces a family of skeletons that are more stable under shape noise and perturbations. In addition, the algorithms needed to compute the cube skeleton of polygons and polyhedra and the scale cube skeleton of polygons are presented. Several experimental results confirm the efficiency, robustness and practical use of all the presented methods.
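
    The abstract above relies on the L-infinity (Chebyshev) metric and on orthogonal (axis-aligned) shapes. As a minimal, illustrative sketch of that metric simplification (not the thesis's own cube-skeleton algorithm), the Python snippet below computes a Chebyshev distance transform of a small binary orthogonal shape and marks its ridge pixels as a crude stand-in for a skeleton under that metric; the shape, sizes and ridge criterion are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt, maximum_filter

# Binary mask of an orthogonal (axis-aligned) shape: 1 = inside, 0 = outside.
shape = np.zeros((12, 12), dtype=np.uint8)
shape[2:10, 3:9] = 1          # an axis-aligned rectangle ...
shape[5:10, 6:11] = 1         # ... merged with a second rectangle

# Chebyshev (L-infinity) distance from every interior pixel to the outside;
# 'chessboard' is scipy's name for the L-infinity metric on a grid.
dist = distance_transform_cdt(shape, metric='chessboard')

# Interior pixels where the distance map attains a local maximum are candidate
# skeleton points under this metric (a crude stand-in for the cube skeleton).
ridge = (dist > 0) & (dist == maximum_filter(dist, size=3))

print(dist)
print(ridge.astype(int))
```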

    Image understanding and feature extraction for applications in industry and mapping

    The aim of digital photogrammetry is the automated extraction and classification of the three-dimensional information of a scene from a number of images. Existing photogrammetric systems are semi-automatic, requiring manual editing and control, and have very limited domains of application, so that image understanding capabilities are left to the user. Among the most important steps in a fully integrated system are the extraction of features suitable for matching, the establishment of the correspondence between matching points, and object classification. The following study explores the applicability of pattern recognition concepts in conjunction with existing area-based methods, feature-based techniques and other approaches used in computer vision, in order to increase the level of automation and as a general alternative and addition to existing methods. As an illustration of the pattern recognition approach, examples of industrial applications are given. The underlying method is then extended to the identification of objects in aerial images of urban scenes and to the location of targets in close-range photogrammetric applications. Various moment-based techniques are considered as pattern classifiers, including geometric invariant moments, Legendre moments, Zernike moments and pseudo-Zernike moments. Two-dimensional Fourier transforms are also considered as pattern classifiers, and the suitability of these techniques is assessed. They are then applied as object locators and as feature extractors or interest operators. Additionally, the use of the fractal dimension to segment natural scenes for regional classification, in order to limit the search space for particular objects, is considered. The pattern recognition techniques require considerable preprocessing of images; the various image processing techniques required are explained where needed. Extracted feature points are matched using relaxation-based techniques in conjunction with area-based methods to obtain subpixel accuracy. A subpixel pattern-recognition-based method is also proposed, and an investigation into improved area-based subpixel matching methods is undertaken. An algorithm for determining relative orientation parameters incorporating the epipolar line constraint is investigated and compared with a standard relative orientation algorithm. In conclusion, a basic system that can be automated, based on some novel techniques in conjunction with existing methods, is described and implemented in a mapping application. This system could be largely automated with suitably powerful computers.
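
    As a small illustration of the moment-based pattern classifiers listed above (geometric invariant moments in particular), the sketch below computes the seven Hu invariant moments of a binary patch with OpenCV. It is not the thesis's implementation; the synthetic patch and the log-scaling step are assumptions made for the example.

```python
import numpy as np
import cv2

# Hypothetical binary patch containing a candidate object (1 = object pixels).
patch = np.zeros((64, 64), dtype=np.uint8)
cv2.circle(patch, (32, 32), 12, 1, -1)   # a filled disc as a stand-in object

# Raw, central and normalized central moments of the patch ...
m = cv2.moments(patch, binaryImage=True)

# ... and the seven Hu invariants derived from them.  These are invariant to
# translation, scale and rotation, which is what makes them usable as
# pattern classifiers and object locators.
hu = cv2.HuMoments(m).flatten()

# Log-scaling brings the invariants, which span many orders of magnitude,
# into comparable ranges before they are fed to a classifier.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)
```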

    A custom computing framework for orientation and photogrammetry

    There is great demand today for real-time computer vision systems, with applications including image enhancement, target detection and surveillance, autonomous navigation, and scene reconstruction. These operations generally require extensive computing power; when multiple conventional processors and custom gate arrays are inappropriate, due to either excessive cost or risk, a class of devices known as Field-Programmable Gate Arrays (FPGAs) can be employed. FPGAs offer the flexibility of a programmable solution and nearly the performance of a custom gate array. When implementing a custom algorithm in an FPGA, one must be more efficient than with gate array technology: by tailoring the algorithms, architectures, and precisions, the gate count of an algorithm may be sufficiently reduced to fit into an FPGA. The challenge is to perform this customization of the algorithm while still maintaining the required performance. The techniques required to perform algorithmic optimization for FPGAs are scattered across many fields; what is currently lacking is a framework for utilizing all these well-known and developing techniques. The purpose of this thesis is to develop this framework for orientation and photogrammetry systems.
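
    The precision-tailoring idea described above can be illustrated in software, independently of any FPGA toolchain. The sketch below is an assumption-laden stand-in, not the thesis's framework: it quantizes a signal to a signed fixed-point format and sweeps the fractional bit width, the kind of trade-off study used to decide how few bits an algorithm can use before accuracy suffers.

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize x to a signed fixed-point format with the given bit widths."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits - 1))        # most negative code
    hi = 2 ** (int_bits + frac_bits - 1) - 1       # most positive code
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

# Hypothetical intermediate signal from an orientation computation.
rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, size=10_000)

# Sweep the fractional bit width and report the worst-case quantization error;
# the smallest width whose error still meets the spec is the one worth the gates.
for frac_bits in range(4, 17, 4):
    err = np.max(np.abs(signal - to_fixed_point(signal, int_bits=2, frac_bits=frac_bits)))
    print(f"{frac_bits:2d} fractional bits -> max error {err:.2e}")
```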

    The design and implementation of a purely digital stereo-photogrammetric system on the IBM 3090 multi-user mainframe computer

    This thesis is concerned with an investigation into the possibilities of implementing various aspects of a purely digital stereo-photogrammetric (DSP) system on the IBM 3090 150E mainframe multi-user computer. The main aspects discussed within the context of this thesis are:
    i) Mathematical modelling of the process of formation of digital images in the space and frequency domains.
    ii) Experiments on improving the pictorial quality of digital aerial photos using Inverse and Wiener filters.
    iii) Devising and implementing an approach for the automatic sub-pixel measurement of cross-type fiducial marks for the inner orientation, using the Gradient operator and the image modelling least squares (IML) approach.
    iv) Devising and implementing a method for the digital rectification of overlapping aerial photos and the formation of the stereo-model.
    v) Design and implementation of a digital stereo-photogrammetric (DSP) system and the generation of a DTM using visual measurement.
    vi) Investigating the feasibility of stereo-viewing of binary images and the possibility of performing measurements on such images.
    vii) Implementing a method for the automatic generation of a DTM using one-dimensional image correlation along epipolar lines and experimentally optimizing the size of the correlation window (see the sketch following this list).
    viii) Assessment of the accuracy of the DTM data generated both by the DSP system and by the automatic correlation method.
    ix) Vectorization of the rectification and correlation programs to achieve higher speed-up factors in the computational process.
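
    Item vii) of the list above can be sketched as follows: a one-dimensional normalized cross-correlation search along an epipolar line, followed by a parabolic fit of the correlation peak for subpixel precision. The window size, synthetic signals and function names are illustrative assumptions, not the thesis's code.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally long 1-D windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_epipolar(left_line, right_line, x_left, half_win=7):
    """Find the right-image position matching left_line[x_left], to subpixel."""
    tmpl = left_line[x_left - half_win: x_left + half_win + 1]
    scores = np.array([
        ncc(tmpl, right_line[x - half_win: x + half_win + 1])
        for x in range(half_win, len(right_line) - half_win)
    ])
    best = int(np.argmax(scores))
    # Parabolic interpolation of the three scores around the peak -> subpixel shift.
    if 0 < best < len(scores) - 1:
        s_l, s_c, s_r = scores[best - 1], scores[best], scores[best + 1]
        denom = s_l - 2.0 * s_c + s_r
        delta = 0.5 * (s_l - s_r) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    return half_win + best + delta   # position along the right epipolar line

# Tiny synthetic demo: the right line is the left line shifted by 2.6 pixels.
x = np.arange(200, dtype=float)
left_row = np.sin(0.3 * x) + 0.1 * np.sin(1.7 * x)
right_row = np.interp(x, x + 2.6, left_row)
print(match_along_epipolar(left_row, right_row, 100))   # expected close to 102.6
```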

    Mathematics and Its Applications, A Transcendental-Idealist Perspective

    This monograph offers a fresh perspective on the applicability of mathematics in science. It explores what mathematics must be so that its applications to the empirical world do not constitute a mystery. In the process, readers are presented with a new version of mathematical structuralism. The author details a philosophy of mathematics in which the problem of its applicability, particularly in physics, can be explained and justified in all its forms. Chapters cover: mathematics as a formal science; mathematical ontology (what does it mean to exist); mathematical structures (what are they and how do we know them); how different layers of mathematical structuring relate to each other and to perceptual structures; and how to use mathematics to find out how the world is. The book simultaneously develops along two lines, both inspired and enlightened by Edmund Husserl’s phenomenological philosophy. One line leads to the establishment of a particular version of mathematical structuralism, free of “naturalist” and empiricist bias. The other leads to a logical-epistemological explanation and justification of the applicability of mathematics, carried out within a unique structuralist perspective. This second line points to the “unreasonable” effectiveness of mathematics in physics as a means of representation, a tool, and a source of not always logically justified but useful and effective heuristic strategies.

    Describing and Simulating Dynamic Reconfiguration in SystemC Exemplified by a Dedicated 3D Collision Detection Hardware

    The ongoing trend towards the development of parallel software and the increased flexibility of state-of-the-art programmable logic devices are currently converging in the field of reconfigurable hardware. On the other hand, there is the traditional hardware market, with its increasingly short development cycles, which is mainly driven by high-level prototyping of products. To enable the design community to conveniently develop reconfigurable architectures with a short time-to-market, this thesis introduces the library ReChannel, which extends SystemC with advanced language constructs for high-level reconfiguration modelling. It combines IP reuse and high-level modelling with reconfiguration. The proposed methodology was tested on a hierarchical FPGA-based 3D collision detection accelerator, which is also presented. To enable the implementation of such a complex algorithm in FPGA logic, it had to be realized using fixed-point arithmetic. Therefore, a special method was derived that enables rounding of the bounding volumes used without compromising the correctness of the non-intersection reports; this guarantees a correct overall result. A bound on the rounding error was derived that gives a measure of the number of false intersection reports, and thus of the run-time. A triangle and a quadrangle intersection test were implemented as the second
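
    The conservative rounding rule described above can be sketched in a few lines (an illustrative reconstruction under assumed conventions, not the thesis's fixed-point hardware): quantizing a bounding volume by rounding its lower bounds down and its upper bounds up can only enlarge the box, so non-intersection reports stay correct, and the rounding error shows up only as additional false intersection candidates, i.e. extra run-time.

```python
import math

FRAC_BITS = 8                      # assumed fixed-point format: 8 fractional bits
SCALE = 1 << FRAC_BITS

def round_aabb_conservative(lo, hi):
    """Quantize an axis-aligned bounding box so it can only grow, never shrink."""
    lo_q = [math.floor(v * SCALE) / SCALE for v in lo]   # round minima down
    hi_q = [math.ceil(v * SCALE) / SCALE for v in hi]    # round maxima up
    return lo_q, hi_q

def aabbs_intersect(a_lo, a_hi, b_lo, b_hi):
    return all(al <= bh and bl <= ah
               for al, ah, bl, bh in zip(a_lo, a_hi, b_lo, b_hi))

# Two boxes that barely miss each other in exact arithmetic may be reported as
# intersecting after rounding (a false positive costing run-time), but a true
# intersection is never missed, so the overall collision result stays correct.
a = round_aabb_conservative((0.0, 0.0, 0.0), (1.0004, 1.0, 1.0))
b = round_aabb_conservative((1.0005, 0.0, 0.0), (2.0, 1.0, 1.0))
print(aabbs_intersect(*a, *b))     # True: a false intersection report
```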

    Frustrated Magnetism, Monopole Dynamics, and Non-Coplanar Ordering in Classical Spin Systems - Insights from Monte Carlo Simulations

    For decades, physicists have been fascinated by frustrated magnetism. Frustration arises when the pairwise interaction energies between localized magnetic moments in a system cannot be minimized simultaneously. This competition prevents the formation of long-range magnetic order even at the lowest temperatures and, eventually, leads to the hallmark of frustration in the form of a massive ground state degeneracy accompanied by a finite zero-point entropy. Thus, exotic magnetic analogs of ordinary fluids can be realized, where spins are highly correlated but remain strongly fluctuating. Such spin liquids are linked to remarkable unconventional collective phenomena, most notably emergent gauge fields and fractionalized quasi-particle excitations that can be conceived as magnetic monopoles. In this thesis, we study the dynamics of such quasi-monopoles in the presence of a magnetic field and address the question of whether critical fluctuations at a phase transition always slow down equilibration or, conversely, can also help the system to thermalize. Employing large-scale dynamical Monte Carlo simulations, we show that the latter, a critical speeding up, can indeed be observed in two prototypical classical spin systems: dipolar spin ice in pyrochlore magnets and a Coulomb spin liquid in the triangular lattice Ising antiferromagnet. For the former, we also establish a relationship between Monte Carlo time and real time by comparing numerical and experimental data. Another part of this thesis is concerned with systems in which further-neighbor interactions stabilize certain non-coplanar magnetic orders. Such orders are particularly interesting because, upon introducing quantum fluctuations, they may possibly melt into a chiral quantum spin liquid. To this end, we study two model systems that turn out to have non-coplanar classical ground states and are motivated by the recent synthesis of a number of Mott insulating square-kagome materials as well as of spin-1/2 maple-leaf lattice antiferromagnets. We explore the rich phenomenology of frustrated magnetism induced by these two lattice geometries, including extensive degeneracies and order-by-disorder mechanisms. For both models, we study an elementary classical Heisenberg model with nearest-neighbor and additional cross-plaquette interactions and discuss a multitude of non-coplanar orders and spiral spin phases. Using extensive numerical simulations, we also discuss the thermodynamic signatures of these phases, which often show multi-step thermal ordering.
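
    The simulations mentioned above are large-scale and dynamical; as a minimal, illustrative sketch of the underlying method (not the thesis's code), the snippet below runs single-spin-flip Metropolis sweeps for a classical Ising antiferromagnet on a triangular lattice, one of the two prototypical systems named in the abstract. The lattice size, coupling and temperature are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, T = 24, 1.0, 0.5            # illustrative size, AFM coupling, temperature
spins = rng.choice([-1, 1], size=(L, L))

def triangular_neighbors(i, j):
    """Six neighbors of site (i, j) on a triangular lattice, periodic b.c."""
    return [((i + di) % L, (j + dj) % L)
            for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]]

def metropolis_sweep(spins):
    """One Monte Carlo sweep: L*L single-spin-flip Metropolis updates."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        field = sum(spins[n] for n in triangular_neighbors(i, j))
        # Antiferromagnet H = +J * sum_<ij> s_i s_j, so flipping s_ij changes
        # the energy by dE = -2 * J * s_ij * field.
        dE = -2.0 * J * spins[i, j] * field
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):
    metropolis_sweep(spins)

# Energy per site (each of the six neighbor pairs per site is counted twice,
# hence the factor 0.5).
bond_sum = sum(spins[i, j] * spins[n]
               for i in range(L) for j in range(L)
               for n in triangular_neighbors(i, j))
print("energy per site:", 0.5 * J * bond_sum / (L * L))
```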

    Toward knowledge-based automatic 3D spatial topological modeling from LiDAR point clouds for urban areas

    The processing of a very large set of LiDAR data is very costly and necessitates automatic 3D modeling approaches. In addition, incomplete point clouds caused by occlusion and uneven density, together with the uncertainties in the processing of LiDAR data, make the automatic creation of semantically enriched 3D models difficult. This research work aims at developing new solutions for the automatic creation of complete 3D geometric models with semantic labels from incomplete point clouds.
A framework integrating knowledge about objects in urban scenes into 3D modeling is proposed for improving the completeness of 3D geometric models, using qualitative reasoning based on semantic information about objects and their components and on their geometric and spatial relations. Moreover, we aim at taking advantage of the qualitative knowledge of objects in automatic feature recognition and, further, in the creation of complete 3D geometric models from incomplete point clouds. To achieve this goal, several algorithms are proposed for automatic segmentation, the identification of the topological relations between object components, feature recognition, and the creation of complete 3D geometric models. (1) Machine learning solutions have been proposed for automatic semantic segmentation and CAD-like segmentation, in order to segment objects with complex structures. (2) We proposed an algorithm to efficiently identify topological relationships between object components extracted from point clouds in order to assemble a Boundary Representation model. (3) The integration of object knowledge and feature recognition has been developed to automatically obtain semantic labels of objects and their components. In order to deal with uncertain information, a rule-based automatic uncertain reasoning solution was developed to recognize building components from uncertain information extracted from point clouds. (4) A heuristic method for creating complete 3D geometric models was designed using building knowledge, geometric and topological relations of building components, and semantic information obtained from feature recognition. Finally, the proposed framework for improving automatic 3D modeling from point clouds of urban areas has been validated by a case study aimed at creating a complete 3D building model. Experiments demonstrate that the integration of knowledge into the steps of 3D modeling is effective in creating a complete building model from incomplete point clouds.
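
    Step (2) above, the identification of topological relations between object components, can be illustrated with a small hypothetical sketch: two planar segments extracted from a point cloud are declared adjacent when enough points of one lie within a distance tolerance of the other. The segments, tolerance and threshold are assumptions made for the example and not the algorithm proposed in the thesis; collecting such pairwise relations yields the adjacency information from which a Boundary Representation (B-Rep) model can be assembled downstream.

```python
import numpy as np
from scipy.spatial import cKDTree

def are_adjacent(points_a, points_b, tol=0.05, min_pairs=20):
    """Two segments are topologically adjacent if at least `min_pairs`
    points of segment A have a point of segment B within distance `tol`."""
    dists, _ = cKDTree(points_b).query(points_a, k=1)
    return int(np.sum(dists <= tol)) >= min_pairs

# Hypothetical wall (y = 0 plane) and flat roof (z = 3 plane) segments,
# sampled on regular grids so that they share the edge y = 0, z = 3.
xs = np.linspace(0.0, 5.0, 50)
wall = np.array([(x, 0.0, z) for x in xs for z in np.linspace(0.0, 3.0, 30)])
roof = np.array([(x, y, 3.0) for x in xs for y in np.linspace(0.0, 4.0, 40)])

# The shared edge makes the two components adjacent in the resulting graph.
print(are_adjacent(wall, roof))   # True
```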