
    Twin "Fano-Snowflakes" Over the Smallest Ring of Ternions

    Given a finite associative ring with unity, R, any free (left) cyclic submodule (FCS) generated by a unimodular (n+1)-tuple of elements of R represents a point of the n-dimensional projective space over R. Suppose that R also features FCSs generated by (n+1)-tuples that are not unimodular: what kind of geometry can be ascribed to such FCSs? Here, we (partially) answer this question for n=2 when R is the (unique) non-commutative ring of order eight. The corresponding geometry is dubbed a "Fano-Snowflake" due to its diagrammatic appearance and the fact that it contains the Fano plane in its center. There exist, in fact, two such configurations -- each tied to one of the two maximal ideals of the ring -- which have the Fano plane in common and can, therefore, be viewed as twins. Potential relevance of these noteworthy configurations to quantum information theory and stringy black holes is also outlined.
    Comment: 6 pages, 1 table, 1 figure; v2 -- standard representation of the ring of ternions given, 1 figure and 3 references added; v3 -- published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/journals/SIGMA
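
    A small computation makes the objects in this abstract tangible. The sketch below is only illustrative and rests on two assumptions not stated above: the smallest ring of ternions is modelled as the upper-triangular 2x2 matrices over GF(2), and a triple (a, b, c) is called unimodular when a*x + b*y + c*z = 1 is solvable in the ring (conventions for the side of multiplication vary). It enumerates the left cyclic submodules R*(a, b, c), keeps the free ones, and separates unimodular from non-unimodular generators.

        # Illustrative sketch, assuming the ring of ternions is the set of
        # upper-triangular 2x2 matrices over GF(2) and that (a, b, c) is
        # unimodular iff a*x + b*y + c*z = 1 is solvable in the ring.
        from itertools import product

        def mul(A, B):
            """Product of two 2x2 matrices over GF(2), stored as ((a, b), (c, d))."""
            return tuple(
                tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2 for j in range(2))
                for i in range(2)
            )

        def add(A, B):
            """Sum of two 2x2 matrices over GF(2)."""
            return tuple(tuple((A[i][j] + B[i][j]) % 2 for j in range(2)) for i in range(2))

        ONE = ((1, 0), (0, 1))

        # The eight upper-triangular 2x2 matrices over GF(2).
        R = [((a, b), (0, d)) for a, b, d in product(range(2), repeat=3)]

        def is_unimodular(t):
            """True if a*x + b*y + c*z = 1 has a solution x, y, z in R."""
            a, b, c = t
            return any(
                add(add(mul(a, x), mul(b, y)), mul(c, z)) == ONE
                for x, y, z in product(R, repeat=3)
            )

        def left_cyclic_submodule(t):
            """All left multiples r*(a, b, c) with r running over R."""
            return {tuple(mul(r, ti) for ti in t) for r in R}

        free_unimodular = free_non_unimodular = 0
        for t in product(R, repeat=3):
            if len(left_cyclic_submodule(t)) == len(R):   # free: 8 distinct multiples
                if is_unimodular(t):
                    free_unimodular += 1
                else:
                    free_non_unimodular += 1

        print("generators of free cyclic submodules (unimodular):    ", free_unimodular)
        print("generators of free cyclic submodules (non-unimodular):", free_non_unimodular)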

    M\"obius Invariants of Shapes and Images

    Identifying when different images are of the same object despite changes caused by imaging technologies, or by processes such as growth, has many applications in fields such as computer vision and biological image analysis. One approach to this problem is to identify the group of possible transformations of the object and to find invariants to the action of that group, meaning that the object has the same values of the invariants despite the action of the group. In this paper we study the invariants of planar shapes and images under the Möbius group PSL(2,C), which arises in the conformal camera model of vision and may also correspond to neurological aspects of vision, such as the grouping of lines and circles. We survey properties of invariants that are important in applications, as well as the known Möbius invariants, and then develop a shape-recognition algorithm that is Möbius- and reparametrization-invariant, numerically stable, and robust to noise. We demonstrate the efficacy of this new invariant approach on sets of curves, and then develop a Möbius-invariant signature of grey-scale images.
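
    As a purely illustrative companion to this abstract (not the authors' algorithm), the sketch below applies an element of PSL(2,C) to a planar curve stored as complex numbers; any Möbius-invariant signature of the kind described must assign the same value to the original and the transformed curve. The sample curve and the map's coefficients are arbitrary choices.

        # Illustrative sketch: the action of a Moebius map on a planar shape.
        import numpy as np

        def mobius(z, a, b, c, d):
            """Apply z -> (a*z + b) / (c*z + d); requires a*d - b*c != 0."""
            return (a * z + b) / (c * z + d)

        # A sample closed curve: the unit circle sampled at 200 points.
        t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        shape = np.exp(1j * t)

        # An arbitrary Moebius transformation; a Moebius-invariant signature
        # should agree on `shape` and `transformed`.
        a, b, c, d = 2.0 + 1.0j, 0.3, 0.1 - 0.2j, 1.0
        transformed = mobius(shape, a, b, c, d)
        print(transformed[:3])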

    Implicit 3D Orientation Learning for 6D Object Detection from RGB Images

    We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: it does not require real, pose-annotated training data, generalizes to various test sensors, and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Our pipeline achieves state-of-the-art performance on the T-LESS dataset in both the RGB and RGB-D domains. We also evaluate on the LineMOD dataset, where we can compete with other synthetically trained approaches. We further increase performance by correcting 3D orientation estimates to account for perspective errors when the object deviates from the image center, and we show extended results.
    Comment: Code available at: https://github.com/DLR-RM/AugmentedAutoencode
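
    The "implicit representation of object orientations" can be pictured as a nearest-neighbour lookup in latent space. The sketch below is an assumption-laden illustration rather than the released pipeline: encode() stands in for the trained encoder, the codebook entries are random placeholders, and cosine similarity is assumed as the matching score.

        # Illustrative sketch of latent-codebook orientation lookup; the encoder
        # and the codebook contents are placeholders, not the released model.
        import numpy as np

        def encode(image):
            """Placeholder for the trained Augmented Autoencoder's encoder."""
            return np.random.rand(128)                 # assumed 128-D latent code

        # Hypothetical codebook: latent codes of N rendered views and their rotations.
        N = 5000
        codebook_codes = np.random.rand(N, 128)        # z_i for each rendered view
        codebook_rotations = np.random.rand(N, 3, 3)   # R_i (3x3 rotation matrices)

        def estimate_orientation(image):
            """Return the rotation whose latent code is most similar (cosine) to the test code."""
            z = encode(image)
            sims = codebook_codes @ z / (
                np.linalg.norm(codebook_codes, axis=1) * np.linalg.norm(z) + 1e-12
            )
            return codebook_rotations[int(np.argmax(sims))]

        print(estimate_orientation(None).shape)        # (3, 3)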

    Perspective distortion modeling for image measurements

    A perspective distortion model for monocular views, based on the fundamentals of perspective projection, is presented in this work. Perspective projection is considered the most realistic model of image formation in monocular vision. Many approaches try to model and estimate the perspective effects in images; some learn the distortion parameters from a set of training data, but these work only for a predefined structure, and none of the existing methods provides a deep understanding of the nature of perspective problems. Perspective distortions can, in fact, be described by three different perspective effects: pose, distance, and foreshortening. These effects are the cause of the aberrant appearance of object shapes in images. Understanding these phenomena has long been an interesting topic for artists, designers, and scientists, and in many cases the problem must be taken into consideration when dealing with image diagnostics, highly accurate image measurement, and accurate pose estimation from images. In this work, a perspective distortion model for each effect is developed while elaborating the nature of perspective effects. A distortion factor for each effect is derived, followed by proposed methods that allow extracting the true target pose and distance and correcting image measurements.
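
    To make the distance effect mentioned above concrete, the following sketch (an assumed textbook pinhole model, not the derivation in this work) projects the same 1 m segment at two depths and shows its image length shrinking with distance; the focal length is an arbitrary value.

        # Illustrative pinhole projection: image length falls off with depth.
        import numpy as np

        def project(points_3d, focal_length=800.0):
            """Perspective projection of Nx3 camera-frame points onto the image plane (pixels)."""
            X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
            return np.stack([focal_length * X / Z, focal_length * Y / Z], axis=1)

        # A 1 m horizontal segment viewed at 2 m and at 4 m: its image length halves.
        segment = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
        for depth in (2.0, 4.0):
            uv = project(segment * np.array([1.0, 1.0, depth]))
            print(f"depth {depth:.0f} m -> image length {np.linalg.norm(uv[1] - uv[0]):.1f} px")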