    A Practical Setup for Projection-based Augmented Maps

    Projected Augmented Reality is a human-computer interaction scenario where synthetic data, rather than being rendered on a display, are projected directly onto the real world. Differing from screen-based approaches, which only require the pose of the camera with respect to the world, this setup poses the additional hurdle of knowing the relative pose between the capturing and projecting devices. In this chapter, the authors propose a thorough solution that addresses both camera and projector calibration using a simple fiducial marker design. Specifically, they introduce a novel Augmented Maps setup where the user can explore geographically located information by moving a physical inspection tool over a printed map. Since the tool presents both a projection surface and a 3D-localizable marker, it can be used to display suitable information about the area that it covers. The proposed setup has been evaluated in terms of calibration accuracy and the ease of use reported by users.
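
    As a hedged illustration of the camera-side half of such a setup (not the authors' code), the sketch below detects a square fiducial marker and recovers its 6-DoF pose with OpenCV. The marker size, dictionary, and intrinsics are illustrative assumptions, the API assumes OpenCV 4.7+ with the contrib aruco module, and the projector calibration described in the chapter is not covered here.

        import cv2
        import numpy as np

        MARKER_SIDE = 0.05  # marker edge length in metres (assumed)

        # Illustrative pinhole intrinsics; a real setup would calibrate these.
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        dist = np.zeros(5)  # assume negligible lens distortion

        # 3D marker corners in the marker's own frame (z = 0 plane),
        # in ArUco's corner order: top-left, top-right, bottom-right, bottom-left.
        s = MARKER_SIDE / 2.0
        obj_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                            [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)

        detector = cv2.aruco.ArucoDetector(
            cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
            cv2.aruco.DetectorParameters())

        def marker_pose(frame):
            """Return (rvec, tvec) of the first detected marker, or None."""
            corners, ids, _ = detector.detectMarkers(frame)
            if ids is None:
                return None
            img_pts = corners[0].reshape(4, 2).astype(np.float32)
            ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
            return (rvec, tvec) if ok else None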

    Theoretical and Numerical Analysis of 3D Reconstruction Using Point and Line Incidences

    We study the joint image of lines incident to points, meaning the set of image tuples obtained from fixed cameras observing a varying 3D point-line incidence. We prove a formula for the number of complex critical points of the triangulation problem that aims to compute a 3D point-line incidence from noisy images. Our formula works for an arbitrary number of images and measures the intrinsic difficulty of this triangulation. Additionally, we conduct numerical experiments using homotopy continuation methods, comparing different approaches to triangulating such incidences. In our setup, exploiting the incidence relations yields a faster point reconstruction and, in three views, a more accurate one.
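
    As a point of reference for the problem whose critical points are being counted, here is a hedged sketch of the standard linear (DLT) triangulation of a single 3D point from an arbitrary number of views; it is not the paper's homotopy-continuation approach and it ignores the line incidences entirely.

        import numpy as np

        def triangulate_point(Ps, xs):
            """Ps: list of 3x4 camera matrices; xs: list of 2D pixel observations.
            Returns the linear least-squares 3D point (inhomogeneous)."""
            rows = []
            for P, (u, v) in zip(Ps, xs):
                # Each view contributes two linear constraints on homogeneous X:
                # u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X.
                rows.append(u * P[2] - P[0])
                rows.append(v * P[2] - P[1])
            A = np.stack(rows)
            # Solution: right singular vector with the smallest singular value.
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]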

    Global Optimality via Tight Convex Relaxations for Pose Estimation in Geometric 3D Computer Vision

    In this thesis, we address a set of fundamental problems whose core difficulty boils down to optimizing over 3D poses. This includes many geometric 3D registration problems, covering well-known problems with a long research history such as the Perspective-n-Point (PnP) problem and its generalizations, extrinsic sensor calibration, and even the gold standard of Structure from Motion (SfM) pipelines: the relative pose problem from corresponding features. The same holds for a close relative of SLAM, Pose Graph Optimization (also commonly known as Motion Averaging in SfM). The crux of this thesis' contribution is the characterization and development of empirically tight (convex) semidefinite relaxations for many of these core problems of 3D Computer Vision. Building upon these empirically tight relaxations, we are able to find and certify the globally optimal solution to these problems with algorithms whose performance today ranges from efficient, scalable approaches comparable to fast second-order local search techniques to polynomial-time (worst-case) methods. To conclude, our research reveals that an important subset of core problems, historically regarded as hard and thus dealt with mostly in empirical ways, is indeed tractable with optimality guarantees.

    Artificial Intelligence (AI) drives many of the services and products we use every day. But for AI to bring its full potential into daily tasks, with technologies such as autonomous driving, augmented reality or mobile robots, it needs to be not only intelligent but also perceptive. In particular, the ability to see and to construct an accurate model of the environment is an essential capability for building intelligent perceptive systems. The ideas developed in Computer Vision over the last decades, in areas such as Multiple View Geometry and Optimization, and put to work together in 3D reconstruction algorithms, seem mature enough to nurture a range of emerging applications that already employ 3D Computer Vision behind the scenes. However, while there is a positive trend in the use of 3D reconstruction tools in real applications, there are also fundamental limitations regarding reliability and performance guarantees that may hinder wider adoption, e.g. in more critical applications involving people's safety such as autonomous navigation. State-of-the-art 3D reconstruction algorithms typically formulate the reconstruction problem as a Maximum Likelihood Estimation (MLE) instance, which entails solving a high-dimensional, non-convex, non-linear optimization problem. In practice, this is done via fast local optimization methods, which have enabled fast and scalable reconstruction pipelines, yet lack guarantees on most of the building blocks, leaving us with fundamentally brittle pipelines in which no guarantees exist.
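
    To make the flavor of such relaxations concrete, here is a hedged sketch (not the thesis' algorithms) of a tight semidefinite relaxation for one of the simplest pose-like problems: rotation estimation from 3D-3D correspondences, i.e. the Wahba / orthogonal Procrustes problem. vec(R) is lifted to a positive semidefinite matrix, the quadratic orthogonality constraints become linear, and the rotation is read off the (empirically rank-1) solution. It assumes numpy and cvxpy with an SDP-capable solver such as SCS, which cvxpy installs by default.

        import numpy as np
        import cvxpy as cp

        def wahba_sdp(A, B):
            """A, B: (n,3) arrays with rows b_i ~ R_true @ a_i.
            Minimizes sum_i ||R a_i - b_i||^2 over SO(3) via an SDP relaxation."""
            # Minimizing the residual is equivalent to maximizing tr(R N) with
            # N = sum_i a_i b_i^T; linear in r = vec_rowmajor(R).
            c = (A.T @ B).T.flatten()
            Y = cp.Variable((10, 10), PSD=True)   # lifted [r; 1] [r; 1]^T
            Z, r = Y[:9, :9], Y[:9, 9]
            cons = [Y[9, 9] == 1]
            for i in range(3):
                for j in range(3):
                    d = 1.0 if i == j else 0.0
                    # rows of R orthonormal
                    cons.append(cp.trace(Z[3*i:3*i+3, 3*j:3*j+3]) == d)
                    # columns of R orthonormal (redundant, but tightens the SDP)
                    cons.append(sum(Z[3*k + i, 3*k + j] for k in range(3)) == d)
            cp.Problem(cp.Maximize(c @ r), cons).solve()
            R = np.asarray(r.value).reshape(3, 3)
            # Project onto SO(3) to guard against numerical drift / reflections.
            U, _, Vt = np.linalg.svd(R)
            return U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt

        # Illustrative use with a synthetic rotation about the z-axis:
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 3))
        t = 0.3
        R_true = np.array([[np.cos(t), -np.sin(t), 0],
                           [np.sin(t),  np.cos(t), 0],
                           [0, 0, 1]])
        B = A @ R_true.T
        print(np.linalg.norm(wahba_sdp(A, B) - R_true))  # ~solver tolerance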

    Pose estimation system based on monocular cameras

    Our world is full of wonders. It is filled with mysteries and challenges which, through the ages, inspired and called for human civilization to grow, whether philosophically or sociologically. In time, humans reached their own physical limitations; nevertheless, we created technology to help us overcome them. Like the undiscovered lands of old, we are pulled towards the discovery and innovation of our time. All of this is possible due to a very human characteristic - our imagination. The world that surrounds us is mostly already discovered, but with the power of computer vision (CV) and augmented reality (AR) we are able to live in multiple hidden universes alongside our own. With the increasing performance and capabilities of current mobile devices, AR can be what we dream it to be. There are still many obstacles, but this future is already our reality, and with evolving technologies closing the gap between the real and the virtual world, it will soon be possible for us to surround ourselves with other dimensions, or to fuse them with our own. This thesis focuses on the development of a system to estimate the camera's pose in the real world with respect to the virtual-world axes. The work was developed as a sub-module integrated in the M5SAR project: Mobile Five Senses Augmented Reality System for Museums, aiming at a more immersive experience through the total or partial replacement of the environment's surroundings. It targets mainly man-made building interiors and their typical rectangular-cuboid shape. Knowing the direction of the user's camera, we can superimpose dynamic AR content, inviting the user to explore the hidden worlds. The M5SAR project introduced a new way to explore existing historical museums through the five human senses: hearing, smell, taste, touch and vision. With this innovative technology, the user is able to enhance their visit and immerse themselves in a virtual world blended with our reality. A mobile application was built around an innovative framework, MIRAR - Mobile Image Recognition based Augmented Reality - comprising object recognition, navigation, and the projection of additional AR information in order to enrich the user's visit, providing intuitive and compelling information about the available artworks while exploring the senses of hearing and vision. A specially designed device was built to explore the three remaining senses: smell, taste and touch; when attached to a mobile device, either smartphone or tablet, it pairs with it and reacts automatically in step with the narrative offered for the artwork, immersing the user in a sensorial experience. As mentioned above, the work presented in this thesis concerns a sub-module of MIRAR dealing with environment detection and the superimposition of AR content. With the main goal being the full replacement of the walls' contents, whether keeping the artwork visible or not, an additional challenge arose from the limitation of using only monocular cameras. Without depth information, a 2D image of an environment does not convey to a computer the three-dimensional layout of the real-world scene. Nevertheless, man-made buildings tend to follow a rectangular approach to the construction of their rooms, which makes it possible to predict where the vanishing points in any image of the environment lie, allowing the reconstruction of a room's layout from a single 2D image.
Furthermore, combining this information with an initial localization, obtained through improved image recognition, to retrieve the camera's spatial position with respect to the real-world and virtual-world coordinates - that is, pose estimation - made it possible to superimpose localized AR content over the user's mobile device frame, immersing the museum visitor in another era correlated with the historical period of the artworks on display. The work developed for this thesis also contributed an improved rectification and retrieval of planar surfaces in space, a hybrid and scalable multiple-image matching system, a more stable outlier filter applied to the camera axes, and a continuous tracking system that works with uncalibrated cameras and maintains the surface superimposition even at particularly obtuse viewing angles. Furthermore, a novel method using deep learning models for semantic segmentation was introduced for indoor layout estimation from monocular images. Contrary to the previously developed methods, there is no need to perform geometric calculations to achieve near state-of-the-art performance with a fraction of the parameters required by similar methods. Unlike the earlier work presented in this thesis, this method performs well even in unseen and cluttered rooms, provided they follow the Manhattan assumption. An additional lightweight application that retrieves the camera pose using the proposed method is also presented.
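
    The geometric core of the layout step can be sketched as follows, as a hedged illustration rather than the thesis' pipeline: under the Manhattan assumption, lines that are parallel in 3D meet in the image at a vanishing point, recoverable as the least-squares intersection of their homogeneous line equations. Segment detection and grouping by direction are assumed to be done already, and the data below are illustrative.

        import numpy as np

        def vanishing_point(segments):
            """segments: (n,4) array of segments (x1, y1, x2, y2) believed to be
            parallel in 3D. Returns the vanishing point, homogeneous (3,)."""
            lines = []
            for x1, y1, x2, y2 in segments:
                p1 = np.array([x1, y1, 1.0])
                p2 = np.array([x2, y2, 1.0])
                l = np.cross(p1, p2)  # homogeneous line through both endpoints
                lines.append(l / np.linalg.norm(l[:2]))  # for conditioning
            L = np.stack(lines)
            # The vanishing point v satisfies L @ v ~ 0: take the right
            # singular vector with the smallest singular value.
            _, _, Vt = np.linalg.svd(L)
            return Vt[-1]

        # Illustrative use: two segments along a corridor's parallel floor edges.
        segs = np.array([[10, 400, 300, 250],
                         [30, 470, 320, 280]], dtype=float)
        v = vanishing_point(segs)
        print(v[:2] / v[2])  # pixel coordinates of the vanishing point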

    Visual Localization with Lines

    Mobile robots must be able to derive their current location from sensor measurements in order to navigate fully autonomously. Positioning sensors like GPS output a global position, but their precision is not sufficient for many applications, and indoors no GPS signal is received at all. Cameras provide information-rich data and are already used in many systems, e.g. for object detection and recognition. This thesis therefore investigates the possibility of additionally using cameras for localization. State-of-the-art methods are based on point observations, but as man-made environments mostly consist of planar and linear structures, which are perceived as lines, the focus in this thesis is on the use of image lines to derive the camera trajectory. To achieve this goal, multiple view geometry algorithms for line-based pose and structure estimation have to be developed. A prerequisite for these algorithms is that correspondences are established between line observations in multiple images which originate from the same spatial line. This thesis proposes a novel line matching algorithm for matching under small baseline motion, designed with one-to-many matching in mind to tackle the issue of varying line segmentation. In contrast to other line matching solutions, the proposed algorithm leverages optical flow calculation and hence obviates the need for an expensive descriptor calculation. A two-view relative pose estimation algorithm is introduced which extracts the spatial line directions using parallel line clustering on the image lines in order to calculate the relative rotation. Unlike the "Manhattan world" assumption required by state-of-the-art methods, the proposed approach is less restrictive, as it needs only lines of different directions; the angle between the directions is not relevant. In addition, the proposed method is about an order of magnitude faster to compute. A novel line triangulation method is proposed to derive the scene structure from the images. The method is derived from the spatial transformation of Plücker lines and allows prior knowledge of the spatial line, such as the precalculated directions from the parallel line clustering, to be integrated. The problem of degenerate configurations is analyzed as well, and a solution is developed which incorporates the optical flow vectors from the matching step as spatial points into the estimation. Lastly, all components are combined into a visual odometry pipeline for monocular cameras. The pipeline uses image-to-image motion estimation to calculate the camera trajectory. A scale adjustment based on the trifocal tensor is introduced which ensures a consistent scale of the trajectory. To increase robustness, a sliding-window bundle adjustment is employed. All components and the proposed visual odometry pipeline are evaluated and compared to state-of-the-art methods on real-world data from indoor and outdoor scenes. The evaluation shows that line-based visual localization is suitable for solving the localization task.
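
    As a hedged sketch of the underlying geometry (not the thesis' method, which additionally integrates priors such as precomputed line directions and handles degenerate configurations): an image line l back-projects through a camera P to the plane pi = P^T l, and the 3D line seen in two views is the intersection of the two back-projected planes, encoded below as a dual Plücker matrix.

        import numpy as np

        def triangulate_line(P1, P2, l1, l2):
            """P1, P2: 3x4 camera matrices; l1, l2: homogeneous image lines (3,).
            Returns the 4x4 dual Pluecker matrix of the triangulated 3D line."""
            pi1 = P1.T @ l1                      # plane back-projected from view 1
            pi2 = P2.T @ l2                      # plane back-projected from view 2
            # Dual Pluecker matrix of the line where the two planes meet.
            return np.outer(pi1, pi2) - np.outer(pi2, pi1)

        def lies_on_line(L_dual, X, tol=1e-9):
            """A homogeneous 3D point X lies on the line iff L_dual @ X vanishes,
            i.e. X lies on both back-projected planes."""
            return np.linalg.norm(L_dual @ X) < tol * max(1.0, np.linalg.norm(X))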