37 research outputs found
Camera perspective distortion in model-based visual localisation.
114 p. This thesis starts with a proposal for a collaborative global visual localisation system. It then centres on a specific visual localisation problem: perspective distortion in template matching. The thesis enriches 3D point cloud models with a surface normal associated with each 3D point; these normals are computed using a minimization algorithm. Based on this enriched model, the thesis proposes an algorithm that increases the accuracy of visual localisation by using the surface normals to compensate for perspective in the template matching process. The hypothesis, 'Given a 3D point cloud, using the surface orientation of the 3D points in a template matching process increases the number of inlier points found by the localisation system, that is, perspective compensation', is objectively proved using a ground truth model. The ground truth is obtained through the design of a framework which, using computer vision and computer graphics techniques, carries out experiments without the noise of a real system and proves the hypothesis in an objective way.
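A minimal illustrative sketch of one common way to attach normals to a point cloud, least-squares plane fitting over local neighbourhoods (a plausible formulation of such a minimization, not necessarily the exact algorithm used in the thesis):

```python
# Sketch: estimate a unit surface normal for each 3D point by least-squares plane
# fitting over its k nearest neighbours (the normal is the direction that minimises
# the sum of squared distances of the neighbourhood to the fitted plane).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """points: (N, 3) array of 3D coordinates; returns (N, 3) unit normals."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)   # centre the neighbourhood
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]                             # direction of smallest variance
    return normals

cloud = np.random.rand(1000, 3)   # placeholder point cloud
normals = estimate_normals(cloud)
```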
Pose estimation system based on monocular cameras
Our world is full of wonders. It is filled with mysteries and challenges which, through the ages, have inspired human civilisation to grow, whether philosophically or sociologically. In time, humans reached their own physical limitations; nevertheless, we created technology to help us overcome them. Like the ancient undiscovered lands, we are drawn to the discovery and innovation of our time. All of this is possible thanks to a very human characteristic: our imagination.
The world that surrounds us is mostly already discovered, but with the power of computer vision (CV) and augmented reality (AR) we are able to live in multiple hidden universes alongside our own. With the increasing performance and capabilities of current mobile devices, AR can be what we dream it to be. There are still many obstacles, but this future is already our reality, and with evolving technologies closing the gap between the real and the virtual world, it will soon be possible to immerse ourselves in other dimensions, or to fuse them with our own.
This thesis focuses on the development of a system to estimate the camera's pose in the real world with respect to the virtual-world axes. The work was developed as a sub-module integrated in the M5SAR project: Mobile Five Senses Augmented Reality System for Museums, which aims at a more immersive experience through the total or partial replacement of the environment's surroundings. It is based mainly on indoor man-made buildings and their typical rectangular cuboid shape. Knowing the direction of the user's camera, we can then superimpose dynamic AR content, inviting the user to explore the hidden worlds.
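As a rough illustration of this kind of pose recovery (a hedged sketch only; the correspondences, intrinsics and point layout are placeholders, not values from the project), a camera pose can be obtained from known 2D-3D correspondences with OpenCV's solvePnP:

```python
# Sketch: recover camera rotation/translation from four known wall-corner
# correspondences. All numbers are illustrative placeholders.
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float32)                 # wall corners in metres
image_points = np.array([[320, 240], [420, 238], [424, 330], [318, 332]],
                        dtype=np.float32)                  # detected pixel positions
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # assumed intrinsics
dist = np.zeros(5)                                         # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)        # world -> camera rotation
camera_position = -R.T @ tvec     # camera centre in world coordinates
```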
The M5SAR project introduced a new way to explore existing historical museums through the five human senses: hearing, smell, taste, touch and vision. With this innovative technology, users can enhance their visit and immerse themselves in a virtual world blended with our reality. A mobile device application was built around an innovative framework, MIRAR - Mobile Image Recognition based Augmented Reality - which provides object recognition, navigation and the projection of additional AR information in order to enrich the users' visit, offering intuitive and compelling information about the available artworks and exploring the senses of hearing and vision. A specially designed device was built to explore the remaining three senses: smell, taste and touch. When attached to a mobile device, either smartphone or tablet, it pairs with it and reacts automatically in accordance with the narrative offered for the artwork, immersing the user in a sensorial experience.
As mentioned above, the work presented in this thesis corresponds to a sub-module of MIRAR concerned with environment detection and the superimposition of AR content. Since the main goal is the full replacement of the walls' contents, with the possibility of keeping the artwork visible or not, an additional challenge arises from the limitation of using only monocular cameras. Without depth information, a 2D image of an environment does not, by itself, give a computer the three-dimensional layout of the real-world scene. Nevertheless, man-made buildings tend to follow a rectangular approach in the construction of their rooms, which makes it possible to predict where the vanishing points of an environment image lie and thus to reconstruct an environment's layout from a single 2D image (a simplified sketch of this step is given after this paragraph). Furthermore, combining this information with an initial localisation, obtained through improved image recognition, to retrieve the camera's spatial position with respect to the real-world coordinates and the virtual world, that is, pose estimation, made it possible to superimpose localised AR content onto the user's mobile device frame, immersing the museum visitor in another era correlated with the historical period of the artworks on display. The work developed for this thesis also introduced improved rectification and retrieval of planar surfaces in space, a hybrid and scalable multiple-image matching system, more stable outlier filtering applied to the camera axes, and a continuous tracking system that works with uncalibrated cameras and maintains the surface superimposition even at particularly obtuse viewing angles.
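A simplified, hedged sketch of the vanishing-point idea mentioned above (standard line detection plus pairwise intersections and a crude robust vote; the file name and thresholds are placeholders, and this is not the exact pipeline of the thesis):

```python
# Sketch: estimate a dominant vanishing point by detecting line segments, expressing
# them as homogeneous lines, intersecting all pairs and taking a median vote.
import numpy as np
import cv2

def vanishing_point(gray):
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)
    if segments is None:
        return None
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])      # homogeneous line through p1, p2
             for x1, y1, x2, y2 in segments[:, 0]]
    votes = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            v = np.cross(lines[i], lines[j])             # intersection of the two lines
            if abs(v[2]) > 1e-6:
                votes.append(v[:2] / v[2])
    return np.median(np.array(votes), axis=0) if votes else None

img = cv2.imread("room.jpg", cv2.IMREAD_GRAYSCALE)       # hypothetical input frame
if img is not None:
    print(vanishing_point(img))
```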
Furthermore, a novel method using deep learning models for semantic segmentation is introduced for indoor layout estimation from monocular images. Contrary to previously developed methods, no geometric calculations are needed to achieve near state-of-the-art performance with a fraction of the parameters required by similar methods. Unlike the earlier work presented in this thesis, this method performs well even in unseen and cluttered rooms, provided they follow the Manhattan assumption. An additional lightweight application that retrieves the camera pose estimate using the proposed method is also presented.
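As a purely illustrative sketch of per-pixel layout segmentation (the network, label set and sizes below are assumptions for the example, not the architecture proposed in the thesis):

```python
# Sketch: a tiny encoder-decoder that labels each pixel with a layout class
# (ceiling / floor / left, centre, right wall); a real model would be deeper
# and trained on a Manhattan-style layout dataset.
import torch
import torch.nn as nn

LAYOUT_CLASSES = ["ceiling", "floor", "wall-left", "wall-centre", "wall-right"]  # assumed labels

class TinyLayoutNet(nn.Module):
    def __init__(self, num_classes=len(LAYOUT_CLASSES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyLayoutNet().eval()
frame = torch.rand(1, 3, 256, 256)          # placeholder RGB frame
with torch.no_grad():
    logits = model(frame)                   # (1, 5, 256, 256)
layout_map = logits.argmax(dim=1)           # per-pixel layout class
```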
Characterization and modelling of complex motion patterns
Movement analysis is at the basis of any interaction with the world, and the survival of living beings depends entirely on the efficiency of such analysis. Visual systems have developed remarkably efficient mechanisms that analyze motion at different levels, allowing objects to be recognized in dynamic and cluttered environments. In artificial vision there is a wide spectrum of applications for which the study of complex movements is crucial to recover salient information. Although each domain may differ in terms of scenarios, complexity and relationships, a common denominator is that all of them require a dynamic understanding that captures the relevant information. Overall, current strategies are highly dependent on appearance characterization and are usually restricted to controlled scenarios. This thesis proposes a computational framework that is inspired by known motion perception mechanisms and structured as a set of modules. Each module is in turn composed of a set of computational strategies that provide qualitative and quantitative descriptions of the dynamics associated with a particular movement. Diverse applications were considered, and an extensive validation was performed for each of them. Each of the proposed strategies has been shown to be reliable at capturing the dynamic patterns of different tasks, identifying, recognizing, tracking and even segmenting objects in video sequences.
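One simple, hedged illustration of characterizing motion in video (dense optical flow reduced to a speed and direction profile; this is a generic example, not one of the specific strategies proposed in the thesis):

```python
# Sketch: summarise the motion between two consecutive grayscale frames as the mean
# flow magnitude plus a magnitude-weighted orientation histogram.
import numpy as np
import cv2

def motion_descriptor(prev_gray, next_gray, bins=8):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return mag.mean(), hist / (hist.sum() + 1e-9)   # overall speed + direction profile
```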
Advances in top-down and bottom-up approaches to video-based camera tracking
Video-based camera tracking consists in following the three-dimensional pose of a mobile camera over time, using video as the sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three-dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the pose of the camera that is sensing these references. In this thesis, we investigate the problem of camera tracking at two levels. Firstly, we work at the low level of feature point recognition. Feature points are used as references for tracking, and we propose a method to recognise them robustly. More specifically, we introduce a rotation-discriminative region descriptor and an efficient rotation-discriminative method to match feature point descriptors. The descriptor is based on orientation gradient histograms and template intensity information. Secondly, we work at the higher level of camera tracking and propose a fusion of top-down (TDA) and bottom-up approaches (BUA). We combine marker-based tracking using a BUA and feature points recognised by a TDA into a particle filter. Feature points are recognised with the method described before, and the identification of the rotation of points is exploited for tracking purposes. The goal of the fusion is to take advantage of the complementary strengths of the two approaches. In particular, we are interested in covering the main capabilities that a camera tracker should provide: automatic initialisation, automatic recovery after loss of track, and tracking beyond references known a priori. Experiments have been performed at the two levels of investigation. Firstly, tests were conducted to evaluate the performance of the proposed recognition method. The assessment uses a set of patches extracted from eight textured images; the images are rotated and matching is done for each patch. The results show that the method is capable of matching accurately despite the rotations, and a comparison with similar techniques in the state of the art shows equal or even higher precision at a much lower computational cost. Secondly, an experimental assessment of the tracking system is also conducted. The evaluation consists of four sequences with specific problematic situations, namely occlusions of the marker, illumination changes, and erratic and/or fast motion. Results show that the fusion tracker solves characteristic failure modes of the two combined approaches, and a comparison with similar trackers shows competitive accuracy. In addition, the three capabilities stated earlier are fulfilled by our tracker, whereas the state of the art reveals that no other published tracker covers these three capabilities simultaneously. The camera tracking system has a potential application in the robotics domain. It has been successfully used as a man-machine interface and applied in Augmented Reality environments. In particular, the system has been used by students of the University of Art and Design Lausanne (ECAL) with the purpose of conceiving new interaction concepts. Moreover, in collaboration with ECAL and fabric | ch (studio for architecture & research), we have jointly developed the Augmented interactive Reality Toolkit (AiRToolkit). The system has also proved to be reliable in public events and is the basis of a game-oriented demonstrator installed in the Swiss National Museum of Audiovisual and Multimedia (Audiorama) in Montreux.
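A hedged sketch of the kind of rotation-aware histogram matching described above (a simplified illustration, not the exact descriptor proposed in the thesis):

```python
# Sketch: orientation-gradient histogram of a patch, matched under rotation by
# circularly shifting the histogram bins; the best shift also indicates the
# relative rotation between the two patches.
import numpy as np
import cv2

def orientation_histogram(patch, bins=36):
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def match_with_rotation(h1, h2):
    """Return (best similarity, bin shift); the shift encodes the relative rotation."""
    scores = [float(np.dot(h1, np.roll(h2, s))) for s in range(len(h2))]
    best = int(np.argmax(scores))
    return scores[best], best
```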
On-the-fly dense 3D surface reconstruction for geometry-aware augmented reality.
Augmented Reality (AR) is an emerging technology that makes seamless connections between virtual space and the real world by superimposing computer-generated information onto the real-world environment. AR can provide additional information in a more intuitive and natural way than any other information-delivery method that humans have ever invented. Camera tracking is the enabling technology for AR and has been well studied over the last few decades. Apart from the tracking problems, sensing and perception of the surrounding environment are also very important and challenging problems. Although there are existing hardware solutions, such as Microsoft Kinect and HoloLens, that can sense and build the environmental structure, they are either too bulky or too expensive for AR. In this thesis, the challenging real-time dense 3D surface reconstruction technologies are studied and reformulated to take basic position-aware AR towards geometry-aware AR, with an outlook towards context-aware AR. We initially propose to reconstruct the dense environmental surface using the sparse points from Simultaneous Localisation and Mapping (SLAM), but this approach is prone to fail in challenging Minimally Invasive Surgery (MIS) scenes, for instance in the presence of deformation and surgical smoke. We subsequently adopt stereo vision together with SLAM for more accurate and robust results. With the success of deep learning technology in recent years, we also present learning-based single-image reconstruction and achieve state-of-the-art results. Moreover, we propose context-aware AR, one step further from purely geometry-aware AR towards high-level conceptual interaction modelling in complex AR environments for an enhanced user experience. Finally, a learning-based smoke removal method is proposed to ensure accurate and robust reconstruction under extreme conditions such as the presence of surgical smoke.
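A minimal hedged sketch of the stereo route to dense geometry (a rectified pair is assumed; file names, matcher parameters and the reprojection matrix are placeholders, not the thesis configuration):

```python
# Sketch: dense disparity from a rectified stereo pair with semi-global block
# matching, then reprojection of every pixel to a 3D point.
import numpy as np
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

if left is not None and right is not None:
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels
    Q = np.float32([[1, 0, 0, -left.shape[1] / 2],   # assumed principal point cx
                    [0, 1, 0, -left.shape[0] / 2],   # assumed principal point cy
                    [0, 0, 0, 800.0],                # assumed focal length in pixels
                    [0, 0, 1 / 0.1, 0]])             # assumed 1/baseline (baseline 0.1 m)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # dense per-pixel XYZ
```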
Application-driven visual computing towards Industry 4.0 (2018)
245 p. The thesis gathers contributions in three fields: 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous and engaging for the user. These IVAs can interact with users in a natural way. 2. Immersive VR/AR environments: VR in production planning, product design, process simulation, testing and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator AR displays relevant information to the worker in a non-intrusive way. 3. Interactive management of 3D models: online management and visualisation of multimedia CAD models through the automatic conversion of CAD models to the Web. Web3D technology enables these models to be visualised and interacted with on low-power mobile devices. In addition, these contributions have made it possible to analyse the challenges posed by Industry 4.0. The thesis has contributed a proof of concept for some of those challenges: in human factors, simulation, visualisation and model integration.
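A small hedged sketch of the "CAD model to the Web" idea (file names are placeholders, and trimesh is used here only as one convenient open-source option, not necessarily the tool used in the thesis):

```python
# Sketch: convert a mesh exported from CAD (e.g. STL) into binary glTF (GLB),
# a format that Web3D viewers on low-power mobile devices can load directly.
import trimesh

mesh = trimesh.load("part.stl")   # hypothetical CAD export
mesh.export("part.glb")           # GLB output for Web3D visualisation
```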