
    Dense Visual Simultaneous Localisation and Mapping in Collaborative and Outdoor Scenarios

    Dense visual simultaneous localisation and mapping (SLAM) systems can produce 3D reconstructions that are digital facsimiles of the physical space they describe. Systems that can produce dense maps with this level of fidelity in real time provide foundational spatial reasoning capabilities for many downstream tasks in autonomous robotics. Over the past 15 years, mapping small-scale, indoor environments, such as desks and buildings, with a single slow-moving, hand-held sensor has been one of the central focuses of dense visual SLAM research. However, most dense visual SLAM systems exhibit a number of limitations that prevent their direct application in collaborative or outdoor settings. The contribution of this thesis is to address these limitations through the development of new systems and algorithms for collaborative dense mapping, efficient dense alternation, and outdoor operation with fast camera motion and wide field of view (FOV) cameras. We use ElasticFusion, a state-of-the-art dense SLAM system, as our starting point, with each of these contributions implemented as a novel extension to the system. We first present a collaborative dense SLAM system that allows a number of cameras, starting with unknown initial relative positions, to maintain local maps with the original ElasticFusion algorithm. Visual place recognition across local maps yields constraints that allow the maps to be aligned into a common global reference frame, facilitating collaborative mapping and tracking of multiple cameras within a shared map. Within dense alternation based SLAM systems, the standard approach is to fuse every frame into the dense model without considering whether the information contained within the frame is already captured by the dense map and therefore redundant. As the number of cameras or the scale of the map increases, this approach becomes inefficient.
In our second contribution, we address this inefficiency by introducing a novel information-theoretic approach to keyframe selection that allows the system to avoid processing redundant information. We implement the procedure within ElasticFusion, demonstrating a marked reduction in the number of frames required by the system to estimate an accurate, denoised surface reconstruction. Before dense SLAM techniques can be applied in outdoor scenarios, we must first address their reliance on active depth cameras and their lack of suitability to fast camera motion. In our third contribution, we present an outdoor dense SLAM system. The system overcomes the need for an active sensor by employing neural network-based depth inference to predict the geometry of the scene as it appears in each image. To address the issue of camera tracking during fast motion, we employ a hybrid architecture, combining elements of both dense and sparse SLAM systems to perform camera tracking and to achieve globally consistent dense mapping. Automotive applications present a particularly important setting for dense visual SLAM systems. Such applications are characterised by their use of wide FOV cameras and are therefore not accurately modelled by the standard pinhole camera model. The fourth contribution of this thesis is to extend the above hybrid sparse-dense monocular SLAM system to cater for large FOV fisheye imagery. This is achieved by reformulating the mapping pipeline in terms of the Kannala-Brandt fisheye camera model. To estimate depth, we introduce a new version of the PackNet depth estimation neural network (Guizilini et al., 2020) adapted for fisheye inputs. To demonstrate the effectiveness of our contributions, we present experimental results computed by processing the synthetic ICL-NUIM dataset of Handa et al. (2014) as well as the real-world TUM-RGBD dataset of Sturm et al. (2012).
For outdoor SLAM, we show the results of our system processing the autonomous driving KITTI and KITTI-360 datasets of Geiger et al. (2012a) and Liao et al. (2021) respectively.
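The Kannala-Brandt model mentioned above replaces the pinhole projection with an odd polynomial in the angle between the incoming ray and the optical axis, which lets it represent fisheye lenses with FOVs beyond what the pinhole model can express. The following is an illustrative sketch only, using the common four-coefficient variant; the exact coefficient convention varies between implementations, and this is not the thesis's actual pipeline:

```python
import math

def project_kannala_brandt(X, Y, Z, fx, fy, cx, cy, k):
    """Project a camera-frame 3D point with a Kannala-Brandt style
    fisheye model: the image radius is an odd polynomial in theta,
    the angle between the ray and the optical axis.
    k = (k1, k2, k3, k4) as in the common four-coefficient variant."""
    theta = math.atan2(math.hypot(X, Y), Z)      # angle from optical axis
    theta_d = theta * (1.0 + k[0] * theta**2 + k[1] * theta**4
                           + k[2] * theta**6 + k[3] * theta**8)
    phi = math.atan2(Y, X)                       # azimuth of the ray
    u = fx * theta_d * math.cos(phi) + cx
    v = fy * theta_d * math.sin(phi) + cy
    return u, v

# A point on the optical axis lands on the principal point:
print(project_kannala_brandt(0.0, 0.0, 1.0, 300, 300, 320, 240,
                             (0.0, 0.0, 0.0, 0.0)))  # → (320.0, 240.0)
```

With all coefficients zero this reduces to the equidistant fisheye projection r = f·θ, which is the base case the polynomial perturbs.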

    Learning the surroundings: 3D scene understanding from omnidirectional images

    Neural networks have become widespread all around the world and are used for many different applications. These new methods are able to recognise music and audio, generate full texts from simple ideas, and obtain detailed and relevant information from images and videos. The possibilities of neural networks and deep learning methods are countless, and they have become the main tool for research and new applications in our daily life. At the same time, omnidirectional and 360 images are also becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. From 360 images, we capture all the information surrounding the camera in a single shot. The combination of deep learning methods and omnidirectional computer vision has attracted many researchers to this new field.
From a single omnidirectional image, we obtain enough information about the environment for a neural network to understand its surroundings and interact with the environment. For applications such as navigation and autonomous driving, omnidirectional cameras provide information all around the robot, person or vehicle, while conventional perspective cameras lack this context information due to their narrow field of view. Even if some applications can include several conventional cameras to increase the system's field of view, in tasks where weight is important (e.g. guidance of visually impaired people or navigation of autonomous drones), the fewer cameras we need to include, the better. In this thesis, we focus on the joint use of omnidirectional cameras, deep learning, geometry and photometric methods. We evaluate different approaches to handle omnidirectional images, adapting previous methods to the distortion of omnidirectional projection models and also proposing new solutions to tackle the challenges of this kind of imagery. For indoor scene understanding, we propose a novel neural network that jointly obtains semantic segmentation and depth maps from single equirectangular panoramas. Our network manages, with a new convolutional approach, to leverage the context information provided by the panoramic image and exploit the combined information of semantics and depth. On the same topic, we combine deep learning and geometric solvers to recover the scaled structural layout of indoor environments from single non-central panoramas. This combination provides a fast implementation, thanks to the learning approach, and accurate results, due to the geometric solvers. Additionally, we propose several approaches for adapting networks to the distortion of omnidirectional projection models for outdoor navigation, and for domain adaptation of previous solutions.
All in all, this thesis seeks novel and innovative solutions that take advantage of omnidirectional cameras while overcoming the challenges they pose.
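An equirectangular panorama, as used in the indoor scene understanding work above, maps image columns to longitude and rows to latitude on the viewing sphere, which is also the source of the distortion the thesis's convolutional approach must handle. A minimal sketch of that pixel-to-ray mapping, under one common axis convention (not necessarily the one used in the thesis):

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction.

    Columns span longitude over [-pi, pi) left to right; rows span
    latitude over [pi/2, -pi/2] top to bottom. Conventions vary, so
    treat this as one common choice, not the only one."""
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)   # right
    y = math.sin(lat)                   # up
    z = math.cos(lat) * math.cos(lon)   # forward
    return x, y, z

# The panorama centre looks straight ahead:
print(pixel_to_ray(512, 256, 1024, 512))  # → (0.0, 0.0, 1.0)
```

Note that pixels near the top and bottom rows bunch up around the poles, which is exactly why standard convolutions, designed for uniform perspective grids, need adaptation for this projection.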

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing has opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering - the craft of convincingly combining real and virtual parts into a new composite scene - needs a powerful rendering method to obtain a photorealistic result. The ray tracing algorithm provides an excellent basis for photorealistic rendering as well as advantages over other methods, and it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow for the inclusion of real-world imagery in interactive applications. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time light-probe device to capture the incident light and include it in the rendering process to achieve convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real time. Two methods - video billboards and a live 3D visual hull reconstruction - are discussed.
The implementation of live mixed reality systems is based on a number of technologies besides rendering, and a comprehensive understanding of the related methods and hardware is necessary. Large parts of this thesis hence deal with the discussion of technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
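The differential rendering method mentioned above composites a virtual object into live video by rendering the reconstructed local scene twice, with and without the object, and adding the difference (shadows, reflections, colour bleeding) to the real camera frame. A minimal image-space sketch in the spirit of that approach (array names and the clipping choice are assumptions, not the thesis's implementation):

```python
import numpy as np

def differential_composite(camera, with_obj, without_obj, obj_mask):
    """Differential rendering composite.

    camera      -- real video frame, float array in [0, 1]
    with_obj    -- synthetic render of local scene plus virtual object
    without_obj -- synthetic render of local scene alone
    obj_mask    -- boolean mask of pixels covered by the virtual object

    The real frame receives only the *change* the object causes; the
    object's own pixels are taken from the full synthetic render."""
    delta = with_obj - without_obj              # object's effect on the scene
    out = np.clip(camera + delta, 0.0, 1.0)    # real frame plus that effect
    out[obj_mask] = with_obj[obj_mask]          # object pixels from the render
    return out
```

A shadow cast by the virtual object shows up as a negative delta, darkening the corresponding real pixels without ever replacing them wholesale, which is what makes the composite convincing.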

    Patterns and Pattern Languages for Mobile Augmented Reality

    Mixed Reality is a relatively new field in computer science which uses technology as a medium to provide modified or enhanced views of reality or to virtually generate a new reality. Augmented Reality is a branch of Mixed Reality which blends the real world, as viewed through a computer interface, with virtual objects generated by a computer. The 21st century commodification of mobile devices with multi-core Central Processing Units, Graphics Processing Units, high-definition displays and multiple sensors controlled by capable Operating Systems such as Android and iOS means that Mobile Augmented Reality applications have become increasingly feasible. Mobile Augmented Reality is a multi-disciplinary field requiring a synthesis of many technologies such as computer graphics, computer vision, machine learning and mobile device programming, while also requiring theoretical knowledge of diverse fields such as Linear Algebra, Projective and Differential Geometry, Probability and Optimisation. This multi-disciplinary nature has led to a fragmentation of knowledge into various specialisations, making it difficult to integrate different solution components into a coherent architecture. Software design patterns provide a solution space of tried and tested best practices for a specified problem within a given context. The solution space is non-prescriptive and is described in terms of relationships between roles that can be assigned to software components. Architectural patterns are used to specify high-level designs of complete systems, as opposed to domain or tactical level patterns that address specific lower-level problem areas. Pattern Languages comprise multiple software patterns combined in multiple possible sequences to form a language, with the individual patterns forming the language vocabulary while the valid sequences through the patterns define the grammar.
Pattern Languages provide flexible generalised solutions within a particular domain that can be customised to solve problems of differing characteristics and levels of complexity within the domain. The specification of one or more Pattern Languages tailored to the Mobile Augmented Reality domain can therefore provide a generalised guide for the design and architecture of Mobile Augmented Reality applications from an architectural level down to the "nuts-and-bolts" implementation level. While there is a large body of research into the technical specialisations pertaining to Mobile Augmented Reality, there is a dearth of up-to-date literature covering Mobile Augmented Reality design. This thesis fills this vacuum by: 1. Providing architectural patterns that provide the spine on which the design of Mobile Augmented Reality artefacts can be based; 2. Documenting existing patterns within the context of Mobile Augmented Reality; 3. Identifying new patterns specific to Mobile Augmented Reality; and 4. Combining the patterns into Pattern Languages for Detection & Tracking, Rendering & Interaction and Data Access for Mobile Augmented Reality. The resulting Pattern Languages support design at multiple levels of complexity, from an object-oriented framework down to specific one-off Augmented Reality applications. The practical contribution of this thesis is the specification of architectural patterns and Pattern Languages that provide a unified design approach for both the overall architecture and the detailed design of Mobile Augmented Reality artefacts. The theoretical contribution is a design theory for Mobile Augmented Reality gleaned from the extraction of patterns and the creation of one or more pattern languages.

    Augmented Reality as a Potential Tool for Filmmaking

    Augmented Reality (AR) has been used in a wide variety of industries. The purpose of this study was to determine the suitability of this technology for use in filmmaking. One of the problems on a film set is the time taken to block a scene. Blocking involves the placement of subjects and props within a scene. Different ideas have been used for blocking, including previsualisation and Virtual Reality (VR). This study proposed the use of AR as a tool to solve this problem. Marker-based and markerless AR were assessed in turn to determine their suitability for addressing the problem. The use of AR markers and QR codes was examined in comparison with the use of Simultaneous Localization and Mapping (SLAM) implementations. Marker-based AR requires a physical object to scan, while markerless AR works via the mapping of GPS coordinates. Experiments were conducted on the accuracy and code required for each type of AR. These involved calculating distances from the marker and writing the code required to create the virtual content. Surveys and expert interviews were conducted with filmmakers and people working in the AR industry to determine the usability and feasibility of the proposed application. This provided a qualitative perspective on the technology, as the acceptance of any new system is of equal importance to how it functions.
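The distance-from-marker measurements described above can be illustrated with the basic pinhole relation: a fronto-parallel marker of known physical size appears with a pixel extent inversely proportional to its distance from the camera. A hedged sketch of that relation only; the study's actual measurement procedure is not specified here, and real marker libraries recover a full pose rather than a single distance:

```python
def marker_distance(focal_px, marker_size_m, marker_size_px):
    """Distance to a fronto-parallel square marker from its apparent
    size, using the pinhole relation:
        size_px = focal_px * size_m / Z   =>   Z = focal_px * size_m / size_px
    focal_px       -- camera focal length in pixels
    marker_size_m  -- physical side length of the marker in metres
    marker_size_px -- measured side length of the marker in the image"""
    return focal_px * marker_size_m / marker_size_px

# A 10 cm marker seen 40 px wide by an 800 px focal length camera:
print(marker_distance(800, 0.10, 40))  # → 2.0 (metres)
```

The inverse relationship also shows why marker-based accuracy degrades with distance: at twice the range the marker spans half as many pixels, so each pixel of measurement error corresponds to twice the depth error.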

    Augmented reality device for first response scenarios

    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment and to provide capability for user tracking. Areas of applicability primarily include first response scenarios, with possible applications in maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to noninvasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualizations of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR) techniques, incorporating a video see-through Head Mounted Display (HMD) and a finger-bending sensor glove. (Augmented reality is a field of computer research which deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. Source: Wikipedia.) This dissertation is a compound document containing both a paper copy and a CD. The CD requires the following: Adobe Acrobat; Microsoft Office; Windows Media Player or RealPlayer.

    Full coverage displays for non-immersive applications

    Full Coverage Displays (FCDs), which cover the interior surface of a room with display pixels, can create novel user interfaces that take advantage of natural aspects of human perception and memory which we make use of in our everyday lives. However, past research has generally focused on FCDs for immersive experiences; the required hardware is generally prohibitively expensive for the average potential user, configuration is complicated for developers and end users, and building applications which conform to the room layout is often difficult. The goals of this thesis are: to create an affordable, easy to use (for developers and end users) FCD toolkit for non-immersive applications; to establish efficient pointing techniques in FCD environments; and to explore suitable ways to direct attention to out-of-view targets in FCDs. In this thesis I initially present and evaluate my own "ASPECTA Toolkit", which was designed to meet the above requirements. Users during the main evaluation were generally positive about their experiences, all completing the task in less than three hours. Further evaluation was carried out through interviews with researchers who used ASPECTA in their own work. These revealed similarly positive results, with feedback from users driving improvements to the toolkit. For my exploration into pointing techniques, Mouse and Ray-Cast approaches were chosen as most appropriate for FCDs. An evaluation showed that the Ray-Cast approach was fastest overall, while the mouse-based approach showed a small advantage in the front hemisphere of the room. For attention redirection I implemented and evaluated a set of four visual techniques. The results suggest that techniques which are static and lead all the way to the target may have an advantage, and that the cognitive processing time of a technique is an important consideration. "This work was supported by the EPSRC (grant number EP/L505079/1) and SurfNet (NSERC)." - Acknowledgement
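Ray-cast pointing in an FCD environment amounts to intersecting a pointing ray with the room's display surfaces to find the targeted pixel. As a sketch under the simplifying assumption of an axis-aligned rectangular room, which is not necessarily how the ASPECTA toolkit models room geometry:

```python
def raycast_to_room(origin, direction, room_min, room_max):
    """Intersect a pointing ray with the inside of an axis-aligned room.

    origin    -- (x, y, z) ray start inside the room
    direction -- (x, y, z) pointing direction, need not be normalised
    room_min, room_max -- opposite corners of the room

    For a ray starting inside the box, the hit is the nearest wall
    crossing along the ray: the minimum positive t over the three axes."""
    t_hit = float("inf")
    for axis in range(3):
        d = direction[axis]
        if d == 0.0:
            continue  # ray parallel to this pair of walls
        wall = room_max[axis] if d > 0 else room_min[axis]
        t = (wall - origin[axis]) / d
        if 0.0 < t < t_hit:
            t_hit = t
    return tuple(origin[i] + t_hit * direction[i] for i in range(3))

# Pointing along +x from the room centre hits the +x wall:
print(raycast_to_room((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                      (-2.0, -2.0, -2.0), (2.0, 2.0, 2.0)))
# → (2.0, 0.0, 0.0)
```

The resulting 3D hit point would then be mapped into the coordinate space of whichever projector or display panel covers that wall region.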