45 research outputs found
Digital Holography Data Compression
Digital holography processing is a research topic related to the development of novel immersive visual applications. The huge amount of information conveyed by a digital hologram, and the properties that distinguish holographic data from conventional photographic data, require an understanding of the performance and limitations of current image and video coding standards. This paper proposes an architecture for objectively evaluating the performance of state-of-the-art compression techniques applied to digital holographic data.
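Such an objective evaluation architecture typically compresses the hologram data with a candidate codec and scores the reconstruction against the original. A minimal sketch of that scoring step, with uniform quantization standing in for a real codec (all names and parameters are illustrative, not taken from the paper):

```python
# Hypothetical sketch: rate-distortion scoring of a hologram plane, using
# uniform quantization as a crude stand-in for an actual image/video codec.
import numpy as np

def quantize(data, bits):
    """Uniform quantization to `bits` bits (placeholder for a real codec)."""
    lo, hi = data.min(), data.max()
    levels = 2 ** bits - 1
    q = np.round((data - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB against the reference's dynamic range."""
    mse = np.mean((ref - test) ** 2)
    peak = ref.max() - ref.min()
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
hologram = rng.standard_normal((256, 256))   # toy holographic plane
for bits in (4, 6, 8):
    print(f"{bits} bits -> {psnr(hologram, quantize(hologram, bits)):.1f} dB")
```

Real evaluations would substitute standard codecs (e.g. JPEG 2000, HEVC intra) for `quantize` and add perceptual metrics alongside PSNR.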
Soccer on Your Tabletop
We present a system that transforms a monocular video of a soccer game into a
moving 3D reconstruction, in which the players and field can be rendered
interactively with a 3D viewer or through an Augmented Reality device. At the
heart of our paper is an approach to estimate the depth map of each player,
using a CNN that is trained on 3D player data extracted from soccer video
games. We compare with state of the art body pose and depth estimation
techniques, and show results on both synthetic ground truth benchmarks, and
real YouTube soccer footage. (CVPR'18. Project: http://grail.cs.washington.edu/projects/soccer)
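The final lifting of the estimated per-player depth maps into a renderable 3D reconstruction amounts to back-projecting each depth pixel through the camera model. A minimal sketch assuming a simple pinhole camera with hypothetical intrinsics (fx, fy, cx, cy), not the paper's calibrated setup:

```python
# Back-project a depth map to a 3D point cloud under a pinhole camera model.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 6), 10.0)   # toy flat "player" 10 m from the camera
points = depth_to_points(depth, fx=500.0, fy=500.0, cx=3.0, cy=2.0)
```

The resulting points can then be textured from the video frame and placed on the field model for interactive or AR viewing.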
Augmented Reality and Its Application
Augmented Reality (AR) is a discipline concerned with interactive experiences of a real-world environment in which real-world objects are enhanced with computer-generated perceptual information. It has many potential applications in education, medicine, and engineering, among other fields. This book explores these potential uses, presenting case studies and investigations of AR for vocational training, emergency response, interior design, architecture, and much more.
Recent Advances in the Processing and Rendering Algorithms for Computer-Generated Holography
Digital holography is a novel medium that promises to revolutionize the way users interact with content. This paper presents an in-depth review of state-of-the-art algorithms for advanced processing and rendering of computer-generated holography. Open-access holographic data are selected and characterized as references for the experimental analysis. The design of a tool for digital hologram rendering and quality evaluation is presented and implemented as an open-source reference software, with the aim of easing entry into holography research and simplifying rendering and quality-evaluation tasks. Exploration studies focused on the reproducibility of the results are reported, showing a practical application of the proposed architecture for standardization activities. A final discussion of the results is provided, also highlighting future developments of the reconstruction software, which is made publicly available with this work.
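Hologram rendering tools of this kind typically reconstruct a view by numerically propagating the complex wavefield to the image plane. A common choice is the angular spectrum method, sketched below with illustrative parameters (this is not the reference software's implementation):

```python
# Angular spectrum propagation of a complex wavefield by distance z.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field (sample spacing `pitch`) by z metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function; evanescent components (arg < 0) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.ones((128, 128), dtype=complex)   # toy on-axis plane wave
out = angular_spectrum(field, wavelength=633e-9, pitch=8e-6, z=0.05)
```

A plane wave should propagate without changing its intensity profile, which makes this a convenient sanity check for any reconstruction pipeline.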
Coherent and Holographic Imaging Methods for Immersive Near-Eye Displays
Near-eye displays are designed to provide realistic 3D viewing experiences, which are in strong demand in applications such as remote machine operation, entertainment, and 3D design. However, contemporary near-eye displays still generate conflicting visual cues that degrade the immersive experience and hinder comfortable use. Approaches using coherent light, e.g., laser light, for display illumination are considered promising for tackling these deficiencies. In particular, coherent illumination enables holographic imaging, and holographic displays are expected to accurately recreate the true light waves of a desired 3D scene. However, using coherent light to drive displays introduces additional high-contrast noise in the form of speckle patterns, which must be addressed. Furthermore, imaging methods for holographic displays are computationally demanding and pose new challenges in analysis, speckle noise, and light modelling.
This thesis examines computational methods for near-eye displays in the coherent imaging regime using signal processing, machine learning, and geometrical (ray) and physical (wave) optics modelling. In the first part of the thesis, we concentrate on the analysis of holographic imaging modalities and develop corresponding computational methods. To tackle the high computational demands of holography, we adopt holographic stereograms as an approximate holographic data representation. We address the visual correctness of such a representation by developing a framework for analyzing the accuracy of the accommodation cues provided by a holographic stereogram in relation to its design parameters. Additionally, we propose a signal processing solution for speckle noise reduction that overcomes existing light-modelling issues causing visual artefacts. We also develop a novel holographic imaging method to accurately model lighting effects under challenging conditions, such as mirror reflections.
In the second part of the thesis, we approach the computational complexity of coherent display imaging through deep learning. We develop a coherent accommodation-invariant near-eye display framework that jointly optimizes static display optics and a display-image pre-processing network. Finally, we accelerate the proposed holographic imaging method via deep learning for real-time applications. This includes developing an efficient procedure for generating functional random 3D scenes, forming a large synthetic data set of multiperspective images, and training a neural network to approximate the holographic imaging method under real-time processing constraints.
Altogether, the methods developed in this thesis are shown to be highly competitive with state-of-the-art computational methods for coherent-light near-eye displays. The results demonstrate two alternative approaches for resolving the existing near-eye display problem of conflicting visual cues, using either static or dynamic optics together with computational methods suitable for real-time use. The presented results are therefore instrumental for next-generation immersive near-eye displays.
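As context for the speckle problem the thesis addresses: a standard baseline in the literature suppresses speckle by averaging intensities over several independent random-phase realizations, which lowers speckle contrast roughly as 1/sqrt(N). A toy simulation of that baseline idea (not the thesis's specific signal-processing method):

```python
# Simulate fully developed speckle and its suppression by frame averaging.
import numpy as np

def speckle_contrast(intensity):
    """Std/mean of intensity -- approximately 1.0 for fully developed speckle."""
    return intensity.std() / intensity.mean()

def averaged_speckle(n_frames, shape=(256, 256), seed=0):
    """Mean intensity over n_frames independent random-phase speckle fields."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(shape)
    for _ in range(n_frames):
        # Circular complex Gaussian field -> exponentially distributed intensity.
        field = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
        acc += np.abs(field) ** 2
    return acc / n_frames

c1 = speckle_contrast(averaged_speckle(1))
c16 = speckle_contrast(averaged_speckle(16))   # roughly c1 / 4
```

The trade-off, which motivates more sophisticated signal-processing solutions, is that the extra realizations cost display bandwidth or computation.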
Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces
The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. 
One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
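The relational lookup table idea can be illustrated as a mapping from camera pixels to surface coordinates and semantic regions, with a distance threshold separating touches from hovers. A hypothetical sketch, in which all keys, coordinates, regions, and thresholds are invented for illustration:

```python
# Toy relational lookup: camera pixel -> (surface point, semantic region),
# with a fingertip-to-surface distance threshold for touch vs. hover.
from dataclasses import dataclass

TOUCH_THRESHOLD_MM = 5.0   # closer than this counts as contact (illustrative)

@dataclass
class SurfaceEntry:
    surface_xyz: tuple   # 3D point on the physical surface
    region: str          # semantic region of the registered virtual content

# Built offline from camera/projector/surface calibration (toy values).
lookup = {
    (120, 200): SurfaceEntry((0.10, 0.02, 0.30), "nose"),
    (121, 200): SurfaceEntry((0.11, 0.02, 0.30), "nose"),
    (300, 410): SurfaceEntry((0.25, 0.40, 0.28), "forehead"),
}

def classify(pixel, distance_mm):
    """Map a detected fingertip observation to an (event, region) response."""
    entry = lookup.get(pixel)
    if entry is None:
        return ("off-surface", None)
    event = "touch" if distance_mm <= TOUCH_THRESHOLD_MM else "hover"
    return (event, entry.region)
```

In the dissertation's system the table additionally relates projector coordinates, so that a detected touch can trigger semantically matched visual and audio responses at the right place on the surface.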
Indoor Mapping and Reconstruction with Mobile Augmented Reality Sensor Systems
Augmented Reality (AR) makes it possible to display virtual, three-dimensional content directly within the real environment. Rather than showing arbitrary virtual objects at arbitrary locations, AR technology can also be used to display geodata in situ, at the very place to which the data refer. AR thus opens up the possibility of enriching the real world with virtual, location-based information. In this thesis, this flavour of AR is defined as "Fused Reality" and discussed in depth.
The practical value of this Fused Reality concept is well illustrated by its application to digital building models, where building-specific information, such as the course of pipes and cables inside the walls, can be displayed in place on the real object. To realize such an indoor Fused Reality application, some basic requirements must be met. A building can only be augmented with location-based information if a digital model of that building is available. While larger construction projects today are often planned and executed with the help of Building Information Modelling (BIM), so that a digital model is created together with the real building, digital models are usually not available for older existing buildings. Creating a digital model of an existing building manually is possible but very laborious. Once a suitable building model is available, an AR device must additionally be able to determine its own position and orientation inside the building relative to this model in order to display augmentations in the correct place.
This thesis examines and discusses various aspects of these problems. First, different ways of capturing indoor building geometry with sensor systems are discussed. Subsequently, an investigation is presented into how well modern AR devices, which typically also carry a multitude of sensors, are themselves suited for use as indoor mapping systems. The resulting indoor mapping data sets can then be used to reconstruct building models automatically. To this end, an automated, voxel-based indoor reconstruction method is presented and quantitatively evaluated on four data sets captured for this purpose with corresponding reference data. Furthermore, different ways of localizing mobile AR devices within a building and its building model are discussed. In this context, the evaluation of a marker-based indoor localization method is also presented. Finally, a new approach for aligning indoor mapping data sets with the coordinate-system axes is presented.
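The voxel-based reconstruction method starts, like most such pipelines, from a discretization of the mapped point cloud into an occupancy grid. A minimal sketch of that first step, with illustrative resolution and data:

```python
# Discretize an indoor-mapping point cloud into an occupancy voxel grid.
import numpy as np

def voxelize(points, voxel_size):
    """Return the set of occupied voxel indices for an (N, 3) point cloud."""
    idx = np.floor(points / voxel_size).astype(int)
    return set(map(tuple, idx))

# Toy "wall": a vertical plane of points at x = 1 m, 2 m wide, 2.5 m high.
wall = np.array([[1.0, y * 0.1, z * 0.1] for y in range(20) for z in range(25)])
occupied = voxelize(wall, voxel_size=0.5)
```

Downstream steps would then classify occupied, free, and unobserved voxels (e.g. via ray casting from the sensor poses) and extract planar wall, floor, and ceiling surfaces from the grid.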
Marker-free surgical navigation of rod bending using a stereo neural network and augmented reality in spinal fusion
The instrumentation of spinal fusion surgeries includes pedicle screw placement and rod implantation. While several surgical navigation approaches have been proposed for pedicle screw placement, less attention has been devoted to guiding the patient-specific adaptation of the rod implant. We propose a marker-free and intuitive Augmented Reality (AR) approach to navigate the bending process required for rod implantation. A stereo neural network is trained end-to-end on the stereo video streams of the Microsoft HoloLens to determine the locations of corresponding pedicle screw heads. From the digitized screw head positions, the optimal rod shape is calculated, translated into a set of bending parameters, and used to guide the surgeon with a novel navigation approach. In the AR-based navigation, the surgeon is guided step by step in the use of the surgical tools to achieve an optimal result. We have evaluated the performance of our method on human cadavers against two benchmark methods, conventional freehand bending and marker-based bending navigation, in terms of bending time and rebending maneuvers. We achieved an average bending time of 231 s with 0.6 rebending maneuvers per rod, compared to 476 s (3.5 rebendings) and 348 s (1.1 rebendings) for the freehand and marker-based benchmarks, respectively.
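The geometric core suggested by the abstract, deriving bending parameters from digitized screw head positions, can be sketched as segment lengths and bend angles along the polyline through the screw heads (an illustrative parameterization, not necessarily the authors' exact one):

```python
# Derive toy bending parameters (segment lengths, bend angles) for a rod
# polyline passing through digitized pedicle screw head positions.
import numpy as np

def bending_parameters(heads):
    """Segment lengths and bend angles (degrees) along an (N, 3) polyline."""
    heads = np.asarray(heads, dtype=float)
    segs = np.diff(heads, axis=0)                 # consecutive segment vectors
    lengths = np.linalg.norm(segs, axis=1)
    angles = []
    for a, b in zip(segs[:-1], segs[1:]):
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return lengths, angles

# Collinear heads give a zero bend; a right-angle kink gives 90 degrees.
lengths, angles = bending_parameters([[0, 0, 0], [1, 0, 0], [1, 1, 0]])
```

In the navigated workflow, each (length, angle) pair would correspond to one instructed action with the bending tool, with the AR overlay verifying the result before the next step.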
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue, and headaches after prolonged viewing, because users are required to focus on the screen plane (accommodation) while converging their eyes to a point in a different plane (convergence). Holoscopy is a 3D technology that aims to overcome these limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, since no constraints are placed on the setting; the algorithms should be invariant to most appearance-based variation of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise, and changes in lighting). Moreover, they should be able to estimate depth information from both types of holoscopic 3D images, unidirectional and omnidirectional, which provide horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on automating the thresholding techniques and cue identification needed for robust algorithms. A method for depth-through-disparity feature analysis has been built: the existing correlation between pixels one micro-lens pitch apart is exploited to extract the viewpoint images (VPIs), and the corresponding displacement among the VPIs is exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting the features used in this approach. They offer a solution to the problem of setting and extracting reliable features automatically, improving the depth estimation in terms of generalization, speed, and quality. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, is used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), meaning that the holoscopic 3D image can be converted into a multi-view 3D image pixel format. Both depth accuracy and fast execution times are achieved, improving the 3D depth map. For a 3D object to be recognized, the related foreground regions and depth map need to be identified; two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were therefore developed.
Both techniques improve over existing methods through their simplicity and full automation, producing the 3D interactive depth map without human interaction. The final contribution is a performance evaluation that provides an equitable measure of the success of the proposed techniques for foreground object segmentation, 3D interactive depth map creation, and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics, and their correlation with human perception of quality, are evaluated subjectively with the help of human participants.
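The sub-pixel shift-and-integration idea can be illustrated on a 1D toy signal: several low-resolution samplings taken at different sub-pixel offsets are placed back at their true positions and averaged into one higher-resolution estimate (illustrative only, not the thesis's implementation):

```python
# Fuse several low-resolution views with known sub-pixel offsets into one
# higher-resolution signal (shift-and-integrate, 1D toy version).
import numpy as np

def shift_and_integrate(low_res_views, shifts, factor):
    """Fuse low-res views whose shifts are given in high-res pixel units."""
    n = len(low_res_views[0]) * factor
    acc = np.zeros(n)
    count = np.zeros(n)
    for view, s in zip(low_res_views, shifts):
        for i, v in enumerate(view):
            acc[i * factor + s] += v      # place each sample at its true slot
            count[i * factor + s] += 1
    count[count == 0] = 1                 # avoid division by zero in gaps
    return acc / count

# Toy high-res signal sampled twice with a one-high-res-pixel offset.
hi = np.arange(8, dtype=float)
views = [hi[0::2], hi[1::2]]              # two interleaved low-res samplings
fused = shift_and_integrate(views, shifts=[0, 1], factor=2)
```

With the two interleaved samplings the high-resolution signal is recovered exactly; in the holoscopic setting the offsets come from the known micro-lens geometry, and interpolation fills any unsampled slots.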