
    A compressive light field projection system

    For about a century, researchers and experimentalists have strived to bring glasses-free 3D experiences to the big screen. Much progress has been made, and light field projection systems are now commercially available. Unfortunately, available display systems usually employ dozens of devices, making such setups costly, energy-inefficient, and bulky. We present a compressive approach to light field synthesis with projection devices. For this purpose, we propose a novel, passive screen design that is inspired by angle-expanding Keplerian telescopes. Combined with high-speed light field projection and nonnegative light field factorization, we demonstrate that compressive light field projection is possible with a single device. We build a prototype light field projector and angle-expanding screen from scratch, evaluate the system in simulation, present a variety of results, and demonstrate that the projector can alternatively achieve super-resolved and high dynamic range 2D image display when used with a conventional screen. Funding: MIT Media Lab Consortium; Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); National Science Foundation (U.S.) (NSF grant 0831281).
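    The nonnegative light field factorization mentioned above is, at its core, a rank-constrained nonnegative matrix factorization of the light field into a small number of time-multiplexed nonnegative patterns. The sketch below shows the idea with standard multiplicative (Lee–Seung style) updates; the matrix shapes and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nonneg_lf_factorization(L, rank, iters=200, eps=1e-9):
    """Factorize a nonnegative light field matrix L (rays x angles)
    into L ~= A @ B with A, B >= 0, using multiplicative updates.
    Each of the `rank` components corresponds to one time-multiplexed
    pattern pair displayed by the projector and screen."""
    m, n = L.shape
    rng = np.random.default_rng(0)
    A = rng.random((m, rank)) + eps
    B = rng.random((rank, n)) + eps
    for _ in range(iters):
        # Multiplicative updates preserve nonnegativity and
        # monotonically reduce the Frobenius reconstruction error.
        B *= (A.T @ L) / (A.T @ A @ B + eps)
        A *= (L @ B.T) / (A @ B @ B.T + eps)
    return A, B

# Toy usage: approximate a random nonnegative "light field" with 3 frames.
L = np.abs(np.random.rand(64, 32))
A, B = nonneg_lf_factorization(L, rank=3)
print(np.linalg.norm(L - A @ B) / np.linalg.norm(L))  # relative error
```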

    Head Tracked Multi User Autostereoscopic 3D Display Investigations

    The research covered in this thesis encompasses a consideration of 3D television requirements and a survey of stereoscopic and autostereoscopic methods. This confirms that although there is a lot of activity in this area, very little of this work could be considered suitable for television. The principle of operation, the design of the components of the optical system, and the evaluation of two EU-funded (MUTED and HELIUM3D projects) glasses-free (autostereoscopic) displays are described. Four iterations of the display were built in MUTED, with the results of the first used in designing the second, third and fourth versions. The first three versions of the display use two 49-element arrays, one for the left eye and one for the right. A pattern of spots is projected onto the back of the arrays, and these are converted into a series of collimated beams that form exit pupils after passing through the LCD. An exit pupil is a region in the viewing field where either a left or a right image is seen across the complete area of the screen; the positions of these are controlled by a multi-user head tracker. A laser projector was used in the first two versions and, although this projector operated on holographic principles in order to obtain the spot pattern required to produce the exit pupils, it should be noted that the images seen by the viewers are not produced holographically, so the overall display cannot be described as holographic. In the third version, the laser projector is replaced with a conventional LCOS projector to address the stability and brightness issues discovered in the second version. In 2009, true 120 Hz displays became available; this led to the development of a fourth version of the MUTED display that uses a 120 Hz projector and LCD to overcome the problems of projector instability, produce full-resolution images, and simplify the display hardware. HELIUM3D, a multi-user autostereoscopic display based on laser scanning, is also described in this thesis. This display also operates by providing head-tracked exit pupils. It incorporates a red, green and blue (RGB) laser illumination source that illuminates a light engine. Light directions are controlled by a spatial light modulator and are directed to the users' eyes via a front screen assembly incorporating a novel Gabor superlens. Also described is the work covering the development of demonstrators that showed the principle of temporal multiplexing, and a version of the final display that had limited functionality; the reason for this was the late delivery of components required for a display with full functionality.
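    To make the head-tracked exit-pupil steering concrete: with a collimating element of focal length f, shifting a source spot laterally by Δx steers the resulting collimated beam by roughly Δθ ≈ Δx/f, so tracked eye positions map to spot offsets by simple paraxial geometry. The following is a hedged illustration of that mapping, not the MUTED implementation; all names and parameter values are assumptions.

```python
import math

def spot_offset_for_eye(eye_x, eye_z, f_mm):
    """Paraxial estimate: lateral source-spot offset (mm) needed so the
    collimated beam from an element of focal length f_mm is aimed at an
    eye at lateral position eye_x (mm) and distance eye_z (mm)."""
    theta = math.atan2(eye_x, eye_z)   # required steering angle
    return f_mm * math.tan(theta)      # corresponding spot displacement

# Example: eye 150 mm off-axis at 700 mm viewing distance, f = 50 mm.
print(round(spot_offset_for_eye(150.0, 700.0, 50.0), 2), "mm")
```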

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels create cues that include turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera doesn't preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the `Mona Lisa effect' makes all observers feel that they are looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems, which aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in that it allows a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, but for multiple observers at multiple viewpoints. Thirdly, we demonstrate that a random hole autostereoscopic multiview telepresence system further improves the conveyance of gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles, and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.

    Compact 3D Displays Based on Optical Path Analysis in Transparent Media

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2017 (advisor: Byoungho Lee). This dissertation investigates approaches for realizing compact three-dimensional (3D) display systems based on optical path analysis in optically transparent media. Reducing the physical distance between the 3D display apparatus and the observer is an intuitive way to realize a compact 3D display system. In addition, a system is also considered compact when it presents more 3D data than a conventional system while preserving the system's size. To implement compact 3D display systems with high bandwidth and a minimized structure, two optical phenomena are investigated: total internal reflection (TIR) in isotropic materials and double refraction in birefringent crystals. Both materials are optically transparent in the visible range, and ray tracing simulations for analyzing the optical path in the materials are performed in order to apply each optical phenomenon to conventional 3D display systems. An optical light-guide exploiting TIR is adopted to realize a compact multi-projection 3D display system. A projection image originating from the projection engine is incident on the optical light-guide and experiences multiple folds by TIR; the horizontal projection distance of the system is thereby effectively reduced to the thickness of the optical light-guide. After multiple folds, the projection image emerges from the exit surface of the optical light-guide and is collimated to form a viewing zone at the optimum viewing position. The optical path governed by TIR is analyzed by adopting an equivalent model of the optical light-guide. Through the equivalent model, the image distortion of the multiple view images in the optical light-guide is evaluated and compensated. To verify the feasibility of the proposed system, a ten-view multi-projection 3D display system with minimized projection distance is implemented.
To improve the bandwidth of multi-projection 3D display systems and head-mounted display (HMD) systems, a polarization multiplexing technique using a birefringent plate is proposed. Depending on the polarization state of the image and the direction of the optic axis of the birefringent plate, the optical path of rays varies in the birefringent material. Optical path switching in the lateral direction is applied in the multi-projection system to duplicate the viewing zone laterally. Likewise, a multi-focal function in the HMD is realized by adopting optical path switching in the longitudinal direction. To elucidate the detailed optical path switching and image characteristics such as astigmatism and color dispersion in the birefringent material, ray tracing simulations are performed while varying the optical structure, the optic axis, and the wavelength. By combining the birefringent material with a polarization rotation device, the bandwidth of both the multi-projection 3D display and the HMD is doubled in real time. Prototypes of both systems are implemented, and the feasibility of the proposed systems is verified through experiments. In this dissertation, the optical phenomena of TIR and double refraction realize compact 3D display systems: a multi-projection 3D display for public use and a multi-focal HMD for individual use. The optical light-guide and the birefringent plate can easily be combined with conventional 3D display systems, and it is expected that the proposed methods can contribute to the realization of future 3D display systems with compact size and high bandwidth.
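    The polarization-dependent path switching described above rests on the extraordinary ray's walk-off in a uniaxial plate: the Poynting vector makes an angle θ_S with the optic axis given by the textbook relation tan θ_S = (n_o/n_e)² tan θ_k, so a plate of thickness t shears the extraordinary image laterally by d ≈ t·tan(θ_S − θ_k), while the ordinary image passes straight through. The sketch below evaluates that relation for calcite; it is an illustrative calculation under stated assumptions, not the dissertation's simulation code.

```python
import math

def lateral_walkoff(t_mm, theta_k_deg, n_o, n_e):
    """Lateral displacement (mm) of the extraordinary ray after a
    uniaxial plate of thickness t_mm, with the wave normal at angle
    theta_k_deg to the optic axis (textbook walk-off relation)."""
    theta_k = math.radians(theta_k_deg)
    # Poynting-vector direction of the extraordinary wave:
    theta_s = math.atan((n_o / n_e) ** 2 * math.tan(theta_k))
    rho = theta_s - theta_k            # walk-off angle
    return t_mm * math.tan(rho)

# Calcite at 45 deg (n_o ~ 1.658, n_e ~ 1.486): ~1.1 mm shear per 10 mm plate.
print(round(lateral_walkoff(10.0, 45.0, 1.658, 1.486), 3), "mm")
```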

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects popping far out from the screen plane without glasses impresses even those viewers who have experienced other 3D displays before.

    Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to the complexity of rendering techniques for 2D displays. While rendering can be used in many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.

    In order to provide the required realism, light field displays aim to provide a wide field of view (up to 180°) while reproducing up to ~80 megapixels today. Building gigapixel light field displays is realistic within the next few years. Likewise, capturing live light fields involves using many synchronized cameras that cover the same wide field of view as the display and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies. Two major challenges in this process are addressed in this dissertation.

    The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, which describes the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially localized and angularly directed light rays in the so-called ray space. Plenoptic sampling theory provides the required conditions to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed. It regards the display as a signal processing channel and analyses it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e. the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them.

    While the geometrical topology of optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth and its spatial and angular resolution, in many cases this topology is not available to the user. Furthermore, there are many implementation details which cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurements. Measurement methods that involve the display showing specific test patterns, which are then captured by a single static or moving camera, are proposed in the thesis. Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images as they are reproduced by the display. The analysis reveals the empirical limits of the display in terms of pass-band in both the spatial and the angular dimension. Furthermore, the spatial resolution measurements are validated by subjective tests confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

    The second challenge is related to the massive number of views and pixels captured that have to be transmitted to the display. It clearly requires effective and efficient compression techniques to fit within the available bandwidth, as an uncompressed representation of such super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted, and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capturing, transmission, and rendering using a brute-force approach, limitations became apparent. Still, due to the best possible image quality achievable with dense multi-camera light field capturing and light ray interpolation, this approach was chosen as the basis of further work, despite the massive amount of bandwidth needed. Decompressing all camera images in all rendering nodes, however, is prohibitively time-consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach to reduce the amount of data required in the rendering nodes is proposed. This approach requires only rectangular parts (typically vertical bars in the case of a horizontal-parallax-only light field display) of the captured images to be available in the rendering nodes, which can be exploited to reduce the time spent decompressing video streams. However, partial decoding is not readily supported by common image and video codecs. In the thesis, approaches aimed at achieving partial decoding are proposed for H.264, HEVC, JPEG, and JPEG 2000, and the results are compared.

    The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs for achieving a compelling visual experience.
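    One direct use of such a display profile is sizing the capture rig: if the measured angular pass-band of the display corresponds to an angular resolution Δφ, cameras on an arc must be spaced no more than Δφ apart (as seen from the scene) so that the capture does not feed the display aliased angular content. A back-of-the-envelope helper in that spirit is sketched below; the function and parameter names are assumptions, not the thesis's tooling.

```python
import math

def min_camera_count(fov_deg, angular_res_deg):
    """Minimum number of cameras on an arc spanning fov_deg such that
    adjacent views are no farther apart than the display's measured
    angular resolution (a plenoptic-sampling-style rule of thumb)."""
    return math.ceil(fov_deg / angular_res_deg) + 1

# Example: a display with a 180-degree FOV and ~1-degree angular resolution.
print(min_camera_count(180.0, 1.0))  # -> 181 cameras
```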

    Review on Augmented Reality in Oral and Cranio-Maxillofacial Surgery: Toward 'Surgery-Specific' Head-Up Displays

    In recent years, there has been increasing interest in augmented reality as applied to the surgical field. We conducted a systematic review of the literature classifying augmented reality applications in oral and cranio-maxillofacial surgery (OCMS), in order to pave the way to future solutions that may ease the adoption of AR guidance in surgical practice. Publications containing the terms 'augmented reality' AND 'maxillofacial surgery', and the terms 'augmented reality' AND 'oral surgery', were searched in the PubMed database. Across the selected studies, we performed a preliminary breakdown according to general aspects, such as surgical subspecialty, year of publication and country of research; then, a more specific breakdown was provided according to technical features of AR-based devices, such as virtual data source, visualization processing mode, tracking mode, registration technique and AR display type. The systematic search identified 30 eligible publications. Most studies (14) were in orthognathic surgery, the minority (2) concerned traumatology, while 6 studies were in oncology and 8 in general OCMS. In 8 of 30 studies the AR systems were based on a head-mounted approach using smart glasses or headsets. In most of these cases (7), a video-see-through mode was implemented, while only 1 study described an optical-see-through mode. In the remaining 22 studies, the AR content was displayed on 2D displays (10), full-parallax 3D displays (6) and projectors (5); in 1 case the AR display type was not specified. AR applications are of increasing interest and adoption in oral and cranio-maxillofacial surgery; however, the quality of the AR experience represents the key requisite for a successful result. Widespread use of AR systems in the operating room may be encouraged by the availability of 'surgery-specific' head-mounted devices that should guarantee the accuracy required for surgical tasks and optimal ergonomics.

    Angular and spatial light modulation by single digital micromirror device for multi-image output and nearly-doubled étendue

    The "Angular Spatial Light Modulator" (ASLM) achieves simultaneous angular and spatial light modulation at a plane by combining Digital Micromirror Device (DMD) based programmable blazed grating beam steering and binary pattern sequencing. The ASLM system multiplies the number of effective output pixels of the DMD for increased spatial and/or angular degrees of freedom, and nearly-doubles the etendue output of the DMD. We implement multiple illumination and projection schemes to demonstrate ASLM-based extended FOV display, light-field projection, and multi-view display. We also implement time-multiplexed pupil segmented illumination to extend the pattern steering to two dimensions. (C) 2019 Optical Society of America under the terms of the OSA Open Access Publishing AgreementUniversity of Arizona; Department of Defense (DoD) National Defense Science and Engineering Graduate (NDSEG); Air Force Research Laboratory (AFRL); ARCS FoundationOpen access journalThis item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at [email protected]

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    The family of displays that aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by human vision as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representations by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D displays are explained. Based on the display operation principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process of introducing distortions. It also allows one to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or directly measured from a series of photographs. A comparative study presenting measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types is given. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays. The shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses Moiré, fixed-pattern-noise and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework which allows the user to select the so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, which is implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, the combination of precise light redirection and less precise face tracking is used for extending the head parallax. For some user-tracking algorithms, implementation details are given regarding execution on a mobile device or on a desktop computer with a graphics accelerator.
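    In its simplest form, the anti-aliasing step described above reduces to low-pass filtering each view so its spectrum fits inside the display's measured pass-band before the views are interleaved. Below is a minimal separable-Gaussian sketch of that pre-filtering step; the cutoff-to-sigma mapping is a stated assumption (placing the half-amplitude point at the cutoff), not the thesis's actual filter design.

```python
import numpy as np

def antialias_view(view, cutoff_cycles_per_px):
    """Low-pass a single view (2D array) with a separable Gaussian whose
    half-amplitude frequency roughly matches the display pass-band cutoff
    (assumed mapping: sigma ~= 0.187 / cutoff, from the Gaussian
    Fourier-transform pair)."""
    sigma = 0.187 / cutoff_cycles_per_px
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    # Separable filtering: convolve rows, then columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, view)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

# Example: band-limit a random "view" to a 0.1 cycles/pixel pass-band.
out = antialias_view(np.random.rand(120, 160), 0.1)
print(out.shape)
```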

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, providing better contrast and more natural-looking scenes. The combination of the two technologies, to gain the advantages of both, has until now been mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR and LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content to little more than the size of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representation of captured scenes.
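    Backward-compatible HDR encoding schemes of this general kind typically store a tone-mapped LDR base layer that legacy decoders can display, plus a small residual layer from which the HDR frame is reconstructed. The sketch below illustrates one common variant (a log-ratio residual quantized to 8 bits); it is a generic illustration under stated assumptions, not one of the five methods proposed in the thesis.

```python
import numpy as np

def encode_residual(hdr, ldr, eps=1e-6):
    """Encode an HDR frame (linear, float) against an LDR base layer
    (uint8, 0..255) as an 8-bit log-ratio residual plus the value range
    needed to invert the quantization."""
    base = ldr.astype(np.float64) / 255.0 + eps
    ratio = np.log2((hdr + eps) / base)
    lo, hi = float(ratio.min()), float(ratio.max())
    q = np.round((ratio - lo) / (hi - lo) * 255).astype(np.uint8)
    return q, (lo, hi)

def decode_residual(ldr, q, value_range, eps=1e-6):
    """Reconstruct an approximate HDR frame from base layer + residual."""
    lo, hi = value_range
    ratio = q.astype(np.float64) / 255.0 * (hi - lo) + lo
    base = ldr.astype(np.float64) / 255.0 + eps
    return base * np.exp2(ratio)

# Round trip on synthetic data: the mean reconstruction error is small.
hdr = np.random.rand(64, 64) * 100.0
ldr = np.clip((hdr / hdr.max()) ** (1 / 2.2) * 255, 0, 255).astype(np.uint8)
q, value_range = encode_residual(hdr, ldr)
rec = decode_residual(ldr, q, value_range)
print(float(np.abs(rec - hdr).mean()))
```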

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and AR complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, though it is not completely free of human-factors and other restrictions. AR applications also demand less time and effort, because it is not necessary to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical and biological applications and the human body. The third and final section contains a number of new and useful applications in daily living and learning.