132 research outputs found

    Towards High-Frequency Tracking and Fast Edge-Aware Optimization

    Full text link
    This dissertation advances the state of the art in AR/VR tracking systems by increasing tracking frequency by orders of magnitude, and it proposes an efficient algorithm for edge-aware optimization. AR/VR is a natural way of interacting with computers, where the physical and digital worlds coexist. We are on the cusp of a radical change in how humans interact with computing. Humans are sensitive to small misalignments between the real and the virtual world, so tracking at kilohertz frequencies becomes essential. Current vision-based systems fall short, as their tracking frequency is implicitly limited by the frame rate of the camera. This thesis presents a prototype system that can track at frequencies orders of magnitude higher than state-of-the-art methods using multiple commodity cameras. The proposed system exploits characteristics of the camera traditionally considered flaws, namely rolling shutter and radial distortion. The experimental evaluation shows the effectiveness of the method for various degrees of motion. Furthermore, edge-aware optimization is an indispensable tool in the computer vision arsenal for accurate filtering of depth data and image-based rendering, which is increasingly used for content creation and geometry processing for AR/VR. As applications increasingly demand higher resolution and speed, methods must scale accordingly. This dissertation proposes such an edge-aware optimization framework, one that is efficient, accurate, and algorithmically scalable, desirable traits not found jointly in the state of the art. The experiments show the effectiveness of the framework in a multitude of computer vision tasks such as computational photography and stereo. Comment: PhD thesis
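A quick back-of-the-envelope calculation shows why rolling shutter helps here: each scanline of a rolling-shutter sensor is exposed at a slightly different time, so treating rows as individual measurements multiplies the temporal sampling rate far beyond the frame rate. The row count, frame rate, and linear-readout model in this sketch are illustrative assumptions, not the dissertation's actual parameters:

```python
# Hypothetical illustration of rolling-shutter timing; all constants assumed.
ROWS = 1080          # assumed sensor height in scanlines
FPS = 30             # assumed frame rate of a commodity camera
READOUT = 1.0 / FPS  # assume the row readout spans the full frame period

def row_timestamp(frame_index: int, row: int) -> float:
    """Capture time of one scanline under a linear row-readout model."""
    return frame_index / FPS + (row / ROWS) * READOUT

effective_rate_hz = ROWS * FPS  # 32,400 row-level samples per second
print(f"frame rate: {FPS} Hz, row-level sampling: {effective_rate_hz} Hz")
```

Under these assumptions a 30 Hz camera yields tens of thousands of row-level observations per second, which is consistent with the kilohertz-range tracking the abstract describes.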

    Scalable and Extensible Augmented Reality with Applications in Civil Infrastructure Systems.

    Full text link
    In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three respects: 1) it reinforces the connections between people and objects and promotes engineers' appreciation of their working context; 2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; 3) it offsets the significant cost of 3D model engineering by including the real-world background. The research has successfully overcome several long-standing technical obstacles in AR and investigated technical approaches to fundamental challenges that have prevented the technology from being usefully deployed in CIS applications, such as aligning virtual objects with the real environment continuously across time and space; blending virtual entities with their real background faithfully to create a sustained illusion of co-existence; and integrating these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community and can be readily reused and extended by other researchers and engineers. The research findings have been evaluated in several challenging CIS applications where the potential for significant economic and social impact is high. Validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables spotters to “see” buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96145/1/dsuyang_1.pd

    The matrix revisited: A critical assessment of virtual reality technologies for modeling, simulation, and training

    Get PDF
    A convergence of affordable hardware, current events, and decades of research has advanced virtual reality (VR) from the research lab into the commercial marketplace. Since its inception in the 1960s, and over the following three decades, the technology was portrayed as a rarely used, high-end novelty for special applications. Despite the high cost, applications expanded into defense, education, manufacturing, and medicine. The promise of VR for entertainment arose in the early 1990s, and by 2016 several consumer VR platforms had been released. With VR now accessible in the home, and with the isolationist lifestyle adopted during the COVID-19 global pandemic, VR is viewed as a potential tool to enhance remote education. Drawing upon over 17 years of experience across numerous VR applications, this dissertation examines the optimal use of VR technologies in the areas of visualization, simulation, training, education, art, and entertainment. It demonstrates that VR is well suited for education and training applications, with modest advantages in simulation. Using this context, the case is made that VR can play a pivotal role in the future of education and training in a globally connected world.

    Immersive Automotive Stereo Vision

    Get PDF
    Recently, the first in-car augmented reality (AR) system was introduced to the market. It features various virtual 3D objects drawn on top of a 2D live video feed, which is displayed on a central display inside the vehicle. Our goal in this thesis is to develop an approach that not only augments a 2D video but reconstructs a 3D scene of the vehicle's surrounding driving environment. This opens up various possibilities, including displaying this 3D scan on a head-mounted display (HMD) as part of a Mixed Reality (MR) application, which requires a convincing and immersive visualization of the surroundings at high rendering speed. To accomplish this task, we limit ourselves to a single front-mounted stereo camera on a vehicle and fuse stereo measurements temporally. First, we analyze the effects of temporal stereo fusion thoroughly. We estimate the theoretically achievable accuracy and highlight the limitations of temporal fusion and of our assumptions. We also derive a 1D extended information filter and a 3D extended Kalman filter to fuse measurements temporally, which substantially reduces the depth error in our simulations. We integrate these results into a novel dense 3D reconstruction framework, which models each point as a probabilistic filter. Projecting 3D points into the newest image allows us to fuse measurements temporally after a clustering stage, which also gives us the ability to handle multiple depth layers per pixel. The 3D reconstruction framework is point-based, but it also has a mesh-based extension. For that, we leverage a novel depth image triangulation method to render the scene on the GPU using only RGB and depth images as input. We exploit the nature of urban scenery and the vehicle's movement by first identifying and then rendering pixels of the previous stereo camera frame that are no longer seen in the current frame. These pixels at the previous image border form a tube over multiple frames, which we call a tube mesh, and have the highest observable resolution. We are also able to offload intensive filter propagation computations completely to the GPU. Furthermore, we demonstrate a way to create a dense, dynamic virtual sky background from the same camera to accompany our reconstructed 3D scene. We evaluate our method against other approaches in an extensive benchmark on the popular KITTI visual odometry dataset and on the synthetic SYNTHIA dataset. Besides stereo error metrics in image space, we also compare how the approaches perform with respect to the available depth structure in the reference depth image and in their ability to predict the appearance of the scene from different viewing angles on SYNTHIA. Our method shows significant improvements in terms of disparity and view prediction errors. We also achieve a rendering speed high enough to fulfill the frame-rate requirements of modern HMDs. Finally, we highlight challenges in the evaluation, perform ablation studies of our framework, and conclude with a qualitative showcase on different datasets, including a discussion of failure cases.
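The temporal-fusion idea can be illustrated with a minimal 1D sketch in the spirit of the thesis's information filter. The focal length, baseline, disparity noise model, and simulated measurements below are assumptions for illustration, not the author's implementation:

```python
# Hypothetical 1D information-filter fusion of repeated stereo depth
# measurements of one static point; all constants are assumed values.
F, B = 700.0, 0.5   # assumed focal length (pixels) and stereo baseline (m)
SIGMA_D = 0.5       # assumed disparity measurement noise (pixels)

def depth_from_disparity(d: float) -> float:
    return F * B / d

def depth_sigma(d: float) -> float:
    # First-order error propagation: |dz/dd| = f*b/d^2
    return F * B / d ** 2 * SIGMA_D

# Information form: each measurement's information (inverse variance)
# simply adds, and the state is the information-weighted mean.
info, state = 0.0, 0.0
for d in [35.2, 34.8, 35.5, 35.0]:  # simulated disparities over four frames
    z, s = depth_from_disparity(d), depth_sigma(d)
    w = 1.0 / s ** 2
    state = (info * state + w * z) / (info + w)
    info += w

print(f"fused depth: {state:.2f} m, std: {info ** -0.5:.3f} m")
```

Fusing n independent measurements of comparable quality shrinks the depth standard deviation by roughly a factor of sqrt(n), consistent with the improvement the abstract reports in simulation.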

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences enabled by virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline and lack a holistic view of the entire process. A more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is therefore required to provide a thorough study of the Metaverse development pipeline. To address these issues, this survey presents a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities for contribution.

    Augmented reality device for first response scenarios

    Get PDF
    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment and to provide capability for user tracking. Areas of applicability include primarily first response scenarios, with possible applications in the maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to non-invasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has on-demand access to information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualization of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR) techniques, incorporating a video-see-through Head Mounted Display (HMD) and a finger-bending sensor glove.
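To illustrate how fiducial-based registration of this kind typically works (a hedged sketch, not this system's actual code): once a detector reports the four image corners of a marker of known physical size, the camera pose relative to the marker follows from a perspective-n-point solve. The intrinsics, marker size, and corner coordinates below are placeholder values:

```python
import numpy as np
import cv2

MARKER_SIZE = 0.15  # assumed marker edge length in meters
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)  # assume negligible lens distortion

# 3D marker corners in the marker's own frame (z = 0 plane).
half = MARKER_SIZE / 2
object_pts = np.array([[-half, half, 0.0], [half, half, 0.0],
                       [half, -half, 0.0], [-half, -half, 0.0]])

# 2D corners as a marker detector would report them (placeholder values).
image_pts = np.array([[300.0, 200], [380, 205], [375, 280], [295, 275]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    print("camera-to-marker translation (m):", tvec.ravel())
```

Inverting the resulting transform gives the user's position relative to the labeled environment, which is the quantity the tracking and team visualization described above would consume.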

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Get PDF
    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the need to wear 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects pop far out from the screen plane without glasses impresses even viewers who have experienced other 3D displays before.
Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to the complexity of rendering techniques for 2D displays. While rendering can be used in many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.
In order to provide the required realism, light field displays aim to provide a wide field of view (up to 180°) while reproducing up to ~80 MPixels today. Building gigapixel light field displays is realistic within the next few years. Likewise, capturing live light fields involves many synchronized cameras that cover the display's wide field of view and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies. Two major challenges in this process are addressed in this dissertation.
The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, the smallest angle over which the display can control color individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially-localized and angularly-directed light rays in the so-called ray space. Plenoptic Sampling Theory provides the conditions required to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed. It regards the display as a signal processing channel and analyzes it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e., the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them.
While the geometrical topology of optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth and its spatial and angular resolution, in many cases this topology is not available to the user. Furthermore, there are many implementation details which cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurement. Measurement methods in which the display shows specific test patterns that are then captured by a single static or moving camera are proposed in the thesis. Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images as they are reproduced by the display. The analysis reveals the empirical limits of the display in terms of pass-band in both the spatial and the angular dimension. Furthermore, the spatial resolution measurements are validated by subjective tests confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.
The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. It clearly requires effective and efficient compression techniques to fit into the available bandwidth, as an uncompressed representation of such super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted, and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capture, transmission, and rendering using a brute-force approach, limitations became apparent. Still, because dense multi-camera light field capture and light ray interpolation achieve the best possible image quality, this approach was chosen as the basis of further work, despite the massive amount of bandwidth needed. Decompressing all camera images in all rendering nodes, however, is prohibitively time consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach to reduce the amount of data required in the rendering nodes is proposed. This approach requires rectangular parts (typically vertical bars in the case of a Horizontal Parallax Only light field display) of the captured images to be available in the rendering nodes, which can be exploited to reduce the time spent on decompressing video streams. However, partial decoding is not readily supported by common image and video codecs. In the thesis, approaches aimed at achieving partial decoding are proposed for H.264, HEVC, JPEG, and JPEG2000, and the results are compared.
The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs to achieve a compelling visual experience.
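The quoted ~20 gigabits per second is easy to sanity-check with back-of-the-envelope arithmetic. The ~80 MPixel figure comes from the abstract; the bit depth and frame rate below are assumptions chosen to show how a rate of that order arises:

```python
# Back-of-the-envelope bandwidth of an uncompressed super-multiview stream.
pixels = 80e6        # ~80 MPixels reproduced by the display (from the text)
bits_per_pixel = 12  # assumed raw YUV 4:2:0 representation
fps = 20             # assumed capture/display rate

raw_gbps = pixels * bits_per_pixel * fps / 1e9
print(f"uncompressed light field stream: {raw_gbps:.1f} Gbit/s")  # 19.2
```

Higher frame rates or full RGB sampling push the raw rate several times higher still, which is why the partial-decoding schemes discussed above matter.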

    Enhancing interaction in mixed reality

    Get PDF
    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems; the most recent devices can present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually invisible to the human eye. By amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality so that a user can enter text on a physical keyboard while immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input. Our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on what we learned from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences together with guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcase its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.