
    Spatial Mapping Using HoloLens 2. A Proposal for Improvement and an Analysis of Inner Workings

    In this thesis we focus on improving the spatial mapping of Microsoft's augmented reality glasses, the HoloLens 2. First, an in-depth analysis of the device's inner workings and limitations, based on public resources, is conducted. This is followed by a series of experiments in a small, simple indoor environment, designed to extract additional information about the mapping that could not be found through public resources. Some of the experiments were also conducted with a light detection and ranging (LiDAR) device, a Velodyne VLP-16, and a comparison between the two indicates that the HoloLens 2 performs at the same level. The information from the analysis and experiments provides a strong foundation for improving the mapping. Only a simple algorithm has been implemented and tested, but chapter 6 lists a series of recommendations and ideas for how to proceed with this project. The implemented algorithm uses plane fitting to "pull" points within a certain distance onto the plane, which helps to flatten structures that were originally flat, such as walls and floors.
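    The pull-to-plane step can be illustrated with a short sketch. The code below is a minimal, hypothetical reconstruction of the idea described in the abstract: fit a plane to a (pre-segmented) patch of the point cloud and project onto it every point within a distance threshold. The least-squares fit and the threshold value are assumptions for illustration, not the thesis implementation.

    ```python
    import numpy as np

    def fit_plane(points):
        """Least-squares plane fit: returns (unit normal, centroid)."""
        centroid = points.mean(axis=0)
        # The right singular vector with the smallest singular value of the
        # centered points is the plane normal.
        _, _, vt = np.linalg.svd(points - centroid)
        return vt[-1], centroid

    def pull_points_onto_plane(points, threshold=0.03):
        """Project every point whose distance to the fitted plane is below
        `threshold` onto that plane; leave all other points untouched."""
        normal, centroid = fit_plane(points)
        distances = (points - centroid) @ normal      # signed point-to-plane distance
        mask = np.abs(distances) < threshold
        pulled = points.copy()
        pulled[mask] -= np.outer(distances[mask], normal)
        return pulled

    # Example: flatten a noisy, roughly planar wall patch.
    rng = np.random.default_rng(0)
    wall = np.c_[rng.uniform(0, 2, 1000), rng.uniform(0, 2, 1000), rng.normal(0, 0.01, 1000)]
    flat_wall = pull_points_onto_plane(wall, threshold=0.03)
    ```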

    REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical space can enhance immersive experiences for users. To maximize coverage and minimize costs, practical applications often use a small number of RGB-D cameras placed sparsely around the environment for data capture. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to tackle these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras in a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established using a spherical calibration object; we show that this approach outperforms techniques based on planar calibration objects. Second, instead of modeling camera extrinsics with a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector is used to identify the viewer's 3D location and render the reflective scene accordingly. The limited field of view of a single camera is overcome by our calibrated RGB-D camera network system, which is scalable to capture an arbitrarily large environment. The rendering is accomplished by ray tracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
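    As an illustration of the calibration step, the sketch below estimates the rigid transformation between two cameras from matched 3D sphere-centre observations using a standard Kabsch/Procrustes solve. Using sphere centres as correspondences follows the abstract, but the function names and synthetic data are illustrative assumptions; the dissertation additionally evaluates polynomial and manifold-regression mappings and refines the initial estimate with bundle adjustment.

    ```python
    import numpy as np

    def estimate_rigid_transform(src, dst):
        """Return (R, t) minimising sum ||R @ src_i + t - dst_i||^2.
        src, dst: (N, 3) arrays of matched sphere centres seen by two cameras."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the least-squares solution.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Usage: recover a known transform from synthetic correspondences.
    rng = np.random.default_rng(1)
    centres_a = rng.uniform(-1, 1, (20, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    centres_b = centres_a @ R_true.T + t_true
    R, t = estimate_rigid_transform(centres_a, centres_b)   # R ~ R_true, t ~ t_true
    ```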

    Towards an Inclusive Virtual Dressing Room for Wheelchair-Bound Customers


    A Changing Dynamic: The Business-to-Individual Relationship

    As the Digital Revolution sweeps across the world, modern marketing strategies have had to adapt and change. Power has shifted from brands into the hands of consumers. This paper explores how this relationship is changing and offers a glimpse of how the new relationship manifests itself in two forms of digital marketing strategy: the digitization of the in-store experience and the gamification of the modern-day loyalty program. These strategies are intended to enhance the consumer experience and ensure that every interaction consumers have with the brand is a fulfilling one. The paper also discusses how digital technology has broken down barriers of communication, leading to a global Digital Democracy based on democratic values and transparency of action, demonstrating the power shift on a global, impactful scale.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential detrimental factors relate to suboptimal communication among staff, poor information flow, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that enhances operators' ergonomics by allowing perceptually enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the wealth of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. First, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
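    Gaze selection of an instrument is commonly implemented with a dwell-time rule: the item under the gaze point is selected once the gaze has rested on it long enough. The sketch below shows such a rule; it is an assumed mechanism for illustration, not the thesis's exact selection logic, and the one-second threshold is arbitrary.

    ```python
    import time

    class DwellSelector:
        """Select an on-screen target once gaze has rested on it for `dwell_s` seconds."""
        def __init__(self, dwell_s=1.0):
            self.dwell_s = dwell_s
            self.current = None
            self.since = None

        def update(self, target, now=None):
            """Feed the target currently under the gaze point (or None).
            Returns the target once the dwell threshold is reached, else None."""
            now = time.monotonic() if now is None else now
            if target != self.current:
                self.current, self.since = target, now
                return None
            if target is not None and now - self.since >= self.dwell_s:
                self.since = now      # re-arm so the same target can be re-selected
                return target
            return None

    # Usage: map each gaze sample to the instrument icon it falls on, then trigger
    # the robot hand-over when a selection fires.
    selector = DwellSelector(dwell_s=1.0)
    for t, gazed_item in [(0.0, "scalpel"), (0.5, "scalpel"), (1.2, "scalpel")]:
        chosen = selector.update(gazed_item, now=t)
        if chosen:
            print(f"deliver {chosen}")   # fires at t = 1.2 s
    ```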

    3D-LIVE : live interactions through 3D visual environments

    This paper explores Future Internet (FI) 3D-Media technologies and the Internet of Things (IoT) in real and virtual environments in order to sense and experiment with real-time interaction in live situations. The combination of FI testbeds and Living Labs (LL) would enable both researchers and users to explore the capacity to enter the 3D Tele-Immersive (TI) application market and to establish new requirements for FI technology and infrastructure. It is expected that combining FI technology pull and TI market pull would promote and accelerate the creation and adoption, by user communities such as sport practitioners, of innovative TI services within sport events.

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine the possible areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Some development content and installations were completed to prove and evaluate the described concepts and to be convincing. [...] To summarize, the resulting work involves not only artistic creativity, but also solving and combining technological hurdles in motion tracking, pattern recognition, force-feedback control, etc., with the available documentary footage on film, video, or images, and text, via a variety of devices [....] and programming and installing all the needed interfaces so that it all works in real time. Thus, the contribution to knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research work connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.
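    One of the named technical components, physics-based softbody simulation, can be sketched as an explicit-Euler mass-spring update. The topology, spring constant, and the absence of damping and collision handling below are simplifying assumptions for illustration only, not the simulation used in the thesis.

    ```python
    import numpy as np

    def step(positions, velocities, springs, rest_lengths, k=50.0, mass=1.0, dt=1e-3,
             gravity=np.array([0.0, -9.81, 0.0])):
        """One explicit-Euler step of a mass-spring softbody under gravity."""
        forces = np.tile(gravity * mass, (len(positions), 1))
        for (i, j), L0 in zip(springs, rest_lengths):
            d = positions[j] - positions[i]
            length = np.linalg.norm(d)
            if length > 1e-9:
                f = k * (length - L0) * d / length   # Hooke's law along the spring
                forces[i] += f
                forces[j] -= f
        velocities = velocities + dt * forces / mass
        positions = positions + dt * velocities
        return positions, velocities

    # Two particles joined by one spring, falling under gravity.
    pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    springs, rest = [(0, 1)], [1.0]
    for _ in range(100):
        pos, vel = step(pos, vel, springs, rest)
    ```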

    Research on Clothing Effects in a Kinect-Based 3D Virtual Fitting System

    This thesis describes and explains in detail innovative research on implementing a real-person 3D virtual fitting system using the Kinect device, and in particular methods for adjusting clothing size within such a system. Our system can control the 3D clothing model with the user's posture, display a real-time video feed, and match the position and motion of the 3D clothing model to the user's location. The 3D clothing model tracks the user's posture and position in real time, and its on-screen size is then scaled according to the user's on-screen size. The thesis also describes future plans and further work that could follow from this research.
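    The size-matching step can be illustrated with a short sketch: scale the garment overlay so that its shoulder width matches the pixel distance between the user's tracked shoulder joints, and anchor it at the shoulder midpoint. The joint coordinates and garment width below are illustrative assumptions, not values from the described system.

    ```python
    import numpy as np

    def garment_scale(shoulder_left, shoulder_right, garment_shoulder_width_px):
        """Scale factor that matches the garment overlay to the user's on-screen size.
        Inputs are 2D pixel coordinates of the tracked shoulder joints and the
        garment image's shoulder width in pixels."""
        user_width_px = np.linalg.norm(np.asarray(shoulder_right) - np.asarray(shoulder_left))
        return user_width_px / garment_shoulder_width_px

    def garment_anchor(shoulder_left, shoulder_right):
        """Place the garment at the midpoint between the shoulders."""
        return (np.asarray(shoulder_left) + np.asarray(shoulder_right)) / 2.0

    # Usage with joints as they might come from a Kinect skeleton stream:
    scale = garment_scale((300, 220), (420, 224), garment_shoulder_width_px=240)  # ~0.5
    anchor = garment_anchor((300, 220), (420, 224))                               # (360, 222)
    ```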