Recreating Sheffield's Medieval Castle in situ using Outdoor Augmented Reality
Augmented Reality (AR) experiences generally function well indoors, inside buildings, where, typically, lighting conditions are stable, the scale of the environment is small and fixed, and markers can be easily placed. This is not the case for outdoor AR experiences. In this paper, we present practical solutions for an AR application that virtually restores Sheffield's medieval castle to the Castlegate area in Sheffield city centre where it once stood. A simplified 3D model of the area, together with sensor fusion, is used to support a user alignment process and subsequent orientation tracking. Rendering realism is improved by using directional lighting matching that of the sun, a virtual ground plane and depth masking based on the same model used in the alignment stage. The depth masking ensures the castle sits correctly in front of or behind real buildings, as necessary, thus addressing the occlusion problem. The Unity game engine is used for development and the resulting app runs in real time on recent high-spec Android mobile phones.
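The directional-lighting step described above amounts to pointing a directional light along the sun's current direction. As a minimal sketch (not the paper's implementation), assuming the solar azimuth and elevation have already been obtained from an ephemeris, the conversion to a light direction vector is:

```python
import math

def sun_direction(azimuth_deg: float, elevation_deg: float) -> tuple:
    """Convert solar azimuth/elevation into a unit vector pointing toward
    the sun, in a frame with x = east, y = up, z = north."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.sin(az)   # east component
    y = math.sin(el)                  # up component
    z = math.cos(el) * math.cos(az)   # north component
    return (x, y, z)

def light_direction(azimuth_deg: float, elevation_deg: float) -> tuple:
    """Sunlight travels opposite the sun vector, so negate it for a
    directional light's forward direction."""
    x, y, z = sun_direction(azimuth_deg, elevation_deg)
    return (-x, -y, -z)
```

In a game engine this vector would simply be assigned as the directional light's forward axis each frame, or whenever the clock advances far enough to matter.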
A Universal Volumetric Haptic Actuation Platform
In this paper, we report a method of implementing a universal volumetric haptic actuation platform which can be adapted to fit a wide variety of visual displays with flat surfaces. This platform aims to enable the simulation of the 3D features of input interfaces. This goal is achieved using four readily available stepper motors in a diagonal cross configuration, with which we can quickly change the position of a surface in a manner that can render these volumetric features. In our research, we use a Microsoft Surface Go tablet placed on the haptic actuation platform to replicate the exploratory features of virtual keyboard keycaps displayed on the touchscreen. We ask seven participants to explore the surface of a virtual keypad comprising 12 keycaps. As a second task, random key positions are announced one at a time, which the participant is expected to locate. These experiments are used to understand how, and with what fidelity, the volumetric feedback improves performance (detection time, track length, and error rate) when locating specific keycaps with haptic feedback in the absence of visual feedback. Participants complete the tasks with significant success (p < 0.05). In addition, their ability to feel convex keycaps is confirmed by the subjective comments.
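One way to picture the platform's control problem: each touch position on the flat screen is mapped to a keycap, and the steppers lift the surface whenever the finger is over a cap, so the cap is felt as a raised volume. A hedged sketch of that mapping, with the grid size, key pitch, and lift height all assumed for illustration rather than taken from the paper:

```python
# Hypothetical sketch: map a touch position on a 4x3 virtual keypad to a
# target surface height, so four stepper motors can raise the platform
# over a convex keycap. Pitch and lift values are illustrative assumptions.

KEY_ROWS, KEY_COLS = 4, 3          # 12 keycaps, as in the study
KEY_W, KEY_H = 30.0, 30.0          # key pitch in mm (assumed)
CAP_RAISE_MM = 1.5                 # surface lift over a keycap (assumed)

def key_at(x_mm: float, y_mm: float):
    """Return (row, col) of the keycap under the touch, or None off-grid."""
    col = int(x_mm // KEY_W)
    row = int(y_mm // KEY_H)
    if 0 <= row < KEY_ROWS and 0 <= col < KEY_COLS:
        return row, col
    return None

def target_height(x_mm: float, y_mm: float) -> float:
    """Height the steppers should drive the surface to at this touch point."""
    return CAP_RAISE_MM if key_at(x_mm, y_mm) is not None else 0.0
```

In a real controller this lookup would run inside the touch-event loop, with the height command translated into step counts for the four motors.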
Decaf: Monocular Deformation Capture for Face and Hand Interactions
Existing methods for 3D tracking from monocular RGB videos predominantly
consider articulated and rigid objects. Modelling dense non-rigid object
deformations in this setting remained largely unaddressed so far, although such
effects can improve the realism of the downstream applications such as AR/VR
and avatar communications. This is due to the severe ill-posedness of the
monocular view setting and the associated challenges. While it is possible to
naively track multiple non-rigid objects independently using 3D templates or
parametric 3D models, such an approach would suffer from multiple artefacts in
the resulting 3D estimates such as depth ambiguity, unnatural intra-object
collisions and missing or implausible deformations. Hence, this paper
introduces the first method that addresses the fundamental challenges depicted
above and that allows tracking human hands interacting with human faces in 3D
from single monocular RGB videos. We model hands as articulated objects
inducing non-rigid face deformations during an active interaction. Our method
relies on a new hand-face motion and interaction capture dataset with realistic
face deformations acquired with a markerless multi-view camera system. As a
pivotal step in its creation, we process the reconstructed raw 3D shapes with
position-based dynamics and an approach for non-uniform stiffness estimation of
the head tissues, which results in plausible annotations of the surface
deformations, hand-face contact regions and head-hand positions. At the core of
our neural approach are a variational auto-encoder supplying the hand-face
depth prior and modules that guide the 3D tracking by estimating the contacts
and the deformations. Our final 3D hand and face reconstructions are realistic
and more plausible compared to several baselines applicable in our setting,
both quantitatively and qualitatively.
https://vcai.mpi-inf.mpg.de/projects/Deca
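The variational auto-encoder that supplies the hand-face depth prior rests on two standard ingredients: the reparameterization trick for sampling the latent code, and a KL regulariser pulling the posterior toward a unit Gaussian. A framework-free sketch of just those two pieces (not Decaf's actual model) is:

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1): the VAE
    reparameterization trick, which keeps sampling differentiable
    w.r.t. mu and log_var in an autodiff framework."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)), summed over latent dimensions;
    this is the prior-matching term of the VAE loss."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

A trained encoder would predict `mu` and `log_var` from the input, and the decoder would map the sampled `z` back to depth; those networks are framework-specific and omitted here.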
Virtual Reality and Augmented Reality - 15th EuroVR International Conference, EuroVR 2018, London, UK, October 22-23, 2018, Proceedings
Digital Twins in Built Environments: An Investigation of the Characteristics, Applications, and Challenges
The concept of digital twins is proposed as a new technology-led advancement to support the processes of the design, construction, and operation of built assets. Commonalities between the emerging definitions of digital twins describe them as digital or cyber environments that are bidirectionally linked to their physical or real-life replica to enable simulation and data-centric decision making. Studies have started to investigate their role in the digitalization of asset delivery, including the management of built assets at different levels within the building and infrastructure sectors. However, questions persist regarding their actual applications and implementation challenges, including their integration with other digital technologies (e.g., building information modeling, virtual and augmented reality, Internet of Things, artificial intelligence, and cloud computing). Within the built environment context, this study seeks to analyze the definitions and characteristics of a digital twin, its interactions with other digital technologies used in built asset delivery and operation, and its applications and challenges. To achieve this aim, the research utilizes a thorough literature review and semi-structured interviews with ten industry experts. The literature review explores the merits and the relevance of digital twins relative to existing digital technologies and highlights potential applications and challenges for their implementation. The data from the semi-structured interviews are classified into five themes: definitions and enablers of digital twins, applications and benefits, implementation challenges, existing practical applications, and future development. The findings provide a point of departure for future research aimed at clarifying the relationship between digital twins and other digital technologies and their key implementation challenges.
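The bidirectional link in those definitions has a simple shape: sensor readings flow from the physical asset into its digital replica, and decisions computed on the replica flow back as commands. A minimal hypothetical sketch of that loop (class names, the sensor, and the decision rule are all illustrative assumptions, not from any cited system):

```python
class PhysicalAsset:
    """Stand-in for a real built asset with one sensor and one actuator."""
    def __init__(self):
        self.temperature = 22.0
        self.setpoint = 22.0

    def read_sensors(self) -> dict:
        return {"temperature": self.temperature}

    def apply_command(self, command: dict) -> None:
        self.setpoint = command.get("setpoint", self.setpoint)

class DigitalTwin:
    """Digital replica holding the mirrored state and making decisions on it."""
    def __init__(self):
        self.state = {}

    def ingest(self, readings: dict) -> None:
        self.state.update(readings)          # physical -> digital link

    def decide(self) -> dict:
        # Data-centric decision: cool the asset if it runs hot (assumed rule).
        if self.state.get("temperature", 0.0) > 25.0:
            return {"setpoint": 20.0}
        return {}

def sync(asset: PhysicalAsset, twin: DigitalTwin) -> None:
    """One cycle of the bidirectional link."""
    twin.ingest(asset.read_sensors())
    command = twin.decide()
    if command:
        asset.apply_command(command)         # digital -> physical link
```

Real deployments replace the direct method calls with IoT telemetry and building-management interfaces, but the two-way data flow is the defining feature.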
The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments
This thesis documents the design and application of Mixed Reality (MR) within a nuclear
manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The
DAAC is a proof-of-concept system, combining full-body tracking within a room-sized
environment and a bi-directional feedback mechanism to allow communication between users within
the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance,
delivery of work instructions, and data capture within a manufacturing cell.
The research underpinning the DAAC encompasses four main areas: the nuclear industry, Virtual
Reality (VR) and MR technology, MR within manufacturing, and finally the 4th Industrial
Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user
movements within a real manufacturing cell, which can be transferred in real time to a VE, creating
a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers
projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This
allows for the capture of implicit knowledge from operators within the real manufacturing cell, as
well as transfer of that knowledge to future operators. Additionally, users can connect to the VE
from anywhere in the world. In this way, experts are able to communicate with the users in the real
manufacturing cell and assist with their training. The human tracking data fills an identified gap in
the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations
within manufacturing systems, Material Resource Planning (MRP) and Enterprise Resource
Planning (ERP).
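The real-time transfer of tracked user movements into the VE can be pictured as a stream of timestamped joint frames serialised on the cell side and decoded in the virtual environment. A minimal hypothetical sketch (the message format and joint names are assumptions for illustration, not the DAAC's actual protocol):

```python
import json
import time

def pack_frame(user_id: int, joints: dict) -> str:
    """Serialise one tracking frame (e.g. from a Kinect array) into a
    timestamped JSON message for transmission to the VE."""
    return json.dumps({
        "user": user_id,
        "t": time.time(),
        "joints": {name: list(pos) for name, pos in joints.items()},
    })

def unpack_frame(message: str) -> dict:
    """Recover the frame on the VE side so the avatar can be posed."""
    return json.loads(message)
```

Because the frames are plain timestamped messages, the same stream that drives the live digital twin can also be logged for the data-capture and training-replay uses described above.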
This project is a demonstration of how MR could prove valuable within nuclear manufacture. The
DAAC is designed to be low cost. It is hoped this will allow for its use by groups who have
traditionally been priced out of MR technology. This could help Small to Medium Enterprises
(SMEs) close the double digital divide between themselves and larger global corporations. For
larger corporations it offers the benefit of being low cost and, consequently, easier to roll out
across the value chain. Skills developed in one area can also be transferred to others across the
internet, as users from one manufacturing cell can watch and communicate with those in another.
However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and,
prior to its wider application, further testing is required to assess and improve the technology.
The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al.,
2017b), and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover
the methods of bi-directional feedback through which users can interact from the digital to the real
and vice versa.
Stephen Reddish
Mixed Mode Realities in Nuclear Manufacturing
Key words: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture,
Digital Twin, Cyber Physical System