Virtual reality simulation for the optimization of endovascular procedures : current perspectives
Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics, dynamic imaging, and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management, and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.
Steered mixture-of-experts for light field images and video : representation and coding
Research in light field (LF) processing has increased heavily over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays arriving at a certain region from any angle. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art at low-to-mid range bitrates with respect to subjective visual quality of 4-D LF images. In the case of 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4 in bitrate at the same quality. At least equally important is the fact that our method inherently offers functionality for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) lightweight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
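The kernel idea in the abstract above can be illustrated with a minimal sketch: Gaussian gating functions blended with local linear experts, evaluated at continuous coordinates. This is a simplified 2-D toy, not the authors' actual SMoE codec; the function name, parameterization, and example values are all assumptions for illustration.

```python
import numpy as np

def smoe_reconstruct(coords, centers, covs, intercepts, slopes):
    """Soft-gated mixture-of-experts sketch: each kernel pairs a Gaussian
    gate (center + covariance) with a linear expert (intercept + slope);
    the output is the gate-weighted blend of the experts' predictions."""
    n, k = len(coords), len(centers)
    gates = np.zeros((n, k))
    for j, (mu, cov) in enumerate(zip(centers, covs)):
        d = coords - mu                      # offsets from kernel center
        inv = np.linalg.inv(cov)
        # unnormalized Gaussian gate: exp(-0.5 * d^T Sigma^-1 d)
        gates[:, j] = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv, d))
    gates /= gates.sum(axis=1, keepdims=True)  # soft-max style normalization
    out = np.zeros(n)
    for j in range(k):
        expert = intercepts[j] + (coords - centers[j]) @ slopes[j]
        out += gates[:, j] * expert
    return out
```

Because the model is a continuous function of the coordinates, it can be sampled at any resolution or intermediate viewpoint, which hints at why view interpolation and super-resolution come for free in such a representation.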
Measurements by a LEAP-based Virtual Glove for hand rehabilitation
Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy's effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be highly effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient- and hand-specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, can suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP Motion controllers, is described. The VG is calibrated, and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed, and reported. Hand tracking measurements show that the VG operated in real time (60 fps), reduced occlusions, and managed the two LEAP sensors correctly, without temporal or spatial discontinuities when switching from one sensor to the other. A video demonstrating the good performance of the VG is also collected and presented in the Supplementary Materials. Results are promising, but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and to reduce occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications.
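The dual-sensor idea described above, where one sensor covers for the other when the hand is occluded, can be sketched as a confidence-weighted fusion of the two estimates. This is a hypothetical illustration, not the paper's actual algorithm; the function name, the confidence inputs, and the blending scheme are assumptions.

```python
import numpy as np

def fuse_hand_position(pos_a, conf_a, pos_b, conf_b):
    """Blend two sensors' 3-D position estimates by tracking confidence.
    When one sensor reports occlusion (confidence 0), the output falls
    back to the other sensor without any discontinuity in the blend;
    when both are occluded, no estimate is available."""
    total = conf_a + conf_b
    if total == 0:
        return None  # both sensors occluded: hand position unknown
    w_a = conf_a / total  # weight of sensor A, in [0, 1]
    return w_a * pos_a + (1.0 - w_a) * pos_b
```

Because the weight varies continuously with the confidences, the fused trajectory transitions smoothly as one sensor's view degrades, which mirrors the seamless sensor hand-over the abstract reports.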