    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    Latency, the delay between a user's action and the response to that action, is known to be detrimental to virtual reality. Latency is typically characterised as a discrete value, constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low-latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems, such as those using typical GPUs, for which latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer, and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with its lower average latency, more faithfully draws what the ideal virtual reality system would. Further, we find that low display persistence lowers the velocity sensitivity of both systems, but far more so for ours.
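    The latency asymmetry described above can be made concrete with a toy model. The sketch below (ours, not the paper's code) compares per-pixel latency across one scan-out period for a frame-based renderer, which samples the tracker once per frame, against a frameless renderer with a constant tracker-to-pixel delay. The 90 Hz refresh rate is illustrative; the 1 ms frameless figure is taken from the abstract.

```python
# Toy model: per-pixel latency for frame-based vs. frameless scan-out.
# The frame-based path renders the whole frame from tracking data
# captured before scan-out begins, so later rows show staler data;
# the frameless path re-renders just behind the beam.

REFRESH_HZ = 90
SCANOUT_S = 1.0 / REFRESH_HZ          # time to scan the full display
FRAMELESS_LATENCY_S = 0.001           # tracker-to-pixel latency (constant)

def frame_based_latency(row_fraction: float) -> float:
    """Latency of a pixel at row_fraction (0 = top, 1 = bottom):
    one frame of render delay plus the scan-position delay."""
    return SCANOUT_S + row_fraction * SCANOUT_S

def frameless_latency(row_fraction: float) -> float:
    """Each pixel is re-rendered just before the beam reaches it,
    so latency is roughly constant across the display."""
    return FRAMELESS_LATENCY_S

for frac in (0.0, 0.5, 1.0):
    print(f"row {frac:.1f}: frame-based {frame_based_latency(frac)*1e3:5.1f} ms,"
          f" frameless {frameless_latency(frac)*1e3:4.1f} ms")
```

    Under this model the frame-based latency ranges from one to two refresh periods across the display, while the frameless latency stays fixed, which is the contrast the paper's measurements quantify.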

    Hyperion: A 3D Visualization Platform for Optical Design of Folded Systems

    Hyperion is a 3D visualization platform for optical design. It provides a fully immersive, intuitive, and interactive 3D user experience by leveraging existing AR/VR technologies. It enables the visualization of models of folded freeform optical systems in a dynamic 3D environment. The front-end user experience is supported by the computational ray-tracing engine of Eikonal+, an optical design research software currently under development. We have built a cross-platform, lightweight version of Eikonal+ that can communicate with any user interface or other scientific software. We have also demonstrated a prototype of the Hyperion 3D user experience using a HoloLens AR display.
    Keywords: Unreal Engine, panoramic video, games, cinematography, lighting, composition, VR

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimetre robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Adaptive frameless raycasting for interactive volume visualization

    There have been many successful attempts to improve ray-casting and ray-tracing performance in recent decades. Many of these improvements form important steps towards high-performance interactive visualisation. However, growing challenges keep pace with the enhancements: display resolutions skyrocket with modern technology, and applications become more and more sophisticated. With the limits of Moore's law in sight, much attention has turned to speeding up well-known algorithms, including a plenitude of publications on frameless rendering. In frameless renderers, sampling is not synchronised with display refreshes, which allows both spatially and temporally varying sample rates. One basic approach simply randomises samples entirely. This increases liveliness and reduces input delay, but also leads to distorted and blurred images during movement. Dayal et al. tackle this problem by focusing samples on complex regions (guided sampling) and by applying approximating filters to reconstruct an image from incoherent buffer content. Their frameless ray tracer vastly reduces latency and yet produces outstanding image quality. In this thesis we transfer these concepts to volume ray casting. Volume data often poses different challenges due to its lack of planes and surfaces, and its fine granularity. We experiment with both Dayal's sampling and reconstruction techniques and examine their applicability to volume data. In particular, we examine whether their adaptive sampler performs as well on volume data and which adaptations might be necessary. Further, we develop another reconstruction filter designed to remove artefacts that frequently occur in our frameless renderer. Instead of assuming certain properties from local sampling rates and colour gradients, our filter detects artefacts by their age signature in the buffer. Our filter appears to be more targeted and yet requires only constant time per pixel.
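    To make the age-based reconstruction idea concrete, here is a minimal sketch (our assumptions, not the thesis code) of a constant-time-per-pixel pass over a frameless sample buffer: each pixel carries the time it was last sampled, and a pixel that is markedly older than its fixed 3x3 neighbourhood is treated as a likely artefact and replaced by the mean of its newer neighbours.

```python
# Sketch of an age-aware reconstruction filter for a frameless buffer.
# colour: (H, W) grey values; age: (H, W) seconds since each pixel was
# last sampled. The fixed 3x3 window keeps the cost constant per pixel.
# The age_gap threshold is a hypothetical parameter, not from the thesis.

import numpy as np

def reconstruct(colour: np.ndarray, age: np.ndarray,
                age_gap: float = 0.05) -> np.ndarray:
    out = colour.copy()
    h, w = colour.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nbr_age = age[y-1:y+2, x-1:x+2]
            nbr_col = colour[y-1:y+2, x-1:x+2]
            # Neighbours that were sampled clearly more recently
            fresh = nbr_age < age[y, x] - age_gap
            fresh[1, 1] = False                  # exclude the centre pixel
            if fresh.any():
                # Stale pixel: its age signature marks it as a likely
                # artefact, so resample it from the newer neighbourhood.
                out[y, x] = nbr_col[fresh].mean()
    return out
```

    The design point this illustrates is the one the abstract claims: the filter consults only a fixed-size neighbourhood and the stored timestamps, so no sampling-rate or gradient estimation is needed and the per-pixel cost stays constant.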

    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images are registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection-margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line of sight, by the need to keep tracked rigid bodies clean and rigidly fixed, and by the requirement of a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric, and the use of this registration algorithm within a sensor- and image-fusion algorithm. The work presented here is a motivating step towards a heterogeneous tracking framework for image-guided interventions, in which knowledge from intra-operative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
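    The LC2 metric the thesis builds on (from the image-registration literature) models local ultrasound intensity as a linear combination of MR intensity and MR gradient magnitude, and scores a patch by the variance that fit explains. The sketch below illustrates the idea for a single patch; the patch size, least-squares fit, and function name are our assumptions, not the thesis implementation.

```python
# Sketch of an LC2-style patch similarity: fit US intensity as a linear
# combination of MR intensity, MR gradient magnitude, and a constant,
# then report the fraction of US variance explained by the fit.

import numpy as np

def lc2_patch(us: np.ndarray, mr: np.ndarray, mr_grad: np.ndarray) -> float:
    """LC2-style similarity for one patch: 1 - residual/total variance."""
    us, mr, mr_grad = us.ravel(), mr.ravel(), mr_grad.ravel()
    var = us.var()
    if var < 1e-12:                       # uniform US patch carries no signal
        return 0.0
    A = np.stack([mr, mr_grad, np.ones_like(mr)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, us, rcond=None)
    resid = us - A @ coeffs
    return 1.0 - resid.var() / var

# Example: similarity of one synthetic 9x9 patch pair
rng = np.random.default_rng(0)
mr = rng.normal(size=(9, 9))
gy, gx = np.gradient(mr)
grad = np.hypot(gx, gy)
us = 0.7 * mr + 0.3 * grad + 0.05 * rng.normal(size=(9, 9))
print(f"LC2 similarity: {lc2_patch(us, mr, grad):.3f}")
```

    In a full registration, a score like this would be evaluated over many patches and summed as the objective that the 2D-3D pose optimisation maximises; the GPU implementation mentioned in the abstract parallelises exactly this patchwise evaluation.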

    Symposium Program 2019 Final
