113 research outputs found

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques aimed at minimally invasive and non-traumatic treatments are required, since intra-operative false movements can be devastating and result in patient deaths. The precision of the surgical gesture depends both on the accuracy of the available technological instruments and on the surgeon's experience. In this context, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results. In addition, traditional techniques for surgical training include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope, so the tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon and is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. The architecture can also be adapted for intra-operative purposes. In this case, the surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's intra-operative orientation by giving him a three-dimensional view and the other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions developed in our institute in previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since AR and VR share the same platform, the system can be referred to as a Mixed Reality system for neurosurgery. All the components are open source or at least based on a GPL license.
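
    As a concrete illustration of the kind of tissue-dependent force feedback such a simulator has to render, the sketch below implements a simple penalty-based palpation force in Python. The spring model, the two-region classification and the stiffness values are illustrative assumptions, not parameters taken from the thesis.

```python
# Minimal sketch of penalty-based haptic force rendering for spatula palpation.
# The two-tissue stiffness values below are illustrative assumptions, not values
# from the thesis.
import numpy as np

STIFFNESS_HEALTHY = 300.0  # N/m, assumed firmer response of normal parenchyma
STIFFNESS_TUMOUR = 120.0   # N/m, assumed softer response of a low-grade glioma

def palpation_force(tool_tip, surface_point, surface_normal, is_tumour):
    """Reaction force for a tool tip pressing into the tissue surface.

    tool_tip, surface_point, surface_normal: 3-vectors in metres (unit normal).
    is_tumour: whether the contacted region is classified as tumour.
    """
    # Penetration depth along the surface normal (positive when inside tissue).
    depth = float(np.dot(surface_point - tool_tip, surface_normal))
    if depth <= 0.0:
        return np.zeros(3)  # no contact, no force
    k = STIFFNESS_TUMOUR if is_tumour else STIFFNESS_HEALTHY
    return k * depth * surface_normal  # spring-like force pushing the tool out

# Example: tool tip 2 mm below the surface of a tumour region.
force = palpation_force(np.array([0.0, 0.0, -0.002]),
                        np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]),
                        is_tumour=True)
print(force)  # ~[0, 0, 0.24] N, noticeably weaker than the healthy-tissue response
```

    In a full simulator the stiffness would come from a deformable tissue model and the force would be sent to the haptic device at a high update rate; the point here is only the contrast in response between the two tissue types.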

    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning or enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as lying in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of these methods and technologies in cultural heritage are presented.

    Serious Games in Cultural Heritage

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning or enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as lying in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of these methods and technologies in cultural heritage are presented.

    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued onto a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties; among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production. The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions addressing these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state-of-the-art in global illumination relighting, but also advances the field with the ability to simulate effects which had not been demonstrated in contemporary publications until now.
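
    For context, classic Differential Rendering (the baseline the dissertation replaces) composites a virtual object by adding to the camera image only the change between two global-illumination renders of the reconstructed scene, one with and one without the object. The sketch below illustrates that baseline; the array names and shapes are illustrative assumptions, and the code is not taken from the dissertation.

```python
# Per-pixel differential rendering composite: real surfaces receive only the
# radiance change caused by the virtual object (shadows, colour bleeding), while
# pixels covered by the object take the full rendered value.
import numpy as np

def differential_composite(camera_image, render_with, render_without, object_mask):
    """camera_image, render_with, render_without: HxWx3 float arrays in [0, 1].
    object_mask: HxW bool array, True where the virtual object covers the pixel."""
    delta = render_with - render_without          # change due to the new object
    composite = np.where(object_mask[..., None],  # broadcast mask over RGB
                         render_with,             # object pixels: full render
                         camera_image + delta)    # real pixels: camera + delta
    return np.clip(composite, 0.0, 1.0)
```

    The computational downside mentioned in the abstract stems largely from needing two full global-illumination solutions per frame, which is the cost a reformulated light transport can avoid.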

    Analysis of Visualisation and Interaction Tools

    This document provides an in-depth analysis of the visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the base of the web-based services and tools supporting the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, whatever the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document also draws on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and another on visualization technologies, are also provided in this document.

    Visual Computing as a Basis for Process Innovation in the Product Lifecycle

    From the introduction: "Since the beginnings of CAD roughly 50 years ago, information technology has been a key driver of product development and has played a decisive role in process innovations such as global engineering and the digital factory. Today, however, it is no longer only about describing the geometry of future products, but about accompanying and complementing the real product with the virtual product as comprehensively as possible: from the first idea to recycling. The comprehensive vision of the virtual product as the counterpart of the real product (Spur & Krause 1997) is inseparably linked to the progress of information technology."

    Integration of robotics and 3D visualization to modernize the Expeditionary Warfare Demonstrator (EWD)

    In the summer of 2008, the Commandant of the Marine Corps (CMC) released a message to all Marines and Sailors detailing plans to revitalize U.S. naval amphibious competency. Current responsibilities in Iraq and Afghanistan have significantly reduced available training time, causing overall amphibious readiness to suffer. In response, this thesis evaluates 3D visualization techniques and other virtual environment technologies available to support these mission-critical training goals. The focus of this research is to modernize the Expeditionary Warfare Demonstrator (EWD) located aboard Naval Amphibious Base (NAB) Little Creek, Virginia. The EWD has been used to demonstrate doctrine, tactics, and procedures for all phases of amphibious operations to large groups of Navy, Marine Corps, Joint, Coalition and civilian personnel for the last 55 years. However, it no longer reflects current doctrine and is therefore losing credibility and effectiveness. In its current configuration, the EWD is limited to a single training scenario, since the display's ship models rely on a static pulley system to show movement and the terrain display ashore is fixed. To address these shortfalls, this thesis first recommends using the wireless communication capability within Sun's Small Programmable Object Technology (SunSPOT) to create robotic vehicles that replace the current ship models. This enables large-group visualization and situational awareness of the numerous coordinated surface maneuvers needed to support Marines as they move from ship to shore. The second recommendation is to improve visualization ashore through the creation of Extensible 3D Graphics (X3D) scenes depicting high-fidelity 3D models and enhanced 3D terrain displays for any location. This thesis shows how to create these scenes and project them from overhead in order to modernize the gymnasium-sized EWD into an amphibious wargaming table suitable for both amphibious staff training and operational planning. Complementary use of BASE-IT projection tables and digital 3D holography can further provide small-group, close-up views of key battlespace locations. It is now possible to upgrade an aging training tool by implementing the technologies recommended in this thesis to support the critical training and tactical needs of the integrated Navy and Marine Corps amphibious fighting force. http://archive.org/details/integrationofrob109454520 Outstanding Thesis. US Marine Corps (USMC) author. Approved for public release; distribution is unlimited.
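
    To make the X3D recommendation concrete, the following is a small illustrative sketch, not taken from the thesis, that generates a minimal X3D scene containing a coarse terrain grid and one positioned surface craft; the file names, grid dimensions and coordinates are hypothetical.

```python
# Illustrative generation of a minimal X3D scene: a coarse ElevationGrid stands in
# for an enhanced terrain display, and an Inline node places one surface craft.
# All names, sizes and coordinates are hypothetical.
import xml.etree.ElementTree as ET

def build_scene(ship_position=(120.0, 0.0, -340.0)):
    x3d = ET.Element("X3D", profile="Immersive", version="3.2")
    scene = ET.SubElement(x3d, "Scene")

    # Terrain ashore: a 3x3 height field with 500 m spacing.
    terrain = ET.SubElement(scene, "Shape")
    ET.SubElement(terrain, "ElevationGrid",
                  xDimension="3", zDimension="3",
                  xSpacing="500", zSpacing="500",
                  height="0 2 5 1 3 6 2 4 8")

    # One craft, referenced from an external (hypothetical) model file.
    transform = ET.SubElement(scene, "Transform",
                              translation=" ".join(str(c) for c in ship_position))
    ET.SubElement(transform, "Inline", url='"craft.x3d"')

    return ET.ElementTree(x3d)

build_scene().write("ewd_scene.x3d", xml_declaration=True, encoding="UTF-8")
```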

    Enhancing the E-Commerce Experience through Haptic Feedback Interaction

    The sense of touch is important in our everyday lives, and its absence makes it difficult to explore and manipulate everyday objects. Existing online shopping practice lacks the opportunity for the physical evaluation that people often use and value when making product choices. However, with recent advances in haptic research and technology, it is possible to simulate various physical properties such as heaviness, softness, deformation and temperature. The research described here investigates the use of haptic feedback interaction to enhance e-commerce product evaluation, particularly haptic weight and texture evaluation. While other properties are equally important, weight and texture are fundamental to the shopping experience for many online products and can be simulated using cost-effective devices. Two initial psychophysical experiments were conducted using free-motion haptic exploration in order to more closely resemble conventional shopping: one to measure weight force thresholds and another to measure texture force thresholds. The measurements provide a better understanding of haptic device limitations for online shopping in terms of the range of stimuli available to represent physical products. The outcomes of the initial psychophysical experiments were then used to produce the absolute stimuli used in a comparative experimental study of the user experience of haptic product evaluation. Although free haptic exploration was used in both psychophysical experiments, the results were broadly consistent with previous work on haptic discrimination. The threshold for weight force discrimination, represented as downward forces, was 10 percent. The threshold for texture force discrimination, represented as friction forces, was 14.1 percent when using the dynamic coefficient of friction at any level of static coefficient of friction. The comparative study of the user experience of haptic product information indicated that haptic product evaluation does not change user performance significantly: although the time taken to complete the task increased, the number of button-click actions tended to decrease. The results showed that haptic product evaluation can significantly increase confidence in the shopping decision. Nevertheless, the availability of haptic product evaluation does not necessarily lead to different product choices; rather, it complements other selection criteria such as price and appearance. The findings from this work are a first step towards exploring haptic-based e-commerce environments; they not only lay the foundation for designing online haptic shopping but also provide empirical support for further research in this direction.
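
    As a worked example of how the reported thresholds can be applied, the sketch below checks whether two simulated stimuli would feel different under the 10 percent weight (downward-force) threshold and the 14.1 percent friction threshold; the helper function and the example values are illustrative assumptions, not code from the thesis.

```python
# Apply the reported discrimination thresholds as Weber fractions.
WEIGHT_THRESHOLD = 0.10     # relative threshold for downward (weight) forces
FRICTION_THRESHOLD = 0.141  # relative threshold for dynamic friction coefficients

def is_discriminable(reference, comparison, weber_fraction):
    """True if the relative difference exceeds the discrimination threshold."""
    return abs(comparison - reference) / reference > weber_fraction

# Two products rendered as 3.0 N and 3.2 N downward forces (~6.7% difference):
print(is_discriminable(3.0, 3.2, WEIGHT_THRESHOLD))      # False, below threshold
# Two textures with dynamic friction coefficients 0.40 and 0.48 (20% difference):
print(is_discriminable(0.40, 0.48, FRICTION_THRESHOLD))  # True, above threshold
```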