
    BiopSym: a simulator for enhanced learning of ultrasound-guided prostate biopsy

    This paper describes a simulator of ultrasound-guided prostate biopsies for cancer diagnosis. When performing a biopsy series, the clinician has to move the ultrasound probe and mentally integrate the real-time two-dimensional images into a three-dimensional (3D) representation of the anatomical environment. Such a 3D representation is necessary to sample the prostate regularly in order to maximize the probability of detecting a cancer if one is present. To make the training of young physicians easier and faster, we developed a simulator that combines images computed from recorded 3D ultrasound data with haptic feedback. The paper presents the first version of this simulator.
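    A core operation in this kind of simulator is resampling a 2D image plane from a recorded 3D ultrasound volume according to the virtual probe pose. The sketch below illustrates that idea only; the function name, volume layout, and nearest-neighbour lookup are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def slice_volume(volume, origin, u_axis, v_axis, width, height, spacing=1.0):
    """Resample a 2D image plane from a recorded 3D ultrasound volume.

    volume         : 3D numpy array of echo intensities, assumed (z, y, x) layout
    origin         : 3D start point of the image plane (derived from probe pose)
    u_axis, v_axis : in-plane direction vectors derived from the probe pose
    """
    origin = np.asarray(origin, dtype=float)
    u_axis = np.asarray(u_axis, dtype=float)
    v_axis = np.asarray(v_axis, dtype=float)
    u_axis /= np.linalg.norm(u_axis)
    v_axis /= np.linalg.norm(v_axis)

    image = np.zeros((height, width), dtype=volume.dtype)
    for j in range(height):
        for i in range(width):
            p = origin + spacing * (i * u_axis + j * v_axis)
            z, y, x = np.round(p).astype(int)      # nearest-neighbour lookup
            if (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                    and 0 <= x < volume.shape[2]):
                image[j, i] = volume[z, y, x]
    return image
```

    In practice a simulator would use trilinear interpolation and a calibrated probe-to-volume transform; this sketch only shows how a 2D slice can be pulled out of the recorded 3D data.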

    Imperfection for Realistic Image Synthesis

    The precision of image synthesis techniques for rendering naturalistic scenes often works contrary to the realism of the everyday world. Pristine, crystalline, uniform, and perfect may describe the most idealized computer images: the surfaces are smooth, neat, and crisp in appearance. Efforts to produce realism have recently focused on light and the interaction of light with surfaces. Radiosity methods have shown that proper treatment of light is often critical to the proper visual effect in an image. Yet even the best of these images is nearly surrealistic in its precision, and thus belies its synthetic origins.
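    For context on the radiosity methods mentioned above, the sketch below iterates the textbook radiosity balance B_i = E_i + rho_i * sum_j F_ij B_j for diffuse patches. It is a generic illustration of the technique, not the rendering approach proposed in the paper; all names and the iteration count are assumptions.

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iterations=100):
    """Iteratively solve the classical radiosity balance
        B_i = E_i + rho_i * sum_j F_ij * B_j
    for n diffuse patches (textbook formulation, for illustration only).

    emission     : (n,) emitted radiosity E_i of each patch
    reflectance  : (n,) diffuse reflectance rho_i of each patch
    form_factors : (n, n) matrix of patch-to-patch form factors F_ij
    """
    emission = np.asarray(emission, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    form_factors = np.asarray(form_factors, dtype=float)

    b = emission.copy()
    for _ in range(iterations):
        b = emission + reflectance * (form_factors @ b)   # Jacobi-style update
    return b
```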

    Video conferencing: an effective solution to long distance student placement support?

    Background: Within many health-related degree programmes, students receive support during placements via visiting tutors. The literature discusses the importance of this support, but economic and environmental arguments indicate a need for alternatives to supporting a student in situ. This project investigated the logistics of, and perceptions towards, using video conferencing as a means of providing this support. Methods: A pilot project was undertaken in which an in situ support meeting was replaced with a meeting via video link. All participants completed evaluative questionnaires, and students attended a follow-up focus group in order to explore responses in more depth. Results and discussion: Use of the medium identified key logistical hurdles in implementing technology within existing support systems. All participants expressed enthusiasm for the medium, with educators expressing a preference for it. Students identified concerns over the use of this medium for failing placements but could not identify why. As a result of the evaluation, this project has raised a number of questions relating to the fitness for purpose of video conferencing in this context. Conclusion: Future research aims to respond to the questions raised by evaluating the value and purpose of placement support and the nature of conversations conducted via the video conferencing medium.

    Extending the range of facial types

    We describe, in case-study form, techniques to extend the range of facial types and movement using a parametric facial animation system originally developed to model and control synthetic 3D faces limited to a normal range of human shape and motion. These techniques have allowed us to create a single authoring system that can create and animate a wide range of facial types, from realistic to stylized to cartoon-like, or a combination thereof, all from the same control system. Additionally, we describe image processing and 3D deformation tools that allow for a greater range of facial types and facial animation output.
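    A minimal sketch of the general idea behind a parameter-driven face follows: a base mesh plus weighted deformation targets, where letting weights run outside the usual human range is one simple way to reach stylized or cartoon-like shapes. The class and parameter names are hypothetical and do not reproduce the paper's authoring system.

```python
import numpy as np

class ParametricFace:
    """Toy parameter-driven face: a base mesh plus weighted deformation
    targets. Weights outside [0, 1] push the shape beyond the 'normal'
    human range toward stylized or cartoon-like faces (illustrative only)."""

    def __init__(self, base_vertices, deformation_targets):
        self.base = np.asarray(base_vertices, dtype=float)        # (n, 3)
        self.targets = {name: np.asarray(delta, dtype=float)      # per-target offsets
                        for name, delta in deformation_targets.items()}

    def evaluate(self, weights):
        """Return the deformed mesh for a dict of {parameter_name: weight}."""
        vertices = self.base.copy()
        for name, w in weights.items():
            vertices += w * self.targets[name]    # w may exceed 1.0 for stylization
        return vertices
```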

    3D Face Synthesis with KINECT

    This work describes the process of face synthesis by image morphing from less expensive 3D sensors, such as the KINECT, that are prone to sensor noise. Its main aim is to create a useful face database for future face recognition studies.
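    Two building blocks such a pipeline typically needs are suppressing depth-sensor noise and blending aligned faces; the sketch below shows a generic version of each (median filtering and cross-dissolve). The function names are assumptions, and the paper's actual morphing pipeline may differ.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_depth(depth_map, size=3):
    """Median-filter a KINECT-style depth map to suppress sensor noise
    (a common preprocessing step; not necessarily the paper's pipeline)."""
    return median_filter(depth_map, size=size)

def morph_faces(face_a, face_b, alpha):
    """Cross-dissolve two roughly aligned face images to synthesise an
    intermediate face; alpha in [0, 1] blends from face_a to face_b."""
    face_a = np.asarray(face_a, dtype=float)
    face_b = np.asarray(face_b, dtype=float)
    return (1.0 - alpha) * face_a + alpha * face_b
```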

    Autonomous Secondary Gaze Behaviours

    In this paper we describe secondary behaviour: behaviour that is generated autonomously for an avatar. The user controls various aspects of the avatar's behaviour, but a truly expressive avatar must produce more complex behaviour than a user could specify in real time. Secondary behaviour provides some of this expressive behaviour autonomously. However, though it is produced autonomously, it must be appropriate to the actions the user is controlling (the primary behaviour), and it must correspond to what the user wants. We describe an architecture that achieves these two aims by tagging the primary behaviour with messages to be sent to the secondary behaviour and by allowing the user to design various aspects of the secondary behaviour before starting to use the avatar. We have implemented this general architecture in a system that adds gaze behaviour to user-designed actions.
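    The tag-and-dispatch idea described above can be illustrated with a toy sketch: primary actions carry message tags, each tag is forwarded to a secondary gaze module, and the module consults preferences the user configured beforehand. All class, message, and preference names here are hypothetical stand-ins, not the paper's architecture.

```python
class GazeBehaviour:
    """Toy secondary-behaviour module: reacts to messages tagged onto
    primary actions, using preferences the user designed beforehand."""

    def __init__(self, preferences):
        self.preferences = preferences            # user-designed settings

    def on_message(self, message):
        if message == "look_at_listener":
            return f"gaze at listener for {self.preferences['hold_time']}s"
        if message == "glance_away":
            return "brief aversion glance"
        return "idle gaze"


class Avatar:
    """Primary actions carry tags; each tag is forwarded to the secondary
    behaviour so autonomous gaze stays consistent with the user's actions."""

    def __init__(self, secondary):
        self.secondary = secondary
        self.action_tags = {"greet": ["look_at_listener"],
                            "think": ["glance_away"]}

    def perform(self, action):
        outputs = [f"primary action: {action}"]
        for tag in self.action_tags.get(action, []):
            outputs.append(self.secondary.on_message(tag))
        return outputs


# Example: the user configures gaze preferences before using the avatar.
avatar = Avatar(GazeBehaviour({"hold_time": 2}))
print(avatar.perform("greet"))
```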

    Improvements on a simple muscle-based 3D face for realistic facial expressions

    Facial expressions play an important role in face-to-face communication. With the development of personal computers capable of rendering high-quality graphics, computer facial animation has produced more and more realistic facial expressions to enrich human-computer communication. In this paper, we present a simple muscle-based 3D face model that can produce realistic facial expressions in real time. We extend Waters' (1987) muscle model to generate bulges and wrinkles and to improve the combination of multiple muscle actions. In addition, we present techniques to reduce the computation burden of the muscle model.
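    For readers unfamiliar with the baseline being extended, the sketch below shows a simplified Waters-style linear muscle: vertices inside a cone around the muscle axis are pulled toward the fixed head point, with angular and radial falloff. It is a didactic approximation only; the falloff terms, parameters, and function name are assumptions, and the paper's bulge and wrinkle extensions are not reproduced.

```python
import numpy as np

def linear_muscle_displace(vertices, head, tail, contraction,
                           influence_radius, half_angle_deg=60.0):
    """Simplified Waters-style linear muscle: vertices inside a cone around
    the head-to-tail axis are pulled toward the (fixed) head point, scaled
    by a cosine angular falloff and a linear radial falloff."""
    head = np.asarray(head, dtype=float)
    tail = np.asarray(tail, dtype=float)
    axis = tail - head
    axis /= np.linalg.norm(axis)

    out = np.array(vertices, dtype=float)          # copy; one row per vertex
    cos_limit = np.cos(np.radians(half_angle_deg))
    for i, p in enumerate(out):
        to_p = p - head
        dist = np.linalg.norm(to_p)
        if dist == 0 or dist > influence_radius:
            continue                               # outside the zone of influence
        cos_angle = np.dot(to_p / dist, axis)
        if cos_angle < cos_limit:
            continue                               # outside the influence cone
        radial = 1.0 - dist / influence_radius     # fades with distance from head
        out[i] = p + contraction * cos_angle * radial * (head - p) / dist
    return out
```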