510 research outputs found
3D Medical Collaboration Technology to Enhance Emergency Healthcare
Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.
Web-based Stereoscopic Collaboration for Medical Visualization
Medical volume visualization is a valuable tool for examining volume data in medical practice and teaching. Interactive, stereoscopic, and collaborative real-time rendering is necessary to understand the data fully and in detail. Because of high hardware requirements, however, such visualization of high-resolution data is feasible almost exclusively on dedicated visualization systems. Remote visualization is used to make such visualization available elsewhere, but it almost always requires complex software deployments, which hinders universal ad-hoc use. These facts lead to the following hypothesis: a high-performance remote visualization system specialized for stereoscopy and ease of use can be employed for interactive, stereoscopic, and collaborative medical volume visualization.
The recent literature on remote visualization describes applications that require only a plain web browser. However, these place no particular emphasis on performant usability for every participant, nor do they provide the functionality needed to drive multiple stereoscopic presentation systems. Given the familiarity, ease of use, and wide availability of web browsers, the following specific question arises: can we develop a system that supports all of these aspects while requiring only a plain web browser, without additional software, as the client?
A proof of concept was carried out to verify the hypothesis. It comprised the development of a prototype, its practical application, and the measurement and comparison of its performance.
The resulting prototype (CoWebViz) is one of the first web-browser-based systems to enable fluid, interactive remote visualization in real time without additional software. Tests and comparisons show that the approach performs better than other, similar systems tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for stereoscopic, collaborative anatomy education, which is normally very resource-intensive, together with intercontinental participants demonstrates the feasibility and simplifying character of the approach. Feasibility was also shown through successful use in other application scenarios, such as grid computing and surgery.
DualStream: Spatially Sharing Selves and Surroundings using Mobile Devices and Augmented Reality
In-person human interaction relies on our spatial perception of each other
and our surroundings. Current remote communication tools partially address each
of these aspects. Video calls convey real user representations but without
spatial interactions. Augmented and Virtual Reality (AR/VR) experiences are
immersive and spatial but often use virtual environments and characters instead
of real-life representations. Bridging these gaps, we introduce DualStream, a
system for synchronous mobile AR remote communication that captures, streams,
and displays spatial representations of users and their surroundings.
DualStream supports transitions between user and environment representations
with different levels of visuospatial fidelity, as well as the creation of
persistent shared spaces using environment snapshots. We demonstrate how
DualStream can enable spatial communication in real-world contexts, and support
the creation of blended spaces for collaboration. A formative evaluation of
DualStream revealed that users valued the ability to interact spatially and
move between representations, and could see DualStream fitting into their own
remote communication practices in the near future. Drawing from these findings,
we discuss new opportunities for designing more widely accessible spatial
communication tools, centered around the mobile phone.
Comment: 10 pages, 4 figures, 1 table; To appear in the proceedings of the
IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 202
Life-Sized Audiovisual Spatial Social Scenes with Multiple Characters: MARC & SMART-I²
With the increasing use of virtual characters in virtual and mixed reality settings, coordinating realism in audiovisual rendering with expressive virtual characters becomes a key issue. In this paper we introduce a new platform that combines two systems to tackle the issue of realism and high quality in audiovisual rendering and life-sized expressive characters. The goal of the resulting SMART-MARC platform is to investigate the impact of realism on multiple levels: spatial audiovisual rendering of a scene, and the appearance and expressive behaviors of virtual characters. Potential interactive applications include mediated communication in virtual worlds, therapy, games, arts, and e-learning. Future experimental studies will focus on 3D audio/visual coherence, social perception, and ecologically valid interaction scenes.
The Entanglement: Volumetric Music Performances in a Virtual Metaverse Environment
Telematic music performances are an established performance practice in contemporary music. Performing music pieces with geographically distributed musicians is both a technological challenge and an artistic one. These challenges and the resulting possibilities can lead to innovative aesthetic realizations. This paper presents the implementation and realization of "The Entanglement," a telematic concert performance in a metaverse environment. The system is realized using web-based frameworks to implement a platform-independent online multi-user environment with volumetric, three-dimensional streaming of audio and video. This allows live performance of this improvisation piece, based on an algorithmic quantum computer composition, within a freely explorable virtual environment. We describe the development and realization of the piece and the metaverse environment, as well as its artistic and conceptual contextualization.
Open-source software in medical imaging: development of OsiriX
Purpose Open source software (OSS) development for medical imaging enables collaboration of individuals and groups to produce high-quality tools that meet user needs. This process is reviewed and illustrated with OsiriX, a fast DICOM viewer program for the Apple Macintosh. Materials and methods OsiriX is an OSS application for the Apple Macintosh under Mac OS X v10.4 or higher specifically designed for navigation and visualization of multimodality and multidimensional images: 2D Viewer, 3D Viewer, 4D Viewer (3D series with a temporal dimension, for example cardiac CT) and 5D Viewer (3D series with temporal and functional dimensions, for example cardiac PET-CT). The 3D Viewer offers all modern rendering modes: multiplanar reconstruction, surface rendering, volume rendering, and maximum intensity projection. All these modes support 4D data and can produce image fusion between two different series (for example PET-CT). OsiriX was developed using the Apple Xcode development environment and Cocoa framework as both a DICOM PACS workstation for medical imaging and an image processing software package for medical research (radiology and nuclear imaging), functional imaging, 3D imaging, confocal microscopy, and molecular imaging. Results OsiriX, an open source program by Antoine Rosset, a radiologist and software developer, was designed specifically for the needs of advanced imaging modalities. The software turns an Apple Macintosh into a DICOM PACS workstation for medical imaging and image processing. OsiriX is distributed free of charge under the GNU General Public License and its source code is available to anyone. This system illustrates how open software development for medical imaging tools can be successfully designed, implemented, and disseminated. Conclusion OSS development can provide useful, cost-effective tools tailored to specific needs and clinical tasks.
The integrity and quality assurance of open software developed by a community of users does not follow the traditional conformance and certification required for commercial medical software programs. However, open software can lead to innovative solutions, designed by users, that are better suited for specific tasks.
- …