
    Web-based Stereoscopic Collaboration for Medical Visualization

    Medical volume visualization is a valuable tool for examining volume data in medical practice and teaching. An interactive, stereoscopic and collaborative display in real time is necessary to understand the data completely and in detail. Because of high hardware requirements, however, such visualization of high-resolution data is feasible almost only on dedicated visualization systems. Remote visualization is used to make it available at peripheral locations, but this almost always requires complex software deployments, which hinders universal ad-hoc usability. This situation leads to the following hypothesis: a high-performance remote visualization system specialized for stereoscopy and ease of use can be employed for interactive, stereoscopic and collaborative medical volume visualization. The recent literature on remote visualization describes applications that require nothing more than a plain web browser. However, these place no particular emphasis on performant usability for every participant, nor do they provide the functionality needed to drive multiple stereoscopic presentation systems. Given the familiarity, ease of use and wide availability of web browsers, the following specific question arises: can we develop a system that supports all of these aspects yet requires only a plain web browser, without additional software, as the client? A proof of concept was carried out to verify the hypothesis. It comprised the development of a prototype, its practical use, and the measurement and comparison of its performance. The resulting prototype (CoWebViz) is one of the first browser-based systems to provide fluid, interactive remote visualization in real time without any additional software. Tests and comparisons show that the approach performs better than other, similar systems that were tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for normally very resource-intensive stereoscopic and collaborative anatomy teaching, together with intercontinental participants, demonstrates the feasibility and simplifying character of the approach. The feasibility of the approach was also shown by its successful use in other application areas, such as grid computing and surgery
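
    The browser-only setup described above can be illustrated with a minimal, hedged sketch: a server pushes rendered frames to any plain web browser as a motion-JPEG stream, which the browser displays in a simple <img> element with no client-side software beyond the browser itself. The frame source (render_frame), the file it reads, the port and the frame rate are illustrative assumptions for this sketch, not CoWebViz's actual interface.

    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    def render_frame() -> bytes:
        # Placeholder for the visualization back end; must return one JPEG frame.
        with open("frame.jpg", "rb") as f:    # assumed: latest rendered frame on disk
            return f.read()

    class StreamHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # multipart/x-mixed-replace lets the browser replace the image as new frames arrive.
            self.send_response(200)
            self.send_header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
            self.end_headers()
            try:
                while True:
                    jpg = render_frame()
                    self.wfile.write(b"--frame\r\n")
                    self.wfile.write(b"Content-Type: image/jpeg\r\n")
                    self.wfile.write(f"Content-Length: {len(jpg)}\r\n\r\n".encode())
                    self.wfile.write(jpg + b"\r\n")
                    time.sleep(1 / 25)        # target roughly 25 frames per second
            except (BrokenPipeError, ConnectionResetError):
                pass                          # the viewer closed the page

    if __name__ == "__main__":
        # Point a browser (or an <img src="http://localhost:8080/"> tag) at the server to view the stream.
        ThreadingHTTPServer(("", 8080), StreamHandler).serve_forever()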

    SVG 3D Graphical Presentation for Web-based Applications

    Due to rapid developments in computer graphics and computer hardware, web-based applications are becoming more and more powerful, and the performance gap between web-based and desktop applications is steadily narrowing. The Internet and the WWW are widely used for delivering, processing and publishing 3D data, and there is increasing demand for more and easier access to 3D content on the web. The better the browser experience, the more potential revenue web-based content can generate for providers and others. The main focus of this thesis is the design, development and implementation of a new generic 3D modelling method based on Scalable Vector Graphics (SVG) for web-based applications. While the model is initialized using classical 3D graphics, the scene model is extended using SVG. A new algorithm to present 3D graphics with SVG is proposed. This includes the definition of a 3D scene in the framework, the integration of 3D objects, cameras, transformations, light models and textures in a 3D scene, and the rendering of 3D objects on the web page, allowing the end user to interactively manipulate objects on the page. A new 3D graphics library for 3D geometric transformation and projection in the SVG GL is designed and developed. A set of primitives in the SVG GL, including the triangle, sphere, cylinder and cone, is designed and developed. A set of complex 3D models in the SVG GL, including extrusion, revolution, Bézier surfaces and point clouds, is designed and developed. New Gouraud shading and Phong shading algorithms in the SVG GL are proposed, designed and developed; these algorithms generate smooth shading and create highlights for 3D models. New texture mapping algorithms for the SVG GL, oriented towards web-based 3D modelling applications, are also proposed, designed and developed, including texture mapping algorithms for different 3D objects such as the triangle, plane, sphere, cylinder and cone. This constitutes a unique and significant contribution to the discipline of web-based 3D modelling, as well as to the process of 3D model popularization
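
    To make the core step above concrete, the following is a minimal sketch of projecting a 3D triangle with a pinhole camera and emitting it as an SVG polygon, which is the basic operation behind rendering 3D scenes with SVG. The function names, the hard-coded camera parameters and the colour are illustrative assumptions, not the SVG GL's actual API.

    import numpy as np

    def project(points, focal=300.0, center=(200.0, 200.0)):
        # Perspective-project Nx3 camera-space points onto the 2D SVG canvas.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        u = focal * x / z + center[0]
        v = -focal * y / z + center[1]        # flip y: SVG's y axis points downwards
        return np.stack([u, v], axis=1)

    def triangle_to_svg(tri3d, fill="#4a90d9"):
        # Turn one camera-space triangle into an SVG <polygon> element.
        pts = project(np.asarray(tri3d, dtype=float))
        attr = " ".join(f"{u:.1f},{v:.1f}" for u, v in pts)
        return f'<polygon points="{attr}" fill="{fill}" />'

    if __name__ == "__main__":
        tri = [[-1.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.5, 5.0]]
        body = triangle_to_svg(tri)
        print(f'<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">{body}</svg>')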

    Computational neuroanatomy of the central complex of Drosophila melanogaster

    In many different insect species, the highly conserved neuropil regions known as the central complex or central body complex have been shown to be important in behaviours such as locomotion, visual memory and courtship conditioning. The aim of this project is to generate accurate quantitative neuroanatomy of the central complex in the fruit fly Drosophila melanogaster. Much of the authoritative neuroanatomy of the fruit fly from past literature has been derived using Golgi stains, and in important cases these data are available only as 2D camera lucida drawings of the neurons and linguistic descriptions of connectivity. These cannot easily be mapped onto 3D template brains or compared directly to our own data. Using GAL4 driver and reporter constructs, some of the findings within these studies could be visualized using immunohistochemistry and confocal microscopy. A range of GAL4 driver lines were selected that had particularly prominent expression in the fan-shaped body. Images of brains from these lines were archived using a web-based 3D image stack archive developed for the sharing and backup of large confocal stacks. This is also the platform on which we publish the data, so that other researchers can reuse this catalogue and compare their results directly. Each brain was annotated using desktop-based tools for labelling neuropil regions, locating landmarks in image stacks and tracing fine neuronal processes both manually and automatically. The development of the tracing and landmark annotation tools is described, and all of the tools used in this work are available as free software. In order to compare and aggregate these data, which are from many different brains, it is necessary to register each image stack onto some standard template brain. Although this is a well-studied problem in medical imaging, these high resolution scans of the central fly brain are unusual in a number of respects. The relative effectiveness of the various methods currently available was tested on this data set. The best registrations were produced by a method that generates free-form deformations based on B-splines (the Computational Morphometry Toolkit), but for much faster registrations, the thin plate spline method based on manual landmarks may be sufficient. The annotated and registered data allow us to produce central complex template images and also files that accurately represent the possible central complex connectivity apparent in these images. One interesting result to arise from these efforts was evidence for a possible connection between the inferior region of the fan-shaped body and the beta lobe of the mushroom body, which had previously been missed in these GAL4 lines. In addition, we can identify several connections which appear to be similar to those described in [Hanesch et al., 1989], the canonical paper on the architecture of the Drosophila melanogaster central complex, and describe for the first time their variation statistically. This registered data was also used to suggest a method for classifying layers of expression within the fan-shaped body
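
    The faster landmark-based registration option mentioned above can be sketched briefly: a 3D thin plate spline (the standard biharmonic kernel U(r) = r in 3D) fitted to manually placed landmark pairs and then applied to arbitrary points of an image stack. The function names and the random example landmarks are illustrative assumptions; the thesis itself relies on dedicated tools such as the Computational Morphometry Toolkit and its landmark annotation tools rather than this code.

    import numpy as np

    def fit_tps_3d(src, dst):
        # Fit a spline f with f(src_i) = dst_i; src, dst are (n, 3) landmark arrays.
        n = len(src)
        K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)   # kernel U(r) = r
        P = np.hstack([np.ones((n, 1)), src])                            # affine part [1, x, y, z]
        A = np.zeros((n + 4, n + 4))
        A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
        b = np.zeros((n + 4, 3))
        b[:n] = dst
        coeffs = np.linalg.solve(A, b)
        return coeffs[:n], coeffs[n:]         # radial weights W, affine coefficients C

    def warp_tps_3d(points, src, W, C):
        # Apply the fitted spline to arbitrary (m, 3) points.
        U = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
        P = np.hstack([np.ones((len(points), 1)), points])
        return U @ W + P @ C

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        src = rng.uniform(0, 100, (12, 3))        # landmarks in an individual brain (assumed)
        dst = src + rng.normal(0, 2, src.shape)   # matching landmarks in the template (assumed)
        W, C = fit_tps_3d(src, dst)
        print(np.abs(warp_tps_3d(src, src, W, C) - dst).max())   # ~0: the spline interpolates the landmarks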

    Interactive computer vision through the Web

    Computer vision is the computational science that aims to reproduce and improve the ability of human vision to understand its environment. In this thesis, we focus on two fields of computer vision, namely image segmentation and visual odometry, and we show the positive impact that interactive Web applications provide on each. The first part of this thesis focuses on image annotation and segmentation. We introduce the image annotation problem and the challenges it brings for large, crowdsourced datasets. Many interactions have been explored in the literature to help segmentation algorithms. The most common consist of designating contours, drawing bounding boxes around objects, or placing interior and exterior scribbles. When crowdsourcing, annotation tasks are delegated to a non-expert public, sometimes on cheaper devices such as tablets. In this context, we conducted a user study showing the advantages of the outlining interaction over scribbles and bounding boxes. Another challenge of crowdsourcing is the distribution medium. While evaluating an interaction in a small user study does not require a complex setup, distributing an annotation campaign to thousands of potential users is a different matter. We therefore describe how the Elm programming language helped us build a reliable image annotation Web application. A tour of its main functionalities and architecture is provided, as well as a guide on how to deploy it to crowdsourcing services such as Amazon Mechanical Turk. The application is completely open source and available online. In the second part of this thesis we present our open-source direct visual odometry library. In that endeavor, we provide an evaluation of other open-source RGB-D camera tracking algorithms and show that our approach performs as well as the currently available alternatives. The visual odometry problem relies on geometry tools and optimization techniques that traditionally require much processing power to run at real-time frame rates. Since we aspire to run those algorithms directly in the browser, we review past and present technologies enabling high-performance computation on the Web. In particular, we detail how to target a new standard called WebAssembly from the C++ and Rust programming languages. Our library was written from scratch in the Rust programming language, which then allowed us to port it easily to WebAssembly. Thanks to this property, we are able to showcase a visual odometry Web application with multiple types of interaction available. A timeline enables one-dimensional navigation along the video sequence. Pairs of image points can be picked on two 2D thumbnails of the image sequence to realign cameras and correct drift. Colors are also used to identify parts of the 3D point cloud, which can be selected to reinitialize camera positions. Combining these interactions improves the tracking and 3D point reconstruction results
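
    The cost at the heart of a direct visual odometry method such as the one described above can be sketched briefly: given a reference image with depth, a candidate camera pose and the current image, compute per-pixel photometric residuals, which the tracker then minimizes over the pose. The following is a generic numpy illustration under assumed pinhole intrinsics, not the thesis' Rust/WebAssembly library.

    import numpy as np

    def photometric_residuals(I_ref, D_ref, I_cur, K, R, t):
        # Per-pixel photometric error for a candidate pose (R, t).
        # I_ref, I_cur: grayscale images; D_ref: reference depth map; K: 3x3 intrinsics (assumed).
        h, w = I_ref.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project reference pixels using their depth.
        x = (u - K[0, 2]) / K[0, 0]
        y = (v - K[1, 2]) / K[1, 1]
        P = np.stack([x * D_ref, y * D_ref, D_ref], axis=-1)            # (h, w, 3) points
        # Transform the points into the current frame and project them.
        Pc = P @ R.T + t
        uc = K[0, 0] * Pc[..., 0] / Pc[..., 2] + K[0, 2]
        vc = K[1, 1] * Pc[..., 1] / Pc[..., 2] + K[1, 2]
        valid = (Pc[..., 2] > 0) & (uc >= 0) & (uc < w - 1) & (vc >= 0) & (vc < h - 1)
        # Bilinear interpolation of the current image at the projected positions.
        u0 = np.clip(np.floor(uc).astype(int), 0, w - 2)
        v0 = np.clip(np.floor(vc).astype(int), 0, h - 2)
        du, dv = np.clip(uc - u0, 0, 1), np.clip(vc - v0, 0, 1)
        I_warp = ((1 - du) * (1 - dv) * I_cur[v0, u0] + du * (1 - dv) * I_cur[v0, u0 + 1] +
                  (1 - du) * dv * I_cur[v0 + 1, u0] + du * dv * I_cur[v0 + 1, u0 + 1])
        return np.where(valid, I_warp - I_ref, 0.0)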

    Enhanced computer assisted detection of polyps in CT colonography

    This thesis presents a novel technique for automatically detecting colorectal polyps in computed tomography colonography (CTC). The objective of the documented computer assisted diagnosis (CAD) technique is to deal with the issue of false positive detections without adversely affecting polyp detection sensitivity. The thesis begins with an overview of CTC and a review of the associated research areas, with particular attention given to CAD-CTC. This review identifies excessive false positive detections as a common problem associated with current CAD-CTC techniques. Addressing this problem constitutes the major contribution of this thesis. The documented CAD-CTC technique is trained with, and evaluated using, a series of clinical CTC data sets. These data sets contain polyps with a range of different sizes and morphologies. The results presented in this thesis indicate the validity of the developed CAD-CTC technique and demonstrate its effectiveness in accurately detecting colorectal polyps while significantly reducing the number of false positive detections