
    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes, removing the undesired torque and bending using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the marine biologist expert user. We have developed Unwind in collaboration with a team of marine biologists: our system has been deployed in their labs, and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.

    A New Image Quantitative Method for Diagnosis and Therapeutic Response

    Accurate quantitative information about tumor/lesion volume plays a critical role in diagnosis and treatment assessment. Current clinical practice emphasizes efficiency but sacrifices accuracy (bias and precision). On the other hand, many computational algorithms focus on improving accuracy but are often time-consuming and cumbersome to use, and most of them lack validation studies on real clinical data. All of this hinders the translation of these advanced methods from bench to bedside. In this dissertation, I present a user-interactive image application to rapidly extract accurate quantitative information about abnormalities (tumors/lesions) from multi-spectral medical images, such as measuring brain tumor volume from MRI. This is enabled by a GPU level set method, an intelligent algorithm that learns image features from user inputs, and a simple and intuitive graphical user interface with 2D/3D visualization. In addition, a comprehensive workflow is presented to validate image quantitative methods for clinical studies. This application has been evaluated and validated in multiple cases, including quantifying healthy brain white matter volume from MRI and brain lesion volume from CT or MRI. The evaluation studies show that this application achieves results comparable to state-of-the-art computer algorithms. More importantly, the retrospective validation study on measuring intracerebral hemorrhage volume from CT scans demonstrates that the measurement attributes are not only superior to the current practice method in terms of bias and precision but are also achieved without a significant delay in acquisition time. In other words, it could be useful in clinical trials and clinical practice, especially when intervention and prognostication rely upon an accurate baseline lesion volume or upon detecting change in serial lesion volumetric measurements.
    This application is also useful for biomedical research areas that require accurate quantitative information about anatomies from medical images. In addition, the morphological information is retained, which is useful for research that requires an accurate delineation of anatomic structures, such as surgery simulation and planning.
    Dissertation/Thesis: Doctoral Dissertation, Biomedical Informatics, 201
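A level set evolution of the kind mentioned can be illustrated with a crude CPU toy: a threshold-driven speed function expands a front from a user seed wherever intensities stay within a tolerance of a target value. This is a hedged sketch, not the dissertation's GPU method; the speed function, parameters, and non-upwind update are all simplifications for demonstration only.

```python
import numpy as np

def grad_mag(phi):
    # Central-difference gradient magnitude (crude, non-upwind; demo only).
    gy, gx = np.gradient(phi)
    return np.sqrt(gx ** 2 + gy ** 2)

def evolve(image, seed, target, tol, steps=60, dt=0.5):
    phi = np.full(image.shape, 1.0)    # implicit front: phi < 0 means "inside"
    phi[seed] = -1.0
    # Speed is positive where the intensity is within `tol` of `target`,
    # so the front grows inside the lesion and shrinks outside it.
    speed = tol - np.abs(image - target)
    for _ in range(steps):
        phi = phi - dt * speed * grad_mag(phi)
    return phi < 0                     # final binary segmentation mask

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                  # synthetic bright "lesion"
mask = evolve(img, seed=(16, 16), target=1.0, tol=0.25)
print(int(mask.sum()), "pixels segmented")
```

A production implementation would use an upwind scheme, periodic reinitialization of phi, and a learned speed function, which is where the GPU parallelism pays off.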

    ImageSpirit: Verbal Guided Image Parsing

    Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this paper, we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that can possibly be used to interact with new-generation devices (e.g. smartphones, Google Glass, living room devices). We demonstrate our system on a large number of real-world images with varying complexity. To help understand the tradeoffs compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study. Comment: http://mmcheng.net/imagespirit
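The output representation described above, one object label per pixel plus independent attribute labels, can be sketched as follows. The scores, label names, and threshold are illustrative stand-ins, not the paper's learned model or its inference procedure.

```python
import numpy as np

objects = ["wall", "chair", "floor"]       # noun labels (illustrative)
attributes = ["wooden", "textured"]        # adjective labels (illustrative)

rng = np.random.default_rng(0)
h, w = 4, 4
obj_scores = rng.random((h, w, len(objects)))      # stand-in for model scores
attr_scores = rng.random((h, w, len(attributes)))

obj_map = obj_scores.argmax(axis=-1)       # exactly one object per pixel
attr_map = attr_scores > 0.5               # any number of attributes per pixel

# A verbal query like "wooden chair" becomes an intersection of the two maps.
sel = (obj_map == objects.index("chair")) & attr_map[..., attributes.index("wooden")]
print(int(sel.sum()), "pixels match 'wooden chair'")
```

This is why nouns and adjectives make natural handles: a spoken phrase maps directly onto boolean operations over the per-pixel label maps.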

    Techniques and software tool for 3D multimodality medical image segmentation

    The era of noninvasive diagnostic radiology and image-guided radiotherapy has witnessed burgeoning interest in applying different imaging modalities to stage and localize complex diseases such as atherosclerosis or cancer. It has been observed that using complementary information from multimodality images often significantly improves the robustness and accuracy of target volume definitions in radiotherapy treatment of cancer. In this work, we present techniques and an interactive software tool to support this new framework for 3D multimodality medical image segmentation. To demonstrate this methodology, we have designed and developed a dedicated open-source software tool for multimodality image analysis, MIASYS. The software tool aims to provide a needed solution for 3D image segmentation by integrating automatic algorithms, manual contouring methods, image pre-processing filters, post-processing procedures, user-interactive features and evaluation metrics. The presented methods and the accompanying software tool have been successfully evaluated for different radiation therapy and diagnostic radiology applications.
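As an illustration of the kind of evaluation metric such a tool integrates, here is one common overlap measure, the Dice coefficient, applied to a toy automatic-vs-manual mask pair. MIASYS's actual metric set is not specified in the abstract, so this is a generic example.

```python
import numpy as np

def dice(a, b):
    """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True                      # automatic contour: 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True                    # manual contour: 36 pixels, shifted
print(round(dice(auto, manual), 4))        # overlap is 5x5 = 25 → 50/72 ≈ 0.6944
```

A Dice value of 1.0 means perfect agreement; values above roughly 0.7 are often taken as acceptable overlap between automatic and manual contours.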

    Modeling and visualization of medical anesthesiology acts

    Dissertation for obtaining the Master's degree in Computer Engineering. In recent years, medical visualization has evolved from simple 2D images on a light board to 3D computerized images. This move has enabled doctors to find better ways of planning surgery and to diagnose patients. Although there is a great variety of 3D medical imaging software, it falls short when dealing with anesthesiology acts. Very little anaesthesia-related work has been done; as a consequence, doctors and medical students have had little support to study the subject of anesthesia in the human body. We are all aware of how costly setting up medical experiments can be, covering not just medical aspects but ethical and financial ones as well. With this work we hope to contribute to better medical visualization tools in the area of anesthesiology. Doctors, and in particular medical students, should be able to study anesthesiology acts more efficiently: to identify better locations to administer the anesthesia, to study how long the anesthesia takes to affect patients, to relate the effect on patients to the quantity of anaesthesia provided, etc. In this work, we present a medical visualization prototype with three main functionalities: image pre-processing, segmentation and rendering. The image pre-processing is mainly used to remove noise from images obtained via imaging scanners. In the segmentation stage, it is possible to identify relevant anatomical structures using proper segmentation algorithms. As a proof of concept, we focus our attention on the lumbosacral region of the human body, with data acquired via MRI scanners. The segmentation we provide relies mostly on two algorithms: region growing and level sets. The outcome of the segmentation is a 3D model of the anatomical structure under analysis. As for the rendering, the 3D models are visualized using the marching cubes algorithm.
    The software we have developed also supports time-dependent data; hence, we could represent the anesthesia flowing in the human body. Unfortunately, we were not able to obtain such data for testing, but we have used human lung data to validate this functionality.
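Of the two segmentation algorithms mentioned, region growing is simple enough to sketch directly: starting from a seed, flood-fill all connected voxels whose intensity stays within a tolerance of the seed's. The 2-D toy below is illustrative, not the prototype's code; real use would be 3-D with 6- or 26-connectivity.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Flood-fill all 4-connected pixels within `tol` of the seed intensity."""
    mask = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                        # bright square = target structure
print(int(region_grow(img, seed=(3, 3), tol=0.5).sum()))  # → 16
```

The grown mask is then meshed (e.g. with marching cubes, as in the rendering stage) to produce the 3D model of the structure.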

    Modeling and rendering for development of a virtual bone surgery system

    A virtual bone surgery system is developed to provide the potential of a realistic, safe, and controllable environment for surgical education. It can be used for training in orthopedic surgery, as well as for planning and rehearsal of bone surgery procedures...Using the developed system, the user can perform virtual bone surgery by simultaneously seeing bone material removal through a graphic display device, feeling the force via a haptic device, and hearing the sound of tool-bone interaction --Abstract, page iii

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform procedures. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the daVinci-si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this large stream of data in real time on a bedside PC (a single- or dual-node setup) may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time the frame is displayed, i.e., 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes and transfers images through the message passing interface (MPI).
    Our segmentation and registration algorithms achieved acceleration factors of approximately 2x and 8x, respectively. To achieve a higher frame rate, we also resized images, reducing the overall processing time. As a result, using a high-speed network to access computing clusters with GPUs and implementing these algorithms in parallel can improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
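The frame-budget arithmetic in the abstract can be checked directly. The frame size, frame rate, and stream rate come from the text; the 10 Gb/s link speed used for the transfer estimate is an assumption for illustration.

```python
fps = 30
frame_mb = 11.9                      # one 1080p frame, per the abstract
budget_ms = 1000.0 / fps             # ~33.3 ms round-trip budget per frame
stream_mb_s = frame_mb * fps         # ~357 MB/s, matching the ~360 MB/s quoted

# Assumed 10 Gb/s link: moving one frame each way costs roughly
link_gbps = 10
transfer_ms = 2 * frame_mb * 8 / link_gbps  # MB -> Mb, there and back
compute_ms = budget_ms - transfer_ms        # what remains for the algorithm

print(f"budget {budget_ms:.1f} ms, transfer {transfer_ms:.1f} ms, "
      f"compute {compute_ms:.1f} ms")
```

Even on a fast assumed link, network transfer consumes more than half the per-frame budget, which is why resizing frames and parallelizing across nodes matters.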

    Enhancement of virtual colonoscopy system.

    Colorectal cancer is the fourth most common cancer, and the fourth leading cause of cancer-related death, in the United States. It is also one of the most preventable cancers, provided an individual undergoes regular screening. For years, colonoscopy via colonoscope was the only method for colorectal cancer screening. In the past decade, colonography, or virtual colonoscopy (VC), has become an alternative (or supplement) to traditional colonoscopy. VC has been a much-researched topic since its introduction in the mid-nineties, and various visualization methods have been introduced, including traditional flythrough, colon flattening, and unfolded-cube projection. In recent years, the CVIP Lab has introduced a patented visualization method for VC called flyover. This novel visualization method provides complete visualization of the large intestine without significant modification of the rendered three-dimensional model. In this thesis, a CVIP Lab VC interface was developed using Lab software to segment, extract the centerline of, split (for flyover), and visualize the large intestine. The system includes adaptive level set software to perform large intestine segmentation and the CVIP Lab's patented curve skeletons software to extract the large intestine centerline. This software suite has not been combined in this manner before, so the system stands as a unique contribution to the CVIP Lab colon project, and it is a novel VC pipeline when compared to other academic and commercial VC methods. The complete system is capable of segmenting, finding the centerline of, splitting, and visualizing a large intestine with a limited number of slices (~350 slices) in approximately four and a half minutes. Complete CT scans were also validated with the centerline extraction performed external to the system, since the curve skeletons code used for centerline extraction causes memory exceptions because of its high memory utilization.
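The "split" step of the pipeline can be sketched with a deliberately simplified stand-in (the actual flyover splitting is patented and not reproduced here): assign each surface point to one of two halves by the sign of its offset, along a fixed up-vector, from its nearest centerline point, so that each half can be rendered separately for flyover.

```python
import numpy as np

def split_halves(points, centerline, up=np.array([0.0, 1.0, 0.0])):
    # Simplified illustrative split, not the CVIP Lab's patented method.
    halves = ([], [])
    for p in points:
        nearest = centerline[np.argmin(np.linalg.norm(centerline - p, axis=1))]
        side = int(np.dot(p - nearest, up) >= 0)   # 1 = "upper" half
        halves[side].append(p)
    return halves

# Toy colon: a straight tube of surface points around an x-axis centerline.
center = np.array([[x, 0.0, 0.0] for x in range(5)])
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
tube = np.array([[x, np.sin(t), np.cos(t)] for x in range(5) for t in theta])
lower, upper = split_halves(tube, center)
print(len(lower), "and", len(upper), "points in the two halves")
```

For a real, curved colon the up-vector must be transported along the centerline rather than held fixed, which is one of the details the patented method handles.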