
    The Digital Anatomist Information System and Its Use in the Generation and Delivery of Web-Based Anatomy Atlases

    Advances in network and imaging technology, coupled with the availability of 3-D datasets such as the Visible Human, provide a unique opportunity for developing information systems in anatomy that can deliver relevant knowledge directly to the clinician, researcher or educator. A software framework is described for developing such a system within a distributed architecture that includes spatial and symbolic anatomy information resources, Web and custom servers, and authoring and end-user client programs. The authoring tools have been used to create 3-D atlases of the brain, knee and thorax that are used both locally and throughout the world. For the one and a half year period from June 1995 to January 1997, the on-line atlases were accessed by over 33,000 sites from 94 countries, with an average of over 4000 "hits" per day, and 25,000 hits per day during peak exam periods. The atlases have been linked to by over 500 sites, and have received at least six unsolicited awards from outside rating institutions. The flexibility of the software framework has allowed the information system to evolve with advances in technology and representation methods. Possible new features include knowledge-based image retrieval and tutoring, dynamic generation of 3-D scenes, and eventually, real-time virtual reality navigation through the body. Such features, when coupled with other on-line biomedical information resources, should lead to interesting new ways of managing and accessing structural information in medicine.

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform the procedure. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the daVinci-si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, illustrating a trend toward ever-larger data volumes in medicine, driven primarily by higher-resolution video cameras and imaging equipment. Real-time processing of this large stream of data on a bedside PC (a single- or dual-node setup) may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time the frame is displayed, i.e. 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for performing complex tasks as well as for minimizing risk to the patient. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes and transfers images through the message passing interface (MPI). Our segmentation and registration algorithms achieved acceleration factors of roughly 2× and 8×, respectively. To achieve a higher frame rate, we also resized images and reduced the overall processing time. As a result, using a high-speed network to access GPU-equipped computing clusters that run these algorithms in parallel can improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
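
    The data-rate figures above can be checked with a quick back-of-envelope calculation. The sketch below assumes the stereo endoscope delivers two uncompressed 1920x1080 RGB frames (8 bits per channel) at 30 fps; this pixel format is an assumption made for illustration, not a specification taken from the abstract.

```python
# Back-of-envelope check of the data rates quoted above.
# Assumption: two uncompressed 1920x1080 RGB frames (8 bits per channel)
# per stereo capture, at 30 fps. These parameters are illustrative.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3      # 8-bit RGB
VIEWS = 2                # left and right channels of a stereo endoscope
FPS = 30

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL * VIEWS
frame_mib = frame_bytes / 2**20        # ~11.9 MiB per stereo frame
stream_mib_s = frame_mib * FPS         # ~356 MiB/s, i.e. roughly 360 MB/s
budget_ms = 1000 / FPS                 # ~33 ms to process and return a frame

print(f"frame size:       {frame_mib:.1f} MiB")
print(f"stream rate:      {stream_mib_s:.0f} MiB/s")
print(f"per-frame budget: {budget_ms:.1f} ms")
```

    Under these assumptions each stereo frame is about 11.9 MiB and the stream approaches 360 MB per second, matching the figures quoted in the abstract, with roughly 33 ms available to process and return each frame.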

    The Cardiac Atlas Project—an imaging database for computational modeling and statistical atlases of the heart

    MOTIVATION: Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models are dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. RESULTS: Three main open-source software components were developed: (i) a database with a web interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image-specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt).
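
    As a rough illustration of the kind of image-specific DICOM attributes that the extended Dcm4chee archive exposes as search parameters, the sketch below reads a handful of standard DICOM tags from a local file with pydicom. The file name and the particular attributes are assumptions chosen for illustration; this is not the CAP web interface or its actual query API.

```python
# Illustrative sketch: inspecting image-specific DICOM attributes of the
# kind that could serve as search parameters. Uses pydicom on a local file;
# the file name and chosen tags are assumptions, not CAP's actual interface.
import pydicom

ds = pydicom.dcmread("example_cardiac_cine.dcm")  # hypothetical file name

attributes = {
    "PatientID":         ds.get("PatientID", ""),
    "Modality":          ds.get("Modality", ""),           # e.g. "MR"
    "SeriesDescription": ds.get("SeriesDescription", ""),
    "SliceThickness":    ds.get("SliceThickness", None),
    "PixelSpacing":      ds.get("PixelSpacing", None),
}

for name, value in attributes.items():
    print(f"{name}: {value}")
```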

    Grid Analysis of Radiological Data

    IGI-Global Medical Information Science Discoveries Research Award 2009.
    Grid technologies and infrastructures can contribute to harnessing the full power of computer-aided image analysis into clinical research and practice. Given the volume of data, the sensitivity of medical information, and the joint complexity of medical datasets and computations expected in clinical practice, the challenge is to fill the gap between the grid middleware and the requirements of clinical applications. This chapter reports on the goals, achievements and lessons learned from the AGIR (Grid Analysis of Radiological Data) project. AGIR addresses this challenge through a combined approach. On the one hand, leveraging the grid middleware through core grid medical services (data management, responsiveness, compression, and workflows) targets the requirements of medical data processing applications. On the other hand, grid-enabling a panel of applications ranging from algorithmic research to clinical use cases both exploits and drives the development of these services.

    Segmentation and Informatics in Multidimensional Fluorescence Optical Microscopy Images

    Recent advances in the field of optical microscopy have enabled scientists to observe and image complex biological processes across a wide range of spatial and temporal resolutions, resulting in an exponential increase in optical microscopy data. Manual analysis of such large volumes of data is extremely time consuming and often impossible if the changes cannot be detected by the human eye. Naturally, it is essential to design robust, accurate and high-performance image processing and analysis tools to extract biologically significant results. Furthermore, the presentation of the results to the end user, post analysis, is an equally challenging issue, especially when the data (and/or the hypothesis) involves several spatial/hierarchical scales (e.g., tissues, cells, (sub)-nuclear components). This dissertation concentrates on a subset of such problems: robust edge detection, automatic nuclear segmentation and selection in multi-dimensional tissue images, spatial analysis of gene localization within the cell nucleus, information visualization, and the development of a computational framework for efficient and high-throughput processing of large datasets. Initially, we developed 2D nuclear segmentation and selection algorithms that support an integrated approach for determining the preferential spatial localization of certain genes within the cell nucleus, which is emerging as a promising technique for the diagnosis of breast cancer. Quantification requires accurate segmentation of 100 to 200 cell nuclei in each patient tissue sample in order to draw a statistically significant result. Thus, for large-scale analysis involving hundreds of patients, manual processing is too time consuming and subjective. We developed an integrated workflow that selects, following 2D automatic segmentation, a sub-population of accurately delineated nuclei for positioning of fluorescence in situ hybridization labeled genes of interest in tissue samples. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all 4 normal cases, and all 5 non-cancerous breast disease cases, showing the accuracy and robustness of the proposed approach. As a natural progression from the 2D analysis algorithms to 3D, we first developed a robust and accurate probabilistic edge detection method for 3D tissue samples, since several downstream analysis procedures such as segmentation and tracking rely on the performance of edge detection. The method, based on multiscale and multi-orientation steps, surpasses several conventional edge detectors in performance. Subsequently, given an appropriate edge measure, we developed an optimal graph cut-based 3D nuclear segmentation technique for samples where the cell nuclei are volume or surface labeled. It poses the problem as one of finding a minimal closure in a directed graph and solves it efficiently using the max-flow/min-cut algorithm. Both interactive and automatic versions of the algorithm were developed. The algorithm outperforms, in terms of three metrics commonly used to evaluate segmentation algorithms, a recently reported geodesic distance transform-based 3D nuclear segmentation method, which in turn was reported to outperform several other popular tools that segment 3D nuclei in tissue samples.
    Finally, to apply some of the aforementioned methods to large microscopic datasets, we developed a user-friendly computing environment called MiPipeline, which supports high-throughput data analysis, data and process provenance, visual programming, and seamlessly integrated information visualization of hierarchical biological data. The computational part of the environment is based on the LONI Pipeline distributed computing server, and the interactive information visualization makes use of several JavaScript-based libraries to visualize an XML-based backbone file populated with essential metadata and results.
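
    The graph cut-based segmentation described above rests on the classical s-t minimum-cut formulation. The sketch below applies that idea to a toy one-dimensional intensity profile using networkx; the intensity-based terminal and neighbor weights are simplified placeholders chosen for illustration, not the dissertation's multiscale edge measure or its minimal-closure construction.

```python
# Toy illustration of binary segmentation by s-t minimum cut, in the spirit
# of the graph cut formulation above. The 1-D "image" and the simple
# intensity-based weights are placeholders, not the actual edge measure.
import networkx as nx

intensities = [0.10, 0.20, 0.15, 0.80, 0.90, 0.85]   # toy 1-D image
SMOOTHNESS = 2.0                                      # assumed pairwise weight

G = nx.DiGraph()
for i, v in enumerate(intensities):
    # Terminal links: cutting s->i puts pixel i on the sink (background)
    # side; cutting i->t keeps it on the source (foreground) side.
    G.add_edge("s", i, capacity=v)
    G.add_edge(i, "t", capacity=1.0 - v)
for i in range(len(intensities) - 1):
    # Neighbor links: similar adjacent pixels are expensive to separate.
    w = SMOOTHNESS * (1.0 - abs(intensities[i] - intensities[i + 1]))
    G.add_edge(i, i + 1, capacity=w)
    G.add_edge(i + 1, i, capacity=w)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
foreground = sorted(p for p in source_side if p != "s")
print("foreground pixels:", foreground)   # expected: [3, 4, 5]
print("cut value:", round(cut_value, 3))
```

    With these toy weights the minimum cut separates the three bright pixels from the three dark ones; the dissertation's method works on the same principle but in 3D and with a learned, multiscale edge measure defining the capacities.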

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments on mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and to transition seamlessly from the physical world to a mixed world with digital entities. These systems use MAR devices to support user experiences that provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (37 in total) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.