
    Distributed Object Medical Imaging Model

    Abstract: Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and the emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a suite of multimedia telemedicine applications developed for use by medical organizations. The applications support real-time exchange of patient data, image files, and audio and video diagnosis annotations. DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common Object Request Broker Architecture (CORBA), Java Database Connectivity (JDBC), and the Java language provide the capability to combine the DOMIM resources into an integrated, interoperable, and scalable system. The underlying technologies, including IDL, ORB, Event Service, IIOP, JDBC/ODBC, legacy system wrapping, and Java implementation, are explored. This paper explores a distributed collaborative CORBA/JDBC-based framework that will enhance medical information management requirements and development. It encompasses a new paradigm for the delivery of health services that requires process reengineering as well as cultural and organizational changes.
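    As a rough illustration of the JDBC side of such a stack, the sketch below queries image metadata for one patient, as a DOMIM-style server object might before streaming pixel data over CORBA. The table and column names, the data source name, and the sample patient ID are assumptions made for illustration, not details taken from the DOMIM implementation; the JDBC/ODBC bridge URL mirrors the JDBC/ODBC pairing named in the abstract (modern JVMs would use a vendor JDBC driver instead).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical sketch: fetching image metadata for a patient through JDBC.
// Table and column names (image_store, patient_id, modality, file_path) are
// illustrative assumptions, not taken from the paper.
public class ImageMetadataDao {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:odbc:domim";  // JDBC/ODBC bridge, echoing the paper's stack
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT modality, file_path FROM image_store WHERE patient_id = ?")) {
            ps.setString(1, "P-000123");  // sample patient ID (assumed)
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("modality") + " -> "
                            + rs.getString("file_path"));
                }
            }
        }
    }
}
```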

    Grid Analysis of Radiological Data

    Grid technologies and infrastructures can contribute to harnessing the full power of computer-aided image analysis in clinical research and practice. Given the volume of data, the sensitivity of medical information, and the joint complexity of medical datasets and computations expected in clinical practice, the challenge is to fill the gap between grid middleware and the requirements of clinical applications. This chapter reports on the goals, achievements, and lessons learned from the AGIR (Grid Analysis of Radiological Data) project. AGIR addresses this challenge through a combined approach. On one hand, leveraging the grid middleware through core grid medical services (data management, responsiveness, compression, and workflows) targets the requirements of medical data processing applications. On the other hand, grid-enabling a panel of applications ranging from algorithmic research to clinical use cases both exploits and drives the development of these services.
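    The abstract names compression among AGIR's core grid medical services. As a minimal sketch of what such a service might do before transferring radiological data, the block below losslessly compresses one image tile with java.util.zip; the codec choice, tile size, and interface are all assumptions, since the abstract does not specify AGIR's actual implementation.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Minimal stand-in for a grid "compression service": losslessly deflate an
// image tile before transfer. AGIR's real codecs and APIs are not given here.
public class TileCompressor {
    public static byte[] compress(byte[] pixels) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(pixels);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));  // drain until done
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] tile = new byte[512 * 512];  // stand-in for one 8-bit image tile
        System.out.println("compressed size: " + compress(tile).length);
    }
}
```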

    Preparing CT imaging datasets for deep learning in lung nodule analysis: Insights from four well-known datasets

    Background: Deep learning is an important means to realize the automatic detection, segmentation, and classification of pulmonary nodules in computed tomography (CT) images. An entire CT scan cannot be used directly by deep learning models due to image size, image format, image dimensionality, and other factors. Between the acquisition of the CT scan and feeding the data into the deep learning model, there are several steps, including data use permission, data access and download, data annotation, and data preprocessing. This paper aims to provide a complete and detailed guide for researchers who want to engage in interdisciplinary lung nodule research combining CT images and Artificial Intelligence (AI) engineering. Methods: The data preparation pipeline used the following four popular large-scale datasets: LIDC-IDRI (Lung Image Database Consortium image collection), LUNA16 (Lung Nodule Analysis 2016), NLST (National Lung Screening Trial), and NELSON (the Dutch-Belgian Randomized Lung Cancer Screening Trial). The dataset preparation is presented in chronological order. Findings: The different data preparation steps required before deep learning were identified, including both generic steps and steps dedicated to lung nodule research. For each of these steps, the required process, its necessity, and example code or tools for actual implementation are provided. Discussion and conclusion: Depending on the specific research question, researchers should be aware of the various preparation steps required and carefully select datasets, data annotation methods, and image preprocessing methods. Moreover, it is vital to acknowledge that each auxiliary tool or code has its specific scope of use and limitations. This paper proposes a standardized data preparation process while clearly demonstrating the principles and sequence of the different steps. A data preparation pipeline can be realized quickly by following the proposed steps and using the suggested example codes and tools.
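    One concrete preprocessing step common to pipelines like these is rescaling stored CT voxel values to Hounsfield units (HU) using the per-scan DICOM rescale slope and intercept, then clipping and normalizing to an intensity window before model input. The sketch below is illustrative rather than the paper's published code: the slope/intercept values come from DICOM tags of the specific scan, and the [-1000, 400] HU lung window is a conventional choice, not one prescribed by the paper.

```java
// Illustrative CT preprocessing: raw values -> Hounsfield units -> [0, 1].
public class CtPreprocess {
    public static float[] toNormalizedHu(short[] raw, double slope, double intercept) {
        final double loHu = -1000.0, hiHu = 400.0;       // assumed lung window
        float[] out = new float[raw.length];
        for (int i = 0; i < raw.length; i++) {
            double hu = raw[i] * slope + intercept;      // DICOM rescale to HU
            hu = Math.max(loHu, Math.min(hiHu, hu));     // clip to window
            out[i] = (float) ((hu - loHu) / (hiHu - loHu)); // scale to [0, 1]
        }
        return out;
    }

    public static void main(String[] args) {
        short[] slice = { 0, 500, 1500 };                // toy stored values
        // Slope 1.0 and intercept -1024.0 are typical CT tag values (assumed).
        for (float v : toNormalizedHu(slice, 1.0, -1024.0)) {
            System.out.println(v);
        }
    }
}
```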

    Determination of critical factors for fast and accurate 2D medical image deformation

    The advent of medical imaging technology enabled physicians to study patient anatomy non-invasively and revolutionized the medical community. As medical images have become digitized and their resolution has increased, software has been developed that allows physicians to explore their patients' image studies in an increasing number of ways, including viewing and exploring reconstructed three-dimensional models. Although this has been a boon to radiologists, who specialize in interpreting medical images, few software packages exist that provide fast and intuitive interaction for other physicians. In addition, although the users of these applications can view their patient data as it was at the time the scan was taken, the placement of the tissues during a surgical intervention is often different due to the position of the patient and the methods used to provide a better view of the surgical field. None of the commonly available medical image packages allow users to predict the deformation of the patient's tissues under those surgical conditions. This thesis analyzes the performance and accuracy of a less computationally intensive yet physically based deformation algorithm: the extended ChainMail algorithm. The proposed method allows users to load DICOM images from medical image studies, interactively classify the tissues in those images according to their properties under deformation, deform the tissues in two dimensions, and visualize the result. The method was evaluated using data provided by the Truth Cube experiment, in which a phantom made of material with properties similar to liver under deformation was placed under varying amounts of uniaxial strain. CT scans were taken before and after the deformations. The deformation was performed on a single DICOM image from the study that had been manually classified, as well as on data sets generated from that original image. These generated data sets were ideally segmented versions of the phantom images, scaled to varying fidelities in order to evaluate the effect of image size on the algorithm's accuracy and execution time. Two variations of the extended ChainMail algorithm parameters were also implemented for each of the generated data sets in order to examine the effect of the parameters. The resultant deformations were compared with the actual deformations as determined by the Truth Cube experimenters. For both variations of the algorithm parameters, the predicted deformations at 5% uniaxial strain had an RMS error of a similar order of magnitude to the errors in a finite element analysis performed by the Truth Cube experimenters for the deformations at 18.25% strain. The average error was reduced by approximately 10-20% for the lower-fidelity data sets through the use of one of the parameter schemes, although the benefit decreased as the image size increased. When the algorithm was evaluated under 18.25% strain, the average errors were more than 8 times those of the finite element analysis. Qualitative analysis of the deformed images indicated differing degrees of accuracy across the ideal image set, with the largest displacements estimated closer to the initial point of deformation. This is hypothesized to be a result of the order in which deformation was processed for points in the image. The algorithm execution time was examined for the varying generated image fidelities. For a generated image that was approximately 18.5% of the size of the tissue in the original image, the execution time was less than 15 seconds; in comparison, the processing time for the full-scale image was over 3 hours. The analysis of the extended ChainMail algorithm for use in medical image deformation emphasizes the importance of the choice of algorithm parameters to the accuracy of the deformations, and of data set size to the processing time.
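    To make the propagation idea behind ChainMail concrete, the sketch below implements a deliberately simplified one-dimensional variant: displacing one point pushes its neighbors only as far as needed to restore minimum/maximum spacing constraints, with violations propagated outward through a queue. The 1D setting and the constraint values are illustrative assumptions; the thesis's extended algorithm operates on 2D images with per-tissue deformation properties.

```java
import java.util.ArrayDeque;

// Simplified 1D ChainMail: constraint violations propagate outward from the
// moved point, and propagation stops as soon as all gaps are legal again.
public class ChainMail1D {
    public static void deform(double[] x, int moved, double newPos,
                              double minGap, double maxGap) {
        x[moved] = newPos;
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.add(moved);
        while (!queue.isEmpty()) {
            int i = queue.poll();
            if (i + 1 < x.length) {                 // check right neighbor
                double gap = x[i + 1] - x[i];
                if (gap < minGap)      { x[i + 1] = x[i] + minGap; queue.add(i + 1); }
                else if (gap > maxGap) { x[i + 1] = x[i] + maxGap; queue.add(i + 1); }
            }
            if (i - 1 >= 0) {                       // check left neighbor
                double gap = x[i] - x[i - 1];
                if (gap < minGap)      { x[i - 1] = x[i] - minGap; queue.add(i - 1); }
                else if (gap > maxGap) { x[i - 1] = x[i] - maxGap; queue.add(i - 1); }
            }
        }
    }

    public static void main(String[] args) {
        double[] x = { 0, 1, 2, 3, 4 };
        deform(x, 0, 2.5, 0.5, 1.5);                // push leftmost point right
        for (double p : x) System.out.printf("%.2f ", p);
        // prints: 2.50 3.00 3.50 4.00 4.50
    }
}
```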

    Linking Whole-Slide Microscope Images with DICOM by Using JPEG2000 Interactive Protocol

    The use of digitized histopathologic specimens, also known as whole-slide images (WSIs), in clinical medicine requires compatibility with the Digital Imaging and Communications in Medicine (DICOM) standard. Unfortunately, WSIs usually exceed the DICOM image object size limit, making it impossible to store and exchange them in a straightforward way. Moreover, transmitting an entire DICOM image for viewing is ineffective for WSIs. With the JPEG2000 Interactive Protocol (JPIP), WSIs can be linked with DICOM by transmitting image data over an auxiliary connection, separate from patient data. In this study, we explored the feasibility of using JPIP to link JPEG2000 WSIs with a DICOM-based Picture Archiving and Communications System (PACS). We first modified an open-source DICOM library by adding support for JPIP as described in the existing DICOM Supplement 106. Second, the modified library was used as the basis for a software package (JVSdicom), which provides a proof of concept for a DICOM client–server system that can transmit patient data, conventional DICOM imagery (e.g., radiological), and JPIP-linked JPEG2000 WSIs. The software package consists of a compression application (JVSdicom Compressor) for producing DICOM-compatible JPEG2000 WSIs, a DICOM PACS server application (JVSdicom Server), and a DICOM PACS client application (JVSdicom Workstation). JVSdicom is available for free from our Web site (http://jvsmicroscope.uta.fi/), which also features a public JVSdicom Server containing example X-ray images and histopathology WSIs of breast cancer cases. The software developed indicates that JPEG2000 and JPIP provide a well-working solution for linking WSIs with DICOM, requiring only minor modifications to the current DICOM standard specification.
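    To illustrate the interaction pattern JPIP enables, the sketch below requests a single view window (region and resolution) of a JPEG2000 slide over HTTP, which is why a viewer never needs the full WSI. The server URL and target name are hypothetical; fsiz, roff, and rsiz are the standard JPIP view-window request fields from ISO/IEC 15444-9, though the exact parameters a JVSdicom server expects are not stated in this abstract.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hedged sketch of a JPIP view-window request: fetch only the region and
// resolution currently being viewed, not the whole slide.
public class JpipRegionFetch {
    public static void main(String[] args) throws Exception {
        String query = "target=slide001.jp2"   // hypothetical slide name
                + "&fsiz=1024,1024"            // requested full-frame resolution
                + "&roff=256,256"              // region offset within that frame
                + "&rsiz=512,512";             // region size in pixels
        URL url = new URL("http://example.org/jpip?" + query);  // hypothetical server
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int total = 0, n;
            while ((n = in.read(buf)) > 0) total += n;  // incremental codestream data
            System.out.println("received " + total + " bytes of codestream data");
        }
    }
}
```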