683 research outputs found

    Improved detection and characterization of obscured central gland tumors of the prostate: texture analysis of non contrast and contrast enhanced MR images for differentiation of benign prostate hyperplasia (BPH) nodules and cancer

    OBJECTIVE: The purpose of this study was to assess the value of texture analysis (TA) for prostate cancer (PCa) detection on T2-weighted images (T2WI) and dynamic contrast-enhanced (DCE) images by differentiating between PCa and benign prostatic hyperplasia (BPH). MATERIALS & METHODS: This study used 10 retrospective MRI data sets acquired from men with confirmed PCa. The prostate region of interest (ROI) was delineated by an expert on the MRI data sets using an automated prostate capsule segmentation scheme. A statistical significance test was used as the feature selection scheme for optimal differentiation of PCa from BPH on MR images. In pre-processing, the T2WI underwent bias correction and all image intensities were standardized to a representative template; the DCE images underwent bias correction and were registered to time point 1 for each patient. Following pre-processing, texture features were extracted from the ROI and analyzed. The extracted texture features were: intensity mean and standard deviation, Sobel (edge detection), Haralick features, and Gabor features. RESULTS: In T2WI, statistically significant differences were observed in Haralick features. In DCE images, statistically significant differences were observed in mean intensity, Sobel, Gabor, and Haralick features. CONCLUSION: BPH is better differentiated in DCE images than in T2WI. The statistically significant features may be combined to build a BPH vs. cancer detection system in the future.
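    As a rough illustration of the per-ROI texture features listed above (intensity statistics, Sobel edge response, Haralick and Gabor features), the Python sketch below uses scikit-image; the masking strategy, bin count, and filter parameters are assumptions for illustration, not the study's actual settings.

```python
# Minimal sketch of ROI texture-feature extraction (intensity stats, Sobel,
# Haralick/GLCM, Gabor) of the kind described in the abstract. Library choices
# and parameters (bin count, GLCM offsets, Gabor frequency) are assumptions.
import numpy as np
from skimage.filters import sobel, gabor
from skimage.feature import graycomatrix, graycoprops

def roi_texture_features(image, mask, levels=32):
    """image: 2-D float array (one MR slice); mask: boolean ROI of the same shape."""
    roi = image[mask]
    feats = {"mean": roi.mean(), "std": roi.std()}

    # Sobel edge magnitude averaged over the ROI
    feats["sobel_mean"] = sobel(image)[mask].mean()

    # Haralick-style features from a grey-level co-occurrence matrix
    quantized = np.digitize(image, np.linspace(image.min(), image.max(), levels)) - 1
    quantized[~mask] = 0  # crude masking; a real pipeline would crop to the ROI
    glcm = graycomatrix(quantized.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        feats[f"haralick_{prop}"] = graycoprops(glcm, prop).mean()

    # Gabor filter response (single frequency/orientation for brevity)
    real, _ = gabor(image, frequency=0.2)
    feats["gabor_mean"] = real[mask].mean()
    return feats
```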

    Texture Analysis Methods for Medical Image Characterisation


    Data and knowledge engineering for medical image and sensor data


    Deep Learning Based Medical Image Analysis with Limited Data

    Deep learning methods have shown great effectiveness in the area of computer vision. However, when solving medical imaging problems, deep learning's power is confined by the limited data available. We present a series of novel methodologies for solving medical image analysis problems when only a limited number of computed tomography (CT) scans are available. Our method, based on deep learning and combining several strategies, including generative adversarial networks, two-stage training, infusing expert knowledge, voting, and conversion to other spaces, addresses the limited-data issue for current medical imaging problems, specifically cancer detection and diagnosis, and shows very good performance, outperforming state-of-the-art results in the literature. With their self-learned features, deep learning based techniques have started to be applied to biomedical imaging problems and various architectures have been designed. In spite of their simplicity and anticipated good performance, deep learning based techniques cannot perform to their full extent due to the limited size of data sets for medical imaging problems. On the other hand, traditional hand-engineered feature based methods have been studied over the past decades, and this research has identified many useful features for detecting and diagnosing pulmonary nodules on CT scans; however, these methods are usually performed through a series of complicated procedures with manual, empirical parameter adjustments. Our method significantly reduces the complications of the traditional procedures for pulmonary nodule detection, while retaining and even outperforming state-of-the-art accuracy. In addition, we contribute a method for converting low-dose CT images to full-dose CT so as to adapt current models to the newly emerged low-dose CT data.
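    One of the strategies mentioned above, two-stage training, can be pictured roughly as pre-training a backbone on a larger auxiliary task and then fine-tuning it on the small expert-labeled nodule set. The PyTorch sketch below is a generic illustration of that idea; the backbone, data loaders, and hyper-parameters are placeholders, not the thesis's actual configuration.

```python
# Generic two-stage training sketch: stage 1 pre-trains on a larger auxiliary
# data set, stage 2 freezes the backbone and fine-tunes on the small labeled
# nodule set. Model choice, loaders, and hyper-parameters are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet18  # torchvision >= 0.13 style weights argument

def build_model(num_classes=2):
    model = resnet18(weights=None)
    # Single-channel input for CT slices instead of RGB
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train(model, loader, epochs, lr, device="cpu"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()

# Hypothetical usage with placeholder loaders:
# model = build_model()
# train(model, auxiliary_loader, epochs=20, lr=1e-3)        # stage 1: pre-train
# for p in list(model.parameters())[:-2]:
#     p.requires_grad = False                                # freeze backbone
# train(model, small_nodule_loader, epochs=10, lr=1e-4)      # stage 2: fine-tune
```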

    Technical note: Extension of CERR for computational radiomics: a comprehensive MATLAB platform for reproducible radiomics research

    Purpose: Radiomics is a growing field of image quantitation, but it lacks stable and high-quality software systems. We extended the capabilities of the Computational Environment for Radiological Research (CERR) to create a comprehensive, open-source, MATLAB-based software platform with an emphasis on reproducibility, speed, and clinical integration of radiomics research. Method: The radiomics tools in CERR were designed specifically to quantitate medical images in combination with CERR's core functionalities of radiological data import, transformation, management, image segmentation, and visualization. CERR allows for batch calculation and visualization of radiomics features, and provides a user-friendly data structure for radiomics metadata. All radiomics computations are vectorized for speed. Additionally, a test suite is provided for reconstruction and comparison with radiomics features computed using other software platforms such as the Insight Toolkit (ITK) and PyRadiomics. CERR was evaluated according to the standards defined by the Image Biomarker Standardization Initiative. CERR's radiomics feature calculation was integrated with the clinically used MIM software using its MATLAB® application programming interface. Results: CERR provides a comprehensive computational platform for radiomics analysis. Matrix formulations for the compute-intensive Haralick textures resulted in speeds superior to the implementation in ITK 4.12. For an image discretized into 32 bins, CERR achieved a speedup of 3.5 times over ITK. The CERR test suite enabled the successful identification of programming errors as well as genuine differences in radiomics definitions and calculations across the software packages tested. Conclusion: CERR's radiomics capabilities are comprehensive, open-source, and fast, making it an attractive platform for developing and exploring radiomics signatures across institutions. The ability to both choose from a wide variety of radiomics implementations and to integrate with a clinical workflow makes CERR useful for retrospective as well as prospective research analyses.
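    Since the CERR test suite benchmarks its feature definitions against PyRadiomics, a rough PyRadiomics counterpart to the 32-bin GLCM (Haralick) computation mentioned above might look like the sketch below; the file names and the choice to enable only the GLCM feature class are assumptions for illustration.

```python
# Sketch of computing GLCM (Haralick-style) features with PyRadiomics for
# cross-checking against another implementation. File names and the 32-bin
# discretization setting are placeholders.
import SimpleITK as sitk
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(binCount=32)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("glcm")       # Haralick texture class only

image = sitk.ReadImage("ct_volume.nrrd")         # hypothetical input volume
mask = sitk.ReadImage("tumor_mask.nrrd")         # hypothetical segmentation
features = extractor.execute(image, mask)

for name, value in features.items():
    if name.startswith("original_glcm_"):
        print(name, value)
```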

    Adaptive Feature Engineering Modeling for Ultrasound Image Classification for Decision Support

    Ultrasonography is considered a relatively safe option for the diagnosis of benign and malignant cancer lesions due to the low-energy sound waves used. However, visual interpretation of ultrasound images is time-consuming and usually produces many false alerts due to speckle noise. Improved methods of collecting image-based data have been proposed to reduce noise in the images; however, this has not solved the problem due to the complex nature of the images and the exponential growth of biomedical datasets. Secondly, the target class in real-world biomedical datasets, that is, the focus of interest of a biopsy, is usually significantly underrepresented compared to the non-target class. This makes it difficult to train standard classification models such as Support Vector Machines (SVM), Decision Trees, and Nearest Neighbor techniques on biomedical datasets because they assume an equal class distribution or an equal misclassification cost. Resampling techniques that either oversample the minority class or under-sample the majority class have been proposed to mitigate the class imbalance problem, but with minimal success. We propose to resolve the class imbalance problem through the design of a novel data-adaptive feature engineering model for extracting, selecting, and transforming textural features into a feature space that is inherently relevant to the application domain. We hypothesize that maximizing the variance and preserving as much variability as possible in well-engineered features, prior to applying a classifier model, will boost the differentiation of thyroid nodules (benign or malignant) through effective model building. We propose a hybrid approach that applies regression and rule-based techniques to build the feature engineering model and a Bayesian classifier, respectively. In the feature engineering model, we transform image pixel intensity values into a high-dimensional structured dataset and fit a regression analysis model to estimate the relevant kernel parameters to be applied to the proposed filter method. We adopt an elastic net regularization path to control the maximum log-likelihood estimation of the regression model. Finally, we apply Bayesian network inference to estimate a subset of the textural features with a significant conditional dependency in the classification of the thyroid lesion. This is performed to establish the conditional influence between the textural features and the random factors generated through our feature engineering model, and to evaluate the success criterion of our approach. The proposed approach was tested and evaluated on a public dataset of thyroid cancer ultrasound diagnostic data. Analysis of the results showed that classification performance improved significantly overall, in both accuracy and area under the curve, when the proposed feature engineering model was applied to the data. We show that a high performance of 96.00% accuracy, with a sensitivity of 99.64% and a specificity of 90.23%, was achieved for a filter size of 13 × 13.
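    As a generic illustration of the direction described above, an elastic-net regularization step used to select textural features followed by a simple Bayesian classifier, the scikit-learn sketch below uses synthetic placeholder data; GaussianNB stands in for the paper's Bayesian network inference, and none of the parameters reflect the study's actual settings.

```python
# Generic sketch: elastic-net regularized feature selection over textural
# features, followed by a simple Bayesian (naive Bayes) classifier on the
# selected subset. The data are synthetic placeholders, not the study's
# thyroid ultrasound features, and GaussianNB is a stand-in for the paper's
# Bayesian network inference step.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))                   # stand-in textural feature vectors
y = (X[:, :3].sum(axis=1) > 0).astype(int)       # stand-in benign/malignant labels

# Elastic-net path with cross-validated penalty; non-zero coefficients act as
# the selected feature subset.
enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y.astype(float))
selected = np.flatnonzero(np.abs(enet.coef_) > 1e-6)
if selected.size == 0:                           # guard against a degenerate path
    selected = np.arange(X.shape[1])

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```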