Evaluation of Contrast Enhancement by Digital Equalization in Digital Mammography
Purpose: This study evaluated an algorithm based on a method of contrast enhancement by digital equalization (CEDE). Method: The algorithm was designed to enhance image contrast by employing digital equalization of digital mammograms. The CEDE algorithm was tested using ten mammograms with cancer (13 lesions) taken from the University of South Florida database, together with eight mammograms that contained only benign lesions. Three readers compared the processed images with the original mammograms for lesion conspicuity. A five-point ranking scale was employed, where a score of 3 corresponded to equal lesion visibility, ranks > 3 corresponded to superior lesion visibility, and ranks < 3 corresponded to inferior lesion visibility. Results: The mean observer score for all lesions was always at least equal to that of the original digital mammogram (i.e., 3 or greater), and there was no evidence of any image distortion or other image processing artefacts. The mean rank (± standard deviation) for the 13 malignant lesions was 3.52 ± 0.38; the corresponding rank for the eight benign lesions was 3.33 ± 0.26. These differences were statistically significant in terms of standard error. Conclusion: The CEDE algorithm is capable of significantly enhancing lesion contrast in digital mammograms, and our preliminary results indicate that this algorithm merits additional refinement and further (objective) evaluation.
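The abstract does not spell out the CEDE transform itself, but the core operation, digital equalization of a mammogram's gray levels, can be sketched with ordinary histogram equalization. A minimal sketch using scikit-image; the file name and the blending weight are hypothetical, and this is an illustration of the general idea rather than the authors' algorithm.

```python
# Sketch only: global histogram equalization as a stand-in for CEDE.
import numpy as np
from skimage import exposure, img_as_float, io

original = img_as_float(io.imread("mammogram.png", as_gray=True))  # hypothetical file

# Remap gray levels so their cumulative distribution becomes roughly
# uniform, spreading contrast across the dynamic range.
equalized = exposure.equalize_hist(original)

# Blend with the original to limit distortion, in the spirit of the
# paper's finding that enhancement introduced no visible artefacts.
alpha = 0.7  # hypothetical enhancement weight
enhanced = alpha * equalized + (1 - alpha) * original
```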
Contrast enhancement by multi-scale adaptive histogram equalization
An approach to contrast enhancement based on multi-scale analysis is introduced. Sub-band coefficients were modified by adaptive histogram equalization. To achieve optimal contrast enhancement, the sizes of the sub-regions were chosen with consideration of the support of the analysis filters. The enhanced images revealed subtle tissue details that are otherwise visible only with the tedious contrast/brightness windowing currently used in clinical reading. We present results on chest CT data that show significant improvement over existing state-of-the-art methods: unsharp masking, adaptive histogram equalization (AHE), and contrast-limited adaptive histogram equalization (CLAHE). A systematic study of 109 clinical chest CT images by three radiologists suggests the promise of this method in terms of both interpretation time and diagnostic performance across different pathological cases. In addition, the radiologists observed none of the noticeable artifacts or noise amplification that usually appear in traditional adaptive histogram equalization and its variants.
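As a rough illustration of the multi-scale idea, one can decompose an image into wavelet sub-bands, equalize adaptively at the coarse scale, and amplify the detail coefficients before reconstruction. This is only a sketch under assumed parameters (wavelet, level count, gain), not the paper's exact coefficient mapping or sub-region sizing.

```python
# Sketch: adaptive equalization applied within a wavelet decomposition.
import numpy as np
import pywt
from skimage import exposure

def multiscale_ahe(image, wavelet="db4", levels=3, detail_gain=1.5):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]

    # CLAHE on the coarse approximation, after normalizing into [0, 1]
    # as skimage's equalize_adapthist expects.
    lo, hi = approx.min(), approx.max()
    norm = (approx - lo) / (hi - lo + 1e-12)
    approx = exposure.equalize_adapthist(norm, clip_limit=0.02) * (hi - lo) + lo

    # Simple linear amplification of the detail coefficients stands in
    # for the paper's sub-band modification; it boosts local contrast.
    details = [tuple(detail_gain * d for d in band) for band in details]
    return pywt.waverec2([approx] + list(details), wavelet)
```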
State of the Art of Level Set Methods in Segmentation and Registration of Medical Imaging Modalities
Segmentation of medical images is an important step in applications such as visualization, quantitative analysis, and image-guided surgery. Numerous segmentation methods have been developed over the past two decades for extracting organ contours from medical images. Low-level segmentation methods, such as pixel-based clustering, region growing, and filter-based edge detection, require additional pre-processing and post-processing as well as considerable expert intervention or prior information about the objects of interest. Furthermore, the subsequent analysis of segmented objects is hampered by the primitive pixel- or voxel-level representations produced by such region-based segmentation. Deformable models, on the other hand, provide an explicit representation of the boundary and shape of the object. They combine several desirable features, such as inherent connectivity and smoothness, which counteract noise and boundary irregularities, as well as the ability to incorporate knowledge about the object of interest. However, parametric deformable models have two main limitations. First, in situations where the initial model and the desired object boundary differ greatly in size and shape, the model must be re-parameterized dynamically to faithfully recover the object boundary. Second, they have difficulty handling topological adaptation, such as the splitting or merging of model parts, a useful property for recovering multiple objects or objects of unknown topology. This difficulty arises because a new parameterization must be constructed whenever a topology change occurs, which requires sophisticated schemes. Level set deformable models, also referred to as geometric deformable models, provide an elegant solution to these primary limitations of parametric deformable models, and they have drawn a great deal of attention since their introduction in 1988. Advantages of the implicit contour formulation over the parametric formulation include: (1) no parameterization of the contour, (2) topological flexibility, (3) good numerical stability, and (4) straightforward extension of the 2D formulation to n-D. Recent reviews on the subject include papers from Suri. In this chapter we give a general overview of level set segmentation methods, with emphasis on new frameworks recently introduced in the context of medical imaging problems. We then introduce novel approaches that aim at combining segmentation and registration in a level set formulation. Finally, we review a selective set of clinical works with detailed validation of level set methods for several clinical applications.
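For readers who want a concrete starting point, scikit-image ships a morphological variant of the geodesic active contour that exhibits the implicit-formulation advantages listed above: no explicit contour parameterization and free topology changes. A minimal sketch with an assumed test image and illustrative parameters:

```python
# Sketch: geometric (level-set style) segmentation with scikit-image.
import numpy as np
from skimage import img_as_float, io
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

image = img_as_float(io.imread("organ_slice.png", as_gray=True))  # hypothetical file

# Edge indicator: near 0 on strong gradients, near 1 in flat regions,
# so the evolving front slows down at object boundaries.
gimage = inverse_gaussian_gradient(image, alpha=100.0, sigma=2.0)

# Implicit initialization: a coarse circular region; the front may
# split or merge freely during evolution.
init = np.zeros_like(image, dtype=np.int8)
rr, cc = np.ogrid[:image.shape[0], :image.shape[1]]
init[(rr - image.shape[0] // 2) ** 2 +
     (cc - image.shape[1] // 2) ** 2 < (min(image.shape) // 3) ** 2] = 1

segmentation = morphological_geodesic_active_contour(
    gimage, 200, init_level_set=init, smoothing=1, balloon=1)
```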
An adaptive speed term based on homogeneity for level-set segmentation
We tested an edge map computed from a local homogeneity measurement as a potential replacement for the traditional gradient-based edge map in level-set segmentation. In existing level-set methods, gradient information serves as the stopping criterion for curve evolution and also provides the force attracting the zero level set toward the target boundary. However, in a discrete implementation the gradient-based term can never fully stop the level-set evolution, even for ideal edges, so leakage is often unavoidable. Moreover, the effective distance of the attracting force and the blurring of edges become a trade-off in choosing the shape and support of the smoothing filter. The proposed homogeneity measurement provides easier and more robust edge estimation, and the possibility of fully stopping the level-set evolution. The homogeneity term decreases from the interior of a homogeneous region toward the boundary, which dramatically increases the effective distance of the attracting force and also provides an additional measure of the overall approximation to the target boundary. It therefore provides a reliable criterion for adaptively changing the advection speed. Using this term, the leakage problem was effectively avoided in most cases compared with traditional level-set methods. The computation of the homogeneity is fast, and its extension to the 3D case is straightforward.
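A minimal sketch of the kind of homogeneity-based speed term described here, using local variance in a sliding window as the homogeneity measurement. The window size and cutoff are assumptions, not the paper's values.

```python
# Sketch: homogeneity-derived speed map that can fully reach zero.
import numpy as np
from scipy.ndimage import uniform_filter

def homogeneity_speed(image, window=7, cutoff=0.05):
    # Local variance via the identity var = E[x^2] - E[x]^2.
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image * image, size=window)
    variance = np.maximum(mean_sq - mean * mean, 0.0)

    # Normalize: ~1 inside homogeneous regions, ~0 at boundaries.
    homogeneity = 1.0 - variance / (variance.max() + 1e-12)

    # Hard-stop the evolution once homogeneity drops below the cutoff,
    # which a discrete gradient-based term cannot guarantee.
    return np.where(homogeneity > cutoff, homogeneity, 0.0)
```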
Improving statistics for hybrid segmentation of high-resolution multichannel images
High-resolution multichannel textures are difficult to characterize with simple statistics, and the high level of detail makes selecting a particular contour with classical gradient-based methods ineffective. We have developed a hybrid method that combines fuzzy connectedness and Voronoi diagram classification for the segmentation of color and multichannel objects. The multi-step classification process relies on homogeneity measures derived from moment statistics and histogram information. These color features have been optimized to best combine individual channel information in the classification process. The segmentation initialization requires only a set of interior and exterior seed points, minimizing user intervention and the influence of the initialization on the overall quality of the results. The method was tested on volumes from the Visible Human and on multi-protocol brain MRI data sets. The hybrid segmentation produced robust, rapid, and finely detailed contours with good visual accuracy. The addition of quantized statistics and color histogram distances as classification features improved the robustness of the method with respect to initialization compared with our original implementation.
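The following sketch illustrates the flavor of per-region features involved: channel-wise moment statistics concatenated with a quantized multichannel histogram. The moment orders and bin count are assumptions, not the paper's optimized feature set.

```python
# Sketch: moment statistics + quantized histogram as region features.
import numpy as np

def region_features(pixels, bins=8):
    """pixels: (N, C) array of multichannel values for one region."""
    mean = pixels.mean(axis=0)
    var = pixels.var(axis=0)
    # Third central moment captures the asymmetry of each channel.
    skew = ((pixels - mean) ** 3).mean(axis=0)

    # Quantized joint histogram over all channels, flattened; distances
    # between such vectors can drive a Voronoi-style classification.
    hist, _ = np.histogramdd(pixels, bins=bins)
    hist = hist.ravel() / max(pixels.shape[0], 1)

    return np.concatenate([mean, var, skew, hist])
```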
De-noising SPECT/PET Images Using Cross-Scale Regularization
De-noising SPECT and PET images is a challenging task due to the inherently low signal-to-noise ratio of the acquired data. Wavelet-based multi-scale de-noising methods typically apply thresholding operators to sub-band coefficients to eliminate noise components in spatial-frequency space prior to reconstruction. At high noise levels, the detailed scales of the sub-band images are usually dominated by noise that cannot easily be removed using traditional thresholding schemes. To address this issue, a cross-scale regularization scheme is introduced that takes into account the cross-scale coherence of structured signals. Preliminary results show promising performance in de-noising clinical SPECT and PET images for liver and brain studies. Wavelet thresholding was also compared with de-noising using a brushlet expansion. The proposed regularization scheme eliminates the need for threshold parameter settings, making the de-noising process less tedious and suitable for clinical practice.
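One plausible reading of cross-scale coherence can be sketched as follows: detail coefficients that persist across adjacent wavelet scales are kept, while isolated, noise-like ones are suppressed. The coherence test and its constant are assumptions, not the paper's exact scheme (which, notably, avoids threshold parameters altogether).

```python
# Sketch: keep wavelet detail coefficients coherent with their parents.
import numpy as np
import pywt

def cross_scale_denoise(image, wavelet="sym4", levels=3, k=1.0):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], list(coeffs[1:])  # coarse -> fine

    for j in range(1, len(details)):
        parent, child = details[j - 1], details[j]
        new_child = []
        for p, c in zip(parent, child):
            # Upsample the coarser parent to the child's grid by
            # pixel replication, then crop to matching shape.
            up = np.repeat(np.repeat(p, 2, axis=0), 2, axis=1)
            up = up[:c.shape[0], :c.shape[1]]
            # Structure persists across scales; noise does not.
            coherence = np.abs(up * c)
            sigma = np.median(np.abs(c)) / 0.6745 + 1e-12  # MAD noise estimate
            new_child.append(c * (coherence > (k * sigma) ** 2))
        details[j] = tuple(new_child)
    return pywt.waverec2([approx] + details, wavelet)
```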
Flow-resolution Enhancement in Electrophoretic NMR Using De-noising and Linear Prediction
Detection of the electrophoretic motion of ionic species using multi-dimensional electrophoretic NMR (nD-ENMR) has demonstrated the potential to distinguish the signals of two molecules in a solution mixture without their physical separation. This technique may therefore be applied to the simultaneous structure determination of proteins and protein conformations, even during their biochemical interactions. This is achieved by adding a dimension of electrophoretic mobility to conventional multi-dimensional NMR through the application of an external DC electric field; consequently, the protein spectra are differently modulated by their electrophoretic mobilities in the electrophoretic flow dimension. Unfortunately, spectral resolution in the flow dimension has been limited by severe signal truncation due to the limited DC electric field available before the onset of heating-induced convection. Linear prediction, which has been widely used for high-resolution spectral estimation from finite Fourier samples, has already been proposed to extend the truncated ENMR flow oscillation curves. However, we found that the spectral quality of linear prediction deteriorates as the spectral S/N decreases. To alleviate this problem, we de-noised the ENMR data using low-pass filters prior to linear prediction. This technique has led to improved resolution in the electrophoretic flow dimension. The approach was applied to analyze a 2D ENMR data matrix obtained from a solution mixture of two proteins, ubiquitin and bovine serum albumin (BSA), in D2O.
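A minimal sketch of the two-step procedure on a real-valued, uniformly sampled flow curve: low-pass filtering followed by autoregressive (Yule-Walker) linear prediction to extend the truncated record. The filter order, cutoff, and LP order are illustrative assumptions.

```python
# Sketch: de-noise, then extrapolate a truncated oscillation curve.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import butter, filtfilt

def lp_extend(signal, order=8, n_extra=64, cutoff=0.2):
    # De-noise first: LP coefficient estimation degrades quickly as
    # S/N drops, which is the failure mode noted in the abstract.
    b, a = butter(4, cutoff)
    x = filtfilt(b, a, signal)

    # Yule-Walker equations: solve the symmetric Toeplitz system built
    # from the autocorrelation sequence.
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    coeffs = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

    # Recursively predict samples beyond the truncation point.
    extended = list(x)
    for _ in range(n_extra):
        past = extended[-order:][::-1]  # most recent sample first
        extended.append(float(np.dot(coeffs, past)))
    return np.asarray(extended)
```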
MiR-34a-5p promotes the multi-drug resistance of osteosarcoma by targeting the CD117 gene.
An association has been reported between miR-34a-5p and several types of cancer. In this study, using systematic observations of multi-drug-sensitive (G-292 and MG63.2) and multi-drug-resistant (SJSA-1 and MNNG/HOS) osteosarcoma (OS) cell lines, we showed that miR-34a-5p promotes the multi-drug resistance of OS through the receptor tyrosine kinase CD117, a direct target of miR-34a-5p. Consistently, the siRNA-mediated repression of CD117 in G-292 and MG63.2 cells led to a similar phenotype exhibiting all of the changes triggered by the miR-34a-5p mimic. In addition, the activity of the MEF2 signaling pathway was drastically altered by forced changes in miR-34a-5p or CD117 levels in OS cells. Furthermore, si-CD117 suppressed the enhanced colony and sphere formation, in agreement with CD117's characterization as a cancer stem cell marker. Taken together, our data establish CD117 as a direct target of miR-34a-5p and demonstrate that this regulation interferes with several CD117-mediated effects on OS cells. In addition to providing new mechanistic insights, our results suggest an approach for the diagnosis and chemotherapeutic treatment of OS.
Does Full Waveform Inversion Benefit from Big Data?
This paper investigates the impact of big data on deep learning models for full waveform inversion (FWI). While it is well known that big data can boost the performance of deep learning models in many tasks, its effectiveness has not been validated for FWI. To address this gap, we present an empirical study of how deep learning models for FWI behave when trained on OpenFWI, a recently published collection of large-scale, multi-structural datasets. In particular, we train and evaluate the FWI models on a combination of 10 2D subsets of OpenFWI that contain 470K data pairs in total. Our experiments demonstrate that larger datasets lead to better performance and generalization of deep learning models for FWI. We further demonstrate that model capacity needs to scale in accordance with data size for optimal improvement.
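A hedged sketch of the experimental recipe: pool several subsets into one training set and scale model width with pool size. The dataset stubs and TinyFWINet below are hypothetical stand-ins, not the authors' released code or the OpenFWI loaders.

```python
# Sketch: merged training pool with capacity scaled to data size.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Tiny random stand-ins for OpenFWI subsets (the real subsets hold
# tens of thousands of (seismic, velocity) pairs).
def fake_subset(n=8):  # hypothetical placeholder, not an OpenFWI loader
    seismic = torch.randn(n, 5, 1000, 70)   # sources x time x receivers
    velocity = torch.randn(n, 1, 70, 70)    # 2D velocity map
    return TensorDataset(seismic, velocity)

pool = ConcatDataset([fake_subset() for _ in range(10)])  # merged subsets

# Scale model capacity with pooled data size, reflecting the finding
# that capacity must grow with data for the gains to materialize.
width = 32 if len(pool) < 100_000 else 64

class TinyFWINet(torch.nn.Module):  # toy encoder-decoder, not InversionNet
    def __init__(self, width):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(5, width, 3, stride=2, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d((70, 70)),
            torch.nn.Conv2d(width, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

model = TinyFWINet(width)
loader = DataLoader(pool, batch_size=4, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

for seismic, velocity in loader:  # one pass; real training runs many epochs
    optimizer.zero_grad()
    loss = torch.nn.functional.l1_loss(model(seismic), velocity)
    loss.backward()
    optimizer.step()
```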
Hybrid Segmentation of Anatomical Data
We propose new hybrid methods for the automated segmentation of radiological patient data and the Visible Human data. In this paper, we integrate boundary-based and region-based segmentation methods in a way that amplifies the strengths and reduces the weaknesses of both approaches. The novelty comes from combining a boundary-based method, deformable-model-based segmentation, with two region-based methods, fuzzy connectedness and Voronoi-diagram-based segmentation, to develop hybrid methods that yield high precision, accuracy, and efficiency. This work is part of an NLM-funded effort to provide a fully implemented and tested Visible Human Project Segmentation and Registration Toolkit (Insight).
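As a compact illustration of the hybrid pipeline, a region-based stage can produce a coarse mask that then initializes a boundary-based deformable model for refinement. In this sketch, flood fill stands in for fuzzy connectedness and a morphological Chan-Vese model stands in for the deformable-model stage; the input file, seed, and tolerance are assumptions.

```python
# Sketch: region-based stage seeding a boundary-based refinement stage.
import numpy as np
from skimage import img_as_float, io
from skimage.segmentation import flood, morphological_chan_vese

image = img_as_float(io.imread("slice.png", as_gray=True))  # hypothetical file

# Region stage: grow a region of similar intensity from an interior
# seed, analogous in spirit to fuzzy-connectedness region formation.
seed = (image.shape[0] // 2, image.shape[1] // 2)  # hypothetical seed
coarse = flood(image, seed, tolerance=0.1)

# Boundary stage: refine the coarse mask with a deformable model, which
# smooths the contour and corrects region-stage leaks at weak edges.
refined = morphological_chan_vese(image, 100, init_level_set=coarse)
```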