
    Segmentation of ultrasound images of thyroid nodule for assisting fine needle aspiration cytology

    The incidence of thyroid nodules is high and generally increases with age. A thyroid nodule may presage thyroid cancer, and it can be completely cured if detected early. Fine needle aspiration cytology is a recognized method for the early diagnosis of thyroid nodules, but it still has limitations, so ultrasound has become the first choice for auxiliary examination of thyroid nodular disease. Combining medical imaging technology with fine needle aspiration cytology could significantly improve the diagnostic rate of thyroid nodules. However, the physical properties of ultrasound degrade image quality, which makes it difficult for physicians to recognize nodule edges. Image segmentation based on graph theory is currently a research hotspot, and the normalized cut (Ncut) is a representative method well suited to segmenting feature regions of medical images. Solving the normalized cut, however, requires a large amount of memory and a heavy computation of the weight matrix, and it often produces over-segmentation or under-segmentation, leading to inaccurate results. Speckle noise in B-mode ultrasound images of thyroid tumors further degrades image quality. In light of this characteristic, we combine an anisotropic diffusion model with the normalized cut in this paper. The anisotropic diffusion step removes noise from the B-mode ultrasound image while preserving important edges and local details, which reduces the computation needed to construct the weight matrix of the improved normalized cut and improves the accuracy of the final segmentation. The feasibility of the method is demonstrated by experimental results. Comment: 15 pages, 13 figures
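    The denoising stage described above is a standard anisotropic (Perona-Malik style) diffusion. The following is a minimal NumPy sketch of such a step, assuming a single-channel ultrasound image as a float array; the function name and parameter values are illustrative, not taken from the paper, and the smoothed output would then feed the weight-matrix construction of the improved normalized cut.
```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=20.0, lam=0.2):
    """Edge-preserving smoothing of a 2-D image (illustrative sketch only)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # four-neighbour finite differences (wrap-around at the borders for brevity)
        d1 = np.roll(u, -1, axis=0) - u
        d2 = np.roll(u, 1, axis=0) - u
        d3 = np.roll(u, -1, axis=1) - u
        d4 = np.roll(u, 1, axis=1) - u
        # conduction coefficients: close to 0 across strong edges, close to 1 in flat areas
        c1 = np.exp(-(d1 / kappa) ** 2)
        c2 = np.exp(-(d2 / kappa) ** 2)
        c3 = np.exp(-(d3 / kappa) ** 2)
        c4 = np.exp(-(d4 / kappa) ** 2)
        u += lam * (c1 * d1 + c2 * d2 + c3 * d3 + c4 * d4)
    return u

# usage: smoothed = anisotropic_diffusion(ultrasound_slice) before building the Ncut weight matrix
```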

    Blending Learning and Inference in Structured Prediction

    In this paper we derive an efficient algorithm to learn the parameters of structured predictors in general graphical models. This algorithm blends the learning and inference tasks, which results in a significant speedup over traditional approaches such as conditional random fields and structured support vector machines. For this purpose we utilize the structures of the predictors to describe a low-dimensional structured prediction task that encourages local consistencies within the different structures while learning the parameters of the model. Convexity of the learning task provides the means to enforce the consistencies between the different parts. The inference-learning blending algorithm that we propose is guaranteed to converge to the optimum of the low-dimensional primal and dual programs. Unlike many existing approaches, inference-learning blending allows us to efficiently learn high-order graphical models, over regions of any size, and with a very large number of parameters. We demonstrate the effectiveness of our approach while presenting state-of-the-art results in stereo estimation, semantic segmentation, shape reconstruction, and indoor scene understanding.
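    For context, the sketch below shows the kind of traditional baseline the abstract mentions: a margin-rescaled structured SVM on a chain model, where loss-augmented Viterbi inference is run to completion before every parameter update. It is a toy illustration of that baseline, not the paper's blended algorithm; all names and the simple subgradient update are assumptions for the example.
```python
import numpy as np

def viterbi(unary, trans):
    """Highest-scoring label sequence for a chain model.
    unary: (T, K) per-position label scores; trans: (K, K) transition scores."""
    T, K = unary.shape
    dp = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    dp[0] = unary[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + trans + unary[t][None, :]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0)
    y = np.zeros(T, dtype=int)
    y[-1] = dp[-1].argmax()
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y

def ssvm_step(w, trans, feats, y_true, lr=0.1):
    """One subgradient step of a margin-rescaled structured SVM baseline.
    feats: (T, D) features, w: (D, K) unary weights; transition scores are
    kept fixed for brevity. The paper's blended method is designed to avoid
    running this full inference to convergence inside every learning step."""
    unary = feats @ w
    # Hamming loss augmentation: add 1 to the score of every wrong label
    loss_aug = unary + (np.arange(w.shape[1])[None, :] != y_true[:, None])
    y_hat = viterbi(loss_aug, trans)
    for t, (yt, yh) in enumerate(zip(y_true, y_hat)):
        if yt != yh:
            w[:, yt] += lr * feats[t]
            w[:, yh] -= lr * feats[t]
    return w
```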

    Semantically Guided Depth Upsampling

    We present a novel method for accurate and efficient upsampling of sparse depth data, guided by high-resolution imagery. Our approach goes beyond the use of intensity cues only and additionally exploits object boundary cues through structured edge detection and semantic scene labeling for guidance. Both cues are combined within a geodesic distance measure that allows for boundary-preserving depth interpolation while utilizing local context. We model the observed scene structure by locally planar elements and formulate the upsampling task as a global energy minimization problem. Our method determines globally consistent solutions and preserves fine details and sharp depth boundaries. In our experiments on several public datasets at different levels of application, we demonstrate superior performance of our approach over the state-of-the-art, even for very sparse measurements. Comment: German Conference on Pattern Recognition 2016 (Oral)
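    As a rough illustration of guidance-based depth upsampling, the sketch below fills missing depth values with a joint-bilateral style weighting that combines spatial distance, intensity similarity, and semantic-label agreement. This is a simplified stand-in for the paper's geodesic distance measure and global energy minimization; the function, its parameters, and the brute-force window search are assumptions for the example, not the authors' method.
```python
import numpy as np

def guided_upsample(sparse_depth, intensity, labels, radius=7,
                    sigma_s=3.0, sigma_i=10.0, label_penalty=0.1):
    """Fill NaN entries of a sparse depth map using an intensity image and a
    semantic label map as guidance (naive O(H*W*N) loop, illustration only)."""
    H, W = sparse_depth.shape
    out = sparse_depth.copy()
    ys, xs = np.where(~np.isnan(sparse_depth))          # measured pixels
    for i in range(H):
        for j in range(W):
            if not np.isnan(sparse_depth[i, j]):
                continue
            # candidate measurements inside the local window
            m = (np.abs(ys - i) <= radius) & (np.abs(xs - j) <= radius)
            if not m.any():
                continue
            yy, xx = ys[m], xs[m]
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            w_i = np.exp(-((intensity[yy, xx] - intensity[i, j]) ** 2) / (2 * sigma_i ** 2))
            # down-weight measurements from a different semantic class
            w_l = np.where(labels[yy, xx] == labels[i, j], 1.0, label_penalty)
            w = w_s * w_i * w_l
            out[i, j] = np.sum(w * sparse_depth[yy, xx]) / np.sum(w)
    return out
```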

    Localizing Region-Based Active Contours

    DOI: 10.1109/TIP.2008.2004611
    In this paper, we propose a natural framework that allows any region-based segmentation energy to be re-formulated in a local way. We consider local rather than global image statistics and evolve a contour based on local information. Localized contours are capable of segmenting objects with heterogeneous feature profiles that would be difficult to capture correctly using a standard global method. The presented technique is versatile enough to be used with any global region-based active contour energy and instill in it the benefits of localization. We describe this framework and demonstrate the localization of three well-known energies in order to illustrate how our framework can be applied to any energy. We then compare each localized energy to its global counterpart to show the improvements that can be achieved. Next, an in-depth study of the behaviors of these energies in response to the degree of localization is given. Finally, we show results on challenging images to illustrate the robust and accurate segmentations that are possible with this new class of active contour models.
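    A minimal sketch of a localized region-based evolution is given below, using a Chan-Vese-like fitting term computed from local rather than global means. It is only an illustration of the localization idea under assumed parameters; curvature regularization and level-set re-initialization, which a practical implementation needs, are omitted.
```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_chan_vese(img, phi, radius=9, n_iter=200, dt=0.5):
    """Evolve a level set phi using region statistics computed in a local
    window of half-width `radius` around each pixel (illustrative sketch)."""
    img = img.astype(float)
    size = 2 * radius + 1
    for _ in range(n_iter):
        inside = (phi > 0).astype(float)
        # local means of the image restricted to the inside / outside regions
        mean_in = uniform_filter(img * inside, size) / (uniform_filter(inside, size) + 1e-8)
        mean_out = uniform_filter(img * (1 - inside), size) / (uniform_filter(1 - inside, size) + 1e-8)
        # pointwise force: move the interface toward the better-fitting local model
        force = (img - mean_out) ** 2 - (img - mean_in) ** 2
        # update only a narrow band around the zero level set
        band = np.abs(phi) < 1.5
        phi[band] += dt * force[band] / (np.abs(force).max() + 1e-8)
    return phi
```
    In practice, phi would be initialized as a signed distance function around a seed region, and a curvature term plus periodic re-initialization would be added for stability.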

    Simultaneous multi-band detection of Low Surface Brightness galaxies with Markovian modelling

    We present an algorithm for the detection of Low Surface Brightness (LSB) galaxies in images, called MARSIAA (MARkovian Software for Image Analysis in Astronomy), which is based on multi-scale Markovian modeling. MARSIAA can be applied simultaneously to different bands. It segments an image into a user-defined number of classes according to their surface brightness and surroundings - typically, one or two classes contain the LSB structures. We have also developed an algorithm, called DetectLSB, which allows the efficient identification of LSB galaxies from among the candidate sources selected by MARSIAA. To assess its robustness, the method was applied to a set of 18 B and I band images (covering 1.3 square degrees in total) of the Virgo cluster. To further assess its completeness, both MARSIAA/DetectLSB and SExtractor/DetectLSB were applied to search for (i) mock Virgo LSB galaxies inserted into a set of deep Next Generation Virgo Survey (NGVS) gri-band subimages and (ii) Virgo LSB galaxies identified by eye in a full set of NGVS square-degree gri images. MARSIAA/DetectLSB recovered ~20% more mock LSB galaxies and ~40% more LSB galaxies identified by eye than SExtractor/DetectLSB. With a 90% fraction of false positives from an entirely unsupervised pipeline, a completeness of 90% is reached for sources with r_e > 3" at a mean surface brightness level of mu_g = 27.7 mag/arcsec^2 and a central surface brightness of mu^0_g = 26.7 mag/arcsec^2. About 10% of the false positives are artifacts, the rest being background galaxies. We have found our method to be complementary to the application of matched filters and an optimized use of SExtractor, and to have the following advantages: it is scale-free, can be applied simultaneously to several bands, and is well adapted for crowded regions on the sky. Comment: 39 pages, 18 figures, accepted for publication in A
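    To illustrate the kind of class-based segmentation described above, the sketch below assigns each pixel of a single-band image to one of several brightness classes with a Gaussian mixture, ordered from faintest to brightest. It is a crude, non-Markovian, single-band stand-in for MARSIAA's multi-scale, multi-band segmentation; the function name, the median-filter regularization, and all parameters are assumptions for the example.
```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.ndimage import median_filter

def brightness_classes(image, n_classes=4, smooth=5):
    """Segment an image into n_classes by pixel brightness (illustration only;
    no spatial prior and no multi-band information, unlike MARSIAA)."""
    x = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x)
    # relabel the classes from faintest to brightest mean level
    order = np.argsort(gmm.means_.ravel())
    relabel = np.empty(n_classes, dtype=int)
    relabel[order] = np.arange(n_classes)
    labels = relabel[gmm.predict(x)].reshape(image.shape)
    # a median filter is a cheap proxy for spatial regularization
    return median_filter(labels, size=smooth)
```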

    Accuracy of MAP segmentation with hidden Potts and Markov mesh prior models via Path Constrained Viterbi Training, Iterated Conditional Modes and Graph Cut based algorithms

    In this paper, we study the statistical classification accuracy of two different Markov field environments for pixelwise image segmentation, considering the labels of the image as hidden states and solving the estimation of such labels as a solution of the MAP equation. The emission distribution is assumed to be the same in all models, and the difference lies in the Markovian prior hypothesis made over the labeling random field. The a priori labeling knowledge is modeled with (a) a second-order anisotropic Markov Mesh and (b) a classical isotropic Potts model. Under these models, we consider three different segmentation procedures: 2D Path Constrained Viterbi training for the Hidden Markov Mesh, Graph Cut based segmentation for the first-order isotropic Potts model, and ICM (Iterated Conditional Modes) for the second-order isotropic Potts model. We provide a unified view of all three methods and investigate their goodness of fit for classification, studying the influence of parameter estimation, computational gain, and extent of automation on the statistical measures Overall Accuracy, Relative Improvement, and Kappa coefficient, allowing robust and accurate statistical analysis on synthetic and real-life experimental data from the field of Dental Diagnostic Radiography. All algorithms, using the learned parameters, generate good segmentations with little interaction when the images have a clear multimodal histogram. Suboptimal learning proves to be frail in the case of non-distinctive modes, which limits the complexity of usable models, and hence the achievable error rate as well. All Matlab code written is provided in a toolbox available for download from our website, following the Reproducible Research Paradigm.
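    The ICM procedure mentioned above admits a compact illustration. The sketch below runs ICM under a first-order Potts prior with Gaussian emissions of known class means; it is a generic textbook version for intuition (the paper additionally studies a second-order prior, Markov mesh / Viterbi training, and graph cuts), and the parameter values and 4-neighbourhood are assumptions for the example.
```python
import numpy as np

def icm_potts(img, means, sigma, beta=1.5, n_iter=10):
    """Pixelwise labelling by Iterated Conditional Modes: at each pixel pick the
    label minimizing Gaussian data cost plus a Potts penalty on disagreeing
    4-neighbours (illustrative sketch)."""
    K = len(means)
    # negative log-likelihood of each label at each pixel (up to a constant)
    data_cost = np.stack([(img - m) ** 2 / (2 * sigma ** 2) for m in means], axis=-1)
    labels = data_cost.argmin(axis=-1)                   # maximum-likelihood start
    H, W = img.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                neigh = []
                if i > 0: neigh.append(labels[i - 1, j])
                if i < H - 1: neigh.append(labels[i + 1, j])
                if j > 0: neigh.append(labels[i, j - 1])
                if j < W - 1: neigh.append(labels[i, j + 1])
                neigh = np.array(neigh)
                # Potts smoothness cost: beta per disagreeing neighbour
                smooth_cost = np.array([beta * np.sum(neigh != k) for k in range(K)])
                labels[i, j] = np.argmin(data_cost[i, j] + smooth_cost)
    return labels
```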

    Multi-spectral light interaction modeling and imaging of skin lesions

    The Nevoscope as a diagnostic tool for melanoma was evaluated using a white light source, with promising results. Information about lesion depth and structure would further improve the sensitivity and specificity of melanoma diagnosis. The wavelength-dependent, variable penetration power of monochromatic light in trans-illumination imaging with the Nevoscope can be used to obtain this information. Optimal selection of wavelengths for multi-spectral imaging requires light-tissue interaction modeling. For this, three-dimensional, wavelength-dependent, voxel-based models of skin lesions with different depths are proposed. A Monte Carlo simulation algorithm (MCSVL) is developed in MATLAB, and the tissue models are simulated using the Nevoscope optical geometry. Optical wavelengths from 350 to 700 nm, at 5 nm intervals, are used in the study. A correlation analysis between lesion depth and diffuse reflectance is then used to select wavelengths that produce diffuse reflectance suitable for imaging and that give information related to the nevus depth and structure. Using the selected wavelengths, multi-spectral trans-illumination images of the skin lesions are collected and analyzed. An adaptive wavelet transform based tree-structure classification method (ADWAT) is proposed to classify epi-illuminance images of the skin lesions, obtained using a white light source, into melanoma and dysplastic nevus image classes. In this method, tree-structure models of melanoma and dysplastic nevus are developed and semantically compared with the tree-structure of the unknown image for classification. Development of the tree-structure depends on threshold selections obtained from a statistical analysis of the feature set, which makes the classification method adaptive. The true positive rate obtained for this classifier is 90% with a false positive rate of 10%. The Extended ADWAT method and Fuzzy Membership Functions method, using combined features from the epi-illuminance and multi-spectral images, further improve the sensitivity and specificity of melanoma diagnosis. The combined feature set with the Extended-ADWAT method gives a true positive rate of 93.33% with a false positive rate of 8.88%. The Gaussian Membership Functions give a true positive rate of 100% with a false positive rate of 17.77%, while the Bell Membership Functions give a true positive rate of 100% with a false positive rate of 4.44%.
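    The wavelength-selection step based on correlation analysis can be sketched as follows: given simulated diffuse reflectance for a set of lesion models at many wavelengths, rank wavelengths by the strength of their correlation with lesion depth. The function and its inputs are illustrative assumptions; the Monte Carlo simulation that would produce the reflectance values is not reproduced here.
```python
import numpy as np

def select_wavelengths(depths, reflectance, wavelengths, top_k=3):
    """Rank wavelengths by how strongly diffuse reflectance correlates with
    lesion depth (illustrative sketch).
    depths:      (N,)   lesion depths of the simulated models
    reflectance: (N, W) diffuse reflectance per model and wavelength
    wavelengths: (W,)   wavelengths in nm, e.g. np.arange(350, 705, 5)"""
    corr = np.array([np.corrcoef(depths, reflectance[:, w])[0, 1]
                     for w in range(reflectance.shape[1])])
    best = np.argsort(-np.abs(corr))[:top_k]
    return wavelengths[best], corr[best]
```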