
    Automatic detection of malignant prostatic gland units in cross-sectional microscopic images

    Prostate cancer is the second most frequent cause of cancer deaths among men in the US. In the most reliable screening method, histological images from a biopsy are examined under a microscope by pathologists. In an early stage of prostate cancer, only relatively few gland units in a large region become malignant, and discovering such sparse malignant gland units under a microscope is a labor-intensive and error-prone task for pathologists. In this paper, we develop effective image segmentation and classification methods for the automatic detection of malignant gland units in microscopic images. Both the segmentation and classification methods are based on carefully designed feature descriptors, including color histograms and texton co-occurrence tables. © 2010 IEEE. The 17th IEEE International Conference on Image Processing (ICIP 2010), Hong Kong, China, 26-29 September 2010. In Proceedings of the 17th ICIP, 2010, p. 1057-106
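    The color-histogram descriptor mentioned above can be sketched in a few lines. The paper does not specify its exact histogram layout, so the joint RGB quantization below (8 bins per channel, L1 normalization) is an illustrative assumption, not the authors' implementation:

    ```python
    import numpy as np

    def color_histogram(patch, bins=8):
        """Quantize each RGB channel into `bins` levels and build a joint
        color histogram, a common appearance descriptor for tissue patches."""
        q = (patch.astype(np.float64) / 256.0 * bins).astype(int)  # per-channel bin index
        idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
        hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
        return hist / hist.sum()  # normalize so patches of different sizes are comparable

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # stand-in image patch
    h = color_histogram(patch)
    ```

    Such normalized histograms can then be fed to any standard classifier; the texton co-occurrence tables in the paper would be built analogously over a quantized texton map.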

    Category-specific incremental visual codebook training for scene categorization

    In this paper, we propose a category-specific incremental visual codebook training method for scene categorization. Starting from a preliminary codebook trained on a subset of the training samples, we incrementally introduce the remaining samples to enrich the content of the visual codebook. The incrementally learned codebook is then used to encode images for scene categorization. The advantages of the proposed method are: (1) it is computationally efficient compared with batch-mode clustering; (2) the number of visual words is determined automatically during the incremental learning procedure; (3) categorization performance improves with the enriched codebook compared with the codebook trained from only a subset of the training samples. The experimental results show the effectiveness of the proposed method. © 2010 IEEE. The 17th IEEE International Conference on Image Processing (ICIP 2010), Hong Kong, China, 26-29 September 2010. In Proceedings of the 17th ICIP, 2010, p. 1501-150
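    A minimal sketch of incremental codebook growth, under an assumed rule the abstract does not spell out: a descriptor farther than a threshold `tau` from every existing word becomes a new word (so the vocabulary size is determined automatically), otherwise it refines its nearest word with a running mean:

    ```python
    import numpy as np

    def incremental_codebook(descriptors, init_words, tau=1.0):
        """Grow a visual codebook incrementally from a preliminary one."""
        words = [np.asarray(w, dtype=np.float64) for w in init_words]
        counts = [1] * len(words)
        for d in np.asarray(descriptors, dtype=np.float64):
            dists = [np.linalg.norm(d - w) for w in words]
            j = int(np.argmin(dists))
            if dists[j] > tau:
                words.append(d.copy())      # enrich the codebook with a new word
                counts.append(1)
            else:
                counts[j] += 1              # running-mean update of the nearest word
                words[j] += (d - words[j]) / counts[j]
        return np.array(words)

    init = [np.zeros(2)]                              # preliminary one-word codebook
    data = [[0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]       # remaining training descriptors
    codebook = incremental_codebook(data, init, tau=1.0)
    ```

    Each sample is touched once, which is the source of the claimed efficiency advantage over batch-mode clustering such as k-means.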

    Reconstructing diffusion kurtosis tensors from sparse noisy measurements

    Diffusion kurtosis imaging (DKI) is a recent MRI-based method that can quantify deviation from Gaussian behavior using a kurtosis tensor. DKI has potential value for the assessment of neurologic diseases. Existing techniques for diffusion kurtosis imaging typically need to capture hundreds of MRI images, which is not clinically feasible on human subjects. In this paper, we develop robust denoising and model-fitting methods that make it possible to accurately reconstruct a kurtosis tensor from 75 or fewer noisy measurements. Our denoising method is based on subspace learning for multi-dimensional signals, and our model-fitting technique uses iterative reweighting to effectively discount the influence of outliers. The total data acquisition time thus drops significantly, making diffusion kurtosis imaging feasible for many clinical applications involving human subjects. © 2010 IEEE. The 17th IEEE International Conference on Image Processing (ICIP 2010), Hong Kong, China, 26-29 September 2010. In Proceedings of the 17th ICIP, 2010, p. 4185-418
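    The iterative-reweighting idea can be illustrated on a generic linear model. The sketch below is not the paper's fitting procedure (its exact weighting scheme is not given); it shows the standard IRLS pattern with weights 1/|r| on the residuals, which approximates an L1 fit and so discounts gross outliers:

    ```python
    import numpy as np

    def irls_fit(A, y, n_iter=20, eps=1e-6):
        """Iteratively reweighted least squares: large residuals get small
        weights 1/max(|r|, eps), so outliers barely influence the refit."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary LS starting point
        for _ in range(n_iter):
            r = y - A @ x
            w = 1.0 / np.maximum(np.abs(r), eps)       # downweight large residuals
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
        return x

    A = np.vander(np.linspace(0.0, 1.0, 40), 3)        # toy quadratic design matrix
    x_true = np.array([2.0, -1.0, 0.5])
    y = A @ x_true
    y[::10] += 5.0                                     # inject gross outliers
    x_robust = irls_fit(A, y)
    ```

    Ordinary least squares is visibly biased by the injected outliers, while the reweighted fit recovers the clean coefficients almost exactly.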

    Quantification of structures of skin lesion and calibration of dermoscopy images for automated melanoma diagnosis

    Summary of results: To make our Internet-based melanoma screening system ready for practical use, we focused on the two themes named in the project title. (1) Quantification of dermoscopic structures: we built a recognition system whose judgments show no statistically significant difference from those of expert dermatologists across all 15 dermoscopic structures defined in the ABCD rule and the 7-point checklist. (2) Automated color calibration of dermoscopy images: we established a method that adjusts the luminance, hue, and saturation of an image to those of a properly acquired dermoscopy image, achieving calibration performance equivalent to that obtained with special devices, without requiring any. Both objectives set out in the proposal were achieved.

    A Novel HVS-based Watermarking Scheme in CT Domain

    In this paper, a novel watermarking technique in the contourlet transform (CT) domain is presented. The proposed algorithm takes advantage of a multiscale framework and multidirectionality to extract the significant frequency, luminance, and texture components of an image. Unlike conventional methods in the contourlet domain, the mask function is computed pixel by pixel, taking into account the frequency, luminance, and texture content of all the image subbands, including the low-pass subband and the directional subbands. This adaptive formulation lets the scheme trade off imperceptibility against robustness. The watermark is detected by computing a correlation. Finally, the experimental results demonstrate the imperceptibility of the scheme and its robustness against standard watermarking attacks.
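    The correlation detector mentioned above follows a standard spread-spectrum pattern, sketched below on synthetic coefficients. The embedding rule, strength `alpha`, and threshold choice are illustrative assumptions; the paper's HVS mask would modulate `alpha` per pixel rather than use a single constant:

    ```python
    import numpy as np

    def embed(coeffs, watermark, alpha=0.1):
        """Additive spread-spectrum embedding: c' = c + alpha * |c| * w."""
        return coeffs + alpha * np.abs(coeffs) * watermark

    def detect(coeffs, watermark, alpha=0.1):
        """Correlation detector: compare the normalized correlation with a
        threshold set to half the expected response for the true watermark."""
        corr = np.mean(coeffs * watermark)
        threshold = alpha * np.mean(np.abs(coeffs)) / 2.0
        return corr > threshold

    rng = np.random.default_rng(2)
    c = rng.normal(0.0, 10.0, size=16384)        # stand-in subband coefficients
    w = rng.choice([-1.0, 1.0], size=16384)      # pseudorandom bipolar watermark
    marked = embed(c, w)
    ```

    For the true watermark the correlation concentrates around `alpha * E[|c|]`, well above threshold; for unmarked data it concentrates around zero.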

    Approximate Bayesian Computation, stochastic algorithms and non-local means for complex noise models

    In this paper, we present a stochastic NL-means-based denoising algorithm for generalized non-parametric noise models. First, we provide a statistical interpretation of current patch-based neighborhood filters and show that Bayesian inference needs to explicitly account for discrepancies between the model and the data. Furthermore, we investigate the Approximate Bayesian Computation (ABC) rejection method, combined with density-learning techniques, for handling situations where the posterior is intractable or prohibitively expensive to compute. We demonstrate our stochastic Gamma NL-means (SGNL) on real images corrupted by non-Gaussian noise.
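    The ABC rejection step can be sketched generically: draw a parameter from the prior, simulate data under the model, and keep the draw when a summary statistic lands close to the observed one. The toy Gamma-noise setting below (uniform prior, sample mean as the summary statistic, tolerance 0.2) is an assumption for illustration, not the paper's configuration:

    ```python
    import numpy as np

    def abc_rejection(observed, prior_sample, simulate, stat, tol, n_draws, rng):
        """ABC rejection: accept theta when |stat(simulated) - stat(observed)| < tol."""
        s_obs = stat(observed)
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample(rng)
            if abs(stat(simulate(theta, rng)) - s_obs) < tol:
                accepted.append(theta)
        return np.array(accepted)

    rng = np.random.default_rng(3)
    # Infer the shape k of Gamma(k, scale=1) noise from its sample mean.
    observed = rng.gamma(shape=4.0, scale=1.0, size=500)
    posterior = abc_rejection(
        observed,
        prior_sample=lambda r: r.uniform(0.5, 10.0),
        simulate=lambda k, r: r.gamma(shape=k, scale=1.0, size=500),
        stat=np.mean,
        tol=0.2,
        n_draws=2000,
        rng=rng,
    )
    ```

    The accepted draws approximate the posterior; a density-learning step, as investigated in the paper, would then smooth this sample into a usable estimate.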

    No-reference depth map quality evaluation model based on depth map edge confidence measurement in immersive video applications

    When evaluating the perceptual quality of digital media for overall quality-of-experience assessment in immersive video applications, two main approaches stand out: subjective and objective quality evaluation. On one hand, subjective quality evaluation offers the best representation of video quality as perceived by real viewers. On the other hand, it consumes a significant amount of time and effort, owing to the involvement of real users in lengthy and laborious assessment procedures. It is therefore essential to develop an objective quality evaluation model. The speed advantage of an objective model that predicts the quality of rendered virtual views from the depth maps used in the rendering process allows faster quality assessment for immersive video applications. This is particularly important given the lack of a suitable reference or ground truth for comparing the available depth maps, especially when such applications offer live content services. This paper presents a no-reference depth map quality evaluation model based on a proposed depth map edge confidence measurement technique that helps accurately estimate the quality of rendered (virtual) views in immersive multi-view video content. The model is applied to depth image-based rendering in the multi-view video format, providing evaluation results comparable to those in the literature and often exceeding their performance.
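    The abstract does not describe how edge confidence is computed, so the sketch below is only one plausible reading: score a depth map by the fraction of its edge pixels that coincide with an edge in the corresponding color view, on the intuition that depth discontinuities misaligned with object boundaries cause rendering artifacts. All thresholds and the gradient operator are assumptions:

    ```python
    import numpy as np

    def edge_map(img, thresh):
        """Binary edge map from forward-difference gradient magnitude."""
        gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
        gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
        return (gx + gy) > thresh

    def edge_confidence(depth, color, t_depth=0.1, t_color=0.1):
        """Fraction of depth-edge pixels that coincide with a color edge."""
        de = edge_map(depth, t_depth)
        ce = edge_map(color, t_color)
        if not de.any():
            return 1.0                       # no depth edges to mistrust
        return float(np.logical_and(de, ce).sum() / de.sum())

    # Synthetic check: an aligned step edge vs. a shifted (misaligned) one.
    color = np.zeros((16, 16)); color[:, 8:] = 1.0
    good_depth = color.copy()
    bad_depth = np.zeros((16, 16)); bad_depth[:, 4:] = 1.0
    ```

    A well-aligned depth map scores 1.0, while the shifted one scores low, mirroring the no-reference behavior the paper relies on.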

    Probabilistic modeling and inference for sequential space-varying blur identification

    The identification of the parameters of spatially variant blurs, given a clean image and its blurry, noisy version, is a challenging inverse problem of interest in many application fields, such as biological microscopy and astronomical imaging. In this paper, we consider a parametric model of the blur and introduce a 1D state-space model to describe the statistical dependence among neighboring kernels. We apply a Bayesian approach to estimate the posterior distribution of the kernel parameters given the available data. Since this posterior is intractable for most realistic models, we propose to approximate it through a sequential Monte Carlo approach, processing all data in a sequential and efficient manner. Additionally, we propose a new sampling method to alleviate the particle degeneracy problem, which arises in approximate Bayesian filtering, particularly for challenging, concentrated posterior distributions. The considered method allows us to process image patches sequentially at reasonable computational and memory cost. Moreover, the probabilistic approach we adopt provides uncertainty quantification, which is useful for image restoration. The experimental results illustrate the improved estimation performance of our novel approach and demonstrate the benefits of exploiting the spatial structure of the parametric blurs in the considered models.
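    The sequential Monte Carlo machinery can be illustrated with a textbook bootstrap particle filter on a 1D random-walk state; the paper's state is the blur-kernel parameter vector along patches, and its resampling scheme is more elaborate, so the model below is a simplified stand-in:

    ```python
    import numpy as np

    def particle_filter(observations, n_particles, rng, q=0.1, r=0.5):
        """Bootstrap particle filter for a 1D random walk observed in Gaussian
        noise; systematic resampling counteracts particle degeneracy."""
        particles = rng.normal(0.0, 1.0, n_particles)
        estimates = []
        for y in observations:
            particles = particles + rng.normal(0.0, q, n_particles)  # propagate
            logw = -0.5 * ((y - particles) / r) ** 2                 # likelihood
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimates.append(float(np.sum(w * particles)))           # posterior mean
            # systematic resampling
            pos = (rng.random() + np.arange(n_particles)) / n_particles
            idx = np.minimum(np.searchsorted(np.cumsum(w), pos), n_particles - 1)
            particles = particles[idx]
        return np.array(estimates)

    rng = np.random.default_rng(4)
    true_state = np.cumsum(rng.normal(0.0, 0.1, 100))   # hidden random walk
    obs = true_state + rng.normal(0.0, 0.5, 100)        # noisy observations
    est = particle_filter(obs, n_particles=500, rng=rng)
    ```

    Because the dynamics couple neighboring steps, the filtered estimate tracks the hidden state far better than the raw observations do, which is the benefit the paper exploits across spatially neighboring kernels.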

    A single-lobe photometric stereo approach for heterogeneous material

    Shape from shading with multiple light sources is an active research area, and a diverse range of approaches has been proposed in recent decades. However, devising a robust reconstruction technique remains a challenging goal, as the image acquisition process is highly nonlinear. Recent photometric stereo variants rely on simplifying assumptions to make the problem solvable: light propagation is still commonly assumed to be uniform, and the bidirectional reflectance distribution function is assumed to be diffuse, with limited attention to specular materials. In this work, we introduce a well-posed formulation based on partial differential equations (PDEs) for a unified reflectance function that can model both diffuse and specular reflections. We base our derivation on ratios of images, which makes the model independent of photometric invariants and yields a well-posed differential problem based on a system of quasi-linear PDEs with discontinuous coefficients. In addition, we directly solve a differential problem for the unknown depth, thus avoiding the intermediate step of approximating the normal field. A variational approach is presented that ensures robustness to noise and outliers (such as black shadows), and this is confirmed by a wide range of experiments on both synthetic and real data, where we compare favorably to the state of the art. Roberto Mecca is a Marie Curie fellow of the “Istituto Nazionale di Alta Matematica” (Italy) for a project shared with the Department of Engineering, University of Cambridge, and the Department of Mathematics, University of Bologna.
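    The image-ratio idea is easy to verify numerically in the Lambertian special case: for intensities I_i = rho * max(l_i . n, 0), the ratio I_1/I_2 cancels the albedo rho, which is why the formulation becomes independent of material brightness (handling heterogeneous materials). The normal and light directions below are arbitrary illustrative values:

    ```python
    import numpy as np

    def lambertian(rho, light, normal):
        """Lambertian image irradiance: albedo times clamped cosine."""
        return rho * max(float(light @ normal), 0.0)

    n = np.array([0.1, -0.2, 0.97]); n /= np.linalg.norm(n)    # surface normal
    l1 = np.array([0.3, 0.2, 0.93]); l1 /= np.linalg.norm(l1)  # light source 1
    l2 = np.array([-0.2, 0.4, 0.89]); l2 /= np.linalg.norm(l2) # light source 2

    # Two materials with very different albedos give the same image ratio.
    ratio_dark = lambertian(0.2, l1, n) / lambertian(0.2, l2, n)
    ratio_bright = lambertian(0.9, l1, n) / lambertian(0.9, l2, n)
    ```

    In the paper this cancellation is pushed further, to a unified diffuse-plus-specular reflectance, and the resulting ratio equations are rewritten as quasi-linear PDEs directly in the unknown depth.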