2,240 research outputs found

    Rotationally invariant texture features using the dual-tree complex wavelet transform

    Rotationally invariant texture based features

    Self-Similar Anisotropic Texture Analysis: the Hyperbolic Wavelet Transform Contribution

    Textures in images can often be well modeled using self-similar processes, while at the same time they may display anisotropy. The present contribution therefore aims at studying self-similarity and anisotropy jointly, focusing on a specific classical class of Gaussian anisotropic self-similar processes. It is first shown that accurate joint estimates of the anisotropy and self-similarity parameters are obtained by replacing the standard 2D discrete wavelet transform with the hyperbolic wavelet transform, which permits the use of different dilation factors along the horizontal and vertical axes. Defining anisotropy requires a reference direction that need not a priori match the horizontal and vertical axes along which the images are digitized; this discrepancy defines a rotation angle. Second, we show that this rotation angle can be jointly estimated. Third, a nonparametric bootstrap-based procedure is described that provides confidence intervals in addition to the estimates themselves and enables the construction of an isotropy test that can be applied to a single texture image. Fourth, the robustness and versatility of the proposed analysis are illustrated by applying it to a large variety of isotropic and anisotropic self-similar fields. As an illustration, we show that a truly anisotropic built-in self-similarity can be disentangled from an isotropic self-similarity on which an anisotropic trend has been superimposed.
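
    The estimation idea lends itself to a compact illustration. Below is a minimal sketch, assuming PyWavelets, that approximates the hyperbolic transform by applying 1D wavelet decompositions independently along the two image axes and regressing the log-energy of the detail coefficients on the two scale indices; markedly unequal slopes hint at anisotropy. The function names and the plain least-squares fit are illustrative stand-ins, not the authors' estimator.

```python
# Sketch of a hyperbolic (fully separable) wavelet analysis.
# Assumes NumPy and PyWavelets; `hyperbolic_coeffs` and
# `scaling_slopes` are hypothetical names for this illustration.
import numpy as np
import pywt

def hyperbolic_coeffs(image, wavelet="db3", levels=3):
    """Detail bands indexed by (j1, j2): independent scales per axis.

    pywt.wavedec returns [cA_n, cD_n, ..., cD_1], i.e. details
    ordered coarsest to finest; j1, j2 index that ordering.
    """
    coeffs = {}
    rows = pywt.wavedec(image, wavelet, level=levels, axis=0)
    for j1, band in enumerate(rows[1:], start=1):
        cols = pywt.wavedec(band, wavelet, level=levels, axis=1)
        for j2, detail in enumerate(cols[1:], start=1):
            coeffs[(j1, j2)] = detail
    return coeffs

def scaling_slopes(coeffs):
    """Regress log2 E|d|^2 on (j1, j2); returns per-axis exponents."""
    J = np.array([[j1, j2, 1.0] for (j1, j2) in coeffs])
    S = np.array([np.log2(np.mean(c ** 2)) for c in coeffs.values()])
    slopes, *_ = np.linalg.lstsq(J, S, rcond=None)
    return slopes[0], slopes[1]
```

    For an isotropic field the two fitted exponents should agree up to estimation noise; the paper's bootstrap procedure is what turns that comparison into a formal isotropy test.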

    Multispectral texture synthesis

    Synthesizing texture involves ordering pixels in a 2D arrangement so as to display certain known spatial correlations, generally as described by a sample texture. In an abstract sense, these pixels could be grayscale values, RGB color values, or entire spectral curves. The focus of this work is to develop a practical synthesis framework that maintains this abstract view while synthesizing texture with high spectral dimension, effectively achieving spectral invariance. The principal idea is to use a single monochrome texture synthesis step to capture the spatial information in a multispectral texture. The first step is to use a global color space transform to condense the spatial information in a sample texture into a principal luminance channel. Then, a monochrome texture synthesis step generates the corresponding principal band in the synthetic texture. This spatial information is then used to condition the generation of spectral information. A number of variants of this general approach are introduced. The first uses a multiresolution transform to decompose the spatial information in the principal band into an equivalent scale/space representation. This information is encapsulated in a set of low-order statistical constraints that are used to iteratively coerce white noise into the desired texture. The residual spectral information is then generated using a non-parametric Markov random field (MRF) model. The remaining variants use a non-parametric MRF to generate the spatial and spectral components simultaneously. In this approach, multispectral texture is grown from a seed region by sampling from the set of nearest neighbors in the sample texture, as identified by a template-matching procedure in the principal band. The effectiveness of both algorithms is demonstrated on a number of texture examples ranging from grayscale and RGB textures to 16-, 22-, 32- and 63-band spectral images. In addition to the standard visual test that predominates in the literature, effort is made to quantify the accuracy of the synthesis using informative and effective metrics. These include first- and second-order statistical comparisons as well as statistical divergence tests.
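
    As a rough illustration of the first step, the sketch below, assuming only NumPy, condenses a multispectral texture into a single "principal band" via PCA. The paper's global color space transform may differ, so treat this as the generic variance-maximizing stand-in.

```python
# Minimal PCA sketch for extracting a principal band from a
# multispectral texture.  `principal_band` is a hypothetical name.
import numpy as np

def principal_band(texture):
    """texture: (H, W, B) multispectral array -> (H, W) principal band."""
    H, W, B = texture.shape
    X = texture.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)                        # center each spectral band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(H, W)           # project onto first component
```

    Monochrome synthesis then proceeds on this single band, and the remaining spectral values are generated conditioned on it, as the abstract describes.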

    Radiomics-Based Outcome Prediction for Pancreatic Cancer Following Stereotactic Body Radiotherapy

    (1) Background: Radiomics uses high-throughput mining of medical imaging data to extract unique information and predict tumor behavior. Currently available clinical prediction models poorly predict treatment outcomes in pancreatic adenocarcinoma. Therefore, we used radiomic features of primary pancreatic tumors to develop outcome prediction models and compared them to traditional clinical models. (2) Methods: We extracted and analyzed radiomic data from pre-radiation contrast-enhanced CTs of 74 pancreatic cancer patients undergoing stereotactic body radiotherapy. A panel of over 800 radiomic features was screened to create overall survival and local-regional recurrence prediction models, which were compared to clinical prediction models and models combining radiomic and clinical information. (3) Results: A 6-feature radiomic signature was identified that achieved better overall survival prediction performance than the clinical model (mean concordance index: 0.66 vs. 0.54 on resampled cross-validation test sets), and the combined model improved the performance slightly further to 0.68. Similarly, a 7-feature radiomic signature better predicted recurrence than the clinical model (mean AUC of 0.78 vs. 0.66). (4) Conclusion: Overall survival and recurrence can be better predicted with models based on radiomic features than with those based on clinical features for pancreatic cancer.
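
    For readers unfamiliar with the workflow, the following is a hedged sketch, assuming the lifelines package and entirely synthetic data, of fitting a Cox model to a small radiomic signature and reading off a concordance index; none of the variable names or numbers come from the paper beyond the cohort size and signature length.

```python
# Toy Cox-model sketch with synthetic "radiomic" features.
# Assumes NumPy, pandas, and lifelines; data is fabricated here
# purely to make the snippet runnable.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 74, 6                                   # 74 patients, 6-feature signature
X = rng.normal(size=(n, p))
df = pd.DataFrame(X, columns=[f"radiomic_{i}" for i in range(p)])
df["time"] = rng.exponential(scale=np.exp(-X[:, 0]))   # risk tied to feature 0
df["event"] = (rng.random(n) < 0.7).astype(int)        # ~70% observed events

cph = CoxPHFitter(penalizer=0.1)               # ridge penalty for many features
cph.fit(df, duration_col="time", event_col="event")
print(f"concordance index: {cph.concordance_index_:.2f}")
```

    In practice the signature would be selected on training folds and the concordance index reported on held-out test sets, as the resampled cross-validation in the paper does.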

    Redundancy of stereoscopic images: Experimental Evaluation

    With the recent advances in visualization devices, we are seeing a growing market for stereoscopic content. To convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two views of the video content. This has profound implications for the resources required to transmit the content, as well as for the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually determined the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur of one of the two stereo images, as well as the color saturation threshold in one of the two stereo images for which full-color 3D perception with no visible color degradation is maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one has to add only a few percent of that amount of data in order to achieve stereoscopic perception.
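
    The redundancy being measured is easy to demonstrate. The sketch below, assuming NumPy and SciPy, builds a red/cyan anaglyph in which one view is strongly blurred; stereopsis typically survives the blur, which is the kind of effect the observers quantified. The function and parameter names are illustrative, not from the paper.

```python
# Toy sketch of stereo redundancy: a red/cyan anaglyph whose right
# view is heavily blurred.  Assumes NumPy and SciPy.
import numpy as np
from scipy.ndimage import gaussian_filter

def anaglyph(left_rgb, right_rgb, blur_sigma=3.0):
    """left/right: (H, W, 3) float images in [0, 1] -> red/cyan anaglyph."""
    right = gaussian_filter(right_rgb, sigma=(blur_sigma, blur_sigma, 0))
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red channel from the sharp left view
    out[..., 1:] = right[..., 1:]      # green/blue from the blurred right view
    return out
```

    Pushing `blur_sigma` up while checking whether depth perception holds mirrors the paper's finding that only a few percent of additional data suffices for the second view.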