
    IVUS-based histology of atherosclerotic plaques: improving longitudinal resolution

    Although Virtual Histology (VH) is the in-vivo gold standard for atherosclerotic plaque characterization in IVUS images, it suffers from poor longitudinal resolution due to ECG gating. In this paper, we propose an image-based approach to overcome this limitation. Since each tissue type has different echogenic characteristics, it exhibits different local frequency components in IVUS images. Using the Redundant Wavelet Packet Transform (RWPT), IVUS images are decomposed into multiple sub-band images. To encode the textural statistics of each resulting image, run-length features are extracted from the neighborhood centered on each pixel. To maximize the discriminative power of these features, relevant sub-bands are selected with the Local Discriminant Bases (LDB) algorithm in combination with Fisher's criterion. A structure of weighted multi-class SVMs classifies the extracted feature vectors into three tissue classes, namely fibro-fatty tissue, necrotic core and dense calcified tissue. Results show the superiority of our approach, with an overall accuracy of 72%, over methods based on Local Binary Patterns and co-occurrence matrices, which give accuracy rates of 70% and 71%, respectively.
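
The run-length texture statistics mentioned above can be sketched as follows. This is a minimal illustration, not the paper's per-pixel, per-sub-band pipeline: it counts gray-level runs along image rows and computes one classic run-length feature (Short Run Emphasis); the function names and the toy image are hypothetical.

```python
# Hedged sketch: gray-level run lengths along image rows, a simplified
# stand-in for the paper's per-pixel neighborhood run-length features.
from collections import Counter

def row_run_lengths(image):
    """Count (gray level, run length) pairs along each row."""
    runs = Counter()
    for row in image:
        current, length = row[0], 1
        for value in row[1:]:
            if value == current:
                length += 1
            else:
                runs[(current, length)] += 1
                current, length = value, 1
        runs[(current, length)] += 1
    return runs

def short_run_emphasis(runs):
    """Short Run Emphasis: weights short runs more heavily (1/length^2)."""
    total = sum(runs.values())
    return sum(n / length ** 2 for (_, length), n in runs.items()) / total

# Toy 2x5 "image" with three gray levels.
image = [
    [0, 0, 1, 1, 1],
    [2, 2, 2, 2, 0],
]
runs = row_run_lengths(image)
sre = short_run_emphasis(runs)
```

In the paper such features are computed per RWPT sub-band and stacked into the vectors fed to the SVMs; textures dominated by short runs (fine, speckled tissue) push SRE toward 1.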

    Image blur estimation based on the average cone of ratio in the wavelet domain

    In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of ACR are twofold: it is powerful in estimating local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of an image irrespective of its noise level. In particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is none. The results demonstrate consistent performance of the proposed metric over a wide class of natural images and a wide range of out-of-focus blur. Moreover, the proposed method shows remarkable insensitivity to noise compared to other wavelet-domain methods.

    Neighborhood detection and rule selection from cellular automata patterns

    Using genetic algorithms (GAs) to search for cellular automaton (CA) rules from the spatio-temporal patterns produced by CA evolution is usually complicated and time-consuming when both the neighborhood structure and the local rule are searched simultaneously. The complexity of this problem motivates the development of a new search which separates neighborhood detection from the GA search. In the paper, the neighborhood is determined by independently selecting terms from a large term set on the basis of the contribution each term makes to the next state of the cell to be updated. The GA search is then started with a considerably smaller set of candidate rules pre-defined by the detected neighborhood. This approach is tested over a large set of one-dimensional (1-D) and two-dimensional (2-D) CA rules. Simulation results illustrate the efficiency of the new algorithm.
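
The spatio-temporal patterns that the search operates on are produced by simple CA evolution. As a minimal sketch (assuming a binary 1-D CA with a radius-1 neighborhood and Wolfram-style rule numbering, which is one common setting for such experiments), generating a pattern looks like this:

```python
# Hedged sketch: evolving a 1-D binary CA to produce the spatio-temporal
# pattern from which neighborhood structure and rule would be identified.
# Assumes radius-1 neighborhoods and Wolfram rule numbering.

def step(cells, rule):
    """One synchronous update with periodic boundary conditions."""
    n = len(cells)
    out = []
    for i in range(n):
        # Pack (left, center, right) into a 3-bit index into the rule table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def evolve(cells, rule, steps):
    """Collect the spatio-temporal pattern: one row of cells per time step."""
    pattern = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        pattern.append(cells)
    return pattern

initial = [0] * 7 + [1] + [0] * 7   # single seed cell
pattern = evolve(initial, rule=30, steps=5)
```

The identification problem runs in the opposite direction: given only `pattern`, recover which neighbors actually influence each update (here, positions i-1, i, i+1) and then which rule table is in use; restricting the GA to rules over the detected neighborhood is what shrinks the search space.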

    Fabric defect detection using the wavelet transform in an ARM processor

    Small devices used in our daily life are built on powerful architectures that can serve industrial applications requiring portability and communication facilities. We present in this paper an example of the use of an embedded system, the Zeus epic 520 single-board computer, for defect detection in textiles using image processing. We implement the Haar wavelet transform using the embedded Visual C++ 4.0 compiler for Windows CE 5. The algorithm was tested for defect detection using images of fabrics with five types of defects. An average of 95% correct defect detection was obtained, a performance similar to that of processors with floating-point arithmetic.
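
A single level of the 2-D Haar transform, the decomposition applied here before inspecting detail coefficients for defects, can be sketched as follows. This is an illustrative Python version (the paper's implementation is embedded C++); the sub-band naming and the toy "defect" image are our own.

```python
# Hedged sketch: one level of the 2-D Haar wavelet transform. A localized
# fabric defect shows up as large coefficients in the detail sub-bands.

def haar_1d(seq):
    """One Haar step on an even-length sequence: pairwise averages and differences."""
    avg = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    diff = [(seq[i] - seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    return avg, diff

def haar_2d(image):
    """Single-level 2-D Haar: rows first, then columns, giving four sub-bands."""
    lows, highs = [], []
    for row in image:
        a, d = haar_1d(row)
        lows.append(a)
        highs.append(d)

    def transform_columns(mat):
        cols = list(map(list, zip(*mat)))           # transpose to columns
        a_cols, d_cols = zip(*(haar_1d(c) for c in cols))
        return (list(map(list, zip(*a_cols))),      # transpose back to rows
                list(map(list, zip(*d_cols))))

    ll, lh = transform_columns(lows)    # approximation, horizontal detail
    hl, hh = transform_columns(highs)   # vertical detail, diagonal detail
    return ll, lh, hl, hh

image = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 90, 10],   # one bright pixel as a toy "defect"
    [10, 10, 10, 10],
]
ll, lh, hl, hh = haar_2d(image)
```

On a uniform fabric all detail coefficients are near zero, so thresholding `lh`, `hl` and `hh` flags defective regions; using only additions, subtractions and halving is also what makes Haar a good fit for processors without floating-point hardware.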