
    Minimum-Energy Bivariate Wavelet Frame with Arbitrary Dilation Matrix

    In order to characterize bivariate signals, minimum-energy bivariate wavelet frames with an arbitrary dilation matrix are studied, motivated by the advantages of minimum-energy frames and the useful properties of bivariate wavelets. Firstly, the concept of a minimum-energy bivariate wavelet frame is defined, and its equivalent characterizations and a necessary condition are presented. Secondly, based on the polyphase forms of the symbol functions of the scaling function and the wavelet functions, two sufficient conditions and an explicit construction method are given. Finally, the decomposition and reconstruction algorithms are designed and numerical examples are presented.
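    As a hedged illustration of the setting (the notation below is ours, not necessarily the paper's): a minimum-energy frame is in particular a tight frame with frame bound 1, so for a dilation matrix A in Z^{2x2} whose eigenvalues all exceed 1 in modulus and wavelets psi^1, ..., psi^N associated with a refinable scaling function, the wavelet system must in particular satisfy the tight-frame identity

        \|f\|_{L^2}^2 \;=\; \sum_{i=1}^{N} \sum_{j \in \mathbb{Z}} \sum_{k \in \mathbb{Z}^2} \bigl|\langle f, \psi^i_{j,k} \rangle\bigr|^2,
        \qquad \psi^i_{j,k}(x) = |\det A|^{j/2}\, \psi^i(A^j x - k), \quad f \in L^2(\mathbb{R}^2).

    The sufficient conditions mentioned above are constraints on the polyphase components of the symbols of the scaling function and wavelets that guarantee this identity; see the paper for the precise statements.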

    Adaptive multiresolution visualization of large multidimensional multivariate scientific datasets

    The sizes of today's scientific datasets range from megabytes to terabytes, making it impossible to directly browse the raw datasets visually. This presents significant challenges for visualization scientists who are interested in supporting these datasets. In this thesis, we present an adaptive data representation model which can be utilized with many of the commonly employed visualization techniques when dealing with large amounts of data. Our hierarchical design also alleviates the long-standing visualization problem due to limited display space. The idea is based on using compactly supported orthogonal wavelets and additional downsizing techniques to generate a hierarchy of fine-to-coarse approximations of a very large dataset for visualization. An adaptive data hierarchy, which contains authentic multiresolution approximations and the corresponding error, has many advantages over the original data. First, it allows scientists to visualize the overall structure of a dataset by browsing its coarse approximations. Second, the fine approximations of the hierarchy provide local details of the interesting data subsets. Third, the error of the data representation can provide the scientist with information about the authenticity of the data approximation. Finally, in a client-server network environment, a coarse representation can increase the efficiency of a visualization process by quickly giving users a rough idea of the dataset before they decide whether to continue the transmission or to abort it. For datasets that require long rendering times, an authentic approximation of a very large dataset can speed up the visualization process greatly. Variations on the main wavelet-based multiresolution hierarchy described in this thesis also lead to other multiresolution representation mechanisms. For example, we investigate the uses of norm projections and principal components to build multiresolution data hierarchies of large multivariate datasets. This leads to the development of a more flexible dual multiresolution visualization environment for large data exploration. We present the results of experimental studies of our adaptive multiresolution representation using wavelets. Utilizing a multiresolution data hierarchy, we illustrate that information access from a dataset with tens of millions of data values can be achieved in real time. Based on these results, we propose procedures to assist in generating a multiresolution hierarchy of a large dataset. For example, the findings indicate that an ordinary computed tomography volume dataset can be represented effectively for some tasks by an adaptive data hierarchy at less than 1.5% of its original size.
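    The hierarchy-building idea can be illustrated with a minimal, numpy-only sketch (not the thesis implementation; 2x2x2 block averaging stands in for the Haar approximation up to normalization, and the stored error is a simple relative L2 norm):

        import numpy as np

        def build_hierarchy(volume, levels=3):
            """Build coarse approximations of a 3D volume plus per-level errors.

            Each level halves every axis by averaging 2x2x2 blocks, i.e. the
            (unnormalized) Haar approximation of the data.
            """
            hierarchy = [(volume, 0.0)]
            current = volume.astype(np.float64)
            for _ in range(levels):
                z, y, x = (s // 2 * 2 for s in current.shape)   # trim odd sizes
                c = current[:z, :y, :x]
                coarse = c.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
                # Re-expand to the previous resolution to estimate the approximation error.
                recon = np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2)
                err = np.linalg.norm(c - recon) / np.linalg.norm(c)
                hierarchy.append((coarse, err))
                current = coarse
            return hierarchy

        # Example: a synthetic 64^3 volume; a real CT volume would be loaded instead.
        vol = np.random.rand(64, 64, 64)
        for level, (approx, err) in enumerate(build_hierarchy(vol)):
            print(level, approx.shape, f"relative error {err:.3f}")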

    Multiresolution image models and estimation techniques


    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
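    For intuition, a toy numpy sketch of the block-matching step that underlies this kind of representation (a simplified PIFS-style search, not the dissertation's models; block sizes are placeholders and the usual rotation/flip isometries are omitted):

        import numpy as np

        def domain_pool(image, size=8):
            """Collect non-overlapping 16x16 domain blocks, downsampled to 8x8 by 2x2 averaging."""
            pool = []
            h, w = image.shape
            for i in range(0, h - 2 * size + 1, 2 * size):
                for j in range(0, w - 2 * size + 1, 2 * size):
                    d = image[i:i + 2 * size, j:j + 2 * size]
                    pool.append(d.reshape(size, 2, size, 2).mean(axis=(1, 3)))
            return pool

        def encode_block(range_block, domain_blocks):
            """Find the affine map (scale, offset, domain index) that best approximates one range block."""
            r = range_block.ravel().astype(np.float64)
            best = None
            for idx, d in enumerate(domain_blocks):
                dvec = d.ravel().astype(np.float64)
                # Least-squares fit of r ≈ s * d + o (the collage error for this domain).
                A = np.column_stack([dvec, np.ones_like(dvec)])
                (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                err = np.linalg.norm(A @ np.array([s, o]) - r)
                if best is None or err < best[0]:
                    best = (err, idx, s, o)
            return best

        img = np.random.rand(64, 64)            # stand-in for a real image
        err, idx, s, o = encode_block(img[:8, :8], domain_pool(img))
        print(f"best domain {idx}: scale {s:.2f}, offset {o:.2f}, error {err:.3f}")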

    Fuzzy techniques for noise removal in image sequences and interval-valued fuzzy mathematical morphology

    Image sequences play an important role in today's world. They provide us with a great deal of information. Videos are for example used for traffic observations, surveillance systems, autonomous navigation and so on. Due to bad acquisition, transmission or recording, however, the sequences are usually corrupted by noise, which hampers the functioning of many image processing techniques. A preprocessing module to filter the images then often becomes necessary. After an introduction to fuzzy set theory and image processing, in the first main part of the thesis several fuzzy logic based video filters are proposed: one filter for grayscale video sequences corrupted by additive Gaussian noise and two color extensions of it, and two grayscale filters and one color filter for sequences affected by random-valued impulse noise. In the second main part of the thesis, interval-valued fuzzy mathematical morphology is studied. Mathematical morphology is a theory intended for the analysis of spatial structures that has found application in e.g. edge detection, object recognition, pattern recognition, image segmentation, image magnification… In the thesis, an overview is given of the evolution from binary mathematical morphology over the different grayscale morphology theories to interval-valued fuzzy mathematical morphology and the interval-valued image model. Additionally, the basic properties of the interval-valued fuzzy morphological operators are investigated. Next, the decomposition of the interval-valued fuzzy morphological operators is also investigated: we study the relationship between the cut of the result of such an operator applied to an interval-valued image and structuring element and the result of the corresponding binary operator applied to the cut of the image and structuring element. These results are interesting first of all because they provide a link between interval-valued fuzzy mathematical morphology and binary mathematical morphology, but such a conversion into binary operators also reduces the computational cost. Finally, the reverse problem is tackled as well, i.e., the construction of interval-valued morphological operators from the binary ones. Using the results of a more general study in which an interval-valued fuzzy set is constructed from a nested family of crisp sets, increasing binary operators (e.g. the binary dilation) are extended to interval-valued fuzzy operators.
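    For background, a minimal numpy sketch of classical (single-valued) fuzzy grayscale morphology, which the interval-valued operators in the thesis generalize; this illustrates the standard minimum t-norm / Kleene-Dienes definitions, not the thesis code, and assumes images scaled to [0, 1] with a small symmetric structuring element:

        import numpy as np

        def fuzzy_dilation(f, b):
            """Fuzzy dilation with the minimum t-norm:
            (f ⊕ b)(x) = sup_y min(f(y), b(y - x)); zero padding at the borders."""
            k = b.shape[0] // 2
            padded = np.pad(f, k, mode="constant", constant_values=0.0)
            out = np.zeros_like(f)
            for i in range(f.shape[0]):
                for j in range(f.shape[1]):
                    window = padded[i:i + b.shape[0], j:j + b.shape[1]]
                    out[i, j] = np.max(np.minimum(window, b))
            return out

        def fuzzy_erosion(f, b):
            """Fuzzy erosion with the Kleene-Dienes implicator:
            (f ⊖ b)(x) = inf_y max(f(y), 1 - b(y - x)); one padding at the borders."""
            k = b.shape[0] // 2
            padded = np.pad(f, k, mode="constant", constant_values=1.0)
            out = np.zeros_like(f)
            for i in range(f.shape[0]):
                for j in range(f.shape[1]):
                    window = padded[i:i + b.shape[0], j:j + b.shape[1]]
                    out[i, j] = np.min(np.maximum(window, 1.0 - b))
            return out

        img = np.random.rand(32, 32)            # grayscale image in [0, 1]
        se = np.ones((3, 3))                    # crisp 3x3 structuring element
        print(fuzzy_dilation(img, se).mean(), fuzzy_erosion(img, se).mean())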

    Disoriented Chiral Condensate: Theory and Experiment

    It is thought that a region of pseudo-vacuum, where the chiral order parameter is misaligned from its vacuum orientation in isospin space, might occasionally form in high energy hadronic or nuclear collisions. The possible detection of such a disoriented chiral condensate (DCC) would provide useful information about the chiral structure of the QCD vacuum and/or the chiral phase transition of strong interactions at high temperature. We review the theoretical developments concerning possible DCC formation in high-energy collisions as well as the various experimental searches that have been performed so far. We discuss future prospects for upcoming DCC searches, e.g. in high-energy heavy-ion collision experiments at RHIC and LHC. (Comment: 120 pages, 52 figures. Uses elsart.cls. To appear in Physics Reports. Minor corrections, references added.)
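    The canonical quantitative signature behind such searches (a standard result quoted here for context, not taken from this review) concerns the neutral pion fraction f = N_{\pi^0}/N_\pi emitted from a single idealized DCC domain, which is distributed as

        P(f)\,df = \frac{df}{2\sqrt{f}}, \qquad 0 \le f \le 1,

    whereas generic, uncorrelated pion production is sharply peaked at f = 1/3; DCC searches therefore look for anomalous event-by-event fluctuations of the neutral-to-charged pion ratio.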

    Automatic Segmentation and Classification of Red and White Blood Cells in Thin Blood Smear Slides

    In this work we develop a system for the automatic detection and classification of cytological images, which plays an increasingly important role in medical diagnosis. A primary aim of this work is the accurate segmentation of cytological images of blood smears and subsequent feature extraction, along with studying related classification problems such as the identification and counting of peripheral blood smear particles and the classification of white blood cells into five types. Our proposed approach benefits from powerful image processing techniques to perform a complete blood count (CBC) without human intervention. The general framework of this blood smear analysis research is as follows. Firstly, a digital blood smear image is de-noised using an optimized Bayesian non-local means filter to design a dependable cell counting system that may be used under different image capture conditions. Then an edge-preservation technique with a Kuwahara filter is used to recover degraded and blurred white blood cell boundaries in blood smear images while reducing the residual negative effect of noise. After denoising and edge enhancement, the next step is binarization using a combination of Otsu and Niblack thresholding to separate the cells from the stained background. Cell separation and counting are achieved by granulometry, advanced active contours without edges, and morphological operators with the watershed algorithm. This is followed by the recognition of the different types of white blood cells (WBCs) and the segmentation of red blood cells (RBCs). The next step uses three main types of features, namely shape, intensity, and texture-invariant features, in combination with a variety of classifiers. The following features are used in this work: intensity histogram features, invariant moments, the relative area, co-occurrence and run-length matrices, dual-tree complex wavelet transform features, and Haralick and Tamura features. Next, different statistical approaches involving correlation, distribution, and redundancy are used to measure the dependency between a set of features and to select feature variables for white blood cell classification. A global sensitivity analysis with random sampling-high dimensional model representation (RS-HDMR), which can deal with independent and dependent input feature variables, is used to assess dominant discriminatory power and the reliability of features, leading to an efficient feature selection. These feature selection results are compared in experiments with the branch and bound method and with sequential forward selection (SFS), respectively. This work also examines support vector machines (SVM) and convolutional neural networks (LeNet5) in connection with white blood cell classification. Finally, the white blood cell classification system is validated in experiments conducted on cytological images of normal, poor-quality blood smears. These experimental results are also assessed against ground truth obtained manually from medical experts.
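    To make one stage of such a pipeline concrete, here is a hedged scikit-image sketch of the binarization and cell-separation step using plain Otsu thresholding and a distance-transform watershed (the thesis additionally combines Otsu with Niblack and uses granulometry and active contours, which are not shown; the min_distance value and the dark-cells-on-light-background assumption are placeholders):

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def separate_cells(gray):
            """Binarize a grayscale smear image with Otsu's threshold, then split
            touching cells with a distance-transform watershed."""
            binary = gray < threshold_otsu(gray)          # stained cells assumed darker than background
            distance = ndi.distance_transform_edt(binary)
            # One marker per local maximum of the distance map (approximate cell centres).
            coords = peak_local_max(distance, min_distance=7, labels=binary)
            mask = np.zeros(distance.shape, dtype=bool)
            mask[tuple(coords.T)] = True
            markers, _ = ndi.label(mask)
            labels = watershed(-distance, markers, mask=binary)
            return labels, labels.max()                   # label image and cell count

        # Usage with any grayscale smear image loaded as a numpy array:
        # labels, count = separate_cells(img)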

    MULTIRIDGELETS FOR TEXTURE ANALYSIS

    Directional wavelets have orientation selectivity and are thus able to efficiently represent highly anisotropic elements such as line segments and edges. The ridgelet transform is a kind of directional multi-resolution transform and has been successful in many image processing and texture analysis applications. The objective of this research is to develop a multiridgelet transform by applying the multiwavelet transform to the Radon transform so as to obtain improved properties. By adapting cardinal orthogonal multiwavelets to the ridgelet transform, it is shown that the proposed cardinal multiridgelet transform (CMRT) possesses cardinality, approximate translation invariance, and approximate rotation invariance simultaneously, whereas no single ridgelet transform holds all of these properties at the same time. These properties are beneficial to image texture analysis, as demonstrated in three studies of texture analysis applications. Firstly, a texture database retrieval study taking a portion of the Brodatz texture album as an example demonstrated that the CMRT-based texture representation performed better in database retrieval than other directional wavelet methods. Secondly, a study of LCD mura defect detection was based upon the classification of simulated abnormalities with a linear support vector machine classifier; the CMRT-based analysis of defects was shown to provide efficient features and superior detection performance compared with other competitive methods. Lastly, and most importantly, a study on prostate cancer tissue image classification was conducted. With CMRT-based texture extraction, Gaussian kernel support vector machines were developed to discriminate prostate cancer Gleason grade 3 from grade 4. Based on a limited database of prostate specimens, one classifier was trained to have remarkable test performance. This approach is unquestionably promising and is worth developing fully.
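    For intuition about the underlying transform, a plain single-wavelet ridgelet sketch in Python (Radon transform followed by a 1-D DWT along each projection); this is not the cardinal multiwavelet (CMRT) construction developed in the thesis, and the wavelet and number of angles are arbitrary choices:

        import numpy as np
        import pywt
        from skimage.transform import radon

        def ridgelet_coefficients(image, wavelet="db4", n_angles=180):
            """Toy ridgelet analysis: Radon transform over a set of angles,
            then a multilevel 1-D wavelet decomposition along each projection."""
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sinogram = radon(image, theta=theta, circle=False)   # shape: (positions, angles)
            return [pywt.wavedec(sinogram[:, k], wavelet) for k in range(n_angles)]

        img = np.zeros((64, 64))
        img[20:44, 30:33] = 1.0                                  # a short line-like feature
        coeffs = ridgelet_coefficients(img)
        print(len(coeffs), [c.shape for c in coeffs[0]])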