
    Multiresolution signal decomposition schemes

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis and synthesis. This scheme comprises the following ingredients: (i) the pyramid consists of a (finite or infinite) number of levels such that the information content decreases towards higher levels; (ii) each step towards a higher level is constituted by an (information-reducing) analysis operator, whereas each step towards a lower level is modeled by an (information-preserving) synthesis operator. One basic assumption is necessary: synthesis followed by analysis yields the identity operator, meaning that no information is lost by these two consecutive steps. In this report, several examples are described of linear as well as nonlinear (e.g., morphological) pyramid decomposition schemes. Some of these examples are known from the literature (Laplacian pyramid, morphological granulometries, skeleton decomposition) and some of them are new (morphological Haar pyramid, median pyramid). Furthermore, the report makes a distinction between single-scale and multiscale decomposition schemes (i.e. without or with sample reduction).
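    To make the pyramid condition concrete, here is a minimal sketch (not taken from the report) of a single analysis/synthesis pair in Python, loosely in the spirit of the median pyramid mentioned above: the analysis operator takes block medians with sample reduction, the synthesis operator upsamples by repetition, and synthesis followed by analysis is the identity. The function names `analyze`/`synthesize` and the block size are illustrative choices; the report's actual operators may differ.

```python
import numpy as np

def analyze(x, k=3):
    """Analysis operator: reduce information by taking the median of
    non-overlapping blocks of length k (with sample reduction)."""
    n = (len(x) // k) * k                  # drop a ragged tail, if any
    return np.median(x[:n].reshape(-1, k), axis=1)

def synthesize(s, k=3):
    """Synthesis operator: go back down one level by sample repetition."""
    return np.repeat(s, k)

# Pyramid condition: synthesis followed by analysis is the identity,
# so these two consecutive steps lose no information.
s = np.array([3.0, 7.0, 1.0])
assert np.allclose(analyze(synthesize(s)), s)

# The converse does not hold: analysis genuinely discards detail.
x = np.array([3.0, 4.0, 7.0, 9.0, 1.0, 0.0])
print(analyze(x))               # coarser level: [4. 1.]
print(synthesize(analyze(x)))   # an approximation of x, not x itself
```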

    Multiresolution signal decomposition schemes. Part 2: Morphological wavelets

    In its original form, the wavelet transform is a linear tool. However, it has been increasingly recognized that nonlinear extensions are possible. A major impulse to the development of nonlinear wavelet transforms has been given by the introduction of the lifting scheme by Sweldens. The aim of this report, which is a sequel to a previous report devoted exclusively to the pyramid transform, is to present an axiomatic framework encompassing most existing linear and nonlinear wavelet decompositions. Furthermore, it introduces some thus far unknown wavelets based on mathematical morphology, such as the morphological Haar wavelet, in both one and two dimensions. A general and flexible approach for the construction of nonlinear (morphological) wavelets is provided by the lifting scheme. The report discusses one example in considerable detail, the max-lifting scheme, which has the intriguing property that it preserves local maxima in a signal over a range of scales, depending on how local or global these maxima are.
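    As an illustration of a nonlinear, morphology-flavoured wavelet, the sketch below implements one invertible variant of a morphological Haar-type decomposition: pairwise maxima form the coarse signal and pairwise differences form the detail, with exact reconstruction. This is only a hedged stand-in for the constructions in the report (it is not the max-lifting scheme itself); the function names and the choice of maximum rather than minimum are assumptions.

```python
import numpy as np

def mhaar_analyze(x):
    """One level of a morphological Haar-type decomposition (one invertible
    variant): pairwise maximum as the coarse signal, pairwise difference as
    the detail -- a nonlinear analogue of the linear Haar average/difference."""
    x0, x1 = x[0::2], x[1::2]              # even / odd samples (split step)
    a = np.maximum(x0, x1)                 # approximation (coarse) signal
    d = x0 - x1                            # detail signal
    return a, d

def mhaar_synthesize(a, d):
    """Invert the decomposition exactly (perfect reconstruction)."""
    x0 = a + np.minimum(d, 0)
    x1 = a - np.maximum(d, 0)
    x = np.empty(2 * len(a), dtype=a.dtype)
    x[0::2], x[1::2] = x0, x1
    return x

x = np.array([2.0, 5.0, 9.0, 4.0, 4.0, 4.0])
a, d = mhaar_analyze(x)
assert np.allclose(mhaar_synthesize(a, d), x)   # lossless, as in lifting
print(a)   # [5. 9. 4.] -- the local maximum of each pair survives to the coarser scale
```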

    Nonlinear multiresolution signal decomposition schemes. II. Morphological wavelets


    Significance linked connected component analysis plus

    Dr. Xinhua Zhuang, Dissertation Supervisor. Field of Study: Computer Science. May 2018. An image coding algorithm, SLCCA Plus, is introduced in this dissertation. SLCCA Plus is a wavelet-based subband coding method. In wavelet-based subband coding, the input image goes through a wavelet transform and is decomposed into a pyramid of wavelet subbands. The characteristics of the wavelet coefficients within and among subbands are then exploited to remove redundancy, and the remaining information is organized and entropy encoded. SLCCA Plus contains a series of improvements to SLCCA. Before SLCCA, there were three top-ranked wavelet image coders: the Embedded Zerotree Wavelet coder (EZW), Morphological Representation of Wavelet Data (MRWD), and Set Partitioning in Hierarchical Trees (SPIHT). They exploit either the inter-subband relation among zero wavelet coefficients or within-subband clustering. SLCCA, on the other hand, outperforms these three coders by exploiting both the inter-subband coefficient relations and the within-subband clustering of significant wavelet coefficients. SLCCA Plus strengthens SLCCA in the following aspects: intelligent quantization, an enhanced cluster filter, potential-significant shared-zero, and improved context models. The purpose of the first three improvements is to further remove redundant information while keeping the image error as low as possible; as a result, they achieve a better trade-off between bit cost and image quality. Moreover, the improved context models lower the entropy by refining the classification of symbols in the cluster sequence and magnitude bit-planes, so the adaptive arithmetic coder can achieve a better coding gain. For performance evaluation, SLCCA Plus is compared to SLCCA and JPEG 2000. On average, SLCCA Plus achieves a 7% bit saving over JPEG 2000 and 4% over SLCCA; the comparison also shows that SLCCA Plus retains more texture and edge detail at lower bit rates. Includes bibliographical references (pages 88-92).
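    The sketch below illustrates, in generic terms, what "within-subband clustering of significant wavelet coefficients" means: threshold a subband to obtain a significance map and group neighbouring significant coefficients into connected clusters. It is not the SLCCA or SLCCA Plus coder; the function name, threshold, and toy subband are assumptions, and `scipy.ndimage.label` is used only as a convenient connected-component labeller.

```python
import numpy as np
from scipy import ndimage

def significance_clusters(subband, threshold):
    """Mark coefficients whose magnitude reaches the threshold as significant
    and group neighbouring significant coefficients into clusters. This is the
    within-subband clustering idea in a nutshell; an actual coder would go on
    to encode the clusters, their shapes, and their magnitudes."""
    sig_map = np.abs(subband) >= threshold          # significance map
    labels, n_clusters = ndimage.label(sig_map)     # 4-connected clusters
    return sig_map, labels, n_clusters

# Toy "wavelet subband": most coefficients are near zero, a few are large.
rng = np.random.default_rng(0)
subband = rng.normal(0, 0.5, size=(8, 8))
subband[2:4, 2:5] += 10.0                           # a cluster of significant energy
sig_map, labels, n = significance_clusters(subband, threshold=4.0)
print(n, "cluster(s)")
print(labels)
```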

    Wavelet and Multiscale Methods

    Various scientific models demand finer and finer resolution of relevant features. Paradoxically, increasing computational power serves only to heighten this demand: the wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information lead to tasks that are not tractable by standard numerical techniques. The last decade has seen the emergence of several new computational methodologies to address this situation. Their common features are the nonlinearity of the solution methods and the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples are multigrid methods and adaptive grid solvers for partial differential equations, which have substantially advanced the frontiers of computability for certain problem classes in numerical analysis. Other highly visible examples are regression techniques in nonparametric statistical estimation; the design of universal estimators in the context of mathematical learning theory and machine learning; the investigation of greedy algorithms in complexity theory; compression techniques and encoding in signal and image processing; the solution of global operator equations through the compression of fully populated matrices arising from boundary integral equations, with the aid of multipole expansions and hierarchical matrices; and attacking problems in high spatial dimensions by sparse-grid or hyperbolic-wavelet concepts. This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computation and to promote the exchange of ideas emerging in various disciplines.

    Construction of the Scale Aware Anisotropic Diffusion Pyramid With Application to Multi-scale Tracking

    This thesis is concerned with the identification of features within two-dimensional imagery. Current acquisition technology is capable of producing very high-resolution images at large frame rates, generating an enormous amount of raw data. The visual information contained in these image sequences is tremendous in both spatial and temporal content, exceeding present signal processing technology in all but the simplest image processing tasks. A majority of this detail is relatively unimportant for the identification of an object, however, and the motivation for this thesis is, at its core, the study and development of methods capable of identifying image features in a highly robust and efficient manner. Biological vision systems have developed methods for coping with high-resolution imagery, and these systems serve as a starting point for designing robust and efficient algorithms capable of identifying features within image sequences. By foveating towards a region of interest, biological systems initially search coarse-scale scene representations and exploit this information to efficiently process finer-resolution data. This search procedure is facilitated by the nonlinear distribution of visual sensors within a biological vision system, and the result is a very efficient and robust method for identifying objects. Humans initially identify peripheral objects as potential regions of interest, acquire higher-resolution image information by focusing on the region, and decide whether the perceived object is actually present through the use of all available knowledge of the scene.
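    The coarse-to-fine idea can be sketched as follows: build a smooth-and-downsample pyramid, locate a feature at the coarsest level, and refine the estimate over a small window at each finer level. This is only an illustration of the search strategy, not the thesis's method; in particular, a plain Gaussian blur stands in for the scale-aware anisotropic diffusion step, and all function names and parameters are assumptions.

```python
import numpy as np
from scipy import ndimage

def build_pyramid(image, levels=3, sigma=1.0):
    """Smooth-and-downsample pyramid (Gaussian blur used here as a stand-in
    for the scale-aware anisotropic diffusion step)."""
    pyramid = [image]
    for _ in range(levels - 1):
        blurred = ndimage.gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])            # halve the resolution
    return pyramid                                    # fine -> coarse

def coarse_to_fine_peak(pyramid):
    """Find the brightest point at the coarsest level, then refine the estimate
    level by level instead of searching the full-resolution image."""
    level = len(pyramid) - 1
    r, c = np.unravel_index(np.argmax(pyramid[level]), pyramid[level].shape)
    for level in range(len(pyramid) - 2, -1, -1):
        r, c = 2 * r, 2 * c                           # map to the finer grid
        img = pyramid[level]
        r0, r1 = max(r - 2, 0), min(r + 3, img.shape[0])
        c0, c1 = max(c - 2, 0), min(c + 3, img.shape[1])
        window = img[r0:r1, c0:c1]                    # small local search only
        dr, dc = np.unravel_index(np.argmax(window), window.shape)
        r, c = r0 + dr, c0 + dc
    return r, c

image = np.zeros((64, 64))
image[40, 21] = 1.0
image = ndimage.gaussian_filter(image, 2.0)           # a blurry "feature"
print(coarse_to_fine_peak(build_pyramid(image)))      # close to (40, 21)
```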

    Vision-Based 2D and 3D Human Activity Recognition
