Models of statistical self-similarity for signal and image synthesis

Abstract

Statistical self-similarity of random processes in continuous domains is defined through invariance of their statistics to time or spatial scaling. In discrete time, scaling of signals by an arbitrary factor can be accomplished through frequency warping, and statistical self-similarity is defined by the discrete-time continuous-dilation scaling operation. Unlike other self-similarity models, which mostly rely on characteristics of continuous self-similarity other than scaling, this model provides a way to express discrete-time statistical self-similarity directly through scaling of discrete-time signals. This dissertation studies the discrete-time self-similarity model based on the new scaling operation and develops its properties, which reveal relations with other models. It also presents a new self-similarity definition for discrete-time vector processes and demonstrates synthesis examples for multi-channel network traffic.

In two-dimensional spaces, self-similar random fields are of interest in various areas of image processing, since they fit certain types of natural patterns and textures very well. Current treatments of self-similarity in continuous two-dimensional space use a definition that is a direct extension of the 1-D definition. However, most current discrete-space two-dimensional approaches do not consider scaling but instead are based on ad hoc formulations, for example, digitizing continuous random fields such as fractional Brownian motion. The dissertation demonstrates that the current statistical self-similarity definition in continuous space is restrictive, and provides an alternative, more general definition. It also provides a formalism for discrete-space statistical self-similarity that depends on a new scaling operator for discrete images. Within the new framework, it is possible to synthesize a wider class of discrete-space self-similar random fields.
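As a minimal numerical illustration of the scaling-invariance definition above (a sketch, not code or a method from the dissertation): ordinary Brownian motion is the simplest statistically self-similar process, with Hurst exponent H = 1/2, so B(at) has the same distribution as a^H B(t). The second-order consequence Var[B(at)] = a^{2H} Var[B(t)] can be checked by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 0.5               # Hurst exponent of standard Brownian motion
a = 4.0               # dilation (scaling) factor
n_paths, n_steps = 20000, 256
dt = 1.0 / n_steps

# Brownian paths as cumulative sums of independent Gaussian increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)

t_idx = n_steps // 8          # sample index corresponding to time t
at_idx = int(a * t_idx)       # sample index corresponding to time a*t
var_t = B[:, t_idx].var()
var_at = B[:, at_idx].var()

# Scaling invariance of the statistics: the variance ratio should be
# close to a**(2*H) = 4 for this choice of a and H.
print(var_at / var_t)
```

The same check applied to a process that is not self-similar (e.g., a stationary AR(1) sequence) would fail, which is what makes scaling invariance of the statistics a defining property rather than an incidental one.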
