
    Models of statistical self-similarity for signal and image synthesis

    Statistical self-similarity of random processes in continuous domains is defined through the invariance of their statistics to time or spatial scaling. In discrete time, scaling of signals by an arbitrary factor can be accomplished through frequency warping, and statistical self-similarity is defined by the discrete-time continuous-dilation scaling operation. Unlike other self-similarity models, which mostly rely on characteristics of continuous self-similarity other than scaling, this model provides a way to express discrete-time statistical self-similarity using scaling of discrete-time signals. This dissertation studies the discrete-time self-similarity model based on the new scaling operation and develops its properties, which reveal its relations to other models. It also presents a new self-similarity definition for discrete-time vector processes and demonstrates synthesis examples for multi-channel network traffic. In two-dimensional space, self-similar random fields are of interest in various areas of image processing, since they fit certain types of natural patterns and textures very well. Current treatments of self-similarity in continuous two-dimensional space use a definition that is a direct extension of the 1-D definition. However, most current discrete-space two-dimensional approaches do not consider scaling; instead they rely on ad hoc formulations, for example, digitizing continuous random fields such as fractional Brownian motion. The dissertation demonstrates that the current statistical self-similarity definition in continuous space is restrictive, and it provides an alternative, more general definition. It also provides a formalism for discrete-space statistical self-similarity that depends on a new scaling operator for discrete images. Within the new framework, it is possible to synthesize a wider class of discrete-space self-similar random fields.
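    The dissertation's scaling operator is not spelled out in the abstract, but the basic idea of scaling a discrete signal by a non-integer factor in the frequency domain can be sketched as follows. This is a minimal illustration, not the dissertation's operator; the function name and the amplitude-compensation convention are assumptions.

```python
import numpy as np

def dilate(x, a):
    """Scale a discrete-time signal by factor a via frequency-domain
    interpolation (a toy stand-in for continuous-dilation scaling)."""
    n = len(x)
    m = int(round(n * a))            # length of the dilated signal
    X = np.fft.rfft(x)
    Y = np.zeros(m // 2 + 1, dtype=complex)
    k = min(len(X), len(Y))
    Y[:k] = X[:k]                    # keep the shared low-frequency band
    # compensate for the FFT length change so amplitudes are preserved
    return np.fft.irfft(Y, m) * (m / n)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = dilate(x, 1.5)
print(len(y))  # 384
```

    A statistically self-similar discrete-time process would, under this kind of operation, keep the same second-order statistics up to a power of the scale factor.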

    DCTM: Discrete-Continuous Transformation Matching for Semantic Flow

    Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions for more complex deformations, such as affine transformations, have been lacking because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework in which dense affine transformation fields are inferred through a discrete label optimization whose labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
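    The full DCTM pipeline (edge-aware filtering, CNN descriptors) is beyond a snippet, but the core discrete-continuous idea — pick the best candidate from a coarse discrete label set, then refine it continuously — can be illustrated on a toy 1-D affine fit. Everything below (the grid, the learning rate, the problem itself) is an assumption for illustration, not the paper's method.

```python
import numpy as np

def dc_fit(x, y):
    """Toy discrete-continuous matching: choose the best affine label
    (a, b) from a coarse discrete grid, then refine it continuously by
    gradient descent on the squared error."""
    # discrete stage: coarse grid of candidate affine labels
    labels = [(a, b) for a in np.linspace(-2, 2, 9)
                     for b in np.linspace(-2, 2, 9)]
    cost = lambda a, b: np.mean((a * x + b - y) ** 2)
    a, b = min(labels, key=lambda p: cost(*p))
    # continuous stage: gradient refinement of the winning label
    lr = 0.1
    for _ in range(200):
        r = a * x + b - y
        a -= lr * 2 * np.mean(r * x)
        b -= lr * 2 * np.mean(r)
    return a, b

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100)
y = 1.3 * x - 0.4                  # ground-truth affine map
a, b = dc_fit(x, y)
print(round(a, 2), round(b, 2))    # 1.3 -0.4
```

    The discrete stage keeps the search tractable over a huge solution space; the continuous stage recovers the precision the coarse labels lack — the same division of labor the DCTM framework applies per pixel to full affine fields.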

    Stochastic models which separate fractal dimension and Hurst effect

    Fractal behavior and long-range dependence have been observed in an astonishing number of physical systems. Either phenomenon has been modeled by self-similar random functions, thereby implying a linear relationship between fractal dimension, a measure of roughness, and the Hurst coefficient, a measure of long-memory dependence. This letter introduces simple stochastic models which allow for any combination of fractal dimension and Hurst exponent. We synthesize images from these models, with arbitrary fractal properties and power-law correlations, and propose a test for self-similarity.
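    One well-known covariance family with this decoupling property is the Cauchy class, c(h) = (1 + |h|^α)^(−β/α), where α controls local roughness (fractal dimension) and β controls the tail decay (Hurst effect). The sketch below — a minimal 1-D Gaussian simulation via Cholesky factorization, with illustrative parameter choices — assumes this family; the letter's exact models may differ.

```python
import numpy as np

def cauchy_cov(h, alpha, beta):
    """Cauchy-class covariance: alpha in (0, 2] sets local roughness,
    beta > 0 sets the rate of long-range decay."""
    return (1.0 + np.abs(h) ** alpha) ** (-beta / alpha)

def sample_path(n, alpha, beta, seed=0):
    """Draw one Gaussian sample path on a unit grid via Cholesky."""
    t = np.arange(n) / n
    C = cauchy_cov(t[:, None] - t[None, :], alpha, beta)
    C += 1e-8 * np.eye(n)             # numerical jitter
    L = np.linalg.cholesky(C)
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

# the point of the model: roughness and memory chosen independently
x_rough = sample_path(256, alpha=0.5, beta=1.5)   # rough, short memory
x_smooth = sample_path(256, alpha=1.8, beta=0.2)  # smooth, long memory
```

    For image synthesis the same covariance is applied to 2-D lags; Cholesky then becomes expensive and circulant-embedding methods are the usual replacement.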

    Self-Similar Anisotropic Texture Analysis: the Hyperbolic Wavelet Transform Contribution

    Textures in images can often be well modeled using self-similar processes, while they may at the same time display anisotropy. The present contribution thus aims at jointly studying self-similarity and anisotropy by focusing on a specific classical class of Gaussian anisotropic self-similar processes. It is first shown that accurate joint estimates of the anisotropy and self-similarity parameters are obtained by replacing the standard 2D discrete wavelet transform with the hyperbolic wavelet transform, which permits the use of different dilation factors along the horizontal and vertical axes. Defining anisotropy requires a reference direction that need not a priori match the horizontal and vertical axes along which the images are digitized; this discrepancy defines a rotation angle. Second, we show that this rotation angle can be jointly estimated. Third, a nonparametric bootstrap-based procedure is described that provides confidence intervals in addition to the estimates themselves and enables the construction of an isotropy test that can be applied to a single texture image. Fourth, the robustness and versatility of the proposed analysis are illustrated by application to a large variety of isotropic and anisotropic self-similar fields. As an illustration, we show that self-similarity with built-in anisotropy can be disentangled from isotropic self-similarity on which an anisotropic trend has been superimposed.
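    The key ingredient the hyperbolic transform adds over the standard 2-D DWT is independent dilation factors per axis. A stripped-down sketch of that ingredient, using Haar filters only (the real transform uses proper wavelet families; function names here are assumptions):

```python
import numpy as np

def haar1d(x, axis):
    """One level of the 1-D Haar transform along the given axis."""
    x = np.moveaxis(x, axis, 0)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return np.moveaxis(a, 0, axis), np.moveaxis(d, 0, axis)

def hyperbolic_detail(img, j1, j2):
    """Detail coefficients at the anisotropic scale pair (2^j1, 2^j2):
    j1 Haar levels along rows, j2 along columns, chosen independently."""
    a = img
    for _ in range(j1):
        a, _ = haar1d(a, axis=0)
    for _ in range(j2):
        a, _ = haar1d(a, axis=1)
    _, d = haar1d(a, axis=0)               # final detail in both
    _, d = haar1d(d, axis=1)               # directions at this pair
    return d

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
coeffs = {(j1, j2): hyperbolic_detail(img, j1, j2)
          for j1 in range(3) for j2 in range(3)}
print(coeffs[(1, 2)].shape)  # (16, 8)
```

    Joint estimation of the self-similarity and anisotropy parameters then proceeds by regressing the log-variance of these coefficients over the grid of scale pairs (j1, j2), rather than over a single diagonal scale as in the standard 2-D DWT.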

    Kernel Belief Propagation

    We propose a nonparametric generalization of belief propagation, Kernel Belief Propagation (KBP), for pairwise Markov random fields. Messages are represented as functions in a reproducing kernel Hilbert space (RKHS), and message updates are simple linear operations in the RKHS. KBP makes none of the assumptions commonly required in classical BP algorithms: the variables need not arise from a finite domain or a Gaussian distribution, nor must their relations take any particular parametric form. Rather, the relations between variables are represented implicitly and are learned nonparametrically from training data. KBP has the advantage that it may be used on any domain where kernels are defined (R^d, strings, groups), even where explicit parametric models are not known or closed-form expressions for the BP updates do not exist. The computational cost of message updates in KBP is polynomial in the training data size. We also propose a constant-time approximate message update procedure by representing messages using a small number of basis functions. In experiments, we apply KBP to image denoising, depth prediction from still images, and protein configuration prediction: KBP is faster than competing classical and nonparametric approaches (by orders of magnitude, in some cases), while providing significantly more accurate results.
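    A flavor of "messages as RKHS functions, updates as linear operations" can be given in a few lines. The sketch below is a hedged simplification — a kernel-ridge (conditional-mean-embedding style) estimate of the expected incoming belief, for a single incoming message and no explicit edge potential — and should not be read as the paper's full KBP update.

```python
import numpy as np

def rbf(a, b, s=1.0):
    """Gaussian RBF Gram matrix between 1-D sample sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * s * s))

def kernel_message(xt, xs, psi_t, lam=1e-3):
    """Message from node t to node s, represented by its values at the
    stored samples of s: regress the incoming belief psi_t(x_t) onto
    x_s by kernel ridge regression. The update is linear in psi_t."""
    n = len(xt)
    K_tt = rbf(xt, xt)
    K_st = rbf(xs, xt)                       # evaluate at node-s samples
    alpha = np.linalg.solve(K_tt + lam * n * np.eye(n), psi_t)
    return K_st @ alpha                      # message values at xs

rng = np.random.default_rng(3)
xt = rng.uniform(-2, 2, 200)                 # samples at node t
xs = xt + 0.1 * rng.standard_normal(200)     # correlated neighbor node
psi = np.exp(-xt ** 2)                       # incoming belief at t
m = kernel_message(xt, xs, psi)
```

    Note that the outgoing message is `K_st @ alpha`, i.e. linear in the incoming belief — the property the abstract highlights; the paper's constant-time variant additionally compresses the message onto a small basis.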

    Self-Replicating Machines in Continuous Space with Virtual Physics

    JohnnyVon is an implementation of self-replicating machines in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines, but their external relationships are governed by a simulated physics that includes Brownian motion, viscosity, and spring-like attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary "seed" pattern is put in a "soup" of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.
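    The simulated physics described above — springs between bonded particles, viscous drag, and a Brownian kick — can be sketched as a single integration step. The constants and the integrator are illustrative assumptions, not JohnnyVon's actual parameters.

```python
import numpy as np

def step(pos, vel, bonds, dt=0.01, k=5.0, rest=1.0, drag=0.5,
         temp=0.1, rng=np.random.default_rng(4)):
    """One step of a JohnnyVon-style virtual physics: spring forces
    between bonded particles, viscous drag, and a Brownian kick."""
    f = np.zeros_like(pos)
    for i, j in bonds:                          # spring-like bonds
        d = pos[j] - pos[i]
        r = np.linalg.norm(d) + 1e-12
        f_ij = k * (r - rest) * d / r           # Hooke's law, rest length
        f[i] += f_ij
        f[j] -= f_ij
    f -= drag * vel                             # viscosity of the liquid
    f += temp * rng.standard_normal(pos.shape)  # Brownian motion
    vel = vel + dt * f                          # unit mass, Euler step
    pos = pos + dt * vel
    return pos, vel

pos = np.array([[0.0, 0.0], [2.0, 0.0]])        # two particles, one bond
vel = np.zeros_like(pos)
for _ in range(2000):
    pos, vel = step(pos, vel, bonds=[(0, 1)])
print(np.linalg.norm(pos[1] - pos[0]))          # relaxes toward rest length 1.0
```

    In the full system the finite-state machines decide when bonds form and break, so the same physics that jostles free particles around also assembles them into copies of the seed pattern.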