An Inhomogeneous Bayesian Texture Model for Spatially Varying Parameter Estimation
In statistical model-based texture feature extraction, features based on spatially varying parameters achieve higher discriminative performance than features based on spatially constant parameters. In this paper we formulate a novel Bayesian framework which achieves texture characterization by spatially varying parameters based on Gaussian Markov random fields. The parameter estimation is carried out by the Metropolis-Hastings algorithm. The distributions of the estimated spatially varying parameters then serve as discriminative texture features for classification and segmentation. Results show that the novel features outperform traditional Gaussian Markov random field texture features which use spatially constant parameters. These features capture both pixel spatial dependencies and structural properties of a texture, giving improved texture features for effective texture classification and segmentation.
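The abstract names Metropolis-Hastings for parameter estimation but does not spell out the sampler. As a generic illustration only, here is a minimal random-walk Metropolis-Hastings sketch on a toy 1-D log-posterior; the function names, toy target, and tuning values are ours, not the paper's GMRF model:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler (generic sketch).

    log_target : unnormalized log-density of the parameter posterior.
    Returns the chain of visited states (including rejections as repeats).
    """
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_samples):
        # Symmetric Gaussian proposal around the current state.
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_target(x_new)
        # Accept with probability min(1, target(x_new) / target(x)).
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy target: a standard-normal posterior over one texture parameter.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0, n_samples=5000)
mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in, then average
```

In the paper's setting the scalar state would be replaced by the spatially varying GMRF parameters, with the posterior supplying `log_target`.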
Preattentive texture discrimination with early vision mechanisms
We present a model of human preattentive texture perception. This model consists of three stages: (1) convolution of the image with a bank of even-symmetric linear filters followed by half-wave rectification to give a set of responses modeling outputs of V1 simple cells, (2) inhibition, localized in space, within and among the neural-response profiles, which suppresses weak responses when there are strong responses at the same or nearby locations, and (3) texture-boundary detection using wide odd-symmetric mechanisms. Our model can predict the salience of texture boundaries in any arbitrary gray-scale image. A computer implementation of this model has been tested on many of the classic stimuli from the psychophysical literature. Quantitative predictions of the degree of discriminability of different texture pairs match well with experimental measurements of discriminability in human observers.
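Stage (1) of this model, linear filtering followed by half-wave rectification, can be sketched in 1-D. The kernel and signal below are illustrative toys of our own choosing, not the paper's filter bank:

```python
def convolve(signal, kernel):
    """Valid-mode 1-D sliding-window filtering."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def half_wave_rectify(resp):
    """Split a linear response into ON (positive) and OFF (negative) channels,
    mimicking the rectified simple-cell outputs of stage (1)."""
    on = [max(r, 0.0) for r in resp]
    off = [max(-r, 0.0) for r in resp]
    return on, off

# Even-symmetric (cosine-phase) bar detector as a toy stand-in for the bank.
kernel = [-1.0, 2.0, -1.0]
# A luminance step: uniform left half, brighter right half.
signal = [0.0] * 5 + [1.0] * 5
on, off = half_wave_rectify(convolve(signal, kernel))
```

The step edge produces paired ON/OFF responses on either side of the boundary; the full model would then apply local inhibition and odd-symmetric boundary detection to these channels.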
Plane-Based Optimization of Geometry and Texture for RGB-D Reconstruction of Indoor Scenes
We present a novel approach to reconstruct an RGB-D indoor scene with plane
primitives. Our approach takes as input an RGB-D sequence and a dense coarse
mesh reconstructed from the sequence by some 3D reconstruction method, and
generates a lightweight, low-polygon mesh with clear face textures and sharp
features, without losing geometric detail from the original scene. To achieve
this, we first partition the input mesh with plane primitives, then simplify it
into a lightweight mesh, then optimize plane parameters, camera poses and
texture colors to maximize the photometric consistency across frames, and
finally optimize the mesh geometry to maximize consistency between geometry and
planes. Compared to existing planar reconstruction methods, which only cover
large planar regions in the scene, our method builds the entire scene from
adaptive planes without losing geometric detail, and preserves sharp features
in the final mesh. We demonstrate the effectiveness of our approach by applying
it to several RGB-D scans and comparing it to other state-of-the-art
reconstruction methods.
Comment: in International Conference on 3D Vision 2018; Models and Code: see
https://github.com/chaowang15/plane-opt-rgbd. arXiv admin note: text overlap
with arXiv:1905.0885
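The pipeline above starts from plane primitives fitted to mesh geometry. As a minimal sketch of that ingredient only (our own code, not the authors'; it assumes a non-vertical plane expressible as z = a*x + b*y + c), a least-squares plane fit via the normal equations:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D points.

    Builds the 3x3 normal equations A^T A p = A^T z for rows (x, y, 1)
    and solves them by Gauss-Jordan elimination with partial pivoting.
    """
    m = [[0.0] * 4 for _ in range(3)]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            m[i][3] += row[i] * z
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Noise-free points on the plane z = 2x - y + 3.
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane(pts)
```

A real reconstruction system would fit planes robustly (e.g. with outlier rejection) and in a parameterization that handles vertical planes; this sketch only shows the least-squares core.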
Model-based learning of local image features for unsupervised texture segmentation
Features that capture well the textural patterns of a certain class of images
are crucial for the performance of texture segmentation methods. The manual
selection of features or designing new ones can be a tedious task. Therefore,
it is desirable to automatically adapt the features to a certain image or class
of images. Typically, this requires a large set of training images with similar
textures and ground truth segmentation. In this work, we propose a framework to
learn features for texture segmentation when no such training data is
available. The cost function for our learning process is constructed to match a
commonly used segmentation model, the piecewise constant Mumford-Shah model.
This means that the features are learned such that they provide an
approximately piecewise constant feature image with a small jump set. Based on
this idea, we develop a two-stage algorithm which first learns suitable
convolutional features and then performs a segmentation. We note that the
features can be learned from a small set of images, from a single image, or
even from image patches. The proposed method achieves a competitive rank in the
Prague texture segmentation benchmark, and it is effective for segmenting
histological images.
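The learning cost above is matched to the piecewise constant Mumford-Shah model. As a 1-D illustration of that energy, data fidelity plus a penalty per jump, with toy values of our own choosing (not the paper's learned features):

```python
def mumford_shah_energy(values, breakpoints, gamma):
    """Energy of a piecewise-constant fit: data term + gamma * (#jumps).

    values      : 1-D feature signal.
    breakpoints : sorted indices where a new constant piece starts.
    gamma       : penalty per jump (controls the size of the jump set).
    """
    bounds = [0] + list(breakpoints) + [len(values)]
    data_term = 0.0
    for a, b in zip(bounds, bounds[1:]):
        seg = values[a:b]
        mean = sum(seg) / len(seg)  # best constant for this piece
        data_term += sum((v - mean) ** 2 for v in seg)
    return data_term + gamma * len(breakpoints)

# A feature signal that is approximately piecewise constant with one jump.
signal = [0.1, 0.0, 0.1, 0.9, 1.0, 1.1]
e_split = mumford_shah_energy(signal, [3], gamma=0.05)   # jump at index 3
e_const = mumford_shah_energy(signal, [], gamma=0.05)    # no jump allowed
```

Because the signal really has two levels, the fit with one jump has far lower energy than the single constant; the paper's learning step drives the features toward exactly this kind of signal.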
Discrimination of orientation-defined texture edges
Preattentive texture segregation was examined using textures composed of randomly placed, oriented line segments. A difference in texture element orientation produced an illusory, or orientation-defined, texture edge. Subjects discriminated between two textures, one with a straight texture edge and one with a "wavy" texture edge. Across conditions the orientation of the texture elements and the orientation of the texture edge varied. Although the orientation difference across the texture edge (the "texture gradient") is an important determinant of texture segregation performance, it is not the only one. Evidence from several experiments suggests that configural effects are also important. That is, orientation-defined texture edges are strongest when the texture elements (on one side of the edge) are parallel to the edge. This result is not consistent with a number of texture segregation models, including feature- and filter-based models. One possible explanation is that the second-order channel used to detect a texture edge of a particular orientation gives greater weight to first-order input channels of that same orientation.
Modeling Dynamic Swarms
This paper proposes the problem of modeling video sequences of dynamic swarms
(DS). We define DS as a large layout of stochastically repetitive spatial
configurations of dynamic objects (swarm elements) whose motions exhibit local
spatiotemporal interdependency and stationarity, i.e., the motions are similar
in any small spatiotemporal neighborhood. Examples of DS abound in nature,
e.g., herds of animals and flocks of birds. To capture the local spatiotemporal
properties of the DS, we present a probabilistic model that learns both the
spatial layout of swarm elements and their joint dynamics that are modeled as
linear transformations. To this end, a spatiotemporal neighborhood is
associated with each swarm element, in which local stationarity is enforced
both spatially and temporally. We assume that the prior on the swarm dynamics
is distributed according to an MRF in both space and time. Embedding this model
in a MAP framework, we iterate between learning the spatial layout of the swarm
and its dynamics. We learn the swarm transformations using ICM, which iterates
between estimating these transformations and updating their distribution in the
spatiotemporal neighborhoods. We demonstrate the validity of our method by
conducting experiments on real video sequences of birds, geese, robot swarms,
and pedestrians, which show the applicability of our model to real-world data.
Comment: 11 pages, 17 figures, conference paper, computer vision
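ICM (iterated conditional modes), which the abstract uses to learn the swarm transformations, is itself a generic MRF optimizer: sweep the variables, setting each to the mode of its local conditional. A toy sketch on a binary Ising-prior grid (our own example, not the paper's swarm model):

```python
def icm_denoise(noisy, beta=1.0, iters=5):
    """Iterated Conditional Modes on a binary (+/-1) grid with an Ising prior.

    Each sweep sets every site to the label maximizing its local conditional
    score: agreement with the observation + beta * agreement with the four
    neighbours. This coordinate-wise maximization is the essence of ICM.
    """
    h, w = len(noisy), len(noisy[0])
    labels = [row[:] for row in noisy]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                nb = sum(labels[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x),
                                        (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w)
                score = noisy[y][x] + beta * nb
                labels[y][x] = 1 if score >= 0 else -1
    return labels

# A mostly-uniform patch with one flipped site: ICM restores it.
noisy = [[1, 1, 1], [1, -1, 1], [1, 1, 1]]
clean = icm_denoise(noisy, beta=1.0)
```

In the paper the sites would be swarm transformations with a spatiotemporal MRF prior rather than binary pixels, but the alternating estimate/update structure is the same.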
Scale detection via keypoint density maps in regular or near-regular textures
In this paper we propose a new method to detect the global scale of images with regular, near regular, or
homogenous textures. We define texture āāscaleāā as the size of the basic elements (texels or textons) that
most frequently occur into the image. We study the distribution of the interest points into the image, at
different scale, by using our Keypoint Density Maps (KDMs) tool. A āāmodeāā vector is built computing the
most frequent values (modes) of the KDMs, at different scales. We observed that the mode vector is quasi
linear with the scale. The mode vector is properly subsampled, depending on the scale of observation, and
compared with a linear model. Texture scale is estimated as the one which minimizes an error function
between the related subsampled vector and the linear model. Results, compared with a state of the art
method, are very encouraging
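A keypoint density map can be sketched as a sliding-window count of detected keypoints, with the per-scale "mode" taken over the map. The grid, window size, and helper names below are illustrative assumptions of ours, not the authors' implementation:

```python
from collections import Counter

def keypoint_density_map(keypoints, width, height, window):
    """Count keypoints falling in a square window centred at each pixel."""
    half = window // 2
    return [[sum(1 for (kx, ky) in keypoints
                 if abs(kx - x) <= half and abs(ky - y) <= half)
             for x in range(width)]
            for y in range(height)]

def mode_of(kdm):
    """Most frequent density value over the map (one entry of the mode
    vector, computed here for a single window scale)."""
    return Counter(v for row in kdm for v in row).most_common(1)[0][0]

# A tiny regular texel grid: keypoints on a 2x2 lattice with spacing 2.
pts = [(1, 1), (3, 1), (1, 3), (3, 3)]
kdm = keypoint_density_map(pts, width=5, height=5, window=3)
```

Repeating this over a range of window sizes would give the mode vector that the method compares against a linear model to estimate the texture scale.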
The visual representation of texture
This research is concerned with texture: a source of visual information that has motivated a huge amount of psychophysical and computational research. This thesis questions how useful the accepted view of texture perception is. From a theoretical point of view, work to date has largely avoided two critical aspects of a computational theory of texture perception. Firstly, what is texture? Secondly, what is an appropriate representation for texture? This thesis argues that a task-dependent definition of texture is necessary, and
proposes a multi-local, statistical scheme for representing texture orientation.
Human performance on a series of psychophysical orientation discrimination tasks is compared to specific predictions from the scheme.
The first set of experiments investigates observers' ability to directly derive statistical estimates from texture. An analogy is reported between the way texture statistics are derived and the visual processing of spatio-luminance features.
The second set of experiments is concerned with the way texture elements are extracted
from images (an example of the generic grouping problem in vision). The use of
highly constrained experimental tasks, typically texture orientation discriminations, allows for the formulation of simple statistical criteria for setting critical parameters of the model (such as the spatial scale of analysis). It is shown that schemes based on isotropic filtering and symbolic matching do not suffice for performing this grouping, but that the
scheme proposed, based on oriented mechanisms, does.
Taken together, these results suggest a view of visual texture processing not as a
disparate collection of processes, but as a general strategy for deriving statistical representations of images common to a range of visual tasks.
- …