1,049 research outputs found
A Framework for Symmetric Part Detection in Cluttered Scenes
The role of symmetry in computer vision has waxed and waned in importance
during the evolution of the field from its earliest days. At first figuring
prominently in support of bottom-up indexing, it fell out of favor as shape
gave way to appearance and recognition gave way to detection. With a strong
prior in the form of a target object, the role of the weaker priors offered by
perceptual grouping was greatly diminished. However, as the field returns to
the problem of recognition from a large database, the bottom-up recovery of the
parts that make up the objects in a cluttered scene is critical for their
recognition. The medial axis community has long exploited the ubiquitous
regularity of symmetry as a basis for the decomposition of a closed contour
into medial parts. However, today's recognition systems are faced with
cluttered scenes, and the assumption that a closed contour exists, i.e. that
figure-ground segmentation has been solved, renders much of the medial axis
community's work inapplicable. In this article, we review a computational
framework, previously reported in Lee et al. (2013), Levinshtein et al. (2009,
2013), that bridges the representation power of the medial axis and the need to
recover and group an object's parts in a cluttered scene. Our framework is
rooted in the idea that a maximally inscribed disc, the building block of a
medial axis, can be modeled as a compact superpixel in the image. We evaluate
the method on images of cluttered scenes.
Comment: 10 pages, 8 figures
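The disc-based view of the medial axis that this framework builds on can be illustrated with a minimal sketch (a plain distance-transform illustration, not the authors' superpixel method): the Euclidean distance transform of a binary figure gives, at each interior pixel, the radius of the largest disc inscribed there.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary figure: a filled rectangle on a background grid.
mask = np.zeros((11, 21), dtype=bool)
mask[2:9, 3:18] = True

# Distance from each interior pixel to the nearest background pixel.
# This distance is exactly the radius of the largest disc inscribed
# at that pixel -- the building block of the medial axis.
dist = distance_transform_edt(mask)

# The centre row of the rectangle holds the maximally inscribed discs.
radius = dist.max()
centers = np.argwhere(dist == radius)
print(radius)       # largest inscribed-disc radius
print(centers[0])   # one disc centre (row, col)
```

Thresholding or ridge-following on `dist` recovers the medial axis itself; the framework above replaces each such disc with a compact superpixel so the idea survives without a closed contour.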
Texture Structure Analysis
Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture Structure Analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is in proposing an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best performing visual attention model on textures, the performance of the most popular visual attention models at predicting visual saliency on textures is evaluated. Since there is no publicly available database with ground-truth saliency maps on images with exclusive texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The proposed metric is based on the observation that VSM characteristics differ between textures of differing regularity. The proposed texture regularity metric is based on two texture regularity scores, namely a textural similarity score and a spatial distribution score.
In order to evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX, is built as a part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture regularity metrics in predicting the perceived regularity. The impact of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of the perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through building a synthesized textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularities exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed as part of this work. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures. The perceived granularity is quantified through a new granularity metric that is proposed in this work. It is shown through subjective testing that the proposed quality metric, using just 2 parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms the state-of-the-art full-reference quality metrics on 3 different texture databases. 
Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established.
Dissertation/Thesis, Ph.D. Electrical Engineering, 201
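As a toy illustration of saliency-based regularity scoring (a hypothetical stand-in, not the proposed two-score metric), one can measure how periodically saliency repeats along a texture via autocorrelation:

```python
import numpy as np

def regularity_score(saliency_row):
    """Toy regularity cue: strength of the strongest non-zero-lag peak
    in the normalized autocorrelation of a 1-D saliency profile.
    Periodic (regular) textures score high; irregular ones score low.
    (Hypothetical stand-in for the dissertation's two-score metric.)"""
    x = saliency_row - saliency_row.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                    # normalize so lag 0 equals 1
    return float(ac[1:].max())     # best repeat strength

# A perfectly periodic saliency profile vs. random noise.
periodic = np.tile([0.0, 1.0, 0.0, 0.2], 32)
rng = np.random.default_rng(0)
noisy = rng.random(128)

print(regularity_score(periodic) > regularity_score(noisy))  # True
```

The real metric additionally compares the textural similarity and spatial distribution of salient patches in 2-D rather than a 1-D profile.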
Memory-Efficient Deep Salient Object Segmentation Networks on Gridized Superpixels
Computer vision algorithms with pixel-wise labeling tasks, such as semantic
segmentation and salient object detection, have gone through a significant
accuracy increase with the incorporation of deep learning. Deep segmentation
methods slightly modify and fine-tune pre-trained networks that have hundreds
of millions of parameters. In this work, we question the need to have such
memory demanding networks for the specific task of salient object segmentation.
To this end, we propose a way to learn a memory-efficient network from scratch
by training it only on salient object detection datasets. Our method encodes
images to gridized superpixels that preserve both the object boundaries and the
connectivity rules of regular pixels. This representation allows us to use
convolutional neural networks that operate on regular grids. By using these
encoded images, we train a memory-efficient network using only 0.048% of the
number of parameters of other deep salient object detection networks. Our
method shows accuracy comparable to state-of-the-art deep salient object
detection methods and provides a faster and much more memory-efficient
alternative to them. Because it is easy to deploy, such a network is preferable
for applications on memory-limited devices such as mobile phones and IoT
devices.
Comment: 6 pages, submitted to MMSP 201
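The gridized encoding can be caricatured by a plain block average that maps an image onto a small regular grid a standard CNN can consume (a hypothetical simplification; the paper's encoding additionally preserves object boundaries and pixel connectivity rules):

```python
import numpy as np

def gridize(image, cell):
    """Toy 'gridized' encoding: average each cell x cell block so the
    result is a small regular grid a standard CNN can operate on.
    (A plain block average -- a hypothetical simplification of the
    paper's boundary-preserving gridized superpixels.)"""
    h, w = image.shape
    gh, gw = h // cell, w // cell
    blocks = image[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell)
    return blocks.mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
grid = gridize(img, 2)
print(grid.shape)   # (2, 2)
print(grid[0, 0])   # mean of the top-left 2x2 block -> 2.5
```

Because the output is a regular grid, off-the-shelf convolutional layers apply directly, which is what makes the drastic parameter reduction possible.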
A salient and task-irrelevant collinear structure hurts visual search
Low level constraints on dynamic contour path integration
Contour integration is a fundamental visual process. The constraints on integrating
discrete contour elements and the associated neural mechanisms have typically been
investigated using static contour paths. However, in our dynamic natural environment
objects and scenes vary over space and time. With the aim of investigating the
parameters affecting spatiotemporal contour path integration, we measured human
contrast detection performance of a briefly presented foveal target embedded in
dynamic collinear stimulus sequences (comprising five short 'predictor' bars appearing
consecutively towards the fovea, followed by the 'target' bar) in four experiments. The
data showed that participants' target detection performance was relatively unchanged
when individual contour elements were separated by a spatial gap of up to 2° or a
temporal gap of up to 200 ms. Randomising the luminance contrast or colour of the
predictors, on the other hand, had a similarly detrimental effect on grouping of the
dynamic contour path and on subsequent target detection performance. Randomising
the orientation of the predictors reduced target detection performance more than
introducing misalignment relative to the contour path did. The results suggest that
the visual system integrates dynamic path elements to bias target detection even
when the continuity of the path is disrupted in terms of spatial (2°), temporal
(200 ms), colour (over 10 colours) and luminance (-25% to 25%) information. We
discuss how the findings can be largely reconciled within the functioning of V1
horizontal connections.
An Investigation of Starting Point Preferences in Human Performance on Traveling Salesman Problems
Previous studies have shown that people start traveling salesman problem tours significantly more often from boundary than from interior nodes. There are a number of possible reasons for such a tendency: first, it may arise as a direct result of the processes involved in tour construction; second, boundary points may be perceptually more salient than interior points, and selected for that reason; third, starting from the boundary may make the task easier or be more likely to result in a better tour than starting from the interior. The present research investigated each of these possibilities by analyzing start point frequencies in previously unpublished data and by conducting an experiment. The analysis of start points provided some slight but contradictory support for the hypothesis that start selections result from the process of tour construction, but no evidence for the perceptual salience explanation. The experiment required participants to start tours either from a boundary or from an interior point, to test whether there was an effect on the quality of tour construction. No evidence was found that starting point affected either the length of tours or the time required to produce them. However, there was some indication that starting from a central location may be more likely to result in crossed arcs.
Statistical regularities across trials bias attentional selection
Previous studies have shown that attentional selection can be biased toward locations that are likely to contain a target and away from locations that are likely to contain a distractor. It is assumed that through statistical learning, participants are able to extract the regularities in the display, which in turn biases attentional selection. The present study employed the additional singleton task to examine the ability of participants to extract regularities that occurred across trials. In four experiments, we found that participants were capable of picking up statistical regularities concerning target positions across trials, both in the absence and presence of distracting information. It is concluded that through statistical learning, participants are able to extract intertrial statistical associations regarding the subsequent target location, which in turn biases attentional selection. We argue here that the weights within the spatial priority map can be dynamically adapted from trial to trial, such that the selection of a target at a particular location increases the weights of the upcoming target location within the spatial priority map, giving rise to more efficient target selection.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
An attention model and its application in man-made scene interpretation
The ultimate aim of computer vision research is to design a system that interprets
its surrounding environment in a way similar to how humans do effortlessly. However, the
state of technology is far from achieving such a goal. In this thesis different components of
a computer vision system that are designed for the task of interpreting man-made scenes,
in particular images of buildings, are described. The flow of information in the proposed
system is bottom-up i.e., the image is first segmented into its meaningful components and
subsequently the regions are labelled using a contextual classifier.
Starting from simple observations concerning the human visual system and the Gestalt laws
of human perception, like the law of “good (simple) shape” and “perceptual grouping”, a
blob detector is developed that identifies components in a 2D image. These components
are convex regions of interest, with interest being defined as significant gradient magnitude
content. An eye-tracking experiment is conducted, which shows that the regions identified
by the blob detector correlate significantly with the regions that drive the attention of
viewers.
Having identified these blobs, it is postulated that a blob represents an object, linguistically
identified with its own semantic name. In other words, a blob may contain a window, a
door, or a chimney in a building. These regions are used to identify and segment higher-order
structures in a building, like facades and window arrays, and also environmental regions
like sky and ground.
Because of inconsistency in the unary features of buildings, a contextual learning algorithm
is used to classify the segmented regions. A model is used which learns spatial and
topological relationships between different objects from a set of hand-labelled data. This
model utilises this information in an MRF to achieve consistent labellings of new scenes
- …
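The contextual MRF labelling step can be sketched with iterated conditional modes (ICM) on a toy region adjacency graph (all scores are made up; a hypothetical illustration, not the thesis's learned model):

```python
import numpy as np

# Toy contextual labelling of image regions with ICM -- a hypothetical
# illustration of MRF-style labelling (not the thesis's exact model).
labels = ["sky", "window", "ground"]

# Unary scores: how well each region's features fit each label (made up).
unary = np.array([[0.9, 0.4, 0.1],    # region 0 looks like sky
                  [0.3, 0.8, 0.2],    # region 1 looks like a window
                  [0.2, 0.3, 0.7]])   # region 2 looks like ground

# Pairwise compatibility of labels on adjacent regions (made up):
# e.g. "sky" is rarely directly adjacent to "ground".
compat = np.array([[0.5, 1.0, 0.1],
                   [1.0, 0.5, 1.0],
                   [0.1, 1.0, 0.5]])

edges = [(0, 1), (1, 2)]              # region adjacency graph

assign = unary.argmax(axis=1)         # start from the unary best guess
for _ in range(5):                    # ICM: greedily relabel each region
    for r in range(len(assign)):
        neigh = [b if a == r else a for a, b in edges if r in (a, b)]
        score = unary[r] + sum(compat[:, assign[n]] for n in neigh)
        assign[r] = int(score.argmax())

print([labels[i] for i in assign])    # ['sky', 'window', 'ground']
```

ICM is the simplest MRF inference scheme; training-based approaches instead learn `unary` and `compat` from hand-labelled data, as the thesis describes.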