A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
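One of the classical statistical techniques such a survey covers is the interquartile-range (Tukey fence) rule. A minimal sketch, not taken from the paper itself; the fence multiplier `k=1.5` is the conventional choice:

```python
from statistics import quantiles

def iqr_outliers(data, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = quantiles(data, n=4)  # exclusive quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]
```

Simple fences like this assume a roughly unimodal distribution; the survey's point is precisely that more principled alternatives exist for harder cases.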
Fuzzy clustering with volume prototypes and adaptive cluster merging
Two extensions to objective function-based fuzzy clustering are proposed. First, the (point) prototypes are extended to hypervolumes, whose size can be fixed or determined automatically from the data being clustered. It is shown that clustering with hypervolume prototypes can be formulated as the minimization of an objective function. Second, a heuristic cluster-merging step is introduced in which the similarity among the clusters is assessed during optimization. Starting with an overestimate of the number of clusters in the data, similar clusters are merged in order to obtain a suitable partitioning. An adaptive threshold for merging is proposed. The proposed extensions are applied to the Gustafson–Kessel and fuzzy c-means algorithms, and the resulting extended algorithm is given. The properties of the new algorithm are illustrated by various examples.
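The baseline the paper extends is standard fuzzy c-means with point prototypes. A one-dimensional sketch of that baseline iteration (not the volume-prototype or merging extensions); the spread-over-the-range initialisation is an assumption for determinism:

```python
def fuzzy_c_means(points, c, m=2.0, iters=100):
    """Plain fuzzy c-means on 1-D data with point prototypes."""
    # initialise prototypes spread evenly over the data range (assumption)
    lo, hi = min(points), max(points)
    centers = [lo + (j + 0.5) * (hi - lo) / c for j in range(c)]
    n = len(points)
    u = [[0.0] * c for _ in range(n)]
    for _ in range(iters):
        # membership update: inverse-distance rule from the FCM objective
        for i, p in enumerate(points):
            d = [abs(p - ck) + 1e-12 for ck in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
        # prototype update: membership-weighted mean of the points
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            centers[j] = sum(wi * p for wi, p in zip(w, points)) / sum(w)
    return centers, u
```

The paper's merging step would sit inside this loop, fusing prototypes whose similarity exceeds an adaptive threshold, so an initial overestimate of `c` shrinks toward a suitable partitioning.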
An attention model and its application in man-made scene interpretation
The ultimate aim of research into computer vision is to design a system that interprets its surrounding environment in a similar way to how humans do so effortlessly. However, the state of technology is far from achieving such a goal. In this thesis, different components of a computer vision system designed for the task of interpreting man-made scenes, in particular images of buildings, are described. The flow of information in the proposed system is bottom-up, i.e., the image is first segmented into its meaningful components and subsequently the regions are labelled using a contextual classifier.
Starting from simple observations concerning the human visual system and the Gestalt laws of human perception, such as the law of “good (simple) shape” and “perceptual grouping”, a blob detector is developed that identifies components in a 2D image. These components are convex regions of interest, with interest defined as significant gradient-magnitude content. An eye-tracking experiment is conducted, which shows that the regions identified by the blob detector correlate significantly with the regions that drive the attention of viewers.
Having identified these blobs, it is postulated that a blob represents an object, linguistically identified with its own semantic name. In other words, a blob may contain a window, a door, or a chimney in a building. These regions are used to identify and segment higher-order structures in a building, like a facade or a window array, as well as environmental regions like sky and ground.
Because of inconsistency in the unary features of buildings, a contextual learning algorithm is used to classify the segmented regions. A model that learns spatial and topological relationships between different objects from a set of hand-labelled data is used. This model utilises this information in a Markov random field (MRF) to achieve consistent labellings of new scenes.
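Contextual labelling with an MRF can be approximated by iterated conditional modes (ICM): each region repeatedly takes the label that best combines its own unary score with learned compatibilities to its neighbours' labels. A generic sketch of that idea, not the thesis's trained model; the `unary`/`compat` structures are illustrative assumptions:

```python
def icm_labelling(unary, edges, compat, iters=10):
    """ICM on a simple MRF over adjacent regions.

    unary[i][l]  - score of label l for region i (hypothetical input)
    edges        - list of (i, j) neighbour pairs
    compat[a][b] - compatibility of labels a, b on adjacent regions
    """
    n = len(unary)
    # start from the unary-only labelling
    labels = [max(range(len(unary[i])), key=lambda l: unary[i][l])
              for i in range(n)]
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        changed = False
        for i in range(n):
            def score(l):
                return unary[i][l] + sum(compat[l][labels[j]] for j in nbrs[i])
            best = max(range(len(unary[i])), key=score)
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:  # converged to a locally consistent labelling
            break
    return labels
```

The contextual term is what lets confident neighbours (e.g. sky above, ground below) override an ambiguous region's weak unary evidence.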
The Cat Is On the Mat. Or Is It a Dog? Dynamic Competition in Perceptual Decision Making
Recent neurobiological findings suggest that the brain solves simple perceptual decision-making tasks by means of a dynamic competition in which evidence is accumulated in favor of the alternatives. However, it is unclear if and how the same process applies in more complex, real-world tasks, such as the categorization of ambiguous visual scenes, and what elements are considered as evidence in this case. Furthermore, dynamic decision models typically consider evidence accumulation a passive process, disregarding the role of active perception strategies. In this paper, we adopt the principles of dynamic competition and active vision for the realization of a biologically motivated computational model, which we test in a visual categorization task. Moreover, our system uses the predictive power of the features as the main dimension for both evidence accumulation and the guidance of active vision. Comparison of human and synthetic data in a common experimental setup suggests that the proposed model captures essential aspects of how the brain solves perceptual ambiguities in time. Our results point to the importance of the proposed principles of dynamic competition, parallel specification and selection of multiple alternatives through prediction, and active guidance of perceptual strategies for perceptual decision-making and the resolution of perceptual ambiguities. These principles could apply both to the simple perceptual decision problems studied in neuroscience and to the more complex ones addressed by vision research.
Peer reviewed
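The dynamic competition the abstract describes is often formalised as a race between noisy accumulators, one per alternative, with the first to reach a threshold winning. A generic sketch of that accumulation scheme under stated assumptions (fixed drift rates and Gaussian noise), not the paper's full active-vision model:

```python
import random

def race_decision(drifts, threshold=1.0, noise=0.1, dt=0.01,
                  seed=0, max_steps=10_000):
    """Race model of evidence accumulation.

    drifts[k] - mean evidence rate for alternative k (assumed input)
    Returns (winning alternative, decision time).
    """
    rng = random.Random(seed)
    x = [0.0] * len(drifts)  # one accumulator per alternative
    for step in range(1, max_steps + 1):
        for k, v in enumerate(drifts):
            # Euler step of a noisy integrator
            x[k] += v * dt + rng.gauss(0.0, noise) * dt ** 0.5
        for k, xk in enumerate(x):
            if xk >= threshold:  # first accumulator to cross wins
                return k, step * dt
    # no threshold crossing: return the current leader
    return max(range(len(x)), key=lambda k: x[k]), max_steps * dt
```

In the paper's terms, an active-vision component would additionally choose *where* to sample next so that the most predictive features feed the accumulators; here the evidence stream is simply fixed by `drifts`.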