Quasiconvex Programming
We define quasiconvex programming, a form of generalized linear programming
in which one seeks the point minimizing the pointwise maximum of a collection
of quasiconvex functions. We survey algorithms for solving quasiconvex programs
either numerically or via generalizations of the dual simplex method from
linear programming, and describe varied applications of this geometric
optimization technique in meshing, scientific computation, information
visualization, automated algorithm analysis, and robust statistics.
Comment: 33 pages, 14 figures
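As a minimal numerical illustration of the central notion (not one of the survey's own algorithms): in one dimension a quasiconvex function is unimodal, and the pointwise maximum of quasiconvex functions is again quasiconvex, so the minimizer of the maximum can be found by ternary search. The example functions below are our own assumed instances.

```python
def minimize_pointwise_max(funcs, lo, hi, tol=1e-9):
    """Ternary search for the minimizer of max_i funcs[i](x) on [lo, hi].

    Valid because max of quasiconvex (unimodal) 1-D functions is unimodal.
    """
    objective = lambda x: max(f(x) for f in funcs)
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if objective(m1) < objective(m2):
            hi = m2          # minimum lies in [lo, m2]
        else:
            lo = m1          # minimum lies in [m1, hi]
    return (lo + hi) / 2

# Hypothetical quasiconvex instances: |x - a| and (x - a)^2 are convex,
# hence quasiconvex.
funcs = [lambda x: abs(x - 1), lambda x: abs(x + 3), lambda x: (x - 0.5) ** 2]
x_star = minimize_pointwise_max(funcs, -10.0, 10.0)
```

Higher-dimensional quasiconvex programs need the more sophisticated machinery the survey describes (e.g. generalized dual simplex or smooth numerical methods); this sketch only shows why the one-dimensional case is easy.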
A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Detecting them can identify system faults and fraud before they escalate with potentially catastrophic consequences, and can identify erroneous observations and remove their contaminating effect, thereby purifying the data set for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection, identifying their respective motivations and distinguishing their advantages and disadvantages in a comparative review.
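As a minimal illustration of one classic statistical technique in the family the survey covers (this specific example is ours, not drawn from the paper): Tukey's interquartile-range rule flags points lying outside the usual 1.5 × IQR fences.

```python
import statistics

def iqr_outliers(data, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = statistics.quantiles(data, n=4)   # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]

# Hypothetical sensor readings with one instrument fault.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 42.0]
print(iqr_outliers(readings))   # -> [42.0]
```

Rules of this kind are cheap and transparent, which is why they survive alongside the more elaborate machine-learning detectors the survey compares.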
Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism
Pooling is a ubiquitous operation in image processing algorithms that allows
for higher-level processes to collect relevant low-level features from a region
of interest. Currently, max-pooling is one of the most commonly used operators
in the computational literature. However, it can lack robustness to outliers
because it relies solely on the peak of a function. Pooling
mechanisms are also present in the primate visual cortex where neurons of
higher cortical areas pool signals from lower ones. The receptive fields of
these neurons have been shown to vary with contrast, aggregating signals over
a larger region in the presence of low-contrast stimuli. We
hypothesise that this contrast-variant-pooling mechanism can address some of
the shortcomings of max-pooling. We modelled this contrast variation through a
histogram clipping in which the percentage of pooled signal is inversely
proportional to the local contrast of an image. We tested our hypothesis by
applying it to the phenomenon of colour constancy where a number of popular
algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and
Double-Opponency). For each of these methods, we investigated the consequences
of replacing their original max-pooling by the proposed
contrast-variant-pooling. Our experiments on three colour constancy benchmark
datasets suggest that previous results can be significantly improved by
adopting a contrast-variant-pooling mechanism.
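The described idea can be sketched as follows; the function names and the specific contrast measure (Michelson contrast) are our assumptions, not the paper's exact formulation. Instead of keeping only the peak, the pool averages the top p fraction of responses, with p inversely proportional to local contrast, so that high-contrast regions approach plain max-pooling.

```python
import numpy as np

def contrast_variant_pool(region, p_min=0.01, p_max=0.20):
    """Average the top-p fraction of a region's responses, where the
    clipping percentage p shrinks as local (Michelson) contrast grows."""
    region = np.asarray(region, dtype=float).ravel()
    c = (region.max() - region.min()) / (region.max() + region.min() + 1e-12)
    p = p_max - (p_max - p_min) * np.clip(c, 0.0, 1.0)  # high contrast -> small p
    k = max(1, int(round(p * region.size)))
    return np.sort(region)[-k:].mean()                  # mean of top-k responses

rng = np.random.default_rng(0)
low_contrast = 0.5 + 0.01 * rng.random(100)   # pools over many values
high_contrast = rng.random(100)               # behaves like max-pooling
```

On the high-contrast region the pooled value essentially coincides with the maximum, while on the low-contrast region many responses contribute, which is the robustness-to-outliers behaviour the abstract hypothesises.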
Ridge Regression Approach to Color Constancy
This thesis presents work on color constancy and its application in the field of computer vision. Color constancy is the phenomenon of representing (visualizing) the reflectance properties of a scene independently of the illumination spectrum. The motivation behind this work is twofold. The primary motivation is to seek consistency and stability in color reproduction and algorithm performance, respectively: color is used as an important feature in many computer vision applications, so consistency of the color features is essential for application success. The second motivation is to reduce computational complexity without sacrificing the first.

This work presents a machine learning approach to color constancy, in which an empirical model is developed from training data. Neural networks and support vector machines are two prominent nonlinear learning theories. The work on support vector machine based color constancy shows its superior performance over neural network based color constancy in terms of stability, but the support vector machine is a time-consuming method. An alternative to the support vector machine is a simple, fast, and analytically solvable linear modeling technique known as ridge regression, which learns the dependency between surface reflectance and illumination from a presented training sample. Ridge regression thus answers both motivations behind this work: it is stable and computationally simple.

The proposed algorithms, support vector machine and ridge regression, involve a three-step process. First, an input matrix constructed from the preprocessed training data set is trained to obtain a trained model. Second, test images are presented to the trained model to obtain a chromaticity estimate of the illuminants present in the test images. Finally, a linear diagonal transformation is performed to obtain the color-corrected image.
The results show the effectiveness of the proposed algorithms on both calibrated and uncalibrated data sets in comparison to the methods discussed in the literature review. Finally, the thesis concludes with a complete discussion and summary comparing the proposed approaches with other algorithms.
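The three-step process can be sketched as follows; the feature representation (hypothetical mean-RGB features) and the synthetic training data are our assumptions for illustration, and only the closed-form ridge solution and the diagonal (von Kries-style) correction follow the thesis's description.

```python
import numpy as np

def ridge_fit(X, Y, lam=1e-3):
    """Step 1: closed-form ridge regression, W = (X^T X + lam*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def estimate_illuminant(features, W):
    """Step 2: predict illuminant chromaticity for a test image's features."""
    e = features @ W
    return e / e.sum()                    # normalise to chromaticity

def diagonal_correct(image, illuminant):
    """Step 3: linear diagonal transform toward a canonical (white) light."""
    return image / (3.0 * illuminant)     # per-channel von Kries scaling

# Hypothetical training data: illuminants E, noisy mean-RGB features X,
# and chromaticity targets Y.
rng = np.random.default_rng(1)
E = rng.uniform(0.5, 1.5, size=(50, 3))
X = E * rng.uniform(0.9, 1.1, size=(50, 3))
Y = E / E.sum(axis=1, keepdims=True)
W = ridge_fit(X, Y)
```

The closed-form solve is what makes ridge regression "analytically solvable": unlike SVM training there is no iterative optimization, just one regularized linear system per fit.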
Cognitive Deficit of Deep Learning in Numerosity
Subitizing, or the sense of small natural numbers, is an innate cognitive
function of humans and primates; it responds to visual stimuli prior to the
development of any symbolic skills, language or arithmetic. Given successes of
deep learning (DL) in tasks of visual intelligence and given the primitivity of
number sense, a tantalizing question is whether DL can comprehend numbers and
perform subitizing. Somewhat disappointingly, extensive experiments in the
style of cognitive psychology demonstrate that examples-driven, black-box DL
cannot see through superficial variations in visual representations and distill
the abstract notion of natural number, a task that children perform with high
accuracy and confidence. The failure is apparently due to the learning method,
not the CNN computational machinery itself. A recurrent neural network capable
of subitizing does exist, which we construct by encoding a mechanism of
mathematical morphology into the CNN convolutional kernels. Also, we
investigate, using subitizing as a test bed, the ways to aid the black box DL
by cognitive priors derived from human insight. Our findings are mixed and
interesting, pointing to both cognitive deficit of pure DL, and some measured
successes of boosting DL by predetermined cognitive implements. This case
study of DL in cognitive computing is meaningful, for visual numerosity
represents a minimum level of human intelligence.
Comment: Accepted for presentation at the AAAI-1
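As a non-learned point of contrast (this is classical connected-component counting, not the paper's morphology-encoding network), the kind of exact, variation-invariant counting that defeats examples-driven DL is trivial for a hand-crafted algorithm: the count is correct regardless of the objects' shapes, sizes, or positions.

```python
import numpy as np

def count_objects(binary):
    """Count 4-connected components of a binary image via flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1                     # new object found
                stack = [(i, j)]
                while stack:                   # flood-fill its pixels
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

img = np.array([[1, 1, 0, 0, 1],
                [0, 0, 0, 0, 1],
                [1, 0, 1, 0, 0]])
print(count_objects(img))   # -> 4
```

This gap, an abstraction that is exact by construction here but not reliably learnable from examples, is what the abstract's contrast between pure DL and predetermined cognitive implements is about.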
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system renders complex scenes at the
push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.