35 research outputs found
Understanding Aesthetic Evaluation using Deep Learning
A bottleneck in any evolutionary art system is aesthetic evaluation. Many different methods have been proposed to automate the evaluation of aesthetics, including measures of symmetry, coherence, complexity, contrast and grouping. The interactive genetic algorithm (IGA) relies on human-in-the-loop, subjective evaluation of aesthetics, but user fatigue and small population sizes limit the possibilities for large-scale search. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we use dimensionality reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in any generative system. Convolutional neural networks trained on the user's prior aesthetic evaluations are used to suggest new possibilities similar to, or interpolated between, known high-quality genotype-phenotype mappings.
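The visualisation step described above can be sketched with a plain PCA projection of genotype vectors into 2D. This is purely illustrative: the abstract does not say which dimensionality reduction method the authors used, and the `genotypes` array below is a random stand-in for real evolved parameter vectors.

```python
import numpy as np

def pca_project(X, k=2):
    """Project rows of X onto the top-k principal components.

    A minimal PCA via SVD of the mean-centred data matrix; the
    paper's actual method may differ (e.g. t-SNE or UMAP).
    """
    Xc = X - X.mean(axis=0)                       # centre each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # (n_samples, k) embedding

# Toy genotype vectors: 100 individuals, 20 evolved parameters each.
rng = np.random.default_rng(0)
genotypes = rng.normal(size=(100, 20))
embedding = pca_project(genotypes)                # 2-D coordinates to plot
```

Each row of `embedding` is one individual's position in the reduced genotype space; plotting these points (coloured by the CNN's predicted aesthetic score, say) is one way to spot unexplored regions.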
3D color homography model for photo-realistic color transfer re-coding
Color transfer is an image editing process that naturally transfers the color theme of a source image to a target image. In this paper, we propose a 3D color homography model which approximates a photo-realistic color transfer algorithm as the combination of a 3D perspective transform and a mean intensity mapping. A key advantage of our approach is that the re-coded color transfer algorithm is simple and accurate. Our evaluation demonstrates that our 3D color homography model delivers leading color transfer re-coding performance. In addition, we show that our 3D color homography model can be applied to color transfer artifact fixing, complex color transfer acceleration, and color-robust image stitching.
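The model's two halves can be sketched as follows: a 4x4 projective transform acting on homogeneous RGB colours, followed by a per-pixel gain that depends only on mean intensity. The matrix `H` and the gain function below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def apply_color_homography(rgb, H):
    """Apply a 4x4 projective transform to RGB colours of shape (N, 3).

    Colours are lifted to homogeneous coordinates [r, g, b, 1],
    multiplied by H, then de-homogenised -- the '3D perspective
    transform' half of the model.
    """
    homo = np.hstack([rgb, np.ones((rgb.shape[0], 1))]) @ H.T
    return homo[:, :3] / homo[:, 3:4]

def mean_intensity_map(rgb, gain):
    """Scale each colour by a gain that depends only on its mean
    intensity (the gain function here is an assumed placeholder)."""
    mean = rgb.mean(axis=1, keepdims=True)
    return rgb * gain(mean)

# Sanity check: an identity homography leaves colours unchanged.
colours = np.array([[0.2, 0.4, 0.6], [0.9, 0.1, 0.3]])
out = apply_color_homography(colours, np.eye(4))
```

In practice the re-coding would fit `H` and the intensity mapping to the input/output image pair of the colour transfer algorithm being approximated.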
Design of a Trichromatic Cone Array
Cones with peak sensitivity to light at long (L), medium (M) and short (S) wavelengths are unequal in number on the human retina: S cones are rare (<10%), although their fraction increases from center to periphery, and the L/M cone proportions are highly variable between individuals. What optical properties of the eye, and statistical properties of natural scenes, might drive this organization? We found that the spatial-chromatic structure of natural scenes was largely symmetric between the L, M and S sensitivity bands. Given this symmetry, short-wavelength attenuation by ocular media gave L/M cones a modest signal-to-noise advantage, which was amplified, especially in the denser central retina, by long-wavelength accommodation of the lens. Meanwhile, the total information represented by the cone mosaic remained relatively insensitive to L/M proportions. Thus, the observed cone array design, together with a long-wavelength accommodated lens, provides a selective advantage: it is maximally informative.
MOOD 2020: A public Benchmark for Out-of-Distribution Detection and Localization on medical Images
Detecting Out-of-Distribution (OoD) data is one of the greatest challenges in the safe and robust deployment of machine learning algorithms in medicine. When the algorithms encounter cases that deviate from the distribution of the training data, they often produce incorrect and over-confident predictions. OoD detection algorithms aim to catch erroneous predictions in advance by analysing the data distribution and detecting potential instances of failure. Moreover, flagging OoD cases may support human readers in identifying incidental findings. Due to the increased interest in OoD algorithms, benchmarks for different domains have recently been established. In the medical imaging domain, for which reliable predictions are often essential, an open benchmark has been missing. We introduce the Medical-Out-Of-Distribution-Analysis-Challenge (MOOD) as an open, fair, and unbiased benchmark for OoD methods in the medical imaging domain. The analysis of the submitted algorithms shows that performance has a strong positive correlation with the perceived difficulty, and that all algorithms show high variance across different anomalies, making it difficult, as yet, to recommend them for clinical practice. We also see a strong correlation between challenge ranking and performance on a simple toy test set, indicating that this might be a valuable addition as a proxy dataset during anomaly detection algorithm development.
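To make the general idea of an OoD score concrete, here is a minimal distance-based sketch: fit a Gaussian to in-distribution feature vectors and flag inputs by Mahalanobis distance. This is an illustrative baseline only, not one of the challenge submissions; the feature vectors below are random stand-ins.

```python
import numpy as np

def fit_gaussian(train):
    """Fit mean and (regularised) inverse covariance to
    in-distribution feature vectors of shape (N, D)."""
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False) + 1e-6 * np.eye(train.shape[1])
    return mu, np.linalg.inv(cov)

def ood_score(x, mu, cov_inv):
    """Squared Mahalanobis distance: larger = further from training data."""
    d = x - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 8))       # in-distribution feature vectors
mu, cov_inv = fit_gaussian(train)
inlier_score = ood_score(train.mean(axis=0), mu, cov_inv)  # at the centre
outlier_score = ood_score(mu + 10.0, mu, cov_inv)          # far-away point
```

Thresholding such a score is what turns it into a detector; the challenge results above suggest that even much more sophisticated scores remain unreliable across anomaly types.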
Multilinear Analysis of Image Ensembles: TensorFaces
Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent factors or modes. Our multilinear modeling technique employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the N-mode SVD. As a concrete example, we consider the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions. Our resulting "TensorFaces" representation has several advantages over conventional eigenfaces. More generally, multilinear analysis shows promise as a unifying framework for a variety of computer vision problems.
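The N-mode SVD can be sketched via mode unfoldings: take the matrix SVD of each unfolding to get per-mode orthogonal factors, then form the core tensor by multiplying the data tensor by each factor's transpose. This is a generic higher-order SVD in the spirit of the abstract, not the authors' exact code, and the small tensor below stands in for a real image ensemble.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m unfolding: move mode m to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def n_mode_svd(T):
    """Decompose T into a core tensor Z and orthogonal mode matrices U[m],
    so that T = Z x_1 U[0] x_2 U[1] ... (a higher-order SVD)."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
         for m in range(T.ndim)]
    Z = T
    for m, Um in enumerate(U):
        Z = mode_product(Z, Um.T, m)
    return Z, U

# Small example: a 3 x 4 x 5 "image ensemble" tensor.
T = np.random.default_rng(2).normal(size=(3, 4, 5))
Z, U = n_mode_svd(T)
T_rec = Z
for m, Um in enumerate(U):                 # reconstruct from core + factors
    T_rec = mode_product(T_rec, Um, m)
```

In the TensorFaces setting, the modes of `T` would index people, expressions, poses, lighting and pixels, and each `U[m]` captures the variation along one of those factors.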
Finding a Colour Filter to Make a Camera Colorimetric by Optimisation
The Luther condition states that a camera is colorimetric if its spectral sensitivities are a linear transform from the XYZ colour matching functions. Recently, a method has been proposed for finding the optimal coloured filter that, when placed in front of a camera, results in effective sensitivities that satisfy the Luther condition. The advantage of this method is that it finds the best filter for all possible physical capture conditions. The disadvantage is that the statistical information of typical scenes is not taken into account. In this paper we set forth a method for finding the optimal filter given a set of typical surfaces and lights. The problem is formulated as a bilinear least-squares estimation problem (linear in both the filter and the colour correction). This is solved using the Alternating Least-Squares (ALS) technique. For a range of cameras we show that it is possible to find an optimal colour correction filter with respect to which the cameras are almost colorimetric.
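The bilinear ALS idea can be sketched as follows: alternately solve for the 3x3 colour correction matrix M (with the filter f fixed) and for f (with M fixed), each step being an ordinary linear least-squares problem, so the objective never increases. All spectra below are random stand-ins for real camera sensitivities, surfaces and lights, and the target here is built from a known filter rather than from the XYZ matching functions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_wl = 31                                  # wavelength samples
Q = rng.uniform(size=(n_wl, 3))            # camera spectral sensitivities
E = rng.uniform(size=(n_wl, 50))           # colour signals (surface x light)
f_true = np.linspace(0.2, 1.0, n_wl)       # a filter the ALS should approach
T = (Q * f_true[:, None]).T @ E            # target responses, shape (3, 50)

f = np.ones(n_wl)                          # start from a flat (no-op) filter
errs = []
for _ in range(50):
    # Step 1: fix f, solve min_M || M Q^T diag(f) E - T ||_F.
    R = (Q * f[:, None]).T @ E
    M = T @ np.linalg.pinv(R)
    # Step 2: fix M, solve for f -- linear because wavelength k
    # contributes f[k] * outer((M Q^T)[:, k], E[k, :]) to the responses.
    MQ = M @ Q.T
    A = np.stack([np.outer(MQ[:, k], E[k]).ravel() for k in range(n_wl)],
                 axis=1)
    f, *_ = np.linalg.lstsq(A, T.ravel(), rcond=None)
    errs.append(np.linalg.norm(M @ (Q * f[:, None]).T @ E - T))
```

Because each half-step is an exact least-squares solve, the residual in `errs` is non-increasing; in the paper the target `T` would come from the XYZ responses of the same scenes, so a small residual means the filtered, corrected camera is nearly colorimetric.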