Quasar microlensing light curve analysis using deep machine learning
We introduce a deep machine learning approach to studying quasar microlensing
light curves for the first time by analyzing hundreds of thousands of simulated
light curves with respect to the accretion disc size and temperature profile.
Our results indicate that it is possible to successfully classify very large
numbers of diverse light curve data and measure the accretion disc structure.
The detailed shape of the accretion disc brightness profile is found to play a
negligible role, in agreement with Mortonson et al. (2005). The speed and
efficiency of our deep machine learning approach is ideal for quantifying
physical properties in a `big-data' problem setup. This proposed approach looks
promising for analyzing decade-long light curves for thousands of microlensed
quasars, expected to be provided by the Large Synoptic Survey Telescope.Comment: 11 pages, 7 figures, accepted for publication in MNRA
Dimensionality reduction and sparse representations in computer vision
The proliferation of camera-equipped devices, such as netbooks, smartphones and game stations, has led to a significant increase in the production of visual content. This visual information could be used for understanding the environment and offering a natural interface between users and their surroundings. However, the massive amounts of data and the high computational cost associated with them hinder the transfer of sophisticated vision algorithms to real-life systems, especially ones with resource limitations such as restricted memory, processing power and bandwidth. One approach to tackling these issues is to generate compact and descriptive representations of image data by exploiting inherent redundancies. We propose the investigation of dimensionality reduction and sparse representations to accomplish this task.

In dimensionality reduction, the aim is to reduce the dimensions of the space in which image data reside, so that resource-constrained systems can handle them and, ideally, obtain a more insightful description. This goal is achieved by exploiting the inherent redundancies that many classes of images exhibit, such as faces under different illumination conditions and objects seen from different viewpoints. We explore the description of natural images by low-dimensional non-linear models called image manifolds and investigate the performance of computer vision tasks such as recognition and classification using these low-dimensional models. In addition to dimensionality reduction, we study a novel approach that represents images as sparse linear combinations of dictionary examples. We investigate how sparse image representations can be used for a variety of tasks, including low-level image modeling and higher-level semantic information extraction.

Using tools from dimensionality reduction and sparse representation, we propose the application of these methods in three hierarchical image layers, namely low-level features, mid-level structures and high-level attributes. Low-level features are image descriptors that can be extracted directly from the raw image pixels and include pixel intensities, histograms, and gradients. In the first part of this work, we explore how various techniques in dimensionality reduction, ranging from traditional image compression to the recently proposed Random Projections method, affect the performance of computer vision algorithms such as face detection and face recognition. In addition, we discuss a method that can increase the spatial resolution of a single image, without using any training examples, within the sparse representations framework.

In the second part, we explore mid-level structures, including image manifolds and sparse models, which are produced by abstracting information from low-level features and offer compact modeling of high-dimensional data. We propose novel techniques for generating more descriptive image representations and investigate their application to face recognition and object tracking.

In the third part of this work, we propose the investigation of a novel framework for representing the semantic contents of images. This framework employs high-level semantic attributes that aim to bridge the gap between the visual information of an image and its textual description by utilizing low-level features and mid-level structures. This paradigm opens up new possibilities, including recognizing the category of an object from purely textual information, without providing any explicit visual example.
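
The abstract mentions the Random Projections method among the dimensionality-reduction techniques studied; the following NumPy sketch illustrates the basic idea of projecting high-dimensional image descriptors onto a random low-dimensional subspace while roughly preserving pairwise distances. All dimensions are illustrative assumptions, not values from the thesis.

```python
# Sketch of dimensionality reduction by Random Projections: map high-dimensional
# image descriptors onto a random low-dimensional subspace. Dimensions below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, high_dim, low_dim = 500, 4096, 128   # e.g. 64x64 images -> 128-d codes

X = rng.standard_normal((n_samples, high_dim))  # stand-in for image descriptors

# Gaussian random projection matrix, scaled so that distances are approximately
# preserved (Johnson-Lindenstrauss style).
R = rng.standard_normal((high_dim, low_dim)) / np.sqrt(low_dim)
X_low = X @ R

# Check distance preservation on one pair of points.
d_high = np.linalg.norm(X[0] - X[1])
d_low = np.linalg.norm(X_low[0] - X_low[1])
print(f"original distance {d_high:.2f}, projected distance {d_low:.2f}")
```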
Compressed sensing reconstruction of convolved sparse signals
This paper addresses the problem of efficient sampling and reconstruction of sparse spike signals which have been convolved with low-pass filters. A modified compressed sensing (CS) framework, termed dictionary-based deconvolution CS (DDCS), is proposed to achieve this goal. DDCS builds on the assumption that a low-pass filter can be represented sparsely in a dictionary of blurring atoms. Identification of both the sparse spike signal and the sparsely parameterized blurring function is performed by an alternating scheme that minimizes each variable independently while keeping the other constant. Simulation results reveal that the proposed DDCS scheme achieves improved reconstruction performance compared to traditional CS recovery.
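
As a rough illustration of the alternating idea described above (not the paper's actual algorithm), the NumPy sketch below recovers a sparse spike train and the dictionary coefficients of a Gaussian blur from compressed measurements by alternating two ISTA sub-problems; the problem sizes, dictionary and solver parameters are assumptions.

```python
# Alternating, ISTA-style recovery loosely in the spirit of DDCS: compressed
# measurements y = Phi @ (h * x), where x is a sparse spike train and the
# low-pass blur h = D @ a is sparse in a dictionary D of blurring atoms.
import numpy as np

def op_matrix(op, dim):
    """Materialize a linear operator as a matrix by applying it to basis vectors."""
    return np.stack([op(e) for e in np.eye(dim)], axis=1)

def ista(A, y, lam, steps=300):
    """Basic ISTA for  min_z 0.5*||A z - y||^2 + lam*||z||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        z -= A.T @ (A @ z - y) / L           # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return z

rng = np.random.default_rng(1)
n, m, k = 128, 48, 9                         # signal length, measurements, blur length

# Dictionary of Gaussian blurring atoms with different widths (assumed design).
t = np.arange(k) - k // 2
D = np.stack([np.exp(-t**2 / (2 * s**2)) for s in (0.5, 1.0, 2.0)], axis=1)
D /= np.linalg.norm(D, axis=0)

x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(0.0, 1.0, 5)
a_true = np.array([0.0, 1.0, 0.0])           # true blur: the width-1.0 atom
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ np.convolve(x_true, D @ a_true, mode='same')

# Alternate: recover the spikes with the blur fixed, then the blur coefficients
# with the spikes fixed (convolution is linear in each argument separately).
a = np.array([1.0, 0.0, 0.0])                # crude initial blur guess
for _ in range(10):
    A_x = op_matrix(lambda v: Phi @ np.convolve(v, D @ a, mode='same'), n)
    x = ista(A_x, y, lam=0.05)
    A_a = op_matrix(lambda v: Phi @ np.convolve(x, D @ v, mode='same'), D.shape[1])
    a = ista(A_a, y, lam=0.05)
```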
Measuring the Substructure Mass Power Spectrum of 23 SLACS Strong Galaxy-Galaxy Lenses with Convolutional Neural Networks
Strong gravitational lensing can be used as a tool for constraining the substructure in the mass distribution of galaxies. In this study we investigate the power spectrum of dark matter perturbations in a population of 23 Hubble Space Telescope images of strong galaxy-galaxy lenses selected from the Sloan Lens ACS (SLACS) survey. We model the dark matter substructure as a Gaussian Random Field perturbation on a smooth lens mass potential, characterized by power-law statistics. We expand upon the previously developed machine learning framework to predict the power-law statistics by using a convolutional neural network (CNN) that accounts for both epistemic and aleatoric uncertainties. For the training sets, we use as a starting point the smooth lens mass potentials and reconstructed source galaxies that have been previously modelled through traditional fits of analytical and shapelet profiles. We train three CNNs with different training sets: the first using standard data augmentation on the best-fitting reconstructed sources, the second using different reconstructed sources spaced throughout the posterior distribution, and the third using a combination of the two data sets. We apply the trained CNNs to the SLACS data and find agreement in their predictions. Our results suggest a significant substructure perturbation favoring a high-frequency power spectrum across our lens population. (23 pages, 22 figures)
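
The abstract does not detail how the CNN represents its uncertainties; one common construction, shown in the purely illustrative PyTorch sketch below, is to output a mean and log-variance per regression target for the aleatoric part (trained with a Gaussian negative log-likelihood) and to approximate the epistemic part with Monte Carlo dropout. The architecture, image size and two-parameter output are assumptions, not the authors' network.

```python
# Illustrative CNN regressor that predicts a mean and log-variance for each
# power-spectrum parameter (aleatoric uncertainty), with dropout kept active at
# test time for Monte Carlo epistemic estimates. Architecture and sizes are
# assumptions for illustration, not the network used in the paper.
import torch
import torch.nn as nn

class LensCNN(nn.Module):
    def __init__(self, n_params=2, p_drop=0.1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Dropout(p_drop),
        )
        self.mean = nn.Linear(64, n_params)
        self.log_var = nn.Linear(64, n_params)   # aleatoric (data) uncertainty

    def forward(self, x):                        # x: (batch, 1, H, W) lens images
        f = self.backbone(x)
        return self.mean(f), self.log_var(f)

def gaussian_nll(mean, log_var, target):
    """Heteroscedastic Gaussian negative log-likelihood (up to a constant)."""
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

# Monte Carlo dropout: keep dropout on at inference and average several passes
# to get an epistemic spread on top of the predicted aleatoric variance.
model = LensCNN().train()                        # .train() keeps dropout active
images = torch.randn(4, 1, 96, 96)               # stand-in for lens image cut-outs
samples = torch.stack([model(images)[0] for _ in range(20)])
epistemic_std = samples.std(dim=0)
```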
