5 research outputs found

    Feature Learning for Multispectral Satellite Imagery Classification Using Neural Architecture Search

    Automated classification of remote sensing data is an integral tool for earth scientists, and deep learning has proven very successful at solving such problems. However, building deep learning models to process the data requires expert knowledge of machine learning. We introduce DELTA, a software toolkit to bridge this technical gap and make deep learning easily accessible to earth scientists. Visual feature engineering is a critical part of the machine learning lifecycle, and hence is a key area that will be automated by DELTA. Hand-engineered features can perform well, but require a cross-functional team with expertise in both machine learning and the specific problem domain, which is costly in both researcher time and labor. The problem is more acute with multispectral satellite imagery, which requires considerable computational resources to process. To automate the feature learning process, a neural architecture search samples the space of asymmetric and symmetric autoencoders using evolutionary algorithms. Since denoising autoencoders have been shown to perform well for feature learning, the autoencoders are trained on various levels of noise, and the features generated by the best-performing autoencoders are evaluated according to their performance on image classification tasks. The resulting features are demonstrated to be effective for Landsat-8 flood mapping, as well as on the benchmark datasets CIFAR10 and SVHN.
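    The denoising setup described above can be sketched in a few lines: a network is trained to reconstruct the clean input from a noise-corrupted copy, and its hidden activations become the learned features. The following is a minimal illustrative sketch on random data with a single hidden layer; it stands in for the idea only, not for the DELTA toolkit or its architecture search.

```python
import numpy as np

# Minimal denoising-autoencoder sketch (illustrative; data, sizes, and
# hyperparameters are assumptions, not DELTA's). The network reconstructs
# the *clean* input from a noise-corrupted copy.
rng = np.random.default_rng(0)

n_samples, n_features, n_hidden = 256, 16, 4
X = rng.normal(size=(n_samples, n_features))   # stand-in for pixel features
noise_level = 0.3                              # one of the "levels of noise"

W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))  # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_features))  # decoder weights
lr = 0.01

for _ in range(1000):
    X_noisy = X + noise_level * rng.normal(size=X.shape)
    H = np.tanh(X_noisy @ W1)          # encoder: hidden features
    X_hat = H @ W2                     # decoder: linear reconstruction
    err = X_hat - X                    # target is the clean input
    # Backpropagation through the two layers
    grad_W2 = H.T @ err / n_samples
    grad_H = (err @ W2.T) * (1 - H ** 2)
    grad_W1 = X_noisy.T @ grad_H / n_samples
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Reconstruction error on clean inputs after training
mse = np.mean((np.tanh(X @ W1) @ W2 - X) ** 2)
```

    In the abstract's pipeline, the encoder output `H` would then be fed to a downstream image classifier to score the architecture.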

    From Compressive Sensing to Machine Learning in Smart Grids

    Traditional power grids are single-layered physical systems, while smart grids extend them into cyber-physical networks; the main difference is that smart grids include an information layer. A huge amount of information is managed within modern smart grids, and decentralized power generation adds an extra level of uncertainty. The standard methods of monitoring and security cannot work as expected when collecting and analyzing the large amount of data produced by the different parameters of the power network. Compressive sensing is a signal processing tool used to monitor single and simultaneous fault locations in smart distribution and transmission networks, to detect harmonic distortions, and to recognize patterns of partial discharge. Compressive sensing reduces measurement and management costs because it can detect or rebuild a signal from very few samples. In this thesis, we propose to design and implement fault detection via a feedforward neural network using regularizations similar to those of compressive sensing. We use the adaptivity of neural networks to handle state changes in the smart grid, demonstrating the scalability and decentralized capability of a neural network for fault detection in the grid. Two implementations were developed against two different databases, and we found that a feedforward autoencoder would indeed perform well at fault detection; however, many things should be considered before deploying it at large scale. The most important ingredient of any autoencoder is a good dataset.
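    The compressive-sensing principle the thesis builds on can be made concrete: an l1-regularized least-squares solve recovers a sparse signal from far fewer measurements than its length. Below is a minimal sketch using ISTA (iterative soft-thresholding) on synthetic data; the dimensions, sensing matrix, and regularization strength are illustrative assumptions, not the thesis setup.

```python
import numpy as np

# Compressive-sensing sketch: recover a k-sparse signal of length n from
# only m << n random linear measurements via l1-regularized least squares,
# solved with ISTA. All numbers are illustrative assumptions.
rng = np.random.default_rng(1)

n, m, k = 100, 30, 3                       # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # the few (noiseless) measurements

lam = 0.01                                 # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - y)                  # gradient of the data-fit term
    z = x - step * g
    # Soft-thresholding: the proximal operator of the l1 penalty
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```

    A neural network with a comparable l1-style penalty on its activations or weights inherits this preference for sparse explanations of the measurements, which is the analogy the thesis exploits for fault detection.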

    Deep Dynamic Factor Models

    We propose a novel deep neural net framework, which we refer to as the Deep Dynamic Factor Model (D2FM), to encode the information available from hundreds of macroeconomic and financial time series into a handful of unobserved latent states. While similar in spirit to traditional dynamic factor models (DFMs), this new class of models differs in that it allows for nonlinearities between factors and observables, thanks to the deep neural net structure. By design, however, the latent states of the model can still be interpreted as in a standard factor model. In an empirical application to forecasting and nowcasting economic conditions in the US, we show the potential of this framework for dealing with high-dimensional, mixed-frequency, and asynchronously published time series data. In a fully real-time out-of-sample exercise with US data, the D2FM improves on the performance of a state-of-the-art DFM.
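    In the linear special case, compressing hundreds of series into a handful of latent states reduces to principal components, which the deep encoder of the D2FM generalizes. A toy sketch of that linear baseline on a synthetic factor panel follows; the data and dimensions are hypothetical, not the paper's US dataset.

```python
import numpy as np

# Linear factor-model baseline (illustrative): generate T observations of
# N series driven by r latent factors, then recover the factor space with
# principal components. A deep encoder replaces this linear map in a D2FM.
rng = np.random.default_rng(2)
T, N, r = 200, 50, 2                          # periods, series, latent factors
F = rng.normal(size=(T, r))                   # unobserved factor paths
L = rng.normal(size=(N, r))                   # factor loadings
X = F @ L.T + 0.1 * rng.normal(size=(T, N))   # observed panel of series

# Linear "encoder": the leading r principal components of the panel
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = U[:, :r] * s[:r]                      # estimated factors (up to rotation)

# Share of panel variance captured by just r factors out of N series
explained = (s[:r] ** 2).sum() / (s ** 2).sum()
```

    Note that factors are identified only up to rotation here, which is why the abstract's point about preserving a factor-model interpretation in the nonlinear case is nontrivial.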

    Deep Autoencoders for Cross-Modal Retrieval

    Increased accuracy and affordability of depth sensors such as the Kinect have created a rich depth-data source for 3D processing. Specifically, 3D model retrieval is attracting attention in computer vision and pattern recognition due to its numerous applications. A cross-domain retrieval approach such as depth-image-based 3D model retrieval faces the challenges of occlusion, noise, and view variability present in both query and training data. In this research, we propose a new supervised deep autoencoder approach, followed by semantic modeling, to retrieve 3D shapes based on depth images. The key novelty is a two-fold feature abstraction to cope with the incompleteness and ambiguity present in the depth images. First, we develop a supervised autoencoder to extract robust features from both real depth images and synthetic ones rendered from 3D models, intended to balance the reconstruction and classification capabilities on mixed-domain data. We investigate the relation between encoder and decoder layers in a deep autoencoder and claim that an asymmetric structure of a supervised deep autoencoder is more capable of extracting robust features than a symmetric one. The asymmetric deep autoencoder features are less sensitive to small sample changes in mixed-domain data. In addition, semantic modeling of the supervised autoencoder features offers the next level of abstraction against the incompleteness and ambiguity of the depth data. Interestingly, unlike other pairwise model structures, cross-domain retrieval is still possible using only a single deep network trained on real and synthetic data. The experimental results on the NYUD2 and ModelNet10 datasets demonstrate that the proposed supervised method outperforms recent approaches for cross-modal 3D model retrieval.
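    The symmetric-versus-asymmetric distinction can be illustrated by comparing fully connected stacks: an asymmetric design keeps a deep encoder, whose output is what retrieval actually uses, while shrinking the decoder that is only needed during training. The layer sizes below are purely illustrative assumptions, not those of the paper.

```python
# Illustrative comparison of symmetric vs. asymmetric autoencoder shapes.
# Layer sizes are hypothetical; only the structural contrast matters.

def param_count(layer_sizes):
    """Weights + biases of a fully connected stack of the given widths."""
    return sum(a * b + b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

# Symmetric autoencoder: decoder mirrors the encoder layer for layer
sym_encoder = [4096, 1024, 256, 64]
sym_decoder = [64, 256, 1024, 4096]

# Asymmetric autoencoder: same deep encoder, but a single-layer decoder
asym_encoder = [4096, 1024, 256, 64]
asym_decoder = [64, 4096]

sym_total = param_count(sym_encoder) + param_count(sym_decoder)
asym_total = param_count(asym_encoder) + param_count(asym_decoder)
```

    The asymmetric variant spends far fewer parameters on reconstruction, leaving the 64-dimensional encoder output as the feature used for retrieval.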

    Machine-learning methods for weak lensing analysis of the ESA Euclid sky survey

    A clear picture has emerged from the last three decades of research: our Universe is expanding at an accelerated rate. The cause of this expansion remains elusive, but in essence it acts as a repulsive force. This so-called dark energy represents about 69% of the energy content of the Universe. A further 26% of the energy is contained in dark matter, a form of matter that is electromagnetically invisible. Understanding the nature of these two major components of the Universe is at the top of the list of unsolved problems. To unveil answers, ambitious experiments are devised to survey an ever larger and deeper fraction of the sky. One such project is the European Space Agency (ESA) telescope Euclid, which will probe dark matter and infer desperately needed information about dark energy. Because light bundles follow null geodesics, their trajectories are affected by the mass distribution along the line of sight, which includes dark matter. This is gravitational lensing. In the vast majority of cases, deformations of the source objects are weak, and profiles are only slightly sheared. The nature of the dark components can be fathomed by measuring this shear over a large fraction of the sky, recovering it through a statistical analysis of a large number of objects. In this thesis, we take on the development of the tools necessary to measure the shear. Shear measurement techniques have been developed and improved for more than two decades; their performance, however, does not meet the unprecedented requirements imposed by future surveys, which trickle down from the targeted determination of the cosmological parameters. We aim to prepare novel and innovative methods, tested against the Euclid requirements. Our contributions fall into two major themes. A key step in the processing of weak gravitational lensing data is the correction of image deformations generated by the instrument itself; this point spread function (PSF) correction is the first theme. The second is the shear measurement itself, and in particular producing accurate measurements. We explore machine-learning methods, notably artificial neural networks. These methods are, for the most part, data-driven: schemes must first be trained on a representative sample of data, and crafting optimal training sets and choosing the method parameters can be crucial for performance. We dedicate an important fraction of this dissertation to describing the simulations behind the datasets and motivating our parameter choices. In the first part of this thesis, we propose schemes to build a clean selection of stars and to model the PSF to the Euclid requirements. Shear measurements are notoriously biased because galaxies are small and faint. We introduce an approach that produces unbiased estimates of shear, achieved by processing the output of any shape measurement technique with artificial neural networks and predicting corrected estimates of the galaxy shapes, or the shear directly. We demonstrate that simple networks with simple trainings are sufficient to reach the Euclid requirements on shear measurements.
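    The bias-correction idea can be illustrated with the standard linear shear-bias model used in weak lensing, in which an observed estimate relates to the true shear as g_obs = (1 + m) g + c, with multiplicative bias m and additive bias c. The toy sketch below calibrates m and c on simulations with a least-squares fit standing in for the neural network; all numbers are synthetic, not Euclid data.

```python
import numpy as np

# Toy shear-bias calibration (illustrative): simulate biased, noisy shear
# estimates g_obs = (1 + m) g + c + noise, fit m and c on the simulation,
# then invert the bias. A least-squares fit stands in for the network.
rng = np.random.default_rng(3)
m_true, c_true = 0.05, 0.002              # multiplicative and additive bias
g = rng.uniform(-0.05, 0.05, size=5000)   # true shears of simulated galaxies
g_obs = (1 + m_true) * g + c_true + 0.01 * rng.normal(size=g.size)

# Least-squares fit of slope (1 + m) and intercept (c) on the simulation
A = np.column_stack([g, np.ones_like(g)])
(slope, intercept), *_ = np.linalg.lstsq(A, g_obs, rcond=None)
m_hat, c_hat = slope - 1.0, intercept

# Apply the inverted calibration to debias the estimates
g_corrected = (g_obs - c_hat) / (1.0 + m_hat)
residual_bias = np.mean(g_corrected - g)
```

    In practice the biases depend nonlinearly on galaxy size, flux, and PSF, which is why a neural network replaces the single linear fit; the inversion step, however, is the same in spirit.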