5 research outputs found

    Detection of exomoons in simulated light curves with a regularized convolutional neural network

    Many moons have been detected around planets in our Solar System, but none has been detected unambiguously around any of the confirmed extrasolar planets. We test the feasibility of a supervised convolutional neural network to classify photometric transit light curves of planet-host stars and identify exomoon transits, while avoiding false positives caused by stellar variability or instrumental noise. Convolutional neural networks have substantially improved the accuracy of classification tasks, but their optimization is typically performed without studying the effect of noise on the training process. Here we design and optimize a 1D convolutional neural network to classify photometric transit light curves. We regularize the network with a total variation loss in order to remove unwanted variations in the data features. Using numerical experiments, we demonstrate the benefits of our network, which produces results comparable to or better than the standard network solutions. Most importantly, our network clearly outperforms a classical method used in exoplanet science to identify moon-like signals. Thus the proposed network is a promising approach for analyzing real transit light curves in the future.
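    The total variation regularizer mentioned in the abstract penalizes the sum of absolute first differences of a signal, suppressing high-frequency noise while preserving step-like transit features. A minimal sketch of such a penalty and a combined objective (the function and parameter names here are illustrative, not the paper's actual implementation):

    ```python
    import numpy as np

    def total_variation_loss(x, weight=1e-3):
        """Total variation penalty for a 1D signal:
        weight * sum_i |x[i+1] - x[i]|."""
        return weight * np.sum(np.abs(np.diff(x)))

    def regularized_loss(classification_loss, feature_map, weight=1e-3):
        """Hypothetical combined objective: the network's classification
        loss plus a TV penalty on an intermediate feature map."""
        return classification_loss + total_variation_loss(feature_map, weight)
    ```

    In practice the penalty would be applied inside the training loop of a deep-learning framework so that gradients flow through `np.diff`'s differentiable analogue; the weight balances smoothness against fidelity to the transit shape.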

    Supervised Neural Networks for Helioseismic Ring-Diagram Inversions

    The inversion of ring fit parameters to obtain subsurface flow maps in ring-diagram analysis for 8 years of SDO observations is computationally expensive, requiring ~3200 CPU hours. In this paper we apply machine learning techniques to the inversion in order to speed up calculations. Specifically, we train a predictor for subsurface flows using the mode fit parameters and the previous inversion results, to replace future inversion requirements. We utilize Artificial Neural Networks as a supervised learning method for predicting the flows in 15-degree ring tiles. To demonstrate that the machine learning results still contain the subtle signatures key to local helioseismic studies, we use them to study the recently discovered solar equatorial Rossby waves. The Artificial Neural Network is computationally efficient, able to make future flow predictions for an entire Carrington rotation in a matter of seconds, which is much faster than the current ~31 CPU hours. Initial training of the networks requires ~3 CPU hours. The trained Artificial Neural Network achieves a root-mean-square error equal to approximately half that reported for the velocity inversions, demonstrating the accuracy of the machine learning (and perhaps the overestimation of the original errors from the ring-diagram pipeline). We find the signature of equatorial Rossby waves in the machine learning flows covering six years of data, demonstrating that small-amplitude signals are maintained. The recovery of Rossby waves in the machine learning flow maps can be achieved with only one Carrington rotation (27.275 days) of training data. We have shown that machine learning can be applied to, and performs more efficiently than, the current ring-diagram inversion. The computational burden of the machine learning comprises 3 CPU hours for initial training, then around 0.0001 CPU hours for future predictions.
    Comment: 10 pages, 10 figures, Accepted by A&
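    The core idea, training a neural-network regressor on mode-fit parameters with the existing inversion results as targets, can be sketched with a small feed-forward network. The data here are synthetic stand-ins: the paper's actual inputs, network architecture, and tile geometry are not specified in the abstract, so every name and shape below is an assumption for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 200 ring tiles, 10 mode-fit parameters each,
    # mapped (here, linearly) to two flow components (u_x, u_y).
    X = rng.normal(size=(200, 10))          # hypothetical mode-fit parameters
    y = X @ rng.normal(size=(10, 2))        # hypothetical inverted flow maps

    # Supervised regressor standing in for the paper's ANN.
    model = MLPRegressor(hidden_layer_sizes=(32, 32),
                         max_iter=2000, random_state=0)
    model.fit(X, y)

    # Once trained, prediction is nearly free compared to a full inversion.
    rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
    ```

    The asymmetry the abstract reports (hours of one-off training versus fractions of a CPU second per prediction) is exactly what this pattern buys: the expensive inversion is run once to build the training set, after which the network amortizes its cost over all future Carrington rotations.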

    Change detection using multi-scale convolutional feature maps of bi-temporal satellite high-resolution images

    Change detection in high-resolution satellite images is essential to understanding the land surface (e.g. agriculture and urban change) or maritime surface (e.g. oil spills). Many deep-learning-based change detection methods have been proposed to enhance the performance of the classical techniques. However, the massive amount of satellite images and missing ground-truth images are still challenging concerns. In this paper, we propose a supervised deep network for change detection in bi-temporal remote sensing images. We feed multi-level features from convolutional networks of two images (feature extraction) into one architecture (feature difference) to obtain better shape and texture properties using a dual attention module. We also utilize a multi-scale dice coefficient error function to decrease overlap between changed and background pixels. The network is applied to public datasets (ACD, SYSU-CD and OSCD). We compare the proposed architecture with various attention modules and loss functions to verify the performance of the proposed method. We also compare the proposed method with the state-of-the-art methods in terms of three metrics: precision, recall and F1-score. The experimental outcomes confirm that the proposed method performs well compared to benchmark methods.
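    The dice coefficient measures overlap between a predicted change map and the ground truth, and the multi-scale variant averages it over downsampled versions of both maps. A minimal sketch of one plausible formulation (the exact scales, downsampling scheme, and weighting used in the paper are assumptions here):

    ```python
    import numpy as np

    def dice_loss(pred, target, eps=1e-6):
        """Dice loss: 1 - 2|A ∩ B| / (|A| + |B|), for soft masks in [0, 1]."""
        inter = np.sum(pred * target)
        return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

    def multiscale_dice_loss(pred, target, scales=(1, 2, 4)):
        """Average the dice loss over strided (downsampled) versions of the
        change maps, so coarse and fine disagreements both contribute."""
        losses = [dice_loss(pred[::s, ::s], target[::s, ::s]) for s in scales]
        return sum(losses) / len(losses)
    ```

    A perfect prediction yields a loss near 0 and a fully wrong one a loss near 1 at every scale; penalizing coarse scales as well discourages scattered false positives that single-scale dice can tolerate.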