
    Novel applications of machine learning in astronomy and beyond

    The field of astronomy is experiencing a period of unprecedented expansion, driven predominantly by the vast amounts of data produced by the latest telescopes and surveys. New methods are required to analyse the data being collected, the most widespread of which is machine learning. Machine learning has evolved rapidly over the past decade to keep pace with the growth in data and, aided by advances in computer hardware, analyses that would once have been impossible are now commonplace on astronomers’ laptops. However, despite machine learning becoming a favourite tool for many, there is often little consideration of which algorithms are best suited to the job. In this thesis, machine learning is applied to a variety of problems, ranging from Solar System science and the search for Trans-Neptunian Objects (TNOs) to the cosmological problem of obtaining accurate photometric redshift (photo-z) estimates for distant galaxies. In chapter 2 I implement many different machine learning classifiers to aid the Dark Energy Survey’s search for TNOs, comparing the classifiers to find the most suitable and demonstrating how machine learning can provide significant increases in efficiency. In chapter 3 I implement machine learning algorithms to provide photo-z estimates for a million galaxies, using the method as an example of how machine learning algorithms can be benchmarked to provide information about the scalability of different methods. In chapter 4 I expand upon the benchmarking methods developed for obtaining photo-z estimates, applying them instead to deep learning algorithms that use image data directly, before discussing future work and concluding in chapter 5.
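    As a rough illustration of the classifier comparison described for chapter 2, the sketch below cross-validates a handful of common scikit-learn classifiers on a labelled candidate catalogue. The file name, feature columns, label column and the particular classifiers shown are hypothetical placeholders, not the thesis pipeline itself.

        # Minimal sketch: compare several classifiers on a labelled candidate
        # catalogue. "tno_candidates.csv" and the "is_tno" label are hypothetical.
        import pandas as pd
        from sklearn.model_selection import cross_val_score
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        candidates = pd.read_csv("tno_candidates.csv")    # hypothetical catalogue
        X = candidates.drop(columns=["is_tno"])           # assumed feature columns
        y = candidates["is_tno"]                          # 1 = real TNO detection, 0 = artefact

        classifiers = {
            "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "k-Nearest Neighbours": KNeighborsClassifier(n_neighbors=10),
            "Support Vector Machine": SVC(),
        }

        # Compare classifiers with 5-fold cross-validated accuracy to find the most suitable.
        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
            print(f"{name}: mean accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")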

    Benchmarking and scalability of machine-learning methods for photometric redshift estimation

    Obtaining accurate photometric redshift (photo-z) estimations is an important aspect of cosmology, remaining a prerequisite of many analyses. In creating novel methods to produce photo-z estimations, there has been a shift towards using machine-learning techniques. However, there has not been as much of a focus on how well different machine-learning methods scale or perform with the ever-increasing amounts of data being produced. Here, we introduce a benchmark designed to analyse the performance and scalability of different supervised machine-learning methods for photo-z estimation. Making use of the Sloan Digital Sky Survey (SDSS – DR12) data set, we analysed a variety of the most widely used machine-learning algorithms. By scaling the number of galaxies used to train and test the algorithms up to one million, we obtained several metrics demonstrating the algorithms’ performance and scalability for this task. Furthermore, by introducing a new optimization method, time-considered optimization, we were able to demonstrate how a small concession of error can allow for a great improvement in efficiency. Of the algorithms tested, we found that the Random Forest performed best with a mean squared error, MSE = 0.0042; however, as other algorithms such as Boosted Decision Trees and k-Nearest Neighbours performed very similarly, we used our benchmarks to demonstrate how different algorithms could be superior in different scenarios. We believe that benchmarks like this will become essential with upcoming surveys, such as the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), which will capture billions of galaxies requiring photometric redshifts.
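    A minimal sketch of the kind of scaling benchmark described above, assuming a flat table of SDSS DR12 magnitudes with a spectroscopic redshift column. The file name, column names, subsample sizes and hyperparameters are illustrative assumptions, and GradientBoostingRegressor stands in generically for boosted decision trees; this is not the paper's actual benchmark code.

        # Minimal sketch: time and score photo-z regressors at increasing
        # training-set sizes. "sdss_dr12_photometry.csv" and its columns are assumed.
        import time
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error
        from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
        from sklearn.neighbors import KNeighborsRegressor

        sdss = pd.read_csv("sdss_dr12_photometry.csv")        # hypothetical SDSS DR12 extract
        features = sdss[["u", "g", "r", "i", "z"]].values     # assumed magnitude columns
        redshift = sdss["z_spec"].values                      # assumed spectroscopic redshifts

        regressors = {
            "Random Forest": RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0),
            "Boosted Decision Trees": GradientBoostingRegressor(random_state=0),
            "k-Nearest Neighbours": KNeighborsRegressor(n_neighbors=20),
        }

        # Scale the number of galaxies used for training/testing and record
        # wall-clock time alongside the mean squared error for each algorithm.
        for n_galaxies in (10_000, 100_000, 1_000_000):
            X, y = features[:n_galaxies], redshift[:n_galaxies]
            X_train, X_test, y_train, y_test = train_test_split(
                X, y, test_size=0.25, random_state=0
            )
            for name, model in regressors.items():
                start = time.perf_counter()
                model.fit(X_train, y_train)
                mse = mean_squared_error(y_test, model.predict(X_test))
                elapsed = time.perf_counter() - start
                print(f"N={n_galaxies:>9,d}  {name:<22s}  MSE={mse:.4f}  time={elapsed:.1f}s")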