724 research outputs found

    Confirmation of Monoperiodicity Above 20 Seconds for Two Blue Large-Amplitude Pulsators

    Blue Large-Amplitude Pulsators (BLAPs) are a new class of pulsating variable star. They are located close to the hot subdwarf branch in the Hertzsprung-Russell diagram and have spectral classes of late O or early B. Stellar evolution models indicate that these stars are likely radially pulsating, driven by iron-group opacity in their interiors. A number of variable stars with a similar driving mechanism exist near the hot subdwarf branch with multi-periodic oscillations caused by either pressure (p) or gravity (g) modes. No multi-periodic signals were detected in the OGLE discovery light curves, since it would be difficult to detect the short-period signals associated with higher-order p modes at the OGLE cadence. Using the RISE instrument on the Liverpool Telescope, we produced high-cadence light curves of two BLAPs, OGLE-BLAP-009 (m_v = 15.65 mag) and OGLE-BLAP-014 (m_v = 16.79 mag), using a 720 nm longpass filter. Frequency analysis of these light curves identifies a primary oscillation with a period of 31.935 ± 0.0098 min and an amplitude from a Fourier series fit of 0.236 mag for BLAP-009. The analysis of BLAP-014 identifies a period of 33.625 ± 0.0214 min and an amplitude of 0.225 mag. Analysis of the residual light curves reveals no additional short-period variability down to an amplitude of 15.20 ± 0.26 mmag for BLAP-009 and 58.60 ± 3.44 mmag for BLAP-014, for minimum periods of 20 s and 60 s respectively. These results further confirm that the BLAPs are monoperiodic.
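    As a rough illustration of the frequency analysis described above, the following Python sketch recovers the dominant period and Fourier amplitude from a simulated high-cadence light curve using astropy's Lomb-Scargle periodogram. The synthetic arrays, noise level and frequency limits are assumptions for demonstration; the paper's exact analysis tooling is not specified here.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical light curve for a ~32-minute pulsator such as OGLE-BLAP-009:
# t in days, mag in magnitudes (all values illustrative, not from the paper)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 0.2, 500))            # ~5 hours of high-cadence data
true_period = 31.935 / (60 * 24)                 # 31.935 min expressed in days
mag = 15.65 + 0.236 * np.sin(2 * np.pi * t / true_period) \
      + rng.normal(0, 0.01, t.size)

# Periodogram over periods from 20 s up to 2 h
freq, power = LombScargle(t, mag).autopower(
    minimum_frequency=1 / (2 / 24),              # longest period: 2 h
    maximum_frequency=1 / (20 / 86400),          # shortest period: 20 s
)
best_freq = freq[np.argmax(power)]
print(f"primary period ~ {(1 / best_freq) * 24 * 60:.3f} min")

# Semi-amplitude of the dominant mode from a single-harmonic least-squares fit
design = np.column_stack([np.sin(2 * np.pi * best_freq * t),
                          np.cos(2 * np.pi * best_freq * t),
                          np.ones_like(t)])
coef, *_ = np.linalg.lstsq(design, mag, rcond=None)
print(f"amplitude ~ {np.hypot(coef[0], coef[1]):.3f} mag")
```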

    The Classification of Periodic Light Curves from non-survey optimized observational data through Automated Extraction of Phase-based Visual Features

    We implement two hidden-layer feedforward networks to classify 3011 variable star light curves. These light curves are generated from a reduction of non-survey-optimized observational images gathered by wide-field cameras mounted on the Liverpool Telescope. We extract 16 features found to be highly informative in previous studies but achieve only 19.82% accuracy on a 30% test set, 5.56% above a random model. Noise and sampling defects present in these light curves poison these features, primarily by reducing our periodogram period match rate to fewer than 5%. We propose an automated visual feature extraction technique that transforms the phase-folded light curves into image-based representations. This eliminates much of the noise, and phase data missing due to sampling defects should have a less destructive effect on these shape features as they remain at least partially present. We produce a set of scaled images in which a pixel is turned on only if it contains at least one fifth of the data points of the most populated pixel for that light curve. Training on the same feedforward network, we achieve 29.13% accuracy, a 13.16% improvement over a random model, and we show this technique scales, improving to 33.51% accuracy when the number of hidden-layer neurons is increased. We concede that this improvement is not yet sufficient to allow these light curves to be used for automated classification, and in conclusion we discuss a new pipeline currently being developed that simultaneously incorporates period estimation and classification. This method is inspired by approximating the manual methods employed by astronomers.
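    The thresholded image representation lends itself to a short sketch. The Python snippet below renders a phase-folded light curve as a binary image in which a pixel is on only if it holds at least one fifth of the data points of the most populated pixel, as described above; the 32x32 image size and the synthetic input are assumptions, not taken from the paper.

```python
import numpy as np

def light_curve_to_image(phase, mag, bins=(32, 32)):
    """Render a phase-folded light curve as a binary image.

    A pixel is switched on when it contains at least one fifth of the
    data points found in the most populated pixel. The image size is an
    illustrative assumption."""
    # Scale magnitudes to [0, 1] so every light curve maps onto the same grid
    mag_scaled = (mag - mag.min()) / np.ptp(mag)
    counts, _, _ = np.histogram2d(phase, mag_scaled,
                                  bins=bins, range=[[0, 1], [0, 1]])
    threshold = counts.max() / 5.0
    return (counts >= threshold).astype(np.uint8)

# Hypothetical usage: phases in [0, 1) from an epoch-folded light curve
rng = np.random.default_rng(1)
phase = rng.uniform(0, 1, 300)
mag = np.sin(2 * np.pi * phase) + rng.normal(0, 0.3, 300)
image = light_curve_to_image(phase, mag)
print(image.shape, int(image.sum()), "pixels on")
```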

    GRAPE: Genetic Routine for Astronomical Period Estimation

    Period estimation is an important task in the classification of many variable astrophysical objects. Here we present GRAPE: a Genetic Routine for Astronomical Period Estimation, a genetic algorithm optimised for the processing of survey data with spurious and aliased artefacts. It uses a Bayesian Generalised Lomb-Scargle (BGLS) fitness function designed for use with the Skycam survey conducted at the Liverpool Telescope. We construct a set of simulated light curves using both regular survey cadence and the unique Skycam variable cadence with four types of signal: sinusoidal, sawtooth, symmetric eclipsing binary and eccentric eclipsing binary. We apply GRAPE and a frequency-spectrum BGLS periodogram to the light curves and show that the performance of GRAPE is superior to that of the frequency spectrum for any signal well modelled by the fitness function, because GRAPE treats the parameter space as a continuous variable. We also show that the Skycam sampling is sufficient to correctly estimate the period of over 90% of the sinusoidal light curves relative to the more standard regular cadence. We note that GRAPE has a computational overhead which makes it slower on light curves with low numbers of observations and faster with higher numbers of observations, and we discuss the potential optimisations used to speed up the runtime. Finally, we analyse the period dependence and baseline importance of the performance of both methods and propose improvements which will extend this method to the detection of quasi-periodic signals.
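    A minimal sketch of the genetic approach may help: the snippet below evolves a population of candidate periods as continuous values, using ordinary Lomb-Scargle power from astropy as a stand-in fitness (GRAPE itself uses a BGLS fitness function). The population size, mutation scale and selection scheme are illustrative assumptions.

```python
import numpy as np
from astropy.timeseries import LombScargle

def ga_period_search(t, y, pmin=0.1, pmax=100.0, pop=64, gens=40, seed=0):
    """Toy genetic search for the best period, treating period as a
    continuous variable rather than a fixed frequency grid. Fitness here
    is plain Lomb-Scargle power, a stand-in for GRAPE's BGLS fitness."""
    rng = np.random.default_rng(seed)
    ls = LombScargle(t, y)
    # Work in log-period so mutations explore all scales evenly
    logp = rng.uniform(np.log(pmin), np.log(pmax), pop)
    for _ in range(gens):
        fitness = ls.power(1.0 / np.exp(logp))
        # Tournament selection: keep the better of random pairs
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where(fitness[i] > fitness[j], logp[i], logp[j])
        # Blend crossover between shuffled parents, then Gaussian mutation
        mates = rng.permutation(parents)
        alpha = rng.uniform(0, 1, pop)
        logp = alpha * parents + (1 - alpha) * mates
        logp += rng.normal(0, 0.01, pop)
        logp = np.clip(logp, np.log(pmin), np.log(pmax))
    best = logp[np.argmax(ls.power(1.0 / np.exp(logp)))]
    return float(np.exp(best))
```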

    Classifying Periodic Astrophysical Phenomena from non-survey optimized variable-cadence observational data

    Modern time-domain astronomy is capable of collecting a staggeringly large amount of data on millions of objects in real time. Therefore, the production of methods and systems for the automated classification of time-domain astronomical objects is of great importance. The Liverpool Telescope has a number of wide-field image gathering instruments mounted upon its structure, the Small Telescopes Installed at the Liverpool Telescope. These instruments have been in operation since March 2009, gathering data of large areas of sky around the current field of view of the main telescope and generating a large dataset containing millions of light sources. The instruments are inexpensive to run as they do not require a separate telescope to operate, but this style of surveying the sky introduces structured artifacts into our data due to the variable cadence at which sky fields are resampled. These artifacts can make light sources appear variable and must be addressed in any processing method. The data from large sky surveys can lead to the discovery of interesting new variable objects. Efficient software and analysis tools are required to rapidly determine which potentially variable objects are worthy of further telescope time. Machine learning offers a solution to the quick detection of variability by characterising the detected signals relative to previously seen exemplars. In this paper, we introduce a processing system designed for use with the Liverpool Telescope that identifies potentially interesting objects through the application of a novel representation learning approach to data collected automatically from the wide-field instruments. Our method automatically produces a set of classification features by applying Principal Component Analysis to a set of variable light curves, each represented by a piecewise polynomial fitted to the epoch-folded data via a genetic algorithm. The epoch folding requires a candidate period for each variable light curve, identified using a genetic-algorithm period estimation method specifically developed for this dataset. A Random Forest classifier is then used to classify the learned features to determine if a light curve is generated by an object of interest. This system allows the telescope to automatically identify new targets through passive observations which do not affect day-to-day operations, as the unique artifacts resulting from such a survey method are incorporated into the methods. We demonstrate the power of this feature extraction method compared to feature engineering performed by previous studies by training classification models on 859 light curves of 12 known variable star classes from our dataset. We show that our new features produce a model with a superior mean cross-validation F1 score of 0.4729 with a standard deviation of 0.0931, compared with 0.3902 with a standard deviation of 0.0619 for the engineered features. We show that the features extracted from the representation learning are given relatively high importance in the final classification model. Additionally, we compare engineered features computed on the interpolated polynomial fits and show that they produce more reliable distributions than those fitted to the raw light curve when the period estimation is correct.
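    A simplified sketch of the representation-learning step follows: each epoch-folded light curve is fitted with a polynomial, sampled on a common phase grid, reduced with Principal Component Analysis, and classified with a Random Forest. The paper fits a piecewise polynomial via a genetic algorithm; the plain global polynomial fit, grid size and synthetic data below are stand-in assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def phase_fold_features(phase_list, mag_list, degree=8, grid=64):
    """Fit a polynomial to each phase-folded curve and sample it on a
    common grid, giving fixed-length vectors for PCA. The paper uses a
    GA-fitted piecewise polynomial; a plain polyfit is a simplification."""
    grid_phase = np.linspace(0, 1, grid)
    rows = []
    for phase, mag in zip(phase_list, mag_list):
        coef = np.polynomial.polynomial.polyfit(phase, mag, degree)
        rows.append(np.polynomial.polynomial.polyval(grid_phase, coef))
    return np.vstack(rows)

# Hypothetical folded light curves and class labels (illustrative only)
rng = np.random.default_rng(2)
phases = [np.sort(rng.uniform(0, 1, 200)) for _ in range(120)]
labels = rng.integers(0, 3, 120)
mags = [np.sin(2 * np.pi * (k + 1) * p) + rng.normal(0, 0.2, p.size)
        for p, k in zip(phases, labels)]

X = phase_fold_features(phases, mags)
X_pca = PCA(n_components=10).fit_transform(X)   # learned features
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_pca, labels)
print("training accuracy:", clf.score(X_pca, labels))
```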

    A Dynamic, Modular Intelligent-Agent framework for Astronomical Light Curve Analysis and Classification

    Modern time-domain astronomy is capable of collecting a staggeringly large amount of data on millions of objects in real time. This makes it almost impossible for objects to be identified manually. Therefore the production of methods and systems for the automated classification of time-domain astronomical objects is of great importance. The Liverpool Telescope has a number of wide-field image gathering instruments mounted upon its structure. These instruments have been in operation since March 2009, gathering data of multi-degree-sized areas of sky around the current field of view of the main telescope. Utilizing a Structured Query Language database established by a pre-processing operation upon the resultant images, which has identified millions of candidate variable stars with multiple time-varying magnitude observations, we applied a method designed to extract time-translation-invariant features from the time-series light curves of each object for future input into a classification system. These efforts were met with limited success due to noise and uneven sampling within the time-series data. Additionally, finely surveying these light curves is a processing-intensive task. Fortunately, these algorithms are capable of multi-threaded implementations based on available resources. We therefore propose a new system designed to utilize multiple intelligent agents that distribute the data analysis across multiple machines whilst a powerful intelligence service simultaneously operates to constrain the light curves and eliminate false signals due to noise and local alias periods. This system will be highly scalable, capable of operating on a wide range of hardware whilst maintaining the production of accurate features based on the fitting of harmonic models to the light curves within the initial Structured Query Language database.
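    The distribution idea can be sketched with Python's standard library as a stand-in for the proposed intelligent-agent framework: a pool of worker processes pulls light-curve analysis tasks and returns results, mirroring agents spread across machines. The worker function and task format below are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def analyse_light_curve(task):
    """Stand-in worker: fit a single harmonic at a trial period and return
    the residual scatter. A real agent would run the full period search
    and harmonic-model feature extraction described above."""
    object_id, t, y, period = task
    design = np.column_stack([np.sin(2 * np.pi * t / period),
                              np.cos(2 * np.pi * t / period),
                              np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return object_id, float(np.std(y - design @ coef))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    tasks = [(i, np.sort(rng.uniform(0, 30, 150)),
              rng.normal(0, 1, 150), 1.3) for i in range(8)]
    # Each worker process takes tasks from the pool, mirroring agents
    # pulling candidate sources from the SQL database
    with ProcessPoolExecutor(max_workers=4) as pool:
        for object_id, scatter in pool.map(analyse_light_curve, tasks):
            print(object_id, round(scatter, 3))
```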

    An update on the development of ASPIRED

    We report the updates in version 0.2.0 of the Automated SpectroPhotometric REDuction (ASPIRED) pipeline, designed for common use on different instruments. The default settings support many typical long-slit spectrometer configurations, whilst it also offers a flexible set of functions for users to refine and tailor their automated pipelines to an instrument's individual characteristics. Such automation provides near real-time data reduction to allow adaptive observing strategies, which is particularly important in time-domain astronomy. Over the course of the last year, significant improvements were made in the internal data handling as well as in data I/O, and in the accuracy and repeatability of the wavelength calibration.

    Mapping poverty using mobile phone and satellite data

    Poverty is one of the most important determinants of adverse health outcomes globally, a major cause of societal instability and one of the largest causes of lost human potential. Traditional approaches to measuring and targeting poverty rely heavily on census data, which in most low- and middle-income countries (LMICs) are unavailable or out-of-date. Alternate measures are needed to complement and update estimates between censuses. This study demonstrates how public and private data sources that are commonly available for LMICs can be used to provide novel insight into the spatial distribution of poverty. We evaluate the relative value of modelling three traditional poverty measures using aggregate data from mobile operators and widely available geospatial data. Taken together, models combining these data sources provide the best predictive power (highest r² = 0.78) and lowest error, but generally models employing mobile data only yield comparable results, offering the potential to measure poverty more frequently and at finer granularity. Stratifying models into urban and rural areas highlights the advantage of using mobile data in urban areas and different data in different contexts. The findings indicate the possibility to estimate and continually monitor poverty rates at high spatial resolution in countries with limited capacity to support traditional methods of data collection.
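    A hedged sketch of the modelling setup: regress a poverty measure on per-area covariates built from mobile-operator aggregates and geospatial layers, then report out-of-sample r². The covariate names, the Random Forest regressor and the synthetic data are illustrative assumptions; the paper's exact model specification is not given here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical per-area covariates: mobile usage aggregates and
# satellite-derived variables (names are illustrative only)
rng = np.random.default_rng(4)
n = 400
X = np.column_stack([
    rng.lognormal(1, 0.5, n),   # mean call volume per subscriber
    rng.uniform(0, 1, n),       # nighttime-lights intensity
    rng.uniform(0, 1, n),       # distance to nearest road (scaled)
])
poverty_rate = 0.6 - 0.3 * X[:, 1] + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, poverty_rate, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out r2 =", round(r2_score(y_te, model.predict(X_te)), 2))
```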

    Multilevel latent class casemix modelling: a novel approach to accommodate patient casemix

    Background: Using routinely collected patient data we explore the utility of multilevel latent class (MLLC) models to adjust for patient casemix and rank Trust performance. We contrast this with ranks derived from Trust standardised mortality ratios (SMRs).
    Methods: Patients with colorectal cancer diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the cancer registry database (n = 24,640). Patient age, sex, stage-at-diagnosis (Dukes), and Trust of diagnosis/treatment were extracted. Socioeconomic background was derived using the Townsend Index. Outcome was survival at 3 years after diagnosis. MLLC-modelled and SMR-generated Trust ranks were compared.
    Results: Patients were assigned to two classes of similar size: one with reasonable prognosis (63.0% died within 3 years), and one with better prognosis (39.3% died within 3 years). In patient class one, all patients diagnosed at stage B or C died within 3 years; in patient class two, all patients diagnosed at stage A, B or C survived. Trusts were assigned two classes with 51.3% and 53.2% of patients respectively dying within 3 years. Differences in the ranked Trust performance between the MLLC model and SMRs were all within estimated 95% CIs.
    Conclusions: A novel approach to casemix adjustment is illustrated, ranking Trust performance whilst facilitating the evaluation of factors associated with the patient journey (e.g. treatments) and factors associated with the processes of healthcare delivery (e.g. delays). Further research can demonstrate the value of modelling patient pathways and evaluating healthcare processes across provider institutions.
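    For comparison with the MLLC ranks, the SMR baseline can be sketched directly: observed deaths per Trust divided by the deaths expected from casemix-specific rates. The synthetic patient table and column names below are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical patient-level data: Trust, casemix stratum, death within 3 years
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "trust": rng.integers(0, 5, 2000),
    "stratum": rng.integers(0, 4, 2000),   # e.g. age band x Dukes stage
    "died_3yr": rng.integers(0, 2, 2000),
})

# Expected deaths per patient: overall death rate of their casemix stratum
stratum_rate = df.groupby("stratum")["died_3yr"].mean()
df["expected"] = df["stratum"].map(stratum_rate)

# SMR per Trust = observed deaths / casemix-expected deaths; rank by SMR
per_trust = df.groupby("trust").agg(observed=("died_3yr", "sum"),
                                    expected=("expected", "sum"))
per_trust["smr"] = per_trust["observed"] / per_trust["expected"]
print(per_trust.sort_values("smr"))
```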