    Natural Coronagraphic Observations of the Eclipsing T Tauri System KH 15D: Evidence for Accretion and Bipolar Outflow in a WTTS

    We present high resolution (R ~ 44,000) UVES spectra of the eclipsing pre-main sequence star KH 15D covering the wavelength range 4780 to 6810 Å obtained at three phases: out of eclipse, near minimum light and during egress. The system evidently acts like a natural coronagraph, enhancing the contrast relative to the continuum of hydrogen and forbidden emission lines during eclipse. At maximum light the Hα equivalent width was ~2 Å and the profile showed broad wings and a deep central absorption. During egress the equivalent width was much higher (~70 Å) and the broad wings, which extend to ±300 km/s, were prominent. During eclipse totality the equivalent width was less than during egress (~40 Å) and the high velocity wings were much weaker. Hβ showed a somewhat different behavior, revealing only the blue-shifted portion of the high velocity component during eclipse and egress. The [OI] λλ6300, 6363 lines are easily seen both out of eclipse and when the photosphere is obscured, and exhibit little or no flux variation with eclipse phase. Our interpretation is that KH 15D, although clearly a weak-line T Tauri star by the usual criteria, is still accreting matter from a circumstellar disk and has a well-collimated bipolar jet. As the knife-edge of the occulting matter passes across the close stellar environment, it is evidently revealing structure in the magnetosphere of this pre-main sequence star with unprecedented spatial resolution. We also show that there is only a small, perhaps marginally significant, change in the velocity of the K7 star between the maximum light and egress phases probed here.
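
    The equivalent widths quoted above reduce to an integral of the line profile over the continuum-normalized spectrum. A minimal sketch of such a measurement is given below; the wavelength grid, Gaussian line, and integration window are made up for illustration and are not the authors' reduction pipeline.

```python
import numpy as np

def equivalent_width(wavelength, norm_flux, window):
    """Equivalent width of a spectral line in a continuum-normalized spectrum,
    integrated over the (lo, hi) wavelength window.

    EW = integral of (1 - F/F_continuum) d(lambda): positive for absorption,
    negative for emission.
    """
    lo, hi = window
    sel = (wavelength >= lo) & (wavelength <= hi)
    depth = 1.0 - norm_flux[sel]
    # Trapezoidal integration over the selected wavelength range.
    return float(np.sum(0.5 * (depth[:-1] + depth[1:]) * np.diff(wavelength[sel])))

# Toy spectrum: a Gaussian emission line near H-alpha on a flat continuum.
wl = np.linspace(6500.0, 6620.0, 2000)
flux = 1.0 + 5.0 * np.exp(-0.5 * ((wl - 6563.0) / 3.0) ** 2)
print(equivalent_width(wl, flux, (6540.0, 6590.0)))  # negative -> emission line
```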

    New approaches to object classification in synoptic sky surveys

    Digital synoptic sky surveys pose several new object classification challenges. In surveys where real-time detection and classification of transient events is a science driver, there is a need for effective elimination of instrument-related artifacts which can masquerade as transient sources in the detection pipeline, e.g., unremoved large cosmic rays, saturation trails, reflections, and crosstalk artifacts. We have implemented such an Artifact Filter, using a supervised neural network, for the real-time processing pipeline in the Palomar-Quest (PQ) survey. After the training phase, for each object it takes as input a set of measured morphological parameters and returns the probability that the object is real. Despite the relatively low number of training cases for many kinds of artifacts, the overall artifact classification rate is around 90%, with no genuine transients misclassified during our real-time scans. Another question is how to assign an optimal star-galaxy classification in a multi-pass survey, where seeing and other conditions change between different epochs, potentially producing inconsistent classifications for the same object. We have implemented a star/galaxy multipass classifier that makes use of external and a priori knowledge to find the optimal classification from the individually derived ones. Both these techniques can be applied to other, similar surveys and data sets.
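
    As a rough sketch of the artifact-filter idea described above (a supervised neural network mapping measured morphological parameters to the probability that a detection is real), the following uses scikit-learn on placeholder features and labels; the feature set, network size, and training data are illustrative assumptions, not the Palomar-Quest implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder morphological parameters (e.g. FWHM, ellipticity, sharpness, ...)
# for labelled detections. Labels: 1 = real source, 0 = artifact; here the label
# is tied loosely to the first feature purely to give the toy network something
# learnable.
X_train = rng.normal(size=(1000, 6))
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Small feed-forward network returning P(real | features).
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)

X_new = rng.normal(size=(5, 6))
p_real = clf.predict_proba(X_new)[:, 1]   # probability each detection is real
print(p_real)
```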

    Towards real-time classification of astronomical transients

    Exploration of the time domain is now a vibrant area of research in astronomy, driven by the advent of digital synoptic sky surveys. While panoramic surveys can detect variable or transient events, typically some follow-up observations are needed; for short-lived phenomena, a rapid response is essential. The ability to automatically classify and prioritize transient events for follow-up studies becomes critical as the data rates increase. We have been developing such methods using the data streams from the Palomar-Quest survey, the Catalina Sky Survey and others, using the VOEventNet framework. The goal is to automatically classify transient events using the new measurements, combined with archival data (previous and multi-wavelength measurements) and contextual information (e.g., Galactic or ecliptic latitude, the presence of a possible host galaxy nearby, etc.), and to update the classifications dynamically as the follow-up data come in (e.g., light curves or colors). We have been investigating Bayesian methodologies for classification, as well as discriminated follow-up to optimize the use of available resources, including a Naive Bayesian approach and non-parametric Gaussian process regression. We will also be deploying variants of traditional machine learning techniques such as Neural Nets and Support Vector Machines on datasets of reliably classified transients as they build up.
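
    A toy version of the Naive Bayesian step mentioned above, combining photometric and contextual features into per-class posterior probabilities that could later be revised as follow-up data arrive, might look like the following; the features, class labels, and training sample are placeholders rather than the actual pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)

# Placeholder features per event: amplitude, colour, Galactic latitude, and
# distance to the nearest catalogued galaxy (contextual information).
X_train = rng.normal(size=(500, 4))
y_train = rng.choice(["SN", "CV", "AGN", "asteroid"], size=500)

clf = GaussianNB().fit(X_train, y_train)

# Posterior class probabilities for a newly detected transient; in a real
# system these would be updated as light curves and colours come in.
new_event = rng.normal(size=(1, 4))
for label, p in zip(clf.classes_, clf.predict_proba(new_event)[0]):
    print(f"P({label}) = {p:.2f}")
```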

    Detecting Transits in Sparsely Sampled Surveys

    The small sizes of low mass stars in principle provide an opportunity to find Earth-like planets and "super-Earths" in habitable zones via transits. Large area synoptic surveys like Pan-STARRS and LSST will observe large numbers of low mass stars, albeit with widely spaced (sparse) time sampling relative to the planets' periods and transit durations. We present simple analytical equations that can be used to estimate the feasibility of a survey by setting upper limits to the number of transiting planets that will be detected. We use Monte Carlo simulations to find upper limits for the number of transiting planets that may be discovered in the Pan-STARRS Medium Deep and 3-pi surveys. Our search for transiting planets and M-dwarf eclipsing binaries in the SDSS-II supernova data is used to illustrate the problems (and successes) in using sparsely sampled surveys. Comment: 7 pages, 2 figures, published in Proceedings of the Conference on Classification and Discovery in Large Astronomical Surveys, 2008
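
    The Monte Carlo upper limits described above amount to drawing random visit times and transit phases and counting how often enough observations land in transit. Below is a minimal sketch under assumed survey parameters (number of visits, baseline, period, transit duration, all illustrative rather than the Pan-STARRS cadences).

```python
import numpy as np

rng = np.random.default_rng(2)

def detection_fraction(n_visits, period_days, duration_days,
                       min_in_transit=3, n_trials=10000, baseline_days=365.0):
    """Fraction of Monte Carlo trials in which at least `min_in_transit`
    randomly timed visits fall inside a transit of the given period and
    duration (an upper limit: every in-transit point is assumed detected)."""
    hits = 0
    for _ in range(n_trials):
        epochs = rng.uniform(0.0, baseline_days, n_visits)   # sparse visit times
        phase0 = rng.uniform(0.0, period_days)                # random transit epoch
        phase = (epochs - phase0) % period_days
        if np.sum(phase < duration_days) >= min_in_transit:
            hits += 1
    return hits / n_trials

# e.g. a few hundred sparse visits vs. a 3-day period, 2-hour transit.
print(detection_fraction(n_visits=300, period_days=3.0, duration_days=2.0 / 24.0))
```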

    Fine Structure in the Circumstellar Environment of a Young, Solar-like Star: the Unique Eclipses of KH 15D

    Results of an international campaign to photometrically monitor the unique pre-main sequence eclipsing object KH 15D are reported. An updated ephemeris for the eclipse is derived that incorporates a slightly revised period of 48.36 d. There is some evidence that the orbital period is actually twice that value, with two eclipses occurring per cycle. The extraordinary depth (~3.5 mag) and duration (~18 days) of the eclipse indicate that it is caused by circumstellar matter, presumably the inner portion of a disk. The eclipse has continued to lengthen with time and the central brightness reversals are not as extreme as they once were. V-R and V-I colors indicate that the system is slightly bluer near minimum light. Ingress and egress are remarkably well modeled by the passage of a knife-edge across a limb-darkened star. Possible models for the system are briefly discussed. Comment: 19 pages, 5 figures
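
    The knife-edge model mentioned above can be sketched numerically as the flux remaining from a linearly limb-darkened stellar disk while an opaque straight edge advances across it; the limb-darkening coefficient and grid below are assumptions for illustration, not the authors' fit.

```python
import numpy as np

def knife_edge_lightcurve(edge_positions, u=0.6, n_grid=400):
    """Relative flux of a linearly limb-darkened disk (radius 1) as an opaque
    knife edge covers everything with x < edge_position.

    Linear limb darkening: I(mu)/I(0) = 1 - u*(1 - mu), mu = sqrt(1 - r^2).
    """
    x, y = np.meshgrid(np.linspace(-1, 1, n_grid), np.linspace(-1, 1, n_grid))
    r2 = x**2 + y**2
    on_disk = r2 <= 1.0
    mu = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    intensity = np.where(on_disk, 1.0 - u * (1.0 - mu), 0.0)
    total = intensity.sum()
    # Sum the uncovered part of the disk for each edge position.
    return np.array([intensity[x >= xe].sum() / total for xe in edge_positions])

# Edge moving from fully uncovered (x < -1) to fully covered (x > +1).
edges = np.linspace(-1.2, 1.2, 7)
print(knife_edge_lightcurve(edges))
```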

    Weather on the Nearest Brown Dwarfs: Resolved Simultaneous Multi-Wavelength Variability Monitoring of WISE J104915.57-531906.1AB

    We present two epochs of MPG/ESO 2.2m GROND simultaneous 6-band (r'i'z'JHK) photometric monitoring of the closest known L/T transition brown dwarf binary WISE J104915.57-531906.1AB. We report here the first resolved variability monitoring of both the T0.5 and L7.5 components. We obtained 4 hours of focused observations on the night of UT 2013-04-22, as well as 4 hours of defocused (unresolved) observations on the night of UT 2013-04-16. We note a number of robust trends in our light curves. The r' and i' light curves appear to be anticorrelated with z' and H for the T0.5 component and in the unresolved light curve. In the defocused dataset, J appears correlated with z' and H and anticorrelated with r' and i', while in the focused dataset we measure no variability for J at the level of our photometric precision, likely due to evolving weather phenomena. In our focused T0.5 component light curve, the K band light curve displays a significant phase offset relative to both H and z'. We argue that the measured phase offsets are correlated with the atmospheric pressure probed in each band, as estimated from 1D atmospheric models. We also report low-amplitude variability in i' and z' intrinsic to the L7.5 component. Comment: 14 pages, 5 figures, accepted to ApJ Letters
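
    Phase offsets of the kind reported above can be estimated by cross-correlating band-to-band light curves; the sketch below does this for synthetic, evenly sampled sinusoidal light curves with an assumed period and amplitudes, and is only an illustration of the idea, not the analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic light curves in two bands with a built-in phase offset; the period,
# amplitudes, and noise level are arbitrary placeholders.
period = 4.9                                  # hours
t = np.linspace(0.0, 2 * period, 400)         # hours, evenly sampled
lc_H = 1.0 + 0.02 * np.sin(2 * np.pi * t / period) + 0.002 * rng.normal(size=t.size)
lc_K = 1.0 + 0.01 * np.sin(2 * np.pi * t / period + 0.6) + 0.002 * rng.normal(size=t.size)

# Cross-correlate the mean-subtracted curves and read off the lag of the peak.
a = lc_H - lc_H.mean()
b = lc_K - lc_K.mean()
corr = np.correlate(a, b, mode="full")
lags = np.arange(-t.size + 1, t.size) * (t[1] - t[0])
best_lag_hours = lags[np.argmax(corr)]
print(f"estimated lag: {best_lag_hours:.2f} h "
      f"({360 * best_lag_hours / period:.0f} deg of phase)")
```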

    Parametrization and Classification of 20 Billion LSST Objects: Lessons from SDSS

    The Large Synoptic Survey Telescope (LSST) will be a large, wide-field ground-based system designed to obtain, starting in 2015, multiple images of the sky that is visible from Cerro Pachon in Northern Chile. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will observe a 20,000 deg^2 region about 1000 times during the anticipated 10 years of operations (distributed over six bands, ugrizy). Each 30-second long visit will deliver a 5σ depth for point sources of r ~ 24.5 on average. The co-added map will be about 3 magnitudes deeper, and will include 10 billion galaxies and a similar number of stars. We discuss various measurements that will be automatically performed for these 20 billion sources, and how they can be used for classification and determination of source physical and other properties. We provide a few classification examples based on SDSS data, such as color classification of stars, color-spatial proximity search for wide-angle binary stars, orbital-color classification of asteroid families, and the recognition of main Galaxy components based on the distribution of stars in the position-metallicity-kinematics space. Guided by these examples, we anticipate that two grand classification challenges for LSST will be 1) rapid and robust classification of sources detected in difference images, and 2) simultaneous treatment of diverse astrometric and photometric time series measurements for an unprecedentedly large number of objects. Comment: Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October, 2008
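
    As a cartoon of the colour-based classification examples mentioned above, the snippet below applies simple colour cuts to placeholder ugriz photometry to separate UV-excess (quasar-like) candidates from the stellar locus; the cut values and magnitudes are illustrative assumptions, not the SDSS or LSST selection.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder ugriz magnitudes for a handful of sources.
mags = {band: 18.0 + rng.normal(scale=1.0, size=10) for band in "ugriz"}
u_g = mags["u"] - mags["g"]
g_r = mags["g"] - mags["r"]

# Toy colour cuts: UV-excess objects (quasar candidates) vs. the stellar locus.
label = np.where(u_g < 0.6, "quasar candidate",
                 np.where(g_r < 1.4, "blue/main-sequence star", "red star"))
for i, lab in enumerate(label):
    print(f"source {i}: u-g = {u_g[i]:+.2f}, g-r = {g_r[i]:+.2f} -> {lab}")
```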

    Can programme theory be used as a 'translational tool' to optimise health service delivery in a national early years' initiative in Scotland: a case study

    Background: Theory-based evaluation (TBE) approaches are heralded as supporting formative evaluation by facilitating increased use of evaluative findings to guide programme improvement. If interventions are to be as effective as they have the potential to be, it is essential that learning from programme implementation is better used to improve delivery and to inform other initiatives. Nonetheless, few studies describe formative feedback methods, or report direct instrumental use of findings resulting from TBE. This paper uses the case of Scotland's National Health Service early years' oral health improvement initiative (Childsmile) to describe the use of TBE as a framework for providing feedback on delivery to programme staff and to assess its impact on programmatic action.

    Methods: In-depth, semi-structured interviews and focus groups with key stakeholders explored perceived deviations between the Childsmile programme 'as delivered' and its Programme Theory (PT). The data were thematically analysed using constant comparative methods. Findings were shared with key programme stakeholders, and discussions around likely impact and necessary actions were facilitated by the authors. Documentary review and ongoing observations of programme meetings were undertaken to assess the extent to which learning was acted upon.

    Results: On the whole, the activities documented in Childsmile's PT were implemented as intended. This paper purposefully focuses on those activities where variation in delivery was evident. Differences resulted from the stage of roll-out reached and the flexibility given to individual NHS boards to tailor local implementation. Some adaptations were thought to have diverged from the central features of Childsmile's PT, to the extent that there was a risk to achieving outcomes. The methods employed prompted national service improvement action, and proposals for local action by individual NHS boards, to address this.

    Conclusions: The TBE approach provided a platform to direct attention to areas of risk within a national health initiative, and to agree which intervention components were 'core' to its hypothesised success. The study demonstrates that PT can be used as a 'translational tool' to facilitate instrumental use of evaluative findings to optimise implementation within a complex health improvement programme.

    Variability type classification of multi-epoch surveys

    The classification of time series from photometric large scale surveys into variability types, and the description of their properties, is difficult for various reasons, including but not limited to the irregular sampling, the usually small number of available photometric bands, and the diversity of variable objects. Furthermore, different physical processes may sometimes produce similar behavior, which can end up being represented by the same models. In this article we also present our approach for processing the data resulting from the Gaia space mission. The approach falls into the following three broad categories: supervised classification, unsupervised classification, and so-called extractor methods, i.e., algorithms specialized for particular types of sources. The whole process of classification, from classification attribute extraction to the actual classification, is done in an automated manner. Comment: 6 pages, 2 figures. Version with figures as sent to the Editor/AIP (though not as published). Minor corrections made
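
    To make the attribute-extraction-then-classification pipeline concrete, the sketch below derives two simple attributes (a dominant Lomb-Scargle frequency and a peak-to-peak amplitude) from irregularly sampled light curves and feeds them to a generic supervised classifier; the attributes, classes, and classifier choice are placeholders, not the Gaia processing chain.

```python
import numpy as np
from astropy.timeseries import LombScargle
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

def extract_attributes(t, mag):
    """A minimal attribute vector: dominant Lomb-Scargle frequency and
    peak-to-peak amplitude of an irregularly sampled light curve."""
    freq, power = LombScargle(t, mag).autopower()
    return [freq[np.argmax(power)], np.ptp(mag)]

# Placeholder training set: attributes plus known variability types.
X_train, y_train = [], []
for _ in range(100):
    t = np.sort(rng.uniform(0.0, 100.0, 60))          # irregular sampling (days)
    period = rng.uniform(0.2, 10.0)
    mag = 15.0 + 0.3 * np.sin(2 * np.pi * t / period) + 0.02 * rng.normal(size=t.size)
    X_train.append(extract_attributes(t, mag))
    y_train.append("short-period" if period < 1.0 else "long-period")

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict([X_train[0]]))
```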