
    The Kepler Science Operations Center Pipeline Framework Extensions

    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
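    The framework/algorithm split described here lends itself to a simple mental model: a generator partitions the mission data into self-contained units of work, and each unit's inputs are serialized to a single file that the science algorithm can consume online or offline. The Python sketch below is a hypothetical mock-up of that idea; the actual SOC implementation is Java (framework) plus MATLAB (algorithms), and every name in this sketch is illustrative.

```python
# Hypothetical sketch of a "unit of work" generator in the spirit of the
# Kepler SOC framework extensions described above. All names are invented
# for illustration; the real pipeline is Java + MATLAB, not Python.
import pickle
from dataclasses import dataclass


@dataclass
class UnitOfWork:
    ccd_module: int        # which CCD module on the focal plane
    ccd_output: int        # readout channel on that module
    cadence_range: tuple   # (start_cadence, end_cadence)


def generate_units_of_work(modules, outputs, start, end):
    """Partition the data by CCD module/output so each unit can be
    dispatched to a separate worker node on the cluster."""
    return [UnitOfWork(m, o, (start, end)) for m in modules for o in outputs]


def package_inputs(uow, pixel_data, parameters, path):
    """Bundle everything an algorithm needs into one file, so the same
    file can be re-run offline to debug or evolve the algorithm."""
    with open(path, "wb") as f:
        pickle.dump({"uow": uow, "pixels": pixel_data, "params": parameters}, f)
```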

    Overview of the Kepler Science Processing Pipeline

    The Kepler Mission Science Operations Center (SOC) performs several critical functions including managing the ~156,000 target stars, associated target tables, science data compression tables and parameters, as well as processing the raw photometric data downlinked from the spacecraft each month. The raw data are first calibrated at the pixel level to correct for bias, smear induced by a shutterless readout, and other detector and electronic effects. A background sky flux is estimated from ~4500 pixels on each of the 84 CCD readout channels, and simple aperture photometry is performed on an optimal aperture for each star. Ancillary engineering data and diagnostic information extracted from the science data are used to remove systematic errors in the flux time series that are correlated with these data prior to searching for signatures of transiting planets with a wavelet-based, adaptive matched filter. Stars with signatures exceeding 7.1 sigma are subjected to a suite of statistical tests including an examination of each star's centroid motion to reject false positives caused by background eclipsing binaries. Physical parameters for each planetary candidate are fitted to the transit signature, and signatures of additional transiting planets are sought in the residual light curve. The pipeline is operational, finding planetary signatures and providing robust eliminations of false positives.
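    As an illustration of the background estimation and simple aperture photometry steps described above, here is a minimal NumPy sketch. The array shapes, the per-cadence median background estimator, and the aperture mask are assumptions chosen for demonstration, not the SOC's actual algorithms (which, for instance, fit the background across the channel rather than taking a median).

```python
# Minimal sketch of background subtraction plus simple aperture
# photometry. Shapes and the median estimator are illustrative only.
import numpy as np


def background_per_cadence(bg_pixels):
    """Estimate a sky background level per cadence from dedicated
    background pixels (~4500 per readout channel in Kepler)."""
    return np.median(bg_pixels, axis=1)           # shape: (n_cadences,)


def simple_aperture_photometry(pixels, aperture_mask, background):
    """Sum calibrated pixels inside the optimal aperture after
    removing the background estimate from each pixel."""
    corrected = pixels - background[:, None]       # broadcast over pixels
    return corrected[:, aperture_mask].sum(axis=1)


# Toy example: 100 cadences, 25 target pixels, 4500 background pixels.
rng = np.random.default_rng(0)
pixels = 1000.0 + rng.normal(0, 3, size=(100, 25))
bg = 10.0 + rng.normal(0, 1, size=(100, 4500))
mask = np.zeros(25, dtype=bool)
mask[5:20] = True                                  # hypothetical optimal aperture
flux = simple_aperture_photometry(pixels, mask, background_per_cadence(bg))
```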

    Photometer Performance Assessment in Kepler Science Data Processing

    This paper describes the algorithms of the Photometer Performance Assessment (PPA) software component in the science data processing pipeline of the Kepler mission. The PPA performs two tasks: one is to analyze the health and performance of the Kepler photometer based on the long cadence science data downlinked via Ka band approximately every 30 days; the other is to determine the attitude of the Kepler spacecraft with high precision at each long cadence. The PPA component is demonstrated to work effectively with the Kepler flight data.
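    The attitude determination can be pictured as a small least-squares problem per long cadence: solve for the focal-plane shift and rotation that best map predicted star centroids onto measured ones. The sketch below uses a linearized small-angle model chosen purely for illustration; the actual PPA algorithm is more elaborate.

```python
# Hedged sketch of per-cadence attitude estimation: solve, by linear
# least squares, for small offsets (dx, dy) and a rotation dtheta that
# best map predicted star centroids onto measured ones.
import numpy as np


def solve_attitude_offset(pred, meas):
    """pred, meas: (n_stars, 2) arrays of (x, y) centroids.
    Returns (dx, dy, dtheta) minimizing |meas - model(pred)|^2 under a
    small-rotation-plus-shift model about the origin."""
    x, y = pred[:, 0], pred[:, 1]
    A = np.zeros((2 * len(x), 3))     # columns: dx, dy, dtheta
    A[0::2, 0] = 1.0
    A[0::2, 2] = -y                   # x' ~ x + dx - dtheta * y
    A[1::2, 1] = 1.0
    A[1::2, 2] = x                    # y' ~ y + dy + dtheta * x
    b = (meas - pred).ravel()
    (dx, dy, dtheta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy, dtheta
```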

    Pixel-Level Calibration in the Kepler Science Operations Center Pipeline

    We present an overview of the pixel-level calibration of flight data from the Kepler Mission performed within the Kepler Science Operations Center Science Processing Pipeline. This article describes the calibration (CAL) module, which operates on original spacecraft data to remove instrument effects and other artifacts that pollute the data. Traditional CCD data reduction is performed (removal of instrument/detector effects such as bias and dark current), in addition to pixel-level calibration (correcting for cosmic rays and variations in pixel sensitivity), Kepler-specific corrections (removing smear signals which result from the lack of a shutter on the photometer and correcting for distortions induced by the readout electronics), and additional operations that are needed due to the complexity and large volume of flight data. CAL operates on long (~30 min) and short (~1 min) sampled data, as well as full-frame images, and produces calibrated pixel flux time series, uncertainties, and other metrics that are used in subsequent Pipeline modules. The raw and calibrated data are also archived in the Multimission Archive at the Space Telescope Science Institute (MAST) for use by the astronomical community.
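    A toy version of the calibration chain helps fix the order of operations: remove a bias level, subtract the smear signal caused by the shutterless readout, and divide by a pixel sensitivity (flat field) map. The Python sketch below is a deliberate simplification under those assumptions; the real CAL module also handles dark current, nonlinearity, undershoot, cosmic rays, and uncertainty propagation.

```python
# Assumption-laden sketch of a few CAL corrections named above; it is
# not the SOC's implementation, just the order-of-operations idea.
import numpy as np


def calibrate_frame(raw, bias_level, smear_row, flat):
    """raw: (rows, cols) ADU image for one readout channel.
    smear_row: (cols,) smear estimate from collateral (masked/virtual)
    rows; the same smear value is subtracted from every row of a
    column because the shutterless readout smears whole columns."""
    img = raw.astype(float) - bias_level   # remove electronic bias
    img -= smear_row[None, :]              # remove shutterless-readout smear
    img /= flat                            # correct pixel-to-pixel sensitivity
    return img
```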

    Data Validation in the Kepler Science Operations Center Pipeline

    We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planet
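    The multiple planet search amounts to a detect-fit-subtract loop over the light curve. The skeleton below captures that control flow only; `search_for_tce` and `fit_transit_model` are placeholders standing in for the actual matched-filter search and DV transit model fit.

```python
# Control-flow sketch of the iterative multiple-planet search: detect a
# TCE, fit and subtract a transit model, then search the residual again.
def search_for_tce(flux, threshold=7.1):
    """Placeholder for the wavelet-based matched-filter search; returns
    a candidate ephemeris when the statistic exceeds threshold, else None."""
    ...


def fit_transit_model(flux, tce):
    """Placeholder for the transit model fit; returns the model flux."""
    ...


def multiple_planet_search(flux, threshold=7.1, max_planets=10):
    candidates, residual = [], flux.copy()
    for _ in range(max_planets):
        tce = search_for_tce(residual, threshold)
        if tce is None:
            break                              # no further detections
        model = fit_transit_model(residual, tce)
        candidates.append(tce)
        residual = residual - model            # search residual for more planets
    return candidates
```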

    Semi-Weekly Monitoring of the Performance and Attitude of Kepler Using a Sparse Set of Targets

    The Kepler spacecraft is in a heliocentric Earth-trailing orbit, continuously observing ~160,000 select stars over ~115 square degrees of sky using its photometer containing 42 highly sensitive CCDs. The science data from these stars, consisting of ~6 million pixels at 29.4-minute intervals, are downlinked only every ~30 days. Additional low-rate X-band communications contacts are conducted with the spacecraft twice a week to downlink a small subset of the science data. This paper describes how we assess and monitor the performance of the photometer and the pointing stability of the spacecraft using such a sparse data set.

    KOI-54: The Kepler Discovery of Tidally Excited Pulsations and Brightenings in a Highly Eccentric Binary

    Kepler observations of the star HD 187091 (KIC 8112039, hereafter KOI-54) revealed a remarkable light curve exhibiting sharp periodic brightening events every 41.8 days with a superimposed set of oscillations forming a beating pattern in phase with the brightenings. Spectroscopic observations revealed that this is a binary star with a highly eccentric orbit, e = 0.83. We are able to match the Kepler light curve and radial velocities with a nearly face-on (i = 5.5 degrees) binary star model in which the brightening events are caused by tidal distortion and irradiation of nearly identical A stars during their close periastron passage. The two dominant oscillations in the light curve, responsible for the beating pattern, have frequencies that are the 91st and 90th harmonic of the orbital frequency. The power spectrum of the light curve, after removing the binary star brightening component, reveals a large number of pulsations, 30 of which have a signal-to-noise ratio ≳ 7. Nearly all of these pulsations have frequencies that are either integer multiples of the orbital frequency or are tidally split multiples of the orbital frequency. This pattern of frequencies unambiguously establishes the pulsations as resonances between the dynamic tides at periastron and the free oscillation modes of one or both of the stars. KOI-54 is only the fourth star to show such a phenomenon and is by far the richest in terms of excited modes. (Funded by the NASA Science Mission Directorate, NASA grant NNX08AR14G, the European Research Council under European Community grant 227224, the W.M. Keck Foundation, and McDonald Observatory.)
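    The harmonic pattern is straightforward to test numerically: divide each measured pulsation frequency by the orbital frequency and check how close the ratio falls to an integer. In the sketch below, only the 41.8-day orbital period comes from the abstract; the pulsation frequencies are invented for demonstration.

```python
# Toy check of whether pulsation frequencies fall on integer multiples
# (harmonics) of the orbital frequency, as reported for KOI-54.
import numpy as np

P_orb = 41.8                # days, orbital period from the abstract
f_orb = 1.0 / P_orb         # orbital frequency, cycles/day

# Invented test frequencies: two exact harmonics (90th, 91st) and one not.
puls = np.array([90 * f_orb, 91 * f_orb, 44.5 * f_orb])
n = np.round(puls / f_orb)            # nearest harmonic number
offset = puls / f_orb - n             # fractional offset from that harmonic
for f, ni, off in zip(puls, n, offset):
    tag = "harmonic" if abs(off) < 0.01 else "not harmonic"
    print(f"f = {f:.5f} c/d ~ {ni:.0f} * f_orb  ({tag}, offset {off:+.3f})")
```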

    A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline

    The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCD) each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, requiring downstream data products to access the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data dependent kernels. The combination of the POU framework and SVD compression provides downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels traceable to pixel level measurement uncertainties without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
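    Two of the ideas here compress to a few lines of linear algebra: a linear calibration step y = Ax transforms the covariance as C_y = A C_x A^T, so storing each step's kernel A lets the covariance of any pixel subset be rebuilt on demand; and a kernel can be stored compactly as a truncated SVD. The sketch below uses toy dimensions and hypothetical names; it illustrates the mechanics, not the SOC's implementation.

```python
# Hedged sketch of covariance propagation through a linear kernel and
# SVD-based kernel compression, with toy-sized matrices.
import numpy as np


def propagate_covariance(A, C_x):
    """Covariance of y = A x given covariance C_x of x: C_y = A C_x A^T."""
    return A @ C_x @ A.T


def svd_compress(K, rank):
    """Low-rank (low-pass) approximation of a kernel matrix K."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]


rng = np.random.default_rng(1)
C_raw = np.diag(rng.uniform(1, 2, size=50))    # raw pixel variances (diagonal)
A = svd_compress(rng.normal(size=(50, 50)), rank=10)   # compressed kernel
C_cal = propagate_covariance(A, C_raw)         # covariance of calibrated pixels
subset = C_cal[np.ix_([0, 3, 7], [0, 3, 7])]   # on-the-fly covariance of a subset
```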

    The role of CDC48 in the retro-translocation of non-ubiquitinated toxin substrates in plant cells

    When the catalytic A subunits of the castor bean toxins ricin and Ricinus communis agglutinin (denoted as RTA and RCA A, respectively) are delivered into the endoplasmic reticulum (ER) of tobacco protoplasts, they become substrates for ER-associated protein degradation (ERAD). As such, these orphan polypeptides are retro-translocated to the cytosol, where a significant proportion of each protein is degraded by proteasomes. Here we begin to characterise the ERAD pathway in plant cells, showing that retro-translocation of these lysine-deficient glycoproteins requires the ATPase activity of cytosolic CDC48. Lysine polyubiquitination is not obligatory for this step. We also show that while RCA A is found in a mannose-untrimmed form prior to its retro-translocation, a significant proportion of newly synthesised RTA cycles via the Golgi and becomes modified by downstream glycosylation enzymes. Despite these differences, both proteins are similarly retro-translocated.

    Kepler Data Release 4 Notes

    The Data Analysis Working Group has released long and short cadence materials, including FFIs and dropped targets, for the public. The Kepler Science Office considers Data Release 4 to provide "browse quality" data. These notes have been prepared to give Kepler users of the Multimission Archive at STScI (MAST) a summary of how the data were collected and prepared, and how well the data processing pipeline is functioning on flight data. They will be updated for each release of data to the public archive and placed on MAST along with other Kepler documentation, at http://archive.stsci.edu/kepler/documents.html. This data release is meant to give users the opportunity to examine the data for possibly interesting science and to involve the users in improving the pipeline for future data releases. To perform the latter service, users are encouraged to notice and document artifacts, either in the raw or processed data, and report them to the Science Office.