
    Communication Bandwidth Considerations for Exploration Medical Care During Space Missions

    Get PDF
    Destinations beyond low Earth orbit, especially Mars, impose several important constraints, including limited resupply, little to no possibility of medical evacuation, and delayed communication with ground support teams. Medical care is therefore driven toward greater autonomy and requires a medical system that supports this paradigm, including the potential for high medical data transfer rates, so that medical information can be shared and care coordinated with the ground intermittently, as communication allows. The medical data transfer needs for a Martian exploration mission were estimated by defining two medical scenarios that would require high-data-rate communications between the spacecraft and Earth. The first scenario involves a case of hydronephrosis (outflow obstruction of the kidney) that evolves into pyelonephritis (kidney infection) and then urosepsis (systemic infection originating from the kidney) due to obstruction by a kidney stone. The second scenario involves the death of a crewmember's child back on Earth, requiring behavioral health care. For each scenario, a data communications timeline was created following the medical care described by the scenario, and from these timelines total medical data transfers and burst transmission rates were estimated. Total data transferred from vehicle to ground was estimated to be 94 gigabytes (GB) for the hydronephrosis scenario and 835 GB for the behavioral health scenario. Data burst rates were estimated to be 7.7 megabytes per second (MB/s) and 15 MB/s, respectively. Even though any crewed Mars mission should be capable of functioning autonomously, as long as communication between Earth and Mars is possible, Earth-based subject matter experts will be relied upon to augment mission medical capability. Setting an upper bound on medical communication rates therefore helps factor medical system needs into total vehicle communication requirements.
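
    As a rough sanity check on these figures, one can compute the idealized transmission time implied by each scenario's volume and burst rate (a minimal Python sketch; the volumes and rates come from the abstract, while the assumption of a continuous, uninterrupted link at the burst rate is ours and is optimistic):

        # Estimate how long each scenario's total data transfer would take
        # if the vehicle could sustain its estimated burst rate continuously.
        scenarios = {
            "hydronephrosis":    {"total_gb": 94.0,  "burst_mb_s": 7.7},
            "behavioral_health": {"total_gb": 835.0, "burst_mb_s": 15.0},
        }

        for name, s in scenarios.items():
            total_mb = s["total_gb"] * 1024       # GB -> MB (binary units assumed)
            seconds = total_mb / s["burst_mb_s"]  # idealized, uninterrupted link
            print(f"{name}: {seconds / 3600:.1f} hours at {s['burst_mb_s']} MB/s")

    At these rates the hydronephrosis scenario would need roughly 3.5 hours of link time and the behavioral health scenario nearly 16 hours, which illustrates why burst capability and intermittent scheduling both matter.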

    The Kepler DB, a Database Management System for Arrays, Sparse Arrays and Binary Data

    Get PDF
    The Kepler Science Operations Center stores pixel values from approximately six million pixels collected every 30 minutes, as well as the data products generated by running the Kepler science processing pipeline. The Kepler Database (Kepler DB) management system was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to more than 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database in which data are represented as one-dimensional arrays, sparse arrays, or binary large objects. We discuss Kepler DB's APIs, implementation, usage, and deployment at the Kepler Science Operations Center.
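
    A toy sketch of the kind of array-oriented store the abstract describes may help fix ideas; all class and method names here are hypothetical illustrations, not Kepler DB's actual API:

        class ArrayStore:
            """Hypothetical array-oriented, non-relational store in the
            spirit of the abstract's description; not Kepler DB's API."""

            def __init__(self):
                self._data = {}  # series id -> list of (start_index, values)

            def write(self, series_id, start, values):
                # Append a contiguous segment; gaps between segments make
                # the stored series effectively sparse.
                self._data.setdefault(series_id, []).append((start, list(values)))

            def read(self, series_id, start, length):
                # Materialize [start, start + length), filling gaps with None.
                out = [None] * length
                for seg_start, vals in self._data.get(series_id, []):
                    for i, v in enumerate(vals):
                        j = seg_start + i - start
                        if 0 <= j < length:
                            out[j] = v
                return out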

    Transiting Planet Search in the Kepler Pipeline

    Get PDF
    The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year mission to search for transiting planets. Detecting transits is a signal-detection problem in which the signal of interest is a periodic pulse train and the predominant noise source is the non-white, non-stationary (1/f)-type process of stellar variability. Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong oscillations, then applies an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection statistics and the effectiveness of the algorithm for Kepler flight data.
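
    A minimal illustration of the matched-filter statistic, assuming white noise of known variance (the actual pipeline whitens the (1/f) stellar noise adaptively in the wavelet domain, which this sketch omits):

        import numpy as np

        def detection_statistic(flux, template, sigma):
            """Single-event statistic for a unit-norm template in white
            noise of standard deviation sigma."""
            template = template / np.linalg.norm(template)
            return np.dot(flux, template) / sigma

        # Toy example: a box-shaped transit of depth 500 ppm in white noise.
        rng = np.random.default_rng(0)
        n, depth, sigma = 2048, 500e-6, 100e-6
        flux = rng.normal(0.0, sigma, n)
        flux[1000:1020] -= depth           # inject the transit
        template = np.zeros(n)
        template[1000:1020] = -1.0         # matching box template
        print(f"S/N = {detection_statistic(flux, template, sigma):.1f}")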

    The Kepler Science Operations Center Pipeline Framework Extensions

    Get PDF
    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit-of-work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process those data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
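
    The unit-of-work idea can be sketched as follows; the partitioning by CCD channel and all names here are illustrative assumptions, since the abstract does not spell out Kepler's actual generators:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class UnitOfWork:
            channel: int    # CCD readout channel to process
            targets: tuple  # target IDs assigned to this unit

        def generate_units(targets_by_channel, max_targets_per_unit):
            """Partition targets into independent, self-contained units
            that a pipeline framework could distribute across a cluster."""
            for channel, targets in sorted(targets_by_channel.items()):
                for i in range(0, len(targets), max_targets_per_unit):
                    yield UnitOfWork(channel,
                                     tuple(targets[i:i + max_targets_per_unit]))

        # Example: two channels, at most 3 targets per unit of work.
        for u in generate_units({1: [101, 102, 103, 104], 2: [201, 202]}, 3):
            print(u)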

    Detection of Potential Transit Signals in Sixteen Quarters of Kepler Mission Data

    Full text link
    We present the results of a search for potential transit signals in four years of photometry data acquired by the Kepler Mission. The targets of the search include 111,800 stars that were observed for the entire interval and 85,522 stars that were observed for a subset of the interval. We found that 9,743 targets contained at least one signal consistent with the signature of a transiting or eclipsing object, where the criteria for detection are periodicity of the detected transits, an adequate signal-to-noise ratio, and acceptance by a number of tests that reject false-positive detections. When targets that had already produced a signal were searched again, an additional 6,542 signals were detected on 3,223 target stars, for a total of 16,285 potential detections. Comparison of the detected signals with a set of known and vetted transit events in the Kepler field of view shows a recovery rate of 96.9%. The ensemble properties of the detected signals are reviewed.

    Overview of the Kepler Science Processing Pipeline

    Full text link
    The Kepler Mission Science Operations Center (SOC) performs several critical functions, including managing the ~156,000 target stars, associated target tables, science data compression tables and parameters, as well as processing the raw photometric data downlinked from the spacecraft each month. The raw data are first calibrated at the pixel level to correct for bias, smear induced by a shutterless readout, and other detector and electronic effects. A background sky flux is estimated from ~4500 pixels on each of the 84 CCD readout channels, and simple aperture photometry is performed on an optimal aperture for each star. Ancillary engineering data and diagnostic information extracted from the science data are used to remove systematic errors in the flux time series that are correlated with these data, prior to searching for signatures of transiting planets with a wavelet-based, adaptive matched filter. Stars with signatures exceeding 7.1 sigma are subjected to a suite of statistical tests, including an examination of each star's centroid motion, to reject false positives caused by background eclipsing binaries. Physical parameters for each planetary candidate are fitted to the transit signature, and signatures of additional transiting planets are sought in the residual light curve. The pipeline is operational, finding planetary signatures and providing robust rejection of false positives.
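
    A toy version of the simple-aperture-photometry step described above (the aperture, background estimate, and all numbers are illustrative; Kepler's optimal apertures and background fits are more sophisticated):

        import numpy as np

        def aperture_photometry(image, aperture_mask, background_level):
            """Sum background-subtracted flux over an aperture mask."""
            return np.sum(image[aperture_mask] - background_level)

        # Toy frame: a star sitting on a flat sky background.
        rng = np.random.default_rng(1)
        image = rng.normal(100.0, 1.0, (10, 10))  # sky ~100 counts per pixel
        image[4:6, 4:6] += 500.0                  # star flux in a 2x2 core
        mask = np.zeros((10, 10), dtype=bool)
        mask[3:7, 3:7] = True                     # aperture around the star
        sky = np.median(image[~mask])             # crude background estimate
        print(f"flux = {aperture_photometry(image, mask, sky):.0f} counts")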

    Detection of Potential Transit Signals in the First Three Quarters of Kepler Mission Data

    Full text link
    We present the results of a search for potential transit signals in the first three quarters of photometry data acquired by the Kepler Mission. The targets of the search include 151,722 stars that were observed over the full interval and an additional 19,132 stars that were observed for only 1 or 2 quarters. From this set of targets we find a total of 5,392 detections which meet the Kepler detection criteria: periodicity of the signal, an acceptable signal-to-noise ratio, and a composition test that rejects spurious detections containing non-physical combinations of events. The detected signals are dominated by events with relatively low signal-to-noise ratio and by events with relatively short periods. The distribution of estimated transit depths appears to peak in the range between 40 and 100 parts per million, with a few detections down to fewer than 10 parts per million. The detected signals are compared to a set of known transit events in the Kepler field of view, derived by a different method using a longer data interval; the comparison shows that the current search correctly identified 88.1% of the known events. A tabulation of the detected transit signals, examples that illustrate the analysis and detection process, a discussion of future plans, and open, potentially fruitful areas of further research are included.
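
    For a sense of scale (an aside, not from the paper): ignoring limb darkening, a transit depth approximates the planet-to-star area ratio, depth ≈ (Rp/R*)², and an Earth-Sun transit is roughly 84 ppm, so the reported 40-100 ppm peak corresponds to roughly Earth-sized planets around Sun-like stars:

        import math

        def planet_radius_ratio(depth_ppm):
            """Approximate Rp/R* from transit depth, ignoring limb darkening."""
            return math.sqrt(depth_ppm * 1e-6)

        # The reported peak range of 40-100 ppm, bracketed by the 10 ppm
        # floor and the ~84 ppm Earth-Sun benchmark.
        for ppm in (10, 40, 84, 100):
            print(f"{ppm:4d} ppm -> Rp/R* = {planet_radius_ratio(ppm):.4f}")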

    Kepler Data Release 4 Notes

    Get PDF
    The Data Analysis Working Group has released long- and short-cadence materials, including full-frame images (FFIs) and dropped targets, to the public. The Kepler Science Office considers Data Release 4 to provide "browse quality" data. These notes have been prepared to give Kepler users of the Multimission Archive at STScI (MAST) a summary of how the data were collected and prepared, and how well the data processing pipeline is functioning on flight data. They will be updated for each release of data to the public archive and placed on MAST along with other Kepler documentation, at http://archive.stsci.edu/kepler/documents.html. This release is meant to give users the opportunity to examine the data for possibly interesting science and to involve the users in improving the pipeline for future data releases. To perform the latter service, users are encouraged to notice and document artifacts, either in the raw or processed data, and report them to the Science Office.

    A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline

    Get PDF
    The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three-and-a-half-year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCDs), each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, so downstream data products need access to the calibrated-pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of the POU framework and SVD compression gives downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels, traceable to pixel-level measurement uncertainties, without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and their implementation in the Kepler SOC pipeline.
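
    The core of the framework can be illustrated with generic linear error propagation (a sketch of the general technique under our own toy assumptions, not the Kepler implementation): for a linear calibration step y = Kx, the covariance of any subset of calibrated pixels follows from just the corresponding rows of K, so the full covariance matrix never needs to be formed:

        import numpy as np

        def subset_covariance(K, Cx, idx):
            """Covariance of a subset of calibrated pixels y = K @ x,
            built from only the kernel rows for those pixels, so the
            full n x n calibrated covariance is never materialized."""
            Ks = K[idx]                # rows for the requested pixels
            return Ks @ Cx @ Ks.T

        rng = np.random.default_rng(2)
        n = 1000
        Cx = np.diag(rng.uniform(0.5, 1.5, n))           # raw pixel variances
        K = np.eye(n) + 0.01 * rng.normal(size=(n, n))   # toy calibration kernel
        # In the real pipeline the stored kernels and uncertainty data are
        # additionally compressed with a truncated SVD; K is kept dense here.
        print(subset_covariance(K, Cx, np.array([3, 7, 42])))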