Recertification of the air and methane storage vessels at the Langley 8-foot high-temperature structures tunnel
Langley Research Center operates a number of sophisticated wind tunnels to fulfill the needs of its researchers. Compressed air, stored in steel vessels, is used to power many of these tunnels. Some of these vessels have been in use for many years, and Langley is currently recertifying them to ensure their continued structural integrity. One of the first facilities to be recertified under this program was the Langley 8-foot high-temperature structures tunnel. This recertification involved (1) modification, hydrotesting, and inspection of the vessels; (2) repair of all relevant defects; (3) comparison of the original design of the vessels with the current design criteria of Section VIII, Division 2, of the 1974 ASME Boiler and Pressure Vessel Code; (4) fracture-mechanics, thermal, and wind-induced vibration analyses of the vessels; and (5) development of operating envelopes and a future inspection plan for the vessels. Following these modifications, analyses, and tests, the vessels were recertified for operation at full design pressure (41.4 MPa (6000 psi)) within the operating envelope developed.
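The fracture-mechanics analysis in item (4) rests on the standard relation K_I = Y σ √(πa) between stress intensity, applied stress, and flaw depth; setting K_I equal to the material's fracture toughness K_IC gives the critical flaw size. A minimal sketch follows, assuming a generic surface-flaw geometry factor and illustrative material numbers rather than values from the Langley assessment:

```python
import math

def critical_crack_depth(k_ic, stress, y=1.12):
    """Critical crack depth a_c (m) from K_IC = Y * sigma * sqrt(pi * a_c).

    k_ic   -- fracture toughness in MPa*sqrt(m)
    stress -- applied hoop stress in MPa
    y      -- geometry factor; 1.12 is a common surface-flaw value (assumed)
    """
    return (k_ic / (y * stress)) ** 2 / math.pi

# Illustrative numbers only, not the vessels' actual properties:
# a pressure-vessel steel with K_IC ~ 100 MPa*sqrt(m) under 300 MPa hoop stress.
a_c = critical_crack_depth(k_ic=100.0, stress=300.0)
print(f"critical flaw depth ~ {a_c * 1e3:.1f} mm")
```

Flaws found in inspection that are well below such a critical depth can then be tracked against a future inspection plan rather than repaired immediately.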
Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction
With the unprecedented photometric precision of the Kepler Spacecraft,
significant systematic and stochastic errors on transit signal levels are
observable in the Kepler photometric data. These errors, which include
discontinuities, outliers, systematic trends and other instrumental signatures,
obscure astrophysical signals. The Presearch Data Conditioning (PDC) module of
the Kepler data analysis pipeline tries to remove these errors while preserving
planet transits and other astrophysically interesting signals. The completely
new noise and stellar variability regime observed in Kepler data poses a
significant problem for standard cotrending methods such as SYSREM and TFA.
Variable stars are often of particular astrophysical interest, so the
preservation of their signals is of significant importance to the astrophysical
community. We present a Bayesian Maximum A Posteriori (MAP) approach where a
subset of highly correlated and quiet stars is used to generate a cotrending
basis vector set which is in turn used to establish a range of "reasonable"
robust fit parameters. These robust fit parameters are then used to generate a
Bayesian Prior and a Bayesian Posterior Probability Distribution Function (PDF)
which, when maximized, finds the best fit that simultaneously removes systematic
effects while reducing the signal distortion and noise injection that commonly
afflict simple least-squares (LS) fitting. A numerical and empirical approach
is taken in which the Bayesian Prior PDFs are generated from fits to the light
curve distributions themselves.

Comment: 43 pages, 21 figures. Submitted for publication in PASP. Also see the companion paper "Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves" by Martin C. Stumpe et al.
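Under Gaussian assumptions the MAP step has a closed form: the likelihood term from the target light curve and the prior term from the quiet-star ensemble combine into a regularized least-squares problem. The sketch below shows that reduction with synthetic basis vectors; it is a simplification, since the actual PDC builds its prior PDFs empirically rather than assuming Gaussians:

```python
import numpy as np

def map_cotrend(flux, basis, prior_mean, prior_sigma, noise_sigma):
    """MAP fit of cotrending basis vectors to one light curve.

    With a Gaussian likelihood (noise_sigma) and a Gaussian prior on the
    coefficients (prior_mean/prior_sigma, taken from robust fits to quiet,
    highly correlated stars), maximizing the posterior is equivalent to
    ridge-style regularized least squares pulled toward the prior mean.
    """
    A = np.asarray(basis).T                       # (n_cadences, n_vectors)
    Lam = np.diag(noise_sigma**2 / np.asarray(prior_sigma)**2)
    coeffs = np.linalg.solve(A.T @ A + Lam,
                             A.T @ flux + Lam @ prior_mean)
    return flux - A @ coeffs, coeffs              # corrected flux, coefficients

# Toy usage with two synthetic basis vectors.
rng = np.random.default_rng(0)
n = 1000
basis = np.vstack([np.linspace(-1.0, 1.0, n), np.sin(np.linspace(0.0, 20.0, n))])
flux = 0.5 * basis[0] + 0.1 * basis[1] + 0.01 * rng.standard_normal(n)
corrected, c = map_cotrend(flux, basis,
                           prior_mean=np.array([0.4, 0.1]),
                           prior_sigma=np.array([0.2, 0.2]),
                           noise_sigma=0.01)
```

Tightening prior_sigma pulls the fit toward the ensemble behavior, which is what protects intrinsic stellar variability from being fit out as if it were a systematic.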
Lessons learned from the introduction of autonomous monitoring to the EUVE science operations center
The University of California at Berkeley's (UCB) Center for Extreme Ultraviolet Astrophysics (CEA), in conjunction with NASA's Ames Research Center (ARC), has implemented an autonomous monitoring system in the Extreme Ultraviolet Explorer (EUVE) science operations center (ESOC). The implementation was driven by a need to reduce operations costs and has allowed the ESOC to move from continuous, three-shift, human-tended monitoring of the science payload to a one-shift operation in which the off shifts are monitored by an autonomous anomaly detection system. This system includes Eworks, an artificial intelligence (AI) payload telemetry monitoring package based on RTworks, and Epage, an automatic paging system that notifies ESOC personnel of detected anomalies. In this age of shrinking NASA budgets, the lessons learned on the EUVE project are useful to other NASA missions looking for ways to reduce their operations budgets. The process of capturing knowledge from the payload controllers for implementation in an expert system is directly applicable to any mission considering a transition to autonomous monitoring in its control center. The collaboration with ARC demonstrates how a project with limited programming resources can expand the breadth of its goals without incurring the high cost of hiring additional, dedicated programmers. This dispersal of expertise across NASA centers allows future missions to easily access experts for collaborative efforts of their own. Even the criterion used to choose an expert system has widespread impacts on the implementation, including the completion time and the final cost. In this paper we discuss, from inception to completion, the areas where our experiences in moving from three shifts to one shift may offer insights for other NASA missions.
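The internals of Eworks and Epage are not spelled out in the abstract, so the following is a hypothetical sketch of the basic pattern such a system implements: red-line limit checks on incoming telemetry frames, with a page issued when a channel goes out of bounds. The channel names and limits are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical limit table; the actual Eworks rule base is not shown here.
LIMITS = {"detector_hv": (0.0, 5.2), "tank_temp_c": (-10.0, 45.0)}

@dataclass
class Anomaly:
    channel: str
    value: float
    low: float
    high: float

def check_frame(frame):
    """Flag any telemetry channel outside its red-line limits."""
    out = []
    for chan, (lo, hi) in LIMITS.items():
        v = frame.get(chan)
        if v is not None and not (lo <= v <= hi):
            out.append(Anomaly(chan, v, lo, hi))
    return out

def page_on_anomalies(frame):
    # Epage-style notification, reduced here to a print stub.
    for a in check_frame(frame):
        print(f"PAGE: {a.channel}={a.value} outside [{a.low}, {a.high}]")

page_on_anomalies({"detector_hv": 5.6, "tank_temp_c": 20.0})
```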
Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves
Kepler provides light curves of 156,000 stars with unprecedented precision.
However, the raw data as they come from the spacecraft contain significant
systematic and stochastic errors. These errors, which include discontinuities,
systematic trends, and outliers, obscure the astrophysical signals in the light
curves. To correct these errors is the task of the Presearch Data Conditioning
(PDC) module of the Kepler data analysis pipeline. The original version of PDC
in Kepler did not meet the extremely high performance requirements for the
detection of minuscule planet transits or highly accurate analysis of stellar
activity and rotation. One particular deficiency was that astrophysical
features were often removed as a side effect of error removal. In this
paper we introduce the completely new and significantly improved version of PDC,
which was implemented in Kepler SOC 8.0. This new PDC version, which utilizes a
Bayesian approach for removal of systematics, reliably corrects errors in the
light curves while at the same time preserving planet transits and other
astrophysically interesting signals. We describe the architecture and the
algorithms of this new PDC module, show typical errors encountered in Kepler
data, and illustrate the corrections using real light curve examples.

Comment: Submitted to PASP. Also see the companion paper "Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction" by Jeff C. Smith et al.
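Of the error classes listed above, outliers and discontinuities are the most mechanical to illustrate. The sketch below flags outliers against a running median and tests for a step discontinuity at a given cadence; it is a toy stand-in for PDC's far more elaborate detection logic:

```python
import numpy as np

def flag_outliers(flux, window=25, nsigma=5.0):
    """Flag isolated outliers against a running median.

    Uses a robust (MAD-based) noise estimate so the outliers themselves
    do not inflate the threshold.
    """
    flux = np.asarray(flux, dtype=float)
    pad = window // 2
    padded = np.pad(flux, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(flux.size)])
    resid = flux - med
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > nsigma * sigma

def detect_step(flux, i, half=50):
    """Crude discontinuity test: median shift across cadence i.

    Assumes half <= i <= len(flux) - half.
    """
    flux = np.asarray(flux, dtype=float)
    return np.median(flux[i:i + half]) - np.median(flux[i - half:i])
```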
Verification of the Kepler Input Catalog from Asteroseismology of Solar-type Stars
We calculate precise stellar radii and surface gravities from the
asteroseismic analysis of over 500 solar-type pulsating stars observed by the
Kepler space telescope. These physical stellar properties are compared with
those given in the Kepler Input Catalog (KIC), determined from ground-based
multi-color photometry. For the stars in our sample, we find general agreement
but we detect an average overestimation bias of 0.23 dex in the KIC
determination of log(g) for stars with log(g)_KIC > 4.0 dex, and a resultant
underestimation bias of up to 50% in the KIC radii estimates for stars with
R_KIC < 2 R_sun. Part of the difference may arise from selection bias in the
asteroseismic sample; nevertheless, this result implies there may be fewer
stars characterized in the KIC with R ~ 1 R_sun than is suggested by the
physical properties in the KIC. Furthermore, if the radius estimates are taken
from the KIC for these affected stars and then used to calculate the size of
transiting planets, a similar underestimation bias may be applied to the
planetary radii.

Comment: Published in The Astrophysical Journal Letters.
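Asteroseismic radii and gravities of this kind are commonly derived from the standard scaling relations in ν_max (frequency of maximum power) and Δν (large frequency separation). A minimal sketch, assuming one common set of solar reference values (the paper's exact calibration may differ):

```python
import math

# Solar reference values (a common calibration; assumed for illustration).
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K
LOGG_SUN = 4.438      # dex (cgs)

def scaling_radius(nu_max, dnu, teff):
    """R/R_sun from the standard scaling relations:
    R/R_sun = (nu_max/nu_max_sun) * (dnu/dnu_sun)^-2 * (Teff/Teff_sun)^(1/2)."""
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * math.sqrt(teff / TEFF_SUN)

def scaling_logg(nu_max, teff):
    """log g (cgs), since nu_max scales as g / sqrt(Teff)."""
    return LOGG_SUN + math.log10((nu_max / NU_MAX_SUN) * math.sqrt(teff / TEFF_SUN))

# Example: a Sun-like pulsator.
print(scaling_radius(3100.0, 135.5, 5800.0), scaling_logg(3100.0, 5800.0))
```

Because log(g) follows from ν_max and T_eff alone, it is far less sensitive to the photometric calibration than the KIC values it is being compared against.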
Detection of Potential Transit Signals in the First Three Quarters of Kepler Mission Data
We present the results of a search for potential transit signals in the first
three quarters of photometry data acquired by the Kepler Mission. The targets
of the search include 151,722 stars which were observed over the full interval
and an additional 19,132 stars which were observed for only 1 or 2 quarters.
From this set of targets we find a total of 5,392 detections that meet the
Kepler detection criteria: periodicity of the signal, an acceptable
signal-to-noise ratio, and a composition test that rejects spurious detections
containing non-physical combinations of events. The detected
signals are dominated by events with relatively low signal-to-noise ratio and
by events with relatively short periods. The distribution of estimated transit
depths appears to peak in the range between 40 and 100 parts per million, with
a few detections down to fewer than 10 parts per million. The detected signals
are compared to a set of known transit events in the Kepler field of view which
were derived by a different method using a longer data interval; the comparison
shows that the current search correctly identified 88.1% of the known events. A
tabulation of the detected transit signals, examples illustrating the analysis
and detection process, and a discussion of future plans and open, potentially
fruitful areas of further research are included.
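The periodicity and signal-to-noise criteria can be made concrete with a phase-folding sketch: fold the light curve at a trial ephemeris and compare in-transit flux to the out-of-transit mean. This is a deliberately simplified stand-in for Kepler's wavelet-based matched filter, with invented toy data:

```python
import numpy as np

def fold_snr(time, flux, period, t0, duration):
    """Detection SNR for a box-shaped transit at a trial ephemeris."""
    phase = ((time - t0 + 0.5 * period) % period) - 0.5 * period
    in_tr = np.abs(phase) < 0.5 * duration
    if in_tr.sum() < 3:
        return 0.0
    depth = flux[~in_tr].mean() - flux[in_tr].mean()
    noise = flux[~in_tr].std(ddof=1) / np.sqrt(in_tr.sum())
    return depth / noise

# Toy usage: 90 days of cadences with a 100 ppm transit every 10 days.
rng = np.random.default_rng(1)
t = np.arange(0.0, 90.0, 0.02)
f = 1.0 + 50e-6 * rng.standard_normal(t.size)
f[(t % 10.0) < 0.2] -= 100e-6
print(fold_snr(t, f, period=10.0, t0=0.1, duration=0.2))
```

Even this crude estimator recovers a 100 ppm signal at high SNR once many transits are folded together, which is why the detected population is dominated by short periods: more transits fit in the same data span.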
The Kepler Science Operations Center Pipeline Framework Extensions
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit-of-work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process those data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
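The unit-of-work idea can be mirrored in a few lines. The sketch below is a hypothetical Python rendering (the real framework is Java, with MATLAB algorithm code): it partitions targets by CCD module/output and then into fixed-size chunks, so each unit can be serialized to one self-contained file and shipped to a cluster node. The names and the partitioning scheme are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Dict, Iterator, List, Tuple

@dataclass(frozen=True)
class UnitOfWork:
    module_output: int           # CCD module/output the targets fall on
    target_ids: Tuple[int, ...]  # targets processed together

def generate_units(targets_by_mod_out: Dict[int, List[int]],
                   chunk: int = 2000) -> Iterator[UnitOfWork]:
    """Partition targets by module/output, then into fixed-size chunks,
    so each unit is independent of every other unit."""
    for mod_out, ids in sorted(targets_by_mod_out.items()):
        for i in range(0, len(ids), chunk):
            yield UnitOfWork(mod_out, tuple(ids[i:i + chunk]))

# Toy usage: two module/outputs with unequal target counts.
units = list(generate_units({3: list(range(4500)), 7: list(range(1200))}))
print(len(units), units[0].module_output, len(units[0].target_ids))
```

Making each unit self-contained is what allows the same file to be replayed offline when debugging or evolving the MATLAB algorithms.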