
    Investigation of a geodesy coexperiment to the Gravity Probe B relativity gyroscope program

    Geodesy is the science of measuring the gravitational field of, and positions on, the Earth. Estimation of the gravitational field via gravitation gradiometry, the measurement of variations in the direction and magnitude of gravitation with respect to position, is this dissertation's focus. Gravity Probe B (GP-B) is a Stanford satellite experiment in gravitational physics. GP-B will measure the precession that the rotating Earth induces in the space-time around it by observing the precessions of four gyroscopes in a circular, polar, drag-free orbit at 650 km altitude. The gyroscopes are nearly perfect niobium-coated quartz spheres, operated at 1.8 K to permit observations with extremely low thermal noise. The permissible gyroscope drift rate is minuscule, so the torques on the gyros must be tiny. A drag-free control system, by canceling accelerations caused by nongravitational forces, minimizes the support forces and hence the torques. The GP-B system offers two main possibilities for geodesy. One is as a drag-free satellite to be used in trajectory-based estimates of the Earth's gravity field. We described calculations involving that approach in our previous reports, including a comparison of laser-only, GPS-only, and combined tracking, and a preliminary assessment of the possibility of estimating relativistic effects on the orbit. The second possibility is gradiometry. This technique received a more cursory examination in previous reports, so we concentrate on it here. We explore the feasibility of using the residual suspension forces centering the GP-B gyros as gradiometer signals for geodesy. The objective of this work is a statistical prediction of the formal uncertainty in an estimate of the Earth's gravitational field using data from GP-B. We perform an instrument analysis and apply two mathematical techniques to predict uncertainty. One is an analytical approach using a flat-Earth approximation to predict geopotential information quality as a function of spatial wavelength. The second estimates the covariance matrix arising in a least-squares estimate of a spherical harmonic representation of the geopotential using GP-B gradiometer data. The results show that the GP-B data set can be used to create a consistent estimate of the geopotential up to spherical harmonic degree and order 60. The formal uncertainty of all coefficients between degrees 5 and 50 is reduced by factors of up to 30 over current satellite-only estimates and up to 7 over estimates which include surface data. The primary conclusion of this study is that the gravitation gradiometer geodesy coexperiment to GP-B is both feasible and attractive.
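    The second technique above amounts to propagating assumed measurement noise through a least-squares normal matrix and reading formal coefficient uncertainties off its inverse. The sketch below illustrates that step for zonal coefficients only; the sample count, truncation degree, and noise level are illustrative, not GP-B values.

```python
# Minimal sketch: formal uncertainty of zonal geopotential coefficients from a
# least-squares fit to simulated along-track gradiometer samples.  Degrees,
# sampling, and noise level are illustrative, not GP-B values.
import numpy as np

n_obs = 20_000                 # samples along a circular polar orbit
max_degree = 60                # truncation of the zonal expansion
sigma_grad = 1e-2              # assumed white measurement noise (arbitrary units)

# Colatitude sampled uniformly along a polar ground track.
colat = np.linspace(0.0, np.pi, n_obs)
x = np.cos(colat)

# Design matrix: Legendre polynomials P_0..P_L evaluated along the track.
A = np.polynomial.legendre.legvander(x, max_degree)

# Formal covariance of the least-squares estimate with white noise:
#   cov = sigma^2 * (A^T A)^-1
normal = A.T @ A
cov = sigma_grad**2 * np.linalg.inv(normal)
formal_sigma = np.sqrt(np.diag(cov))

for n in (5, 20, 50, 60):
    print(f"degree {n:2d}: formal sigma = {formal_sigma[n]:.3e}")
```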

    Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 2: 8.4-GHz performance and data-weighting strategies

    A consider error covariance analysis was performed in order to investigate the orbit-determination performance attainable using two-way (coherent) 8.4-GHz (X-band) Doppler data for two segments of the planned Mars Observer trajectory. The analysis includes the effects of the current level of calibration errors in tropospheric delay, ionospheric delay, and station locations, with particular emphasis placed on assessing the performance of several candidate elevation-dependent data-weighting functions. One weighting function was found that yields good performance for a variety of tracking geometries. This weighting function is simple and robust; it reduces the danger of error that might exist if an analyst had to select among several different weighting functions that are highly sensitive to the exact choice of parameters and to the tracking geometry. Orbit-determination accuracy improvements that may be obtained through the use of calibration data derived from Global Positioning System (GPS) satellites were also investigated, and can be as much as a factor of three in some components of the spacecraft state vector. Assuming that both station-location errors and troposphere calibration errors are reduced simultaneously, the recommended data-weighting function need not be changed when GPS calibrations are incorporated in the orbit-determination process.
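    The idea behind elevation-dependent weighting is that low-elevation Doppler points see a longer tropospheric path and therefore deserve less weight in the fit. The sketch below shows one generic form such a weighting function can take; the coefficients are placeholders, not the function recommended in this report.

```python
# Sketch of a generic elevation-dependent Doppler weighting function.  The
# functional form (a noise floor plus a term growing with tropospheric path
# length at low elevation) is generic; the coefficients are placeholders.
import numpy as np

def doppler_sigma(elev_deg, sigma_floor=0.1, tropo_term=0.05):
    """Assumed 1-sigma Doppler noise (mm/s) as a function of elevation angle."""
    elev = np.radians(elev_deg)
    return np.sqrt(sigma_floor**2 + (tropo_term / np.sin(elev))**2)

def doppler_weight(elev_deg):
    """Least-squares weight = 1 / sigma^2 for a single Doppler point."""
    return 1.0 / doppler_sigma(elev_deg)**2

for e in (10, 20, 45, 90):
    print(f"elevation {e:2d} deg: sigma = {doppler_sigma(e):.3f}, weight = {doppler_weight(e):.2f}")
```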

    Archiving multi-epoch data and the discovery of variables in the near infrared

    We present a description of the design and usage of a new synoptic pipeline and database model for time series photometry in the VISTA Data Flow System (VDFS). All UKIRT-WFCAM data and most of the VISTA main survey data will be processed and archived by the VDFS. Many of these data are multi-epoch, useful for finding moving and variable objects. Our new database design allows users to easily find rare objects of these types amongst the huge volume of data produced by modern survey telescopes. Its effectiveness is demonstrated through examples using Data Release 5 of the UKIDSS Deep Extragalactic Survey (DXS) and the WFCAM standard star data. The synoptic pipeline provides additional quality control and calibration to these data in the process of generating accurate light-curves. We find that 0.6+-0.1% of stars and 2.3+-0.6% of galaxies in the UKIDSS-DXS with K<15 mag are variable with amplitudes \Delta K > 0.015 mag.
    Comment: 30 pages, 31 figures; MNRAS, in press. Minor changes from previous version due to refereeing and proof-reading.
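    One common way to flag variables in multi-epoch photometry is a chi-squared test of each light-curve against a constant-magnitude model. The sketch below illustrates that idea; the probability threshold and the use of scipy are illustrative choices, not the VDFS selection criteria.

```python
# Sketch of a simple variability test for multi-epoch photometry: compare the
# scatter of a light-curve against its quoted photometric errors.  Threshold
# and data layout are illustrative, not the VDFS criteria.
import numpy as np
from scipy import stats

def is_variable(mags, mag_errs, p_threshold=0.001):
    """Flag a source as variable if its chi-squared about the weighted mean
    magnitude is improbably large for a non-variable source."""
    mags = np.asarray(mags, dtype=float)
    mag_errs = np.asarray(mag_errs, dtype=float)
    mean_mag = np.average(mags, weights=1.0 / mag_errs**2)
    chi2 = np.sum(((mags - mean_mag) / mag_errs) ** 2)
    dof = mags.size - 1
    p_value = stats.chi2.sf(chi2, dof)      # chance of chi2 this large for a constant source
    amplitude = mags.max() - mags.min()
    return p_value < p_threshold, amplitude

flag, amp = is_variable([15.01, 15.05, 14.98, 15.12, 15.00],
                        [0.01, 0.01, 0.01, 0.01, 0.01])
print(f"variable: {flag}, peak-to-peak amplitude: {amp:.3f} mag")
```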

    The Bolocam Galactic Plane Survey: Survey Description and Data Reduction

    We present the Bolocam Galactic Plane Survey (BGPS), a 1.1 mm continuum survey at 33" effective resolution of 170 square degrees of the Galactic Plane visible from the northern hemisphere. The survey is contiguous over the range -10.5 < l < 90.5, |b| < 0.5 and encompasses 133 square degrees, including some extended regions with |b| < 1.5. In addition to the contiguous region, four targeted regions in the outer Galaxy were observed: IC1396, a region towards the Perseus Arm, W3/4/5, and Gem OB1. The BGPS has detected approximately 8400 clumps over the entire area to a limiting non-uniform 1-sigma noise level in the range 11 to 53 mJy/beam in the inner Galaxy. The BGPS source catalog is presented in a companion paper (Rosolowsky et al. 2010). This paper details the survey observations and data reduction methods for the images. We discuss in detail the determination of astrometric and flux density calibration uncertainties and compare our results to the literature. Data processing algorithms that separate astronomical signals from time-variable atmospheric fluctuations in the data time-stream are presented. These algorithms reproduce the structure of the astronomical sky over a limited range of angular scales and produce artifacts in the vicinity of bright sources. Based on simulations, we find that extended emission on scales larger than about 5.9' is nearly completely attenuated (> 90%) and that the linear scale at which the attenuation reaches 50% is 3.8'. Comparison with other millimeter-wave data sets implies a possible systematic offset in flux calibration, for which no cause has been discovered. This presentation serves as a companion and guide to the public data release through NASA's Infrared Processing and Analysis Center (IPAC) Infrared Science Archive (IRSA). New data releases will be provided through IPAC IRSA with any future improvements in the reduction.
    Comment: Accepted for publication in the Astrophysical Journal Supplement.
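    The attenuation of extended emission can be illustrated with a toy version of common-mode cleaning: subtracting the per-sample average over detectors removes the atmospheric drift shared by all bolometers, but also removes any astronomical signal extended across the whole array. The numbers below are illustrative, not the Bolocam pipeline's.

```python
# Toy illustration of common-mode (atmospheric) subtraction from bolometer
# time-streams.  A compact source seen by one detector survives cleaning; a
# signal shared by every detector is removed along with the atmosphere.
import numpy as np

rng = np.random.default_rng(1)
n_det, n_samp = 100, 2000

# Slow correlated drift (the "atmosphere") seen identically by every detector.
atmosphere = np.cumsum(rng.normal(0.0, 1.0, n_samp))

# A compact source in a single detector, and an extended signal seen by all.
point_source = np.zeros((n_det, n_samp))
point_source[0, 1000] = 50.0
extended = np.zeros((n_det, n_samp))
extended[:, 900:1100] = 5.0

noise = rng.normal(0.0, 1.0, (n_det, n_samp))
timestreams = atmosphere + point_source + extended + noise

# Common-mode subtraction: remove the mean over detectors at each time sample.
cleaned = timestreams - timestreams.mean(axis=0)

# The compact source is nearly untouched; the array-wide signal is strongly
# attenuated, illustrating the loss of extended emission described above.
print("point source after cleaning:", round(float(cleaned[0, 1000]), 1))
print("extended signal after cleaning:", round(float(cleaned[50, 900:1100].mean()), 1))
```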

    Searching for Variable Stars Using the Large Array Survey Telescope (LAST)

    This paper introduces a novel variability report generator developed for the Large Array Survey Telescope (LAST), a cost-effective multi-purpose telescope array conducting a wide survey of the variable sky in the visible-light spectrum. Designed to automate variability detection, the report generator identifies candidate variable stars by employing adjustable thresholds to detect periodic and non-periodic variables. The program outputs a visual and tabular photometric report for each candidate variable source from a given LAST sub-image. Functioning as a whitepaper, this document also provides a concise overview of LAST, discussing its design, data workflow, and variability search performance.
    Comment: 21 pages, 9 figures. Proceedings from undergraduate research conducted at the Weizmann Institute of Science under the 2023 Kupcinet-Getz International Summer School.
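    A periodic-variable threshold of the kind described might, for instance, be built on a Lomb-Scargle periodogram with a false-alarm-probability cut. The sketch below is an illustrative stand-in, not the LAST report generator itself; the threshold value and the use of astropy are assumptions.

```python
# Sketch of flagging a candidate periodic variable with a Lomb-Scargle
# periodogram and an adjustable false-alarm threshold.  Values are illustrative.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 30, 300))                 # observation times (days)
true_period = 1.7
mag = 15.0 + 0.1 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

ls = LombScargle(t, mag, dy=0.02)
frequency, power = ls.autopower()
best = power.argmax()

# Adjustable threshold: accept the peak only if its false-alarm probability is tiny.
fap = ls.false_alarm_probability(power[best])
if fap < 1e-3:
    print(f"candidate periodic variable, period ~ {1/frequency[best]:.2f} d (FAP {fap:.1e})")
else:
    print("no significant periodicity")
```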

    Leveraging External Sensor Data for Enhanced Space Situational Awareness

    Reliable Space Situational Awareness (SSA) is a recognized requirement in the current congested, contested, and competitive environment of space operations. A shortage of available sensors and reliable data sources is among the current limiting factors for maintaining SSA. Unfortunately, cost constraints prohibit drastically increasing the sensor inventory. Alternative methods are sought to enhance current SSA, including utilizing non-traditional data sources (external sensors) to perform basic SSA catalog maintenance functions. Astronomical observations, for example, routinely capture serendipitous satellite streaks in the course of observing deep space, but tactics, techniques, and procedures designed to glean useful information from those collects have yet to be rigorously developed. This work examines the feasibility and utility of performing ephemeris positional updates for a Resident Space Object (RSO) catalog using metric data obtained from RSO streaks gathered by astronomical telescopes. The focus of this work is on processing data from three possible streak categories: streaks that only enter, only exit, or cross completely through the astronomical image. Successful use of these data will aid in resolving uncorrelated tracks, space object identification, and threat detection. Incorporation of external data sources will also reduce the number of routine collects required by existing SSA sensors, freeing them up for more demanding tasks. The results demonstrate that accurate orbital reconstruction can be performed using an RSO streak in a distorted image without applying calibration frames, and that partially bound streaks provide results similar to traditional data, with a mean degradation of 6.2% in right ascension and 42.69% in declination. The methodology developed can also be applied to dedicated SSA sensors to extract data from serendipitous streaks gathered while observing other RSOs.
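    The metric data extracted from a streak are essentially time-tagged angular positions. As a hypothetical illustration, the sketch below converts a streak's pixel endpoints into right ascension and declination using an image's WCS solution; the function name, timing assumptions, and use of astropy are not taken from this work.

```python
# Sketch: convert the endpoints of a satellite streak (pixel coordinates plus
# exposure start/end times) into time-tagged RA/Dec observations using the
# image's WCS solution.  Names, WCS values, and timing are illustrative.
import numpy as np
from astropy.wcs import WCS

def streak_to_observations(wcs, x0, y0, x1, y1, t_start, t_end):
    """Return (time, ra_deg, dec_deg) tuples for the streak's two endpoints,
    assuming the object entered the exposure at t_start and left at t_end."""
    ra, dec = wcs.wcs_pix2world([x0, x1], [y0, y1], 0)
    return [(t_start, ra[0], dec[0]), (t_end, ra[1], dec[1])]

# Illustrative WCS: tangent-plane projection centred on (RA, Dec) = (150, 20) deg.
wcs = WCS(naxis=2)
wcs.wcs.crpix = [512, 512]
wcs.wcs.cdelt = [-0.0003, 0.0003]          # deg per pixel
wcs.wcs.crval = [150.0, 20.0]
wcs.wcs.ctype = ["RA---TAN", "DEC--TAN"]

for t, ra, dec in streak_to_observations(wcs, 100, 200, 900, 820, 0.0, 30.0):
    print(f"t = {t:5.1f} s  RA = {ra:9.5f}  Dec = {dec:9.5f}")
```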

    Detection of short Gamma-Ray Bursts with CTA through real-time analysis

    With respect to current IACTs, CTA will cover a larger energy range (~20 GeV - 300 TeV) with one order of magnitude better sensitivity. The facility will be provided with real-time analysis (RTA) software that will automatically generate science alerts and analyse data from on-going observations in real time. The RTA will play a key role in the search for and follow-up of transients from external alerts (e.g. from space-based gamma-ray missions, observatories operating in other energy bands, or targets of opportunity provided by neutrino and gravitational-wave detectors). The scope of this study was to investigate the feasibility of the ctools software package for the RTA, adopting a full-field-of-view maximum likelihood analysis method. A prototype for the RTA was developed, with natively implemented utilities where required. Its performance was extensively tested for very short exposure times (far below the lower limit of current Cherenkov science), accounting for the sensitivity degradation due to the non-optimal working conditions expected for the RTA. The latest IRFs, provided by CTA Performance, were degraded via effective area reduction for this purpose. The reliability of the analysis methods was tested by verifying Wilks' theorem. Through statistical studies of the pipeline parameter space (e.g. the minimum required exposure time), the performance was evaluated in terms of localization precision, detection significance, and detection rates at short timescales, using the latest available GRB afterglow templates for the source simulation. Future improvements involve further tests (e.g. with an updated population synthesis) as well as post-trials correction of the detection significance. Moreover, implementations allowing the pipeline to dynamically adapt to a range of science cases are required. Prospects for forthcoming collaboration may involve the integration of this pipeline within the on-going work of the gamma-ray burst experts of the CTA Consortium.
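    Verifying Wilks' theorem means checking that, for background-only data, the likelihood-ratio test statistic follows the expected chi-squared distribution. The toy sketch below does this with a simple Poisson likelihood with known background as a stand-in for the full ctools analysis; all numbers are illustrative.

```python
# Toy verification of Wilks' theorem: for background-only simulations the test
# statistic TS = 2 * (lnL_best - lnL_null) should follow a chi-squared
# distribution with one degree of freedom (halved, because the signal is
# constrained to be non-negative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_trials = 20_000
background = 50.0                               # expected background counts

ts = np.empty(n_trials)
for i in range(n_trials):
    n_on = rng.poisson(background)              # background-only realisation
    # Null: counts = known background.  Alternative: background + signal, signal >= 0.
    mu_hat = max(n_on, background)              # ML estimate with the signal >= 0 constraint
    lnl_null = stats.poisson.logpmf(n_on, background)
    lnl_best = stats.poisson.logpmf(n_on, mu_hat)
    ts[i] = 2.0 * (lnl_best - lnl_null)

# With the boundary at signal = 0, Wilks/Chernoff give P(TS > x) ~ 0.5 * chi2_1.sf(x).
for x in (1.0, 4.0, 9.0):
    empirical = (ts > x).mean()
    expected = 0.5 * stats.chi2.sf(x, df=1)
    print(f"TS > {x:3.1f}: empirical {empirical:.4f}  expected {expected:.4f}")
```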

    Adequate model complexity and data resolution for effective constraint of simulation models by 4D seismic data

    4D seismic data bear valuable spatial information about production-related changes in the reservoir. It is a challenging task, though, to make simulation models honour them. A strict spatial tie to seismic data requires adequate model complexity in order to assimilate the details of the seismic signature. On the other hand, not all the details in the seismic signal are critical, or even relevant, to the flow characteristics of the simulation model, so fitting them may compromise the predictive capability of the models. So, how complex should a model be to take advantage of the information in seismic data, and which details should be matched? This work aims to show how choices of parameterisation affect the efficiency of assimilating spatial information from the seismic data. It also demonstrates the level of detail at which the seismic signal carries useful information for the simulation model, in light of the limited detectability of events on the seismic map and of modelling errors. The problem of optimal model complexity is investigated in the context of choosing a model parameterisation which allows effective assimilation of the spatial information in the seismic map. In this study, a model parameterisation scheme based on deterministic objects derived from seismic interpretation creates bias in the model predictions, which results in a poor fit to the historic data. The key to rectifying the bias was found to be increasing the flexibility of the parameterisation, either by increasing the number of parameters or by using a scheme, such as pilot points in this case, that does not impose prior information incompatible with the data. Using history matching experiments with a combined dataset of production and seismic data, a level of match of the seismic maps is identified which results in an optimal constraint on the simulation models. Better constrained models were identified by the quality of their forecasts and the closeness of the pressure and saturation state to the truth case. The results indicate that a significant amount of the detail in the seismic maps does not contribute to a constructive constraint by the seismic data, for two reasons. First, smaller details are a specific response of the system that generated the observed data, and as such are not relevant to the flow characteristics of the model; second, the resolution of the seismic map itself is limited by the seismic bandwidth and noise. The results suggest that the notion of a good match for 4D seismic maps, commonly equated with a visually close match, is not universally applicable.
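    A history-matching objective combining production and seismic data is typically a weighted sum of the two misfit terms, with the weight controlling how strongly details of the seismic maps are honoured. The sketch below shows such an objective in its simplest form; the names, weights, and data are illustrative, not the scheme used in this study.

```python
# Minimal sketch of a combined history-matching objective: a weighted sum of
# production-data misfit and 4D seismic-map misfit.  Names, weights, and data
# are illustrative placeholders.
import numpy as np

def combined_misfit(sim_production, obs_production, sigma_production,
                    sim_seismic_map, obs_seismic_map, sigma_seismic,
                    seismic_weight=1.0):
    """Least-squares misfit of the production series plus an (optionally
    down-weighted) misfit of the 4D seismic attribute map."""
    prod_term = np.sum(((sim_production - obs_production) / sigma_production) ** 2)
    seis_term = np.sum(((sim_seismic_map - obs_seismic_map) / sigma_seismic) ** 2)
    return prod_term + seismic_weight * seis_term

rng = np.random.default_rng(4)
obs_rates = np.array([100.0, 95.0, 88.0, 80.0])          # observed production rates
sim_rates = obs_rates + rng.normal(0, 2.0, obs_rates.size)
obs_map = rng.normal(0, 1.0, (10, 10))                    # observed seismic attribute map
sim_map = obs_map + rng.normal(0, 0.5, obs_map.shape)

print("misfit:", combined_misfit(sim_rates, obs_rates, 2.0,
                                 sim_map, obs_map, 0.5, seismic_weight=0.3))
```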