
    Global optimization of data quality checks on 2‐D and 3‐D networks of GPR cross‐well tomographic data for automatic correction of unknown well deviations

    Significant errors related to poor time-zero estimation, well deviation, or mislocation of the transmitter (TX) and receiver (RX) stations can render even the most sophisticated modeling and inversion routine useless. Previous methods for the analysis and correction of data errors in geophysical tomography include the works of Maurer and Green (1997), Squires et al. (1992), and Peterson (2001). Here we follow the analysis and techniques of Peterson (2001) for data quality control and error correction. Through our data acquisition and quality control procedures we have very accurate control on the surface locations of wells, the travel distance of both the transmitter and receiver within the boreholes, and the change in apparent zero time. However, we often have poor control on well deviations, either because economic constraints or the nature of the borehole itself prevented the acquisition of well deviation logs; deviation logs, when available, can themselves contain significant errors. Problems with borehole deviations can be diagnosed prior to inversion of travel-time tomography data sets by plotting the apparent velocity of a straight ray connecting a transmitter (TX) to a receiver (RX) against the take-off angle of the ray. Issues with the time-zero pick or with the distances between wells appear as symmetric smiles or frowns in these QC plots, whereas well deviation or strongly dipping anisotropy results in an asymmetric correlation between apparent velocity and take-off angle (Figure 1-B). In addition, when a network of interconnected GPR tomography data is available, one has the additional quality constraint of ensuring continuity in velocity between immediately adjacent tomograms: a sudden shift in the mean velocity indicates that either position deviations are present or there is a shift in the pick times. Small errors in well geometry may be treated effectively during inversion by including weighting, or relaxation, parameters in the inversion (e.g., Bautu et al., 2006). In the algebraic reconstruction technique (ART), which is used herein for the travel-time inversion (Peterson et al., 1985), a small relaxation parameter will smooth imaging artifacts caused by data errors at the expense of resolution and contrast (Figure 2). However, large data errors such as unaccounted-for well deviations cannot be adequately suppressed through inversion weighting schemes. Previously, problems with tomograms were treated manually prior to inversion; in large data sets and/or networks of tomographic data sets, such trial-and-error changes to well geometries become increasingly difficult and ineffective. Our approach is to use cross-well data quality checks and a simplified model of borehole deviation with particle swarm optimization (PSO) to automatically correct source and receiver locations prior to tomographic inversion. We present a simple model of well deviation designed to minimize potential corruption of actual data trends. We also provide quantitative quality control measures based on minimizing correlations between take-off angle and apparent velocity, together with a quality check on the continuity of velocity between adjacent wells. This methodology is shown to be accurate and robust for simple 2-D synthetic test cases, and we demonstrate the method on field data, where it is compared against deviation logs. This study shows the promise of automatic correction of well deviations in GPR tomographic data. Analysis of synthetic data shows that very precise estimates of well deviation can be made for small deviations, even in the presence of static data errors. However, the analysis of the synthetic data and the application of the method to a large network of field data show that the technique is sensitive to data errors that vary between neighboring tomograms.
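    The take-off-angle diagnostic above reduces to a simple computation on the survey geometry and the first-arrival picks. The following minimal Python sketch (the array layout, the angle convention, and the use of a plain Pearson correlation as the asymmetry score are our assumptions, not the authors' code) returns the correlation that the correction procedure would seek to drive toward zero.

```python
import numpy as np

def takeoff_angle_qc(tx_xyz, rx_xyz, travel_times):
    """Straight-ray QC for one cross-well panel.

    tx_xyz, rx_xyz : (n, 3) arrays of TX/RX positions (m)
    travel_times   : (n,) first-arrival picks (ns)

    Returns take-off angles (degrees from horizontal), apparent
    straight-ray velocities (m/ns), and their Pearson correlation.
    Clean data should show near-zero correlation; a nonzero slope
    suggests well deviation or dipping anisotropy, while a symmetric
    smile/frown (low correlation, high curvature) points to time-zero
    or well-separation errors.
    """
    d = rx_xyz - tx_xyz
    dist = np.linalg.norm(d, axis=1)                # straight-ray length
    v_app = dist / travel_times                     # apparent velocity
    horiz = np.linalg.norm(d[:, :2], axis=1)
    angle = np.degrees(np.arctan2(d[:, 2], horiz))  # 0 deg = horizontal ray
    r = float(np.corrcoef(angle, v_app)[0, 1])      # asymmetry score
    return angle, v_app, r
```

    A PSO wrapper would then perturb a low-order parameterization of each well's deviation (for instance, a linear or quadratic drift with depth), recompute the score for every affected panel, and minimize the summed absolute correlations plus a penalty on mean-velocity jumps between adjacent tomograms.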

    Global water cycle

    This research is the MSFC component of a joint MSFC/Pennsylvania State University Eos Interdisciplinary Investigation on the global water cycle and its extension across the earth sciences. The primary long-term objective of this investigation is to determine the scope and interactions of the global water cycle with all components of the Earth system and to understand how it stimulates and regulates change on both global and regional scales. Significant accomplishments in the past year are presented and include the following: (1) water vapor variability; (2) multi-phase water analysis; (3) global modeling; and (4) optimal precipitation and stream flow analysis and hydrologic processes.

    Scaling laws governing stochastic growth and division of single bacterial cells

    Uncovering the quantitative laws that govern the growth and division of single cells remains a major challenge. Using a unique combination of technologies that yields unprecedented statistical precision, we find that the sizes of individual Caulobacter crescentus cells increase exponentially in time. We also establish that they divide upon reaching a critical multiple (approximately 1.8) of their initial sizes, rather than an absolute size. We show that when the temperature is varied, the growth and division timescales scale proportionally with each other over the physiological temperature range. Strikingly, the cell-size and division-time distributions can both be rescaled by their mean values such that the condition-specific distributions collapse to universal curves. We account for these observations with a minimal stochastic model based on an autocatalytic cycle. It predicts the scalings, as well as specific functional forms for the universal curves. Our experimental and theoretical analysis reveals a simple physical principle governing these complex biological processes: a single temperature-dependent scale of cellular time governs the stochastic dynamics of growth and division in balanced growth conditions.
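    The mean-rescaling collapse follows directly from the division rule, as a minimal simulation makes clear. The sketch below is an illustration under assumed parameters (the growth rates, the noise magnitude, and the Gaussian form of the noise on the critical ratio are our choices, not the authors' fitted model): because the division time is t_d = ln(r)/k for critical ratio r and growth rate k, changing the temperature-dependent timescale 1/k rescales the whole distribution without changing its shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_division_times(k, n_cells=100_000, ratio=1.8, cv=0.1):
    """Sizes grow as s(t) = s0 * exp(k*t); a cell divides when s/s0
    reaches a noisy critical ratio (mean 1.8, assumed Gaussian noise).
    The division time is then t_d = ln(ratio_noisy) / k."""
    r = ratio * (1.0 + cv * rng.standard_normal(n_cells))
    r = np.clip(r, 1.05, None)          # keep ratios physical (> 1)
    return np.log(r) / k

# Two assumed growth rates standing in for two temperatures.
for k in (0.01, 0.02):                  # per-minute growth rates
    t = simulate_division_times(k)
    print(f"k={k}: mean={t.mean():.1f} min, CV={t.std()/t.mean():.3f}")
# The coefficient of variation is the same at both rates: dividing each
# distribution by its mean collapses them onto a single universal curve.
```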

    Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (the WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power-law Z-R was found to be highly biased (+90% to +110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power-law Z-R, reducing the biases substantially, to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are discussed.
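    Inverting the power law for rain rate is a short computation. The sketch below is a hedged illustration, not the paper's processing code: we implement the 36.5-dBZ threshold as a reflectivity cap (clipping higher values before conversion), which is our reading of how such thresholds are typically applied to suppress hail contamination; the study itself derived the value via the PMM.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=300.0, b=1.4, cap_dbz=36.5):
    """Invert Z = a * R**b for rain rate R (mm/h) from reflectivity in dBZ.

    Reflectivities above the cap are clipped before conversion rather
    than extrapolated (assumed interpretation of the 36.5-dBZ threshold).
    """
    dbz = np.minimum(dbz, cap_dbz)      # apply reflectivity cap
    z = 10.0 ** (dbz / 10.0)            # dBZ -> linear Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)

# Example: point conversions for three composite pixels.
print(rain_rate_from_dbz(np.array([20.0, 35.0, 50.0])))  # mm/h
```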

    Predictors of mortality over 8 years in type 2 diabetic patients: Translating Research Into Action for Diabetes (TRIAD)

    OBJECTIVE: To examine demographic, socioeconomic, and biological risk factors for all-cause, cardiovascular, and noncardiovascular mortality in patients with type 2 diabetes over 8 years and to construct mortality prediction equations.
    RESEARCH DESIGN AND METHODS: Beginning in 2000, survey and medical record information was obtained from 8,334 participants in Translating Research Into Action for Diabetes (TRIAD), a multicenter prospective observational study of diabetes care in managed care. The National Death Index was searched annually to obtain data on deaths over an 8-year follow-up period (2000–2007). Predictors examined included age, sex, race, education, income, smoking, age at diagnosis of diabetes, duration and treatment of diabetes, BMI, complications, comorbidities, and medication use.
    RESULTS: There were 1,616 (19%) deaths over the 8-year period. In the most parsimonious equation, the predictors of all-cause mortality included older age, male sex, white race, lower income, smoking, insulin treatment, nephropathy, history of dyslipidemia, higher LDL cholesterol, angina/myocardial infarction/other coronary disease/coronary angioplasty/bypass, congestive heart failure, aspirin, β-blocker, and diuretic use, and higher Charlson Index.
    CONCLUSIONS: Risk of death can be predicted in people with type 2 diabetes using simple demographic, socioeconomic, and biological risk factors with fair reliability. Such prediction equations are essential for computer simulation models of diabetes progression and may, with further validation, be useful for patient management.
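    A mortality prediction equation of this kind is typically applied as a survival-model risk score. The sketch below is purely illustrative: the coefficients, the reference patient, and the baseline 8-year survival are hypothetical placeholders chosen for the example, not the fitted TRIAD equation, whose weights are not reproduced in the abstract.

```python
import numpy as np

# Hypothetical coefficients for illustration only -- NOT the TRIAD
# equation. Predictor names mirror a subset of those listed above.
COEFS = {
    "age_per_10y": 0.50,
    "male": 0.30,
    "current_smoker": 0.40,
    "insulin_treated": 0.25,
    "nephropathy": 0.35,
    "chf": 0.60,
    "charlson_per_point": 0.20,
}
BASELINE_8Y_SURVIVAL = 0.90  # assumed S0(8y) for the sketch

def eight_year_mortality_risk(x, x_ref):
    """Cox-style risk: P(death by 8y) = 1 - S0(8y) ** exp(lp), where lp
    is the linear predictor relative to a reference patient."""
    lp = sum(COEFS[k] * (x[k] - x_ref[k]) for k in COEFS)
    return 1.0 - BASELINE_8Y_SURVIVAL ** np.exp(lp)

ref = {"age_per_10y": 6.0, "male": 0, "current_smoker": 0,
       "insulin_treated": 0, "nephropathy": 0, "chf": 0,
       "charlson_per_point": 1.0}
patient = dict(ref, age_per_10y=7.5, male=1, insulin_treated=1)
print(f"{eight_year_mortality_risk(patient, ref):.1%}")
```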

    Antimicrobial drug usage from birth to 180 days of age in Irish dairy calves and in suckler beef calves

    Concern about the use of antimicrobials in food-producing animals is increasing. The study objective was to quantify antimicrobial drug usage in calves using antimicrobial treatment records from Irish suckler beef and dairy farms. Antimicrobial treatment records for calves born between 1 July 2014 and 30 June 2015 on 79 suckler beef and 44 dairy farms were analyzed. Calves were followed from birth (day 0) until 6 months of age. According to standard farm protocol, calves exhibiting clinical signs of any disease were identified and antimicrobial treatment was administered. Farmers recorded the following information for each treatment administered: calf identification, age at treatment, disease event, drug name, number of treatment days, and amount of drug administered. In total, 3,204 suckler beef calves and 5,358 dairy calves, representing 540,953 and 579,997 calf-days at risk, respectively, were included in the study. A total of 1,770 antimicrobial treatments were administered to suckler beef (n = 841) and dairy calves (n = 929) between birth and 6 months of age. There was large variation by farm in treatment incidence, whether expressed per defined daily dose (TIDDDvet) or per defined course dose (TIDCDvet). This study provides new insights into the time periods and indications for which specific antimicrobial substances are used in Irish dairy and beef suckler calves.
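    TIDDDvet and TIDCDvet are dose-based treatment-incidence metrics; a minimal sketch of the TIDDDvet arithmetic follows. The standard calf weight and the example DDDvet value are illustrative assumptions, not figures from the study.

```python
# Treatment incidence per 1,000 animal-days, following the DDDvet approach.
# The standard weight and example dose below are illustrative assumptions.

STANDARD_WEIGHT_KG = 160.0  # assumed standard calf weight

def tiddd_vet(total_mg_active, dddvet_mg_per_kg_per_day, animal_days_at_risk):
    """TIDDDvet = number of defined daily doses administered
    per 1,000 animal-days at risk."""
    n_ddd = total_mg_active / (dddvet_mg_per_kg_per_day * STANDARD_WEIGHT_KG)
    return 1000.0 * n_ddd / animal_days_at_risk

# Example: 50 g of an antimicrobial with an assumed DDDvet of 10 mg/kg/day
# used on a farm contributing 20,000 calf-days at risk.
print(f"{tiddd_vet(50_000, 10.0, 20_000):.2f} DDDvet per 1,000 calf-days")
```

    TIDCDvet is computed the same way, with the defined course dose (the full standard treatment course) replacing the defined daily dose in the denominator.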