
    A Measurement of Newton's Gravitational Constant

    A precision measurement of the gravitational constant G has been made using a beam balance. Special attention has been given to determining the calibration, the effect of a possible nonlinearity of the balance and the zero-point variation of the balance. The equipment, the measurements and the analysis are described in detail. The value obtained for G is 6.674252(109)(54) × 10^{-11} m^3 kg^{-1} s^{-2}. The relative statistical and systematic uncertainties of this result are 16.3 × 10^{-6} and 8.1 × 10^{-6}, respectively.
    Comment: 26 pages, 20 figures, Accepted for publication by Phys. Rev.
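
    As a check on how the quoted numbers relate, the parenthetical uncertainties can be converted to the relative values stated in the abstract. Combining them in quadrature at the end is a standard convention assumed here, not something the abstract states.

```python
# Sketch: relating the parenthetical uncertainties of
# G = 6.674252(109)(54) x 10^-11 m^3 kg^-1 s^-2 to the quoted relative
# uncertainties. The quadrature combination at the end is a standard
# convention, not stated in the abstract.
import math

G = 6.674252e-11            # m^3 kg^-1 s^-2
u_stat = 0.000109e-11       # statistical uncertainty (the "109" digits)
u_syst = 0.000054e-11       # systematic uncertainty (the "54" digits)

rel_stat = u_stat / G       # ~16.3e-6, as quoted
rel_syst = u_syst / G       # ~8.1e-6, as quoted
rel_total = math.hypot(rel_stat, rel_syst)  # ~18.2e-6 if combined in quadrature

print(f"relative statistical uncertainty: {rel_stat:.1e}")
print(f"relative systematic uncertainty:  {rel_syst:.1e}")
print(f"combined (quadrature):            {rel_total:.1e}")
```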

    Julian Ernst Besag, 26 March 1945 -- 6 August 2010, a biographical memoir

    Julian Besag was an outstanding statistical scientist, distinguished for his pioneering work on the statistical theory and analysis of spatial processes, especially conditional lattice systems. His work has been seminal in statistical developments over the last several decades ranging from image analysis to Markov chain Monte Carlo methods. He clarified the role of auto-logistic and auto-normal models as instances of Markov random fields and paved the way for their use in diverse applications. Later work included investigations into the efficacy of nearest neighbour models to accommodate spatial dependence in the analysis of data from agricultural field trials, image restoration from noisy data, and texture generation using lattice models.
    Comment: 26 pages, 14 figures; minor revisions, omission of full bibliography
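
    For readers unfamiliar with the auto-logistic model mentioned above, the sketch below shows its defining property as a Markov random field: each binary lattice site is updated from a conditional distribution that depends only on its neighbours. The grid size, parameter values and Gibbs-sampling loop are illustrative choices, not taken from the memoir.

```python
# Minimal sketch of an auto-logistic model on a square lattice:
# each binary site x[i, j] has conditional probability
#   P(x_ij = 1 | neighbours) = logistic(alpha + beta * sum of 4 neighbours),
# which is what makes it a Markov random field. Parameters and grid size
# are illustrative, not taken from the memoir.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 32, -1.0, 0.8          # illustrative values
x = rng.integers(0, 2, size=(n, n))

def neighbour_sum(x, i, j):
    # 4-neighbourhood with free boundaries
    s = 0
    if i > 0:     s += x[i - 1, j]
    if i < n - 1: s += x[i + 1, j]
    if j > 0:     s += x[i, j - 1]
    if j < n - 1: s += x[i, j + 1]
    return s

# Gibbs sampling: repeatedly resample each site from its full conditional,
# which is also a simple way of generating textures from a lattice model.
for sweep in range(200):
    for i in range(n):
        for j in range(n):
            eta = alpha + beta * neighbour_sum(x, i, j)
            p = 1.0 / (1.0 + np.exp(-eta))
            x[i, j] = rng.random() < p
```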

    Maximum A Posteriori Resampling of Noisy, Spatially Correlated Data

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the “best” value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by “too much,” where “too much” is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or “resampled.” Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
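
    The sketch below illustrates the resampling rule described above under the simplifying assumption that both the data error pdf and the kriging conditional pdf are Gaussian, in which case the maximum of their product is a precision-weighted average. The 1-D layout, exponential covariance model and all parameter values are illustrative, not the paper's configuration.

```python
# Sketch of maximum a posteriori (MAP) resampling: replace each datum by the
# MAP of (data-error pdf) x (kriging conditional pdf from nearby values),
# then average over randomized visit orders. Gaussian pdfs are assumed, so
# the MAP is a precision-weighted average. Setup values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noisy observations d at locations xs, each with its own error variance
xs = np.linspace(0.0, 10.0, 60)
truth = np.sin(xs)
err_var = np.full_like(xs, 0.3**2)
d = truth + rng.normal(0.0, np.sqrt(err_var))

def cov(h, sill=1.0, corr_len=2.0):
    """Exponential covariance model of the field (illustrative choice)."""
    return sill * np.exp(-np.abs(h) / corr_len)

def kriging_conditional(values, i, n_nb=6):
    """Simple-kriging mean/variance at xs[i] from its n_nb nearest neighbours."""
    idx = np.argsort(np.abs(xs - xs[i]))
    idx = idx[idx != i][:n_nb]
    C = cov(xs[idx, None] - xs[None, idx])
    c0 = cov(xs[idx] - xs[i])
    w = np.linalg.solve(C, c0)
    mean = w @ values[idx]
    var = cov(0.0) - w @ c0
    return mean, max(var, 1e-9)

def map_resample_once(order):
    """One sequential pass over the data in the given order."""
    z = d.copy()
    for i in order:
        m, v = kriging_conditional(z, i)
        # MAP of the product of two Gaussians = precision-weighted average
        z[i] = (d[i] / err_var[i] + m / v) / (1.0 / err_var[i] + 1.0 / v)
    return z

# Approximate the global solution by averaging over randomized visit orders
passes = [map_resample_once(rng.permutation(len(xs))) for _ in range(20)]
resampled = np.mean(passes, axis=0)
```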

    Accuracy and Precision of Occlusal Contacts of Stereolithographic Casts Mounted by Digital Interocclusal Registrations

    Statement of problem Little peer-reviewed information is available regarding the accuracy and precision of the occlusal contact reproduction of digitally mounted stereolithographic casts. Purpose The purpose of this in vitro study was to evaluate the accuracy and precision of occlusal contacts among stereolithographic casts mounted by digital occlusal registrations. Material and methods Four complete anatomic dentoforms were arbitrarily mounted on a semi-adjustable articulator in maximal intercuspal position and served as the 4 different simulated patients (SP). A total of 60 digital impressions and digital interocclusal registrations were made with a digital intraoral scanner to fabricate 15 sets of mounted stereolithographic (SLA) definitive casts for each dentoform. After receiving a total of 60 SLA casts, polyvinyl siloxane (PVS) interocclusal records were made for each set. The occlusal contacts for each set of SLA casts were measured by recording the amount of light transmitted through the interocclusal records. To evaluate the accuracy between the SP and their respective SLA casts, the areas of actual contact (AC) and near contact (NC) were calculated. For precision analysis, the coefficient of variation (CoV) was used. The data were analyzed with t tests for accuracy and the McKay and Vangel test for precision (α=.05). Results The accuracy analysis showed a statistically significant difference between the SP and the SLA cast of each dentoform (PPP Conclusions For the accuracy evaluation, statistically significant differences were found between the occlusal contacts of all digitally mounted SLA cast groups, with an increase in AC values and a decrease in NC values. For the precision assessment, the CoV values of the AC and NC showed the digitally articulated cast’s inability to reproduce uniform occlusal contacts.
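
    A minimal sketch of the two summary statistics named above follows: accuracy assessed with a t test on contact areas, and precision summarized by the coefficient of variation. The numbers are invented, scipy's Welch t test stands in for the exact test used, and the McKay and Vangel test for comparing CoVs is not implemented.

```python
# Minimal sketch of the two summary statistics named above: an accuracy
# comparison of contact areas via a t test, and precision via the
# coefficient of variation (CoV = std / mean). The numbers are made up;
# the McKay and Vangel test for comparing CoVs is not implemented here.
import numpy as np
from scipy import stats

# Hypothetical actual-contact (AC) areas, in mm^2, for one dentoform:
sp_reference = np.array([11.2, 10.8, 11.0, 11.4, 10.9])   # simulated patient
sla_casts    = np.array([13.1, 12.7, 13.4, 12.9, 13.0,    # digitally mounted
                         13.3, 12.6, 13.2, 12.8, 13.5])   # SLA casts

# Accuracy: do the SLA casts differ systematically from the reference?
t_stat, p_value = stats.ttest_ind(sla_casts, sp_reference, equal_var=False)

# Precision: how repeatable are the SLA measurements among themselves?
cov = np.std(sla_casts, ddof=1) / np.mean(sla_casts)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, CoV = {cov:.3f}")
```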

    Adaptive Aperture Defocused Digital Speckle Photography

    Speckle photography can be used to monitor deformations of solid surfaces. The measuring characteristics, such as range or lateral resolution, depend heavily on the optical recording and illumination set-up. This paper shows how, by the addition of two suitably perforated masks, the optical aperture of the system may vary from point to point, accordingly adapting the range and resolution to local requirements. Furthermore, by illuminating narrow areas, speckle size can be chosen independently from the optical aperture, thus lifting an important constraint on its choice. The new technique is described within the framework of digital defocused speckle photography under normal collimated illumination. Mutually limiting relations between range of measurement and spatial frequency resolution turn up both locally and when the whole surface under study is considered. They are deduced and discussed in detail.
    Comment: Submitted to Optics & Laser Technology
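
    The displacement measurement underlying digital speckle photography is a cross-correlation of speckle recordings made before and after deformation. The sketch below shows that correlation step only, with synthetic images and an imposed shift; it does not model the adaptive aperture masks introduced in the paper.

```python
# Minimal sketch of the correlation step underlying digital speckle
# photography: the local in-plane shift between two speckle recordings is
# estimated from the peak of their cross-correlation. The synthetic images
# and the imposed (3, 8)-pixel shift are illustrative; the paper's
# adaptive-aperture masks are not modelled here.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

# Synthetic "speckle" pattern and a copy shifted by (dy, dx) = (3, 8) pixels
ref = rng.random((128, 128))
shifted = np.roll(ref, shift=(3, 8), axis=(0, 1))

# Cross-correlation via FFT convolution with a flipped template
ref0 = ref - ref.mean()
shf0 = shifted - shifted.mean()
xcorr = fftconvolve(shf0, ref0[::-1, ::-1], mode="same")

peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
dy = peak[0] - ref.shape[0] // 2
dx = peak[1] - ref.shape[1] // 2
print(f"estimated displacement: dy={dy}, dx={dx}")   # expect (3, 8)
```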

    Modelling the spatial distribution of DEM Error

    Assessment of a DEM’s quality is usually undertaken by deriving a measure of DEM accuracy – how close the DEM’s elevation values are to the true elevation. Measures such as root mean squared error (RMSE) and the standard deviation of the error are frequently used. These measures summarise the elevation errors in a DEM as a single value. A more detailed description of DEM accuracy would allow better understanding of DEM quality and the consequent uncertainty associated with using DEMs in analytical applications. The research presented addresses the limitations of using a single RMSE value to represent the uncertainty associated with a DEM by developing a new technique for creating a spatially distributed model of DEM quality – an accuracy surface. The technique is based on the hypothesis that the distribution and scale of elevation error within a DEM are at least partly related to morphometric characteristics of the terrain. The technique involves generating a set of terrain parameters to characterise terrain morphometry and developing regression models to define the relationship between DEM error and morphometric character. The regression models form the basis for creating standard deviation surfaces to represent DEM accuracy. The hypothesis is shown to be true and reliable accuracy surfaces are successfully created. These accuracy surfaces provide more detailed information about DEM accuracy than a single global estimate of RMSE.
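
    A minimal sketch of the workflow described above: derive morphometric parameters from the DEM, regress the magnitude of elevation error at check points on them, and evaluate the fitted model at every cell to obtain an accuracy surface. The choice of slope and curvature as predictors and the plain linear model are illustrative assumptions, not the paper's exact specification.

```python
# Sketch of the accuracy-surface workflow: derive morphometric parameters
# from the DEM, regress elevation-error magnitude at check points on those
# parameters, then apply the fitted model to every cell to obtain a
# spatially distributed standard-deviation surface. Slope/curvature as
# predictors and a plain linear model are illustrative assumptions.
import numpy as np

def morphometric_params(dem, cellsize=10.0):
    """Slope magnitude and a simple curvature proxy (Laplacian) per cell."""
    gy, gx = np.gradient(dem, cellsize)
    slope = np.hypot(gx, gy)
    gyy, _ = np.gradient(gy, cellsize)
    _, gxx = np.gradient(gx, cellsize)
    curvature = gxx + gyy
    return slope, curvature

def fit_error_model(dem, check_rows, check_cols, check_abs_err):
    """Least-squares fit: |error| ~ a + b*slope + c*|curvature| at check points."""
    slope, curv = morphometric_params(dem)
    X = np.column_stack([
        np.ones(len(check_rows)),
        slope[check_rows, check_cols],
        np.abs(curv[check_rows, check_cols]),
    ])
    coeffs, *_ = np.linalg.lstsq(X, check_abs_err, rcond=None)
    return coeffs

def accuracy_surface(dem, coeffs):
    """Predicted error standard deviation for every DEM cell."""
    slope, curv = morphometric_params(dem)
    surface = coeffs[0] + coeffs[1] * slope + coeffs[2] * np.abs(curv)
    return np.clip(surface, 0.0, None)   # a standard deviation cannot be negative
```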

    Modeling physical and chemical climate of the northeastern United States for a geographic information system

    A model of physical and chemical climate was developed for New York and New England that can be used in a GIS for integration with ecosystem models. The variables included are monthly average maximum and minimum daily temperatures, precipitation, humidity, and solar radiation, as well as annual atmospheric deposition of sulfur and nitrogen. Equations generated from regional databases were combined with a digital elevation model of the region to generate digital coverages of each variable.
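
    The coverage-generation step can be sketched as evaluating a fitted regression equation over the DEM grid. The equation form and coefficients below are placeholders, not the regional equations developed in the paper.

```python
# Sketch of the coverage-generation step: a regression equation fitted to
# regional station data is evaluated on a DEM grid to produce a raster
# coverage of a climate variable. The coefficient values and the
# elevation/latitude form are placeholders, not the paper's fitted model.
import numpy as np

def monthly_tmax_coverage(dem_m, lat_deg, b0=30.0, b_elev=-0.0065, b_lat=-0.8):
    """Hypothetical regression: Tmax (deg C) = b0 + b_elev*elevation + b_lat*(lat - 40)."""
    return b0 + b_elev * dem_m + b_lat * (lat_deg - 40.0)

# Example grids: a 100 x 100 DEM (metres) and a matching latitude raster
dem = np.random.default_rng(3).uniform(0, 1500, size=(100, 100))
lat = np.linspace(41.0, 45.0, 100)[:, None] * np.ones((1, 100))

july_tmax = monthly_tmax_coverage(dem, lat)   # digital coverage on the DEM grid
```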

    Maximum length sequence and Bessel diffusers using active technologies

    Active technologies can enable room acoustic diffusers to operate over a wider bandwidth than passive devices, by extending the bass response. Active impedance control can be used to generate surface impedance distributions which cause wavefront dispersion, as opposed to the more normal absorptive or pressure-cancelling target functions. This paper details the development of two new types of active diffusers which are difficult, if not impossible, to make as passive wide-band structures. The first type is a maximum length sequence diffuser where the well depths are designed to be frequency dependent to avoid the critical frequencies present in the passive device, and so achieve performance over a finite bandwidth. The second is a Bessel diffuser, which exploits concepts developed for transducer arrays to form a hybrid absorber–diffuser. Details of the designs are given, and measurements of scattering and impedance are used to show that the active diffusers operate correctly over a bandwidth of about 100 Hz to 1.1 kHz. Boundary element method simulation is used to show how more application-realistic arrays of these devices would behave.
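
    The passive design that the first active diffuser extends can be sketched by generating a maximum length sequence with a linear feedback shift register and mapping it to quarter-wavelength well depths at a design frequency. The register taps, design frequency and the lambda/4 mapping are textbook conventions and illustrative values, not the paper's frequency-dependent active depths.

```python
# Sketch of the passive design behind the active MLS diffuser: generate a
# maximum length sequence with a linear feedback shift register and map it
# to quarter-wavelength well depths at a chosen design frequency. Taps,
# design frequency and the lambda/4 mapping are textbook conventions /
# illustrative choices, not the paper's frequency-dependent active depths.
import numpy as np

def mls(n_bits=3, taps=(3, 1)):
    """Maximum length sequence of length 2**n_bits - 1 via a Fibonacci LFSR."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(seq)

c = 343.0            # speed of sound, m/s
f_design = 500.0     # illustrative design frequency, Hz
sequence = mls()                             # e.g. the length-7 sequence 1110100
well_depths = sequence * c / (4 * f_design)  # lambda/4 wells where sequence == 1

print(sequence, well_depths.round(3))
# Passive versions have critical frequencies at which every well reflects in
# phase and the surface acts flat; the paper's active design makes the
# depths frequency dependent to avoid them.
```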