
    On the fitting of surfaces to data with covariances

    Copyright © 2000 IEEE. We consider the problem of estimating parameters of a model described by an equation of special form. Specific models arise in the analysis of a wide class of computer vision problems, including conic fitting and estimation of the fundamental matrix. We assume that noisy data are accompanied by (known) covariance matrices characterizing the uncertainty of the measurements. A cost function is first obtained by considering a maximum-likelihood formulation and applying certain necessary approximations that render the problem tractable. A Newton-like iterative scheme is then generated for determining a minimizer of the cost function. Unlike alternative approaches such as Sampson's method or the renormalization technique, the new scheme has as its theoretical limit the minimizer of the cost function. Furthermore, the scheme is simply expressed, efficient, and unsurpassed as a general technique in our testing. An important feature of the method is that it can serve as a basis for conducting theoretical comparison of various estimation approaches.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel and Darren Gawley
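
    As a hedged illustration of the kind of cost function involved, the sketch below minimizes an approximated maximum-likelihood (Sampson-style) expression for the conic-fitting case with known per-point covariances. It uses a generic optimizer rather than the paper's Newton-like scheme, and the data and parameter values are synthetic stand-ins.

```python
# Minimal sketch (synthetic data, generic optimizer): minimise an
# approximated maximum-likelihood cost for fitting a conic
# a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to points with known
# 2x2 measurement covariances. Not the paper's Newton-like scheme.
import numpy as np
from scipy.optimize import minimize

def carrier(x, y):
    """Carrier vector u(x, y) so the conic reads theta . u = 0."""
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def carrier_jacobian(x, y):
    """du/d(x, y), a 6x2 matrix used to propagate point covariance."""
    return np.array([[2 * x, 0.0],
                     [y,     x],
                     [0.0,   2 * y],
                     [1.0,   0.0],
                     [0.0,   1.0],
                     [0.0,   0.0]])

def aml_cost(theta, points, covs):
    """sum_i (theta.u_i)^2 / (theta^T J_i C_i J_i^T theta); scale-invariant."""
    theta = theta / np.linalg.norm(theta)
    total = 0.0
    for (x, y), cov in zip(points, covs):
        u = carrier(x, y)
        J = carrier_jacobian(x, y)
        total += (theta @ u) ** 2 / (theta @ (J @ cov @ J.T) @ theta)
    return total

# synthetic ellipse samples with isotropic noise and known covariances
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 30)
pts = np.c_[3 * np.cos(t), 2 * np.sin(t)] + rng.normal(0, 0.05, (30, 2))
covs = [0.05 ** 2 * np.eye(2)] * 30
theta0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -1.0])  # unit circle as a seed
res = minimize(aml_cost, theta0, args=(pts, covs))
print(res.x / np.linalg.norm(res.x))
```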

    Gravitational waves from BH-NS binaries: Effective Fisher matrices and parameter estimation using higher harmonics

    Inspiralling black hole-neutron star (BH-NS) binaries emit a complicated gravitational wave signature, produced by multiple harmonics sourced by their strong local gravitational field and further modulated by the orbital plane's precession. Some features of this complex signal are easily accessible to ground-based interferometers (e.g., the rate of change of frequency); others less so (e.g., the polarization content); and others are unavailable (e.g., features of the signal out of band). For this reason, an ambiguity function (a diagnostic of dissimilarity) between two such signals varies on many parameter scales and ranges. In this paper, we present a method for computing an approximate, effective Fisher matrix from variations in the ambiguity function on physically pertinent scales, which depend on the relevant signal-to-noise ratio. As a concrete example, we explore how higher harmonics improve parameter measurement accuracy. As previous studies suggest, for our fiducial BH-NS binaries and for plausible signal amplitudes, we find that higher harmonics at best marginally improve our ability to measure parameters. For non-precessing binaries, these Fisher matrices separate into intrinsic (mass, spin) and extrinsic (geometrical) parameters; higher harmonics principally improve our knowledge about the line of sight. For precessing binaries, the extra information provided by higher harmonics is distributed across several parameters. We provide concrete estimates for measurement accuracy, using coordinates adapted to the precession cone in the detector's sensitive band.
    Comment: 19 pages, 11 figures
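
    A hedged sketch of the effective-Fisher idea: rather than differentiating the waveform analytically, fit a quadratic form to 1 - (ambiguity) over parameter offsets on scales comparable to the expected measurement error. The toy_ambiguity function and all numbers below are stand-ins, not real waveform overlaps.

```python
# Hedged sketch: estimate an "effective" Fisher matrix by least-squares
# fitting a quadratic form to 1 - ambiguity over parameter offsets on
# physically pertinent scales. toy_ambiguity and all numbers are stand-ins.
import numpy as np

def toy_ambiguity(dlam):
    """Stand-in for a normalised waveform match near its peak (<= 1)."""
    gamma_true = np.array([[40.0, 8.0], [8.0, 10.0]])
    return 1.0 - 0.5 * dlam @ gamma_true @ dlam

def effective_fisher(ambiguity, scales, n_samples=400, seed=0):
    """Fit 1 - ambiguity(d) ~ 0.5 * d^T Gamma d over |d_i| <= scales[i]."""
    rng = np.random.default_rng(seed)
    dim = len(scales)
    offsets = rng.uniform(-1, 1, (n_samples, dim)) * scales
    y = np.array([1.0 - ambiguity(d) for d in offsets])
    idx = [(i, j) for i in range(dim) for j in range(i, dim)]
    # each column multiplies one independent entry of the symmetric Gamma
    A = np.array([[0.5 * d[i] * d[j] * (1.0 if i == j else 2.0)
                   for i, j in idx] for d in offsets])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    gamma = np.zeros((dim, dim))
    for c, (i, j) in zip(coef, idx):
        gamma[i, j] = gamma[j, i] = c
    return gamma

rho = 20.0                                 # assumed signal-to-noise ratio
gamma = effective_fisher(toy_ambiguity, scales=np.array([0.05, 0.10]))
cov = np.linalg.inv(gamma) / rho ** 2      # Fisher-based covariance estimate
print(np.sqrt(np.diag(cov)))               # 1-sigma accuracies per parameter
```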

    GPS-derived geoid using artificial neural network and least squares collocation

    The geoidal undulations are needed for determining the orthometric heights from Global Positioning System (GPS)-derived ellipsoidal heights. There are several methods for geoidal undulation determination. The paper presents a method employing Artificial Neural Network (ANN) approximation together with Least Squares Collocation (LSC). The surface obtained by the ANN approximation is used as a trend surface in the least squares collocation. In numerical examples four surfaces were compared: the global geopotential model (EGM96), the European gravimetric quasigeoid 1997 (EGG97), a surface approximated with the minimum-curvature splines-in-tension algorithm, and the ANN surface approximation. The effectiveness of the ANN surface approximation depends on the number of control points. If the number of well-distributed control points is sufficiently large, the results are better than those obtained by the minimum-curvature algorithm and comparable to those obtained by the EGG97 model.
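
    A minimal sketch of the two-step scheme, with assumed details (network size, covariance function, synthetic control points) that are not from the paper: an MLP learns the trend surface, and least-squares collocation interpolates the residuals.

```python
# Minimal sketch, with assumed details: an MLP trend surface for geoid
# undulations N(lat, lon), refined by least-squares collocation (LSC) on
# the residuals with an assumed Gaussian covariance function.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lsc_predict(xy_ctrl, resid, xy_new, c0=1.0, d0=0.5, noise=1e-4):
    """LSC with covariance C(d) = c0 * exp(-(d/d0)^2)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return c0 * np.exp(-(d / d0) ** 2)
    C = cov(xy_ctrl, xy_ctrl) + noise * np.eye(len(xy_ctrl))
    return cov(xy_new, xy_ctrl) @ np.linalg.solve(C, resid)

# synthetic control points: N = h_ellipsoidal - H_orthometric [m]
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, (100, 2))
N = 40 + 2 * xy[:, 0] + np.sin(3 * xy[:, 1])

trend = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                     random_state=0).fit(xy, N)
resid = N - trend.predict(xy)            # signal left for collocation

xy_new = rng.uniform(0, 1, (5, 2))
N_pred = trend.predict(xy_new) + lsc_predict(xy, resid, xy_new)
print(N_pred)
```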

    Bayesian optimization for the inverse scattering problem in quantum reaction dynamics

    We propose a machine-learning approach based on Bayesian optimization to build global potential energy surfaces (PES) for reactive molecular systems using feedback from quantum scattering calculations. The method is designed to correct for the uncertainties of quantum chemistry calculations and yield potentials that accurately reproduce the reaction probabilities over a wide range of energies. These surfaces are obtained automatically and do not require manual fitting of the ab initio energies with analytical functions. The PES are built from a small number of ab initio points by an iterative process that incrementally samples the most relevant parts of the configuration space. Using the dynamical results of previous authors as targets, we show that such feedback loops produce accurate global PES with 30 ab initio energies for the three-dimensional H + H₂ → H₂ + H reaction and 290 ab initio energies for the six-dimensional OH + H₂ → H₂O + H reaction. These surfaces are obtained from 360 scattering calculations for H₃ and 600 scattering calculations for OH₃. We also introduce a method that quickly converges to an accurate PES without a priori knowledge of the dynamical results. By construction, our method identifies the lowest number of potential energy points (i.e., the minimum information) required for the non-parametric construction of global PES for quantum reactive scattering calculations.
    Comment: 9 pages, 8 figures
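
    A schematic of the feedback loop in one dimension. The scattering-calculation feedback is replaced by a placeholder (plain GP uncertainty), so this only illustrates the iterative sampling structure, not the paper's actual criterion of matching target reaction probabilities.

```python
# One-dimensional schematic of the iterative PES-building loop. The
# scattering feedback is a placeholder (plain GP uncertainty); names like
# reaction_probability_error are hypothetical, not from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_pes(r):
    """Stand-in for ab initio energies (Morse-like potential)."""
    return (1.0 - np.exp(-1.5 * (r - 1.0))) ** 2

def reaction_probability_error(gp, grid):
    """Placeholder feedback. In the paper this would compare reaction
    probabilities computed on the current PES against target dynamics."""
    _, sigma = gp.predict(grid.reshape(-1, 1), return_std=True)
    return sigma

grid = np.linspace(0.5, 4.0, 200)
r = np.array([0.8, 1.5, 3.0])            # small initial ab initio set
for _ in range(15):                      # incremental sampling loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(r.reshape(-1, 1), true_pes(r))
    score = reaction_probability_error(gp, grid)
    r = np.append(r, grid[np.argmax(score)])   # add most informative point
print(f"PES surrogate built from {r.size} energy points")
```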

    First-year Sloan Digital Sky Survey-II (SDSS-II) Supernova Results: Hubble Diagram and Cosmological Parameters

    We present measurements of the Hubble diagram for 103 Type Ia supernovae (SNe) with redshifts 0.04 < z < 0.42, discovered during the first season (Fall 2005) of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. These data fill in the redshift "desert" between low- and high-redshift SN Ia surveys. We combine the SDSS-II measurements with new distance estimates for published SN data from the ESSENCE survey, the Supernova Legacy Survey, the Hubble Space Telescope, and a compilation of nearby SN Ia measurements. Combining the SN Hubble diagram with measurements of Baryon Acoustic Oscillations from the SDSS Luminous Red Galaxy sample and with CMB temperature anisotropy measurements from WMAP, we estimate the cosmological parameters w and Omega_M, assuming a spatially flat cosmological model (FwCDM) with constant dark energy equation of state parameter, w. For the FwCDM model and the combined sample of 288 SNe Ia, we find w = -0.76 ± 0.07(stat) ± 0.11(syst), Omega_M = 0.306 ± 0.019(stat) ± 0.023(syst) using MLCS2k2 and w = -0.96 ± 0.06(stat) ± 0.12(syst), Omega_M = 0.265 ± 0.016(stat) ± 0.025(syst) using the SALT-II fitter. We trace the discrepancy between these results to a difference in the rest-frame UV model combined with a different luminosity correction from color variations; these differences mostly affect the distance estimates for the SNLS and HST supernovae. We present detailed discussions of systematic errors for both light-curve methods and find that they both show data-model discrepancies in the rest-frame U-band. For the SALT-II approach, we also see strong evidence for redshift-dependence of the color-luminosity parameter (beta). Restricting the analysis to the 136 SNe Ia in the Nearby+SDSS-II samples, we find much better agreement between the two analysis methods but with larger uncertainties.
    Comment: Accepted for publication by ApJ
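
    For concreteness, a hedged sketch of the final fitting step alone: a chi-square fit of (Omega_M, w) in a flat constant-w model to a Hubble diagram, with synthetic distance moduli standing in for the actual SN compilation and with the BAO and CMB constraints omitted.

```python
# Hedged sketch of the fitting step alone: chi-square fit of (Omega_M, w)
# in a flat constant-w dark energy model ("FwCDM") to a Hubble diagram.
# Synthetic data; the real analysis also folds in BAO and CMB constraints.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C_KM_S, H0 = 299792.458, 70.0              # assumed H0 [km/s/Mpc]

def mu_model(z, om, w):
    """Distance modulus; E^2 = Om*(1+z)^3 + (1-Om)*(1+z)^(3(1+w))."""
    E = lambda zp: np.sqrt(om * (1 + zp) ** 3
                           + (1 - om) * (1 + zp) ** (3 * (1 + w)))
    dc = np.array([quad(lambda zp: 1.0 / E(zp), 0.0, zi)[0] for zi in z])
    dl = (1 + z) * (C_KM_S / H0) * dc       # luminosity distance [Mpc]
    return 5 * np.log10(dl) + 25

rng = np.random.default_rng(2)
z = np.sort(rng.uniform(0.04, 0.42, 100))  # SDSS-II-like redshift range
sigma = 0.15                               # per-SN scatter [mag]
mu_obs = mu_model(z, 0.27, -0.96) + rng.normal(0, sigma, z.size)

chi2 = lambda p: np.sum(((mu_obs - mu_model(z, *p)) / sigma) ** 2)
fit = minimize(chi2, x0=[0.3, -1.0], method="Nelder-Mead")
print(f"Omega_M = {fit.x[0]:.3f}, w = {fit.x[1]:.3f}")
```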

    Functional principal component analysis of spatially correlated data

    This paper focuses on the analysis of spatially correlated functional data. We propose a parametric model for spatial correlation in which the between-curve correlation is modeled by correlating the functional principal component scores of the functional data. Additionally, in the sparse-observation framework, we propose a novel approach of spatial principal analysis by conditional expectation to explicitly estimate spatial correlations and reconstruct individual curves. Assuming spatial stationarity, empirical spatial correlations are calculated as the ratio of eigenvalues of the smoothed covariance surface Cov(Xi(s), Xi(t)) and the cross-covariance surface Cov(Xi(s), Xj(t)) at locations indexed by i and j. An anisotropic Matérn spatial correlation model is then fitted to the empirical correlations. Finally, principal component scores are estimated to reconstruct the sparsely observed curves. This framework can naturally accommodate arbitrary covariance structures, but there is an enormous reduction in computation if one can assume the separability of temporal and spatial components. We demonstrate the consistency of our estimates and propose hypothesis tests to examine the separability as well as the isotropy of the spatial correlation. Using simulation studies, we show that these methods have clear advantages over existing methods of curve reconstruction and estimation of model parameters.
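
    An illustrative sketch, far simpler than the paper's sparse-data machinery (dense noise-free curves, an exponential rather than Matérn correlation): FPCA via eigendecomposition of the sample covariance, followed by the empirical spatial correlation of the leading component scores as a function of distance.

```python
# Illustrative sketch with dense noise-free curves and an exponential (not
# Matern) spatial correlation: FPCA via eigendecomposition of the sample
# covariance, then empirical score correlations binned by distance.
import numpy as np

rng = np.random.default_rng(3)
n_loc, n_t = 60, 50
t = np.linspace(0, 1, n_t)
loc = rng.uniform(0, 10, (n_loc, 2))               # spatial sites

# spatially correlated FPC scores with correlation exp(-d / 2)
D = np.linalg.norm(loc[:, None] - loc[None, :], axis=-1)
L = np.linalg.cholesky(np.exp(-D / 2.0) + 1e-8 * np.eye(n_loc))
xi1 = L @ rng.normal(size=n_loc)
xi2 = 0.5 * (L @ rng.normal(size=n_loc))
X = (np.outer(xi1, np.sqrt(2) * np.sin(2 * np.pi * t))
     + np.outer(xi2, np.sqrt(2) * np.cos(2 * np.pi * t)))

# FPCA: eigendecomposition of the sample covariance on the time grid
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / n_loc)
phi = evecs[:, ::-1][:, :2]                        # two leading components
scores = Xc @ phi                                  # estimated FPC scores

# empirical spatial correlation of the first-component scores vs distance
s = scores[:, 0]
i, j = np.triu_indices(n_loc, k=1)
prod = (s[i] - s.mean()) * (s[j] - s.mean()) / s.var()
for lo in range(0, 8, 2):
    m = (D[i, j] >= lo) & (D[i, j] < lo + 2)
    print(f"distance [{lo}, {lo + 2}): corr ~ {prod[m].mean():.2f}")
```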

    Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

    Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras, by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking the uncertainty of these intermediate stages into account allows for more reliable estimation of the 3D scene flow than previous methods permit. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
    National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108)
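
    A much simpler deterministic pipeline than the probabilistic multi-scale method described above, shown only to make the stereo-plus-flow geometry concrete; the camera parameters and all OpenCV settings below are assumptions, not values from the paper.

```python
# Coarse deterministic stand-in for scene flow (not the paper's probabilistic
# multi-scale method): disparity gives depth at t0 and t1, optical flow gives
# pixel correspondence, and back-projection turns both into 3D motion.
# fx (focal length, px) and baseline (m) are assumed camera parameters.
import numpy as np
import cv2

def scene_flow(left0, right0, left1, right1, fx=700.0, baseline=0.12):
    """Rectified grayscale uint8 frames -> (H, W, 3) scene-flow vectors."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                 blockSize=7)
    d0 = sgbm.compute(left0, right0).astype(np.float32) / 16.0
    d1 = sgbm.compute(left1, right1).astype(np.float32) / 16.0
    flow = cv2.calcOpticalFlowFarneback(left0, left1, None,
                                        0.5, 4, 15, 3, 5, 1.2, 0)
    h, w = left0.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))

    def backproject(x, y, d):
        z = np.where(d > 0, fx * baseline / np.maximum(d, 1e-3), np.nan)
        return np.stack([(x - w / 2) * z / fx, (y - h / 2) * z / fx, z], -1)

    x1, y1 = xs + flow[..., 0], ys + flow[..., 1]     # where pixels moved
    d1_warped = cv2.remap(d1, x1, y1, cv2.INTER_LINEAR)
    return backproject(x1, y1, d1_warped) - backproject(xs, ys, d0)
```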