
    Variability of contour line alignment on sequential images with the Heidelberg Retina Tomograph

    • Background: The influence of the contour line alignment software algorithm on the variability of the Heidelberg Retina Tomograph (HRT) parameters remains unclear.
    • Methods: Nine discrete topographic images were acquired with the HRT from the right eye of six healthy, emmetropic subjects. The variability of topometric data obtained from the same topographic image, analyzed within different samples of images, was evaluated. A total of four mean topographic images were computed for each subject from: all nine discrete images (A), the first six of those images (B), the last six of those images (C), and the first three combined with the last three images (D). A contour line was computed on the mean topographic image generated from the nine discrete topographic images (A). This contour line was then applied to the three other mean topographic images (B, C, and D), using the contour line alignment in the HRT software. Subsequently, the contour line on each mean topographic image was applied to each of the discrete members of the particular image subset used to compute that mean topographic image, and the topometric data for these discrete topographic images were computed successively for each subset. Prior to processing each subset, the contour line on the discrete topographic images was deleted. This strategy provided a total of three analyses of each discrete topographic image: as a member of the nine images (mean topographic image A), and as a member of two subsets of images (mean topographic images B, C, and/or D). The coefficient of variation (100 × SD/mean) of the topographic parameters across those three analyses was calculated for each discrete topographic image in each subject (the "intraimage" coefficient of variation). In addition, a coefficient of variation between the nine discrete topographic images (the "interimage" coefficient of variation) was calculated.
    • Results: The "intraimage" and "interimage" variability for the various topographic parameters ranged between 0.03% and 3.10% and between 0.03% and 24.07%, respectively. The "intraimage" and "interimage" coefficients of variation correlated significantly (r² = 0.77; P < 0.0001).
    • Conclusion: A high "intraimage" variability, i.e. a high variability in contour line alignment between sequential images, might be an important source of test-retest variability between sequential images.
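
    The coefficient-of-variation computation used throughout the study is straightforward; the following is a minimal Python sketch, with invented rim-area readings rather than HRT measurements (sample SD is assumed, which the abstract does not specify).

```python
import numpy as np

def coefficient_of_variation(values):
    """CoV in percent, as defined in the abstract: 100 * SD / mean.
    Sample SD (ddof=1) is an assumption; the abstract does not say."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical rim-area readings (mm^2) for one discrete image, from its
# three analyses (as a member of subset A and of two of B/C/D).
intraimage_cov = coefficient_of_variation([1.52, 1.54, 1.51])

# Hypothetical readings of the same parameter across all nine discrete images.
interimage_cov = coefficient_of_variation(
    [1.52, 1.49, 1.55, 1.50, 1.53, 1.48, 1.56, 1.51, 1.54])

print(f"intraimage CoV: {intraimage_cov:.2f}%")
print(f"interimage CoV: {interimage_cov:.2f}%")
```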

    Bayesian subset simulation

    We consider the problem of estimating a probability of failure α, defined as the volume of the excursion set of a function f : X ⊆ ℝ^d → ℝ above a given threshold, under a given probability measure on X. In this article, we combine the popular subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our sequential Bayesian approach for the estimation of a probability of failure (Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it possible to estimate α when the number of evaluations of f is very limited and α is very small. The resulting algorithm is called Bayesian subset simulation (BSS). A key idea, as in the subset simulation algorithm, is to estimate the probabilities of a sequence of excursion sets of f above intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A Gaussian process prior on f is used to define the sequence of densities targeted by the SMC algorithm, and to drive the selection of evaluation points of f to estimate the intermediate probabilities. Adaptive procedures are proposed to determine the intermediate thresholds and the number of evaluations to be carried out at each stage of the algorithm. Numerical experiments illustrate that BSS achieves significant savings in the number of function evaluations with respect to other Monte Carlo approaches.
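
    For orientation, here is a minimal sketch of the plain subset simulation backbone that BSS builds on, assuming a standard normal input distribution. The Gaussian-process surrogate and the adaptive evaluation budgets from the paper are not reproduced; f is evaluated directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_simulation(f, d, u, n=2000, p0=0.1, sigma=0.5, max_levels=20):
    # Plain subset simulation (Au & Beck 2001) for alpha = P(f(X) > u),
    # X ~ N(0, I_d). BSS would replace the direct evaluations of f below
    # with a Gaussian-process model; this shows only the SMC backbone
    # with adaptively chosen intermediate thresholds.
    x = rng.standard_normal((n, d))
    y = np.apply_along_axis(f, 1, x)
    alpha = 1.0
    for _ in range(max_levels):
        u_k = np.quantile(y, 1.0 - p0)        # adaptive intermediate threshold
        if u_k >= u:                          # final threshold is reachable
            break
        alpha *= p0                           # P(level k | level k-1) ~ p0
        seeds = x[y > u_k]                    # samples in the excursion set
        reps = int(np.ceil(n / len(seeds)))
        new_x, new_y = [], []
        for s in np.tile(seeds, (reps, 1))[:n]:
            prop = s + sigma * rng.standard_normal(d)
            # Metropolis step targeting N(0, I_d) restricted to {f > u_k}
            if (np.log(rng.random()) < 0.5 * (s @ s - prop @ prop)
                    and f(prop) > u_k):
                s = prop
            new_x.append(s)
            new_y.append(f(s))
        x, y = np.array(new_x), np.array(new_y)
    return alpha * np.mean(y > u)

# Toy example: f(x) = sum(x) ~ N(0, d), so P(f > 9) for d = 10 is ~2e-3.
print(subset_simulation(lambda v: v.sum(), d=10, u=9.0))
```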

    Efficient training algorithms for HMMs using incremental estimation

    Typically, parameter estimation for a hidden Markov model (HMM) is performed using an expectation-maximization (EM) algorithm with the maximum-likelihood (ML) criterion. The EM algorithm is an iterative scheme that is well-defined and numerically stable, but convergence may require a large number of iterations. For speech recognition systems utilizing large amounts of training material, this results in long training times. This paper presents an incremental estimation approach to speed up the training of HMMs without any loss of recognition performance. The algorithm selects a subset of data from the training set, updates the model parameters based on the subset, and then iterates the process until convergence of the parameters. The advantage of this approach is a substantial increase in the number of iterations of the EM algorithm per training token, which leads to faster training. In order to achieve reliable estimation from a small fraction of the complete data set at each iteration, two training criteria are studied: ML and maximum a posteriori (MAP) estimation. Experimental results show that training with the incremental algorithms is substantially faster than with the conventional (batch) method and suffers no loss of recognition performance. Furthermore, the incremental MAP-based training algorithm improves performance over the batch version.
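
    A minimal sketch of the subset-based training loop follows, using a discrete-emission HMM for brevity (speech systems use continuous densities, and all names and toy data here are illustrative). It implements the ML criterion; a MAP variant would add Dirichlet prior counts in the M-step.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_counts(obs, pi, A, B):
    # Scaled forward-backward pass for one discrete-emission sequence;
    # returns expected initial-state, transition, and emission counts.
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S)); beta = np.zeros((T, S)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    xi = np.zeros((S, S)); em = np.zeros_like(B)
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / c[t + 1]
    for t in range(T):
        em[:, obs[t]] += gamma[t]
    return gamma[0], xi, em

def incremental_em(data, S, V, iters=50, batch=10):
    # One EM update per random subset of sequences instead of per full
    # pass: many more parameter updates per training token, which is the
    # speed-up the abstract describes. ML criterion; MAP would add prior
    # counts to the numerators and denominators below.
    pi = np.full(S, 1.0 / S)
    A = rng.dirichlet(np.ones(S), size=S)
    B = rng.dirichlet(np.ones(V), size=S)
    for _ in range(iters):
        idx = rng.choice(len(data), size=batch, replace=False)
        g0 = np.zeros(S); xi = np.zeros((S, S)); em = np.zeros((S, V))
        for i in idx:                                  # E-step on the subset
            g, x_, e = expected_counts(data[i], pi, A, B)
            g0 += g; xi += x_; em += e
        pi = g0 / g0.sum()                             # M-step
        A = xi / xi.sum(axis=1, keepdims=True)
        B = em / em.sum(axis=1, keepdims=True)
    return pi, A, B

data = [rng.integers(0, 4, size=30) for _ in range(200)]  # toy corpus
pi, A, B = incremental_em(data, S=3, V=4)
```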

    Qualitative Robustness in Bayesian Inference

    The practical implementation of Bayesian inference requires numerical approximation when closed-form expressions are not available. What types of accuracy (convergence) of the numerical approximations guarantee robustness, and what types do not? In particular, is the recursive application of Bayes' rule robust when subsequent data or posteriors are approximated? When the prior is the pushforward of a distribution by the map induced by the solution of a PDE, in which norm should that solution be approximated? Motivated by such questions, we investigate the sensitivity of the distribution of posterior distributions (i.e. posterior-distribution-valued random variables, randomized through the data) with respect to perturbations of the prior and data-generating distributions in the limit when the number of data points grows towards infinity.
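
    The recursive setting the abstract asks about can be made concrete with a toy sketch: a grid posterior is updated datum by datum, once exactly and once with a crude quantization applied after every step. All quantities are illustrative; the paper itself works at the level of distributions over posteriors.

```python
import numpy as np

# Recursive application of Bayes' rule on a parameter grid: after each
# observation the posterior is re-approximated (here: quantized), which
# instantiates the setting whose robustness the paper interrogates.
grid = np.linspace(-5, 5, 401)             # hypotheses for the mean theta
prior = np.exp(-0.5 * grid**2); prior /= prior.sum()

def bayes_step(post, x, noise_sd=1.0, levels=None):
    like = np.exp(-0.5 * ((x - grid) / noise_sd) ** 2)
    post = post * like
    post /= post.sum()
    if levels is not None:                 # crude posterior approximation
        post = np.round(post * levels) / levels
        post /= post.sum()
    return post

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=200)      # truth: theta = 1.5
exact, coarse = prior.copy(), prior.copy()
for x in data:
    exact = bayes_step(exact, x)
    coarse = bayes_step(coarse, x, levels=1e4)   # approximate every step
print("posterior means:", grid @ exact, grid @ coarse)
```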

    Quasars can be used to verify the parallax zero-point of the Tycho-Gaia Astrometric Solution

    Context. The Gaia project will determine positions, proper motions, and parallaxes for more than one billion stars in our Galaxy. It is known that Gaia's two telescopes are affected by a small but significant variation of the basic angle between them. Unless this variation is taken into account during data processing, e.g. using on-board metrology, it causes systematic errors in the astrometric parameters, in particular a shift of the parallax zero-point. Previously, we suggested an early reduction of Gaia data for the subset of Tycho-2 stars (Tycho-Gaia Astrometric Solution; TGAS). Aims. We aim to investigate whether quasars can be used to independently verify the parallax zero-point already in early data reductions. This is not trivially possible, as the observation interval is too short to disentangle parallax and proper motion for the quasar subset. Methods. We repeat TGAS simulations but additionally include simulated Gaia observations of quasars from ground-based surveys. All observations are simulated with basic angle variations. To obtain a full astrometric solution for the quasars in TGAS we explore the use of prior information for their proper motions. Results. It is possible to determine the parallax zero-point for the quasars with a few μas uncertainty, and it agrees to a similar precision with the zero-point for the Tycho-2 stars. The proposed strategy is robust even for quasars exhibiting significant fictitious proper motion due to a variable source structure, or when the quasar subset is contaminated with stars misidentified as quasars. Conclusions. Using prior information about quasar proper motions we could provide an independent verification of the parallax zero-point in early solutions based on less than one year of Gaia data. Comment: Astronomy & Astrophysics, accepted 25 October 2015, in press. Version 2 contains a few language improvements and a terminology change from 'fictitious proper motions' to 'spurious proper motions'.
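
    A heavily simplified 1-D illustration of the key idea: a Gaussian prior on the (near-zero) quasar proper motions regularizes an astrometric fit in which proper motion and parallax are poorly separated over a short observation interval. The design matrix, noise level, and "parallax factor" below are toy stand-ins, not the TGAS model.

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.linspace(0.0, 0.8, 40)                     # < 1 yr of observations
design = np.column_stack([np.ones_like(t),        # position offset
                          t,                      # proper motion term
                          np.sin(2 * np.pi * t)]) # toy parallax factor
truth = np.array([0.0, 0.0, 0.2])                 # quasar: pm ~ 0, plx = 0.2
obs = design @ truth + rng.normal(0, 0.05, t.size)

def solve(prior_pm_sd=None):
    # Normal equations with measurement noise sigma = 0.05; the prior
    # pm ~ N(0, sd^2) simply adds precision to the proper-motion term.
    N = design.T @ design / 0.05**2
    b = design.T @ obs / 0.05**2
    if prior_pm_sd is not None:
        N[1, 1] += 1.0 / prior_pm_sd**2
    return np.linalg.solve(N, b)

print("no prior:  ", solve())
print("with prior:", solve(prior_pm_sd=0.01))
```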

    Comprehensive Two-Point Analyses of Weak Gravitational Lensing Surveys

    We present a framework for analyzing weak gravitational lensing survey data, including lensing and source-density observables, plus spectroscopic redshift calibration data. All two-point observables are predicted in terms of parameters of a perturbed Robertson-Walker metric, making the framework independent of the models for gravity, dark energy, or galaxy properties. For Gaussian fluctuations the two-point model determines the survey likelihood function and allows Fisher-matrix forecasting. The framework includes nuisance terms for the major systematic errors: shear measurement errors, magnification bias and redshift calibration errors, intrinsic galaxy alignments, and inaccurate theoretical predictions. We propose flexible parameterizations of the many nuisance parameters related to galaxy bias and intrinsic alignment. For the first time we can integrate many different observables and systematic errors into a single analysis. As a first application of this framework, we demonstrate that: uncertainties in power-spectrum theory cause very minor degradation to cosmological information content; nearly all useful information (excepting baryon oscillations) is extracted with ~3 bins per decade of angular scale; and the rate at which galaxy bias varies with redshift substantially influences the strength of cosmological inference. The framework will permit careful study of the interplay between numerous observables, systematic errors, and spectroscopic calibration data for large weak-lensing surveys. Comment: submitted to ApJ.
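
    For zero-mean Gaussian fluctuations, Fisher-matrix forecasting reduces to a standard trace formula over the covariance of the two-point observables. A minimal sketch with an invented 2×2 covariance follows; the derivative matrices are hypothetical, not lensing predictions.

```python
import numpy as np

def fisher_matrix(cov, dcov):
    """F_ab = 1/2 Tr[C^-1 dC/dp_a C^-1 dC/dp_b] per independent mode,
    for zero-mean Gaussian data with covariance C."""
    inv = np.linalg.inv(cov)
    n = len(dcov)
    F = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            F[a, b] = 0.5 * np.trace(inv @ dcov[a] @ inv @ dcov[b])
    return F

# Toy 2x2 two-point covariance with two parameters; forecast marginal
# errors as the square roots of the inverse-Fisher diagonal.
C = np.array([[2.0, 0.3], [0.3, 1.0]])
dC = [np.array([[2.0, 0.3], [0.3, 1.0]]),    # d C / d ln(amplitude)
      np.array([[1.0, 0.0], [0.0, -1.0]])]   # a hypothetical tilt derivative
errors = np.sqrt(np.diag(np.linalg.inv(fisher_matrix(C, dC))))
print(errors)
```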

    Latitudinal distribution and magnetic signatures of magnetospheric substorms

    The Earth's magnetic field shields the Earth from the solar wind, forming a magnetic cavity inside the solar wind called the magnetosphere. The magnetosphere is a highly dynamic system, constantly interacting with the solar wind. One of its dynamic features is the magnetospheric substorm, during which the magnetosphere unloads energy extracted from the solar wind. Substorm expansions happen on the nightside of the Earth, as the inner magnetic field lines of the magnetotail reconnect and dipolarize closer to the Earth. During this process, magnetospheric currents are redirected along the magnetic field lines, flowing to the Earth's ionosphere, where they connect to the westward electrojet. The westward electrojet intensifies during each substorm, depressing the Earth's magnetic field. The magnetic disturbances caused by the westward electrojet are the main subject of this thesis. The magnetic disturbances are studied with ground-based magnetic field measurements. For this purpose, a geomagnetic index called the IL index is formed from the IMAGE magnetometer network to describe the absolute amplitude of these disturbances. The IL index is also used to identify substorm expansion phase onsets, which are detected with an implemented algorithm. A list of substorms is created with these methods for the years 1993–2020. This list holds information on the total number of substorms and the duration and amplitude of each substorm, which allows us to study the solar cycle and seasonal variation of these substorm properties. A subset of eleven IMAGE stations is used to study the latitudinal distribution of the substorm properties and the average magnetic signatures using superposed epoch analysis; the solar cycle and seasonal variation at different latitudes are studied as well. The magnetic signatures show how the westward electrojet descends to lower latitudes when it is enhanced prior to the substorm onset. The magnetic signatures show positive bays at substorm onset at the three southernmost stations of the subset (62.25°N, 60.50°N and 58.26°N); however, these positive bays become less distinct if the westward electrojet is enhanced prior to the onset. The latitudinal distributions give a better understanding of which IMAGE stations detect more substorms, and show that the solar cycle and seasonal variation of the IL substorm properties are strongly shaped by the majority of substorms being detected at the higher-latitude stations (74.50°N, 69.76°N and 69.66°N).
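
    A minimal sketch of the two computations described here: an IL-style lower envelope over stations and a superposed epoch average around onsets. The data are synthetic, baseline subtraction is omitted, and the onset indices are hypothetical rather than products of the thesis's detection algorithm.

```python
import numpy as np

def il_index(x_disturbance):
    """IL-style index: at each time step, the minimum northward (X-component)
    disturbance over all contributing stations (shape: stations x times, nT).
    Real IL uses baseline-subtracted X components; that step is omitted."""
    return x_disturbance.min(axis=0)

def superposed_epoch(series, onsets, before=30, after=120):
    """Stack windows around each substorm onset index and average them."""
    windows = [series[t - before:t + after] for t in onsets
               if t - before >= 0 and t + after <= len(series)]
    return np.mean(windows, axis=0)

# Toy demonstration with synthetic 1-min data from 11 stations.
rng = np.random.default_rng(4)
x = rng.normal(0, 20, size=(11, 5000))            # station disturbances, nT
il = il_index(x)
onsets = [500, 1500, 3000, 4200]                  # hypothetical onset indices
signature = superposed_epoch(il, onsets)
print(signature.shape)                            # (150,) epoch minutes
```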