3,894 research outputs found

    Designing a Belief Function-Based Accessibility Indicator to Improve Web Browsing for Disabled People

    The purpose of this study is to provide an accessibility measure of web pages, in order to draw disabled users to the pages that have been designed to be accessible to them. Our approach is based on the theory of belief functions, using data supplied by reports produced by automatic web content assessors that test the validity of criteria defined by the WCAG 2.0 guidelines proposed by the World Wide Web Consortium (W3C). These tools detect errors with gradual degrees of certainty, and their results do not always converge. For these reasons, to fuse the information coming from the reports, we choose an information fusion framework which can take into account the uncertainty and imprecision of information as well as divergences between sources. Our accessibility indicator covers four categories of deficiencies. To validate the theoretical approach in this context, we propose an evaluation conducted on a corpus of the 100 most visited French news websites, using 2 evaluation tools. The results obtained illustrate the interest of our accessibility indicator.
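    As a rough sketch of the fusion step described above, the following combines two assessors' belief masses with Dempster's rule of combination. The masses and the two-element frame {accessible, not accessible} are invented for illustration, and the paper's actual fusion operator may handle conflict between sources differently.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, normalizing out the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # focal sets with empty intersection
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Hypothetical frame: a page is Accessible (A) or Not accessible (N).
A, N = frozenset("A"), frozenset("N")
AN = A | N  # total ignorance
assessor1 = {A: 0.6, AN: 0.4}          # confident but imprecise report
assessor2 = {A: 0.5, N: 0.2, AN: 0.3}  # partly diverging report
fused = dempster_combine(assessor1, assessor2)
```

    The fused mass concentrates on "accessible" while keeping some mass on ignorance, which is the behaviour the abstract asks of a framework handling uncertain, diverging sources.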

    Application of Monte Carlo Algorithms to the Bayesian Analysis of the Cosmic Microwave Background

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; non-homogeneous, correlated instrumental noise; and foreground emission is a problem of central importance for the extraction of cosmological information from the cosmic microwave background. We develop a Monte Carlo approach for the maximum likelihood estimation of the power spectrum. The method is based on an identity for the Bayesian posterior as a marginalization over unknowns. Maximization of the posterior involves the computation of expectation values as a sample average from maps of the cosmic microwave background and foregrounds given some current estimate of the power spectrum or cosmological model, and some assumed statistical characterization of the foregrounds. Maps of the CMB are sampled by a linear transform of a Gaussian white noise process, implemented numerically with conjugate gradient descent. For time series data with $N_{t}$ samples, and $N$ pixels on the sphere, the method has a computational expense $K O[N^{2} + N_{t} \log N_{t}]$, where $K$ is a prefactor determined by the convergence rate of conjugate gradient descent. Preconditioners for conjugate gradient descent are given for scans close to great circle paths, and the method allows partial sky coverage for these cases by numerically marginalizing over the unobserved, or removed, region. Comment: submitted to Ap
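    The linear solves at the heart of the sampling step can be illustrated with a plain conjugate gradient solver. This is a toy dense version on a small symmetric positive definite system; the real application applies the matrix-vector products implicitly and relies on the preconditioners discussed in the abstract.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite A
    (list of lists), starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual b - A x, since x = 0
    p = r[:]              # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction, conjugate to the previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution is (1/11, 7/11)
```

    In exact arithmetic CG terminates in at most $n$ iterations; the prefactor $K$ in the abstract measures how much sooner a well-preconditioned solve converges in practice.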

    A Meaner King uses Biased Bases

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement on a d-dimensional quantum system, made in one of (d+1) orthonormal bases, unknown to Alice at the time of the measurement. Alice has to make this retrodiction on the basis of the classical outcomes of a suitable control measurement including an entangled copy. We show that the existence of a strategy for Alice is equivalent to the existence of an overall joint probability distribution for (d+1) random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, for d=2 the problem is decided by John Bell's classic inequality for three dichotomic variables. For mutually unbiased bases in any dimension Alice has a strategy, but for randomly chosen bases the probability for that goes rapidly to zero with increasing d. Comment: 5 pages, 1 figure
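    The d=2 criterion mentioned above can be made concrete. For three ±1 random variables with unbiased single-variable marginals, a joint distribution with prescribed pairwise correlations exists if and only if four Bell-type inequalities hold; this is a standard result, and the function below is an illustration under that assumption, not code from the paper.

```python
def joint_exists(e12, e13, e23):
    """For three dichotomic (+/-1) variables with unbiased marginals,
    test whether a joint distribution with the given pairwise
    correlations e_ij = E[X_i X_j] exists (Bell's inequalities)."""
    return (1 + e12 + e13 + e23 >= 0 and
            1 + e12 - e13 - e23 >= 0 and
            1 - e12 + e13 - e23 >= 0 and
            1 - e12 - e13 + e23 >= 0)

# Perfect pairwise anticorrelation of all three variables is impossible:
# joint_exists(-1, -1, -1) violates the first inequality.
```

    Each inequality says a particular signed combination of correlations cannot exceed what any assignment of ±1 values to the three variables allows.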

    Principal Component Analysis with Noisy and/or Missing Data

    We present a method for performing Principal Component Analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that compared to classic PCA, the resulting eigenvectors are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data is simply the limiting case of weight=0. The underlying algorithm is a noise weighted Expectation Maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution. We present applications of this method on simulated data and QSO spectra from the Sloan Digital Sky Survey. Comment: Accepted for publication in PASP; v2 with minor updates, mostly to bibliography
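    A minimal sketch of noise-weighted EM PCA for a single eigenvector, with missing data encoded as weight = 0. The data and weights below are synthetic, and the published algorithm also handles multiple eigenvectors and eigenvector smoothing; this only shows the weighted E and M steps.

```python
import numpy as np

def empca_rank1(X, W, n_iter=100):
    """Noise-weighted EM PCA for one eigenvector.
    X: (n, d) data; W: (n, d) inverse-variance weights (0 = missing)."""
    rng = np.random.default_rng(0)
    phi = rng.normal(size=X.shape[1])
    phi /= np.linalg.norm(phi)
    for _ in range(n_iter):
        # E-step: best per-observation coefficient under the weights
        c = ((W * X) @ phi) / (W @ phi**2)
        # M-step: weighted least-squares update of the eigenvector
        phi = ((W * X).T @ c) / (W.T @ c**2)
        phi /= np.linalg.norm(phi)
    return phi

# Rank-1 signal plus small noise, with some entries corrupted
# and then masked out via weight = 0.
rng = np.random.default_rng(1)
u = np.array([0.6, 0.8, 0.0])              # true direction
a = rng.normal(size=200)
X = np.outer(a, u) + 0.01 * rng.normal(size=(200, 3))
X[::7, 1] = 99.0                           # corrupted measurements
W = np.ones_like(X)
W[::7, 1] = 0.0                            # ...treated as missing
phi = empca_rank1(X, W)
```

    Because the corrupted entries carry zero weight, the recovered eigenvector stays aligned with the true direction, which is the point of weighting by the measurement error.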

    Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting

    In this paper we establish links between, and new results for, three problems that are not usually considered together. The first is a matrix decomposition problem that arises in areas such as statistical modeling and signal processing: given a matrix $X$ formed as the sum of an unknown diagonal matrix and an unknown low-rank positive semidefinite matrix, decompose $X$ into these constituents. The second problem we consider is to determine the facial structure of the set of correlation matrices, a convex set also known as the elliptope. This convex body, and particularly its facial structure, plays a role in applications from combinatorial optimization to mathematical finance. The third problem is a basic geometric question: given points $v_1,v_2,\ldots,v_n \in \mathbb{R}^k$ (where $n > k$) determine whether there is a centered ellipsoid passing \emph{exactly} through all of the points. We show that in a precise sense these three problems are equivalent. Furthermore we establish a simple sufficient condition on a subspace $U$ that ensures any positive semidefinite matrix $L$ with column space $U$ can be recovered from $D+L$ for any diagonal matrix $D$ using a convex optimization-based heuristic known as minimum trace factor analysis. This result leads to a new understanding of the structure of rank-deficient correlation matrices and a simple condition on a set of points that ensures there is a centered ellipsoid passing through them. Comment: 20 pages
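    The third problem, deciding whether a centered ellipsoid passes exactly through given points, reduces to a linear system in the entries of a symmetric matrix $A$ with $v_i^{T} A v_i = 1$, plus a positive definiteness check. The sketch below is a straightforward least-squares test of that condition, not the paper's machinery.

```python
import math
import numpy as np

def centered_ellipsoid_through(points, tol=1e-8):
    """Return True iff some centered ellipsoid {v : v^T A v = 1, A > 0}
    passes exactly through every given point in R^k."""
    V = np.asarray(points, dtype=float)
    n, k = V.shape
    cols, pairs = [], []
    for a in range(k):
        for b in range(a, k):
            pairs.append((a, b))
            # off-diagonal entries of A appear twice in v^T A v
            cols.append((1 if a == b else 2) * V[:, a] * V[:, b])
    M = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(M, np.ones(n), rcond=None)
    A = np.zeros((k, k))
    for (a, b), c in zip(pairs, coef):
        A[a, b] = A[b, a] = c
    residual = np.einsum('ia,ab,ib->i', V, A, V) - 1.0
    exact = bool(np.max(np.abs(residual)) < tol)
    return exact and bool(np.all(np.linalg.eigvalsh(A) > tol))

circle = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
on = centered_ellipsoid_through(circle)                   # unit circle fits
off = centered_ellipsoid_through(circle + [(2.0, 0.0)])   # no exact fit
```

    With $n > k$ points the system is overdetermined, so an exact fit is a genuine constraint on the point configuration, which is what ties this question to the facial structure results above.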

    Five-dimensional Nernst branes from special geometry

    We construct Nernst brane solutions, that is black branes with zero entropy density in the extremal limit, of FI-gauged minimal five-dimensional supergravity coupled to an arbitrary number of vector multiplets. While the scalars take specific constant values and dynamically determine the value of the cosmological constant in terms of the FI-parameters, the metric takes the form of a boosted AdS Schwarzschild black brane. This metric can be brought to the Carter-Novotny-Horsky form that has previously been observed to occur in certain limits of boosted D3-branes. By dimensional reduction to four dimensions we recover the four-dimensional Nernst branes of arXiv:1501.07863 and show how the five-dimensional lift resolves all their UV singularities. The dynamics of the compactification circle, which expands both in the UV and in the IR, plays a crucial role. At asymptotic infinity, the curvature singularity of the four-dimensional metric and the run-away behaviour of the four-dimensional scalar combine in such a way that the lifted solution becomes asymptotic to AdS5. Moreover, the existence of a finite chemical potential in four dimensions is related to the fact that the compactification circle has a finite minimal value. While it is not clear immediately how to embed our solutions into string theory, we argue that the same type of dictionary as proposed for boosted D3-branes should apply, although with a lower amount of supersymmetry. Comment: 59 pages, 1 figure. Revised version: references added, typos corrected. Final version, accepted by JHEP: two references added

    How Sample Completeness Affects Gamma-Ray Burst Classification

    Unsupervised pattern recognition algorithms support the existence of three gamma-ray burst classes: Class I (long, large fluence bursts of intermediate spectral hardness), Class II (short, small fluence, hard bursts), and Class III (soft bursts of intermediate durations and fluences). The algorithms surprisingly assign larger membership to Class III than to either of the other two classes. A known systematic bias has been previously used to explain the existence of Class III in terms of Class I; this bias allows the fluences and durations of some bursts to be underestimated (Hakkila et al., ApJ 538, 165, 2000). We show that this bias primarily affects only the longest bursts and cannot explain the bulk of the Class III properties. We resolve the question of Class III existence by demonstrating how samples obtained using standard trigger mechanisms fail to preserve the duration characteristics of small peak flux bursts. Sample incompleteness is thus primarily responsible for the existence of Class III. In order to avoid this incompleteness, we show how a new dual timescale peak flux can be defined in terms of peak flux and fluence. The dual timescale peak flux preserves the duration distribution of faint bursts and correlates better with spectral hardness (and presumably redshift) than either peak flux or fluence. The techniques presented here are generic and have applicability to the studies of other transient events. The results also indicate that pattern recognition algorithms are sensitive to sample completeness; this can influence the study of large astronomical databases such as those found in a Virtual Observatory. Comment: 29 pages, 6 figures, 3 tables, Accepted for publication in The Astrophysical Journal
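    As an illustration of the kind of unsupervised pattern recognition the abstract refers to (not the authors' actual algorithm), a plain k-means on invented (log-duration, hardness) features separates a short/hard group from a long/soft group; real analyses add fluence and face exactly the completeness effects discussed above.

```python
def kmeans(points, k, n_iter=100):
    """Plain k-means on feature tuples, initialized from the
    first k points (a minimal illustrative clustering)."""
    centers = list(points[:k])
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        new = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
               for j, cl in enumerate(clusters)]
        if new == centers:      # assignments stable: converged
            break
        centers = new
    return centers, clusters

# Hypothetical burst features: (log duration, spectral hardness)
short_hard = [(-0.6 + 0.05 * i, 6.0 + 0.1 * i) for i in range(5)]
long_soft = [(1.4 + 0.05 * i, 3.0 + 0.1 * i) for i in range(5)]
bursts = [short_hard[0], long_soft[0]] + short_hard[1:] + long_soft[1:]
centers, clusters = kmeans(bursts, 2)
```

    If faint long bursts were systematically missing from the sample, the long/soft cluster would thin out, which is the sample-completeness sensitivity the abstract warns about.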

    Strain control of superlattice implies weak charge-lattice coupling in La$_{0.5}$Ca$_{0.5}$MnO$_3$

    We have recently argued that manganites do not possess stripes of charge order, implying that the electron-lattice coupling is weak [Phys Rev Lett \textbf{94} (2005) 097202]. Here we independently argue the same conclusion based on transmission electron microscopy measurements of a nanopatterned epitaxial film of La$_{0.5}$Ca$_{0.5}$MnO$_3$. In strain-relaxed regions, the superlattice period is modified by 2-3% with respect to the parent lattice, suggesting that the two are not strongly tied. Comment: 4 pages, 4 figures. It is now explained why the work provides evidence to support weak coupling, and rule out charge order

    The Time Machine: A Simulation Approach for Stochastic Trees

    In the following paper we consider a simulation technique for stochastic trees. One of the most important areas in computational genetics is the calculation and subsequent maximization of the likelihood function associated with such models. This typically relies on importance sampling (IS) and sequential Monte Carlo (SMC) techniques. The approach proceeds by simulating the tree, backward in time from observed data, to a most recent common ancestor (MRCA). However, in many cases, the computational time and variance of estimators are often too high to make standard approaches useful. In this paper we propose to stop the simulation early, which yields biased estimates of the likelihood surface. The bias is investigated from a theoretical point of view. Results from simulation studies are also given to investigate the balance between loss of accuracy, saving in computing time, and variance reduction. Comment: 22 pages, 5 figures
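    The trade-off of stopping a backward simulation early can be sketched on the Kingman coalescent, where the truncated expected time is known in closed form. This is a schematic illustration of the bias/cost balance, not the paper's IS/SMC likelihood estimator.

```python
import random

def tmrca(n, stop_at=1, rng=random):
    """Simulate the time to ancestry backward from n lineages under
    the Kingman coalescent; stop_at > 1 truncates the simulation
    before the MRCA is reached (cheaper, but biased downward)."""
    t, k = 0.0, n
    while k > stop_at:
        # while k lineages remain, coalescence occurs at rate k(k-1)/2
        t += rng.expovariate(k * (k - 1) / 2)
        k -= 1
    return t

random.seed(1)
full = sum(tmrca(10) for _ in range(20000)) / 20000
random.seed(1)
trunc = sum(tmrca(10, stop_at=3) for _ in range(20000)) / 20000
# Analytic means: full = 2(1 - 1/10) = 1.8, truncated = 2(1/3 - 1/10)
```

    The truncated run skips the slowest, highest-variance final coalescences, which is exactly where the savings in computing time and variance come from, at the price of a bias that must be analysed.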