
    Designing a Belief Function-Based Accessibility Indicator to Improve Web Browsing for Disabled People

    The purpose of this study is to provide an accessibility measure for web pages, in order to draw disabled users to the pages that have been designed to be accessible to them. Our approach is based on the theory of belief functions, using data supplied by the reports of automatic web content assessors that test the validity of criteria defined by the WCAG 2.0 guidelines proposed by the World Wide Web Consortium (W3C). These tools detect errors with gradual degrees of certainty, and their results do not always converge. For these reasons, to fuse the information coming from the reports, we use an information fusion framework that can take into account the uncertainty and imprecision of the information as well as divergences between sources. Our accessibility indicator covers four categories of deficiencies. To validate the theoretical approach, we present an evaluation carried out on a corpus of the 100 most visited French news websites, using 2 evaluation tools. The results obtained illustrate the usefulness of our accessibility indicator.
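
    The abstract names belief functions and an information fusion framework but gives no formulas; the sketch below is my own illustration, assuming Dempster's rule on a two-element frame and invented masses, of how two divergent assessor reports could be fused.

        # A minimal sketch (not from the paper) of fusing two assessors'
        # reports with Dempster's rule on the frame
        # {accessible (A), not accessible (N)}; masses are placeholders.
        from itertools import product

        OMEGA = frozenset({"A", "N"})

        def dempster(m1, m2):
            """Combine two mass functions given as {frozenset: mass} dicts."""
            combined, conflict = {}, 0.0
            for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
                inter = s1 & s2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
            if conflict >= 1.0:
                raise ValueError("total conflict: sources fully disagree")
            return {s: w / (1.0 - conflict) for s, w in combined.items()}

        # Two assessors grading the same page, each with some ignorance mass:
        m_tool1 = {frozenset({"A"}): 0.6, OMEGA: 0.4}
        m_tool2 = {frozenset({"N"}): 0.3, frozenset({"A"}): 0.5, OMEGA: 0.2}
        print(dempster(m_tool1, m_tool2))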

    2-(1,4-Dioxo-1,4-dihydro-2-naphthyl)-2-methylpropanoic acid

    The sterically crowded title compound, C₁₄H₁₂O₄, crystallizes as centrosymmetric hydrogen-bonded dimers involving the carboxyl groups. The naphthoquinone ring system is folded by 11.5 (1)° about a vector joining the 1,4-C atoms, and the quinone O atoms are displaced from the ring plane, presumably because of steric interactions with the bulky substituent.

    Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting

    In this paper we establish links between, and new results for, three problems that are not usually considered together. The first is a matrix decomposition problem that arises in areas such as statistical modeling and signal processing: given a matrix $X$ formed as the sum of an unknown diagonal matrix and an unknown low-rank positive semidefinite matrix, decompose $X$ into these constituents. The second problem we consider is to determine the facial structure of the set of correlation matrices, a convex set also known as the elliptope. This convex body, and particularly its facial structure, plays a role in applications from combinatorial optimization to mathematical finance. The third problem is a basic geometric question: given points $v_1, v_2, \ldots, v_n \in \mathbb{R}^k$ (where $n > k$), determine whether there is a centered ellipsoid passing exactly through all of the points. We show that in a precise sense these three problems are equivalent. Furthermore, we establish a simple sufficient condition on a subspace $U$ that ensures any positive semidefinite matrix $L$ with column space $U$ can be recovered from $D + L$ for any diagonal matrix $D$ using a convex optimization-based heuristic known as minimum trace factor analysis. This result leads to a new understanding of the structure of rank-deficient correlation matrices and a simple condition on a set of points that ensures there is a centered ellipsoid passing through them.
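
    A minimal sketch, not the authors' code, of the minimum trace factor analysis heuristic named in the abstract, posed with cvxpy on synthetic data (the matrix sizes and solver defaults are arbitrary choices):

        # Minimum trace factor analysis: given X = D + L with D diagonal
        # and L positive semidefinite, recover L by minimizing its trace.
        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        U = rng.standard_normal((6, 2))
        L_true = U @ U.T                          # low-rank PSD part
        X = np.diag(rng.uniform(0.5, 2.0, 6)) + L_true

        L = cp.Variable((6, 6), PSD=True)
        d = cp.Variable(6)
        prob = cp.Problem(cp.Minimize(cp.trace(L)), [X == cp.diag(d) + L])
        prob.solve()
        print(np.linalg.norm(L.value - L_true))   # small when recovery succeeds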

    Application of Monte Carlo Algorithms to the Bayesian Analysis of the Cosmic Microwave Background

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; non-homogeneous, correlated instrumental noise; and foreground emission is a problem of central importance for the extraction of cosmological information from the cosmic microwave background. We develop a Monte Carlo approach for the maximum likelihood estimation of the power spectrum. The method is based on an identity for the Bayesian posterior as a marginalization over unknowns. Maximization of the posterior involves the computation of expectation values as a sample average from maps of the cosmic microwave background and foregrounds, given some current estimate of the power spectrum or cosmological model and some assumed statistical characterization of the foregrounds. Maps of the CMB are sampled by a linear transform of a Gaussian white noise process, implemented numerically with conjugate gradient descent. For time series data with $N_{t}$ samples and $N$ pixels on the sphere, the method has a computational expense of $KO[N^{2} + N_{t} \log N_{t}]$, where $K$ is a prefactor determined by the convergence rate of conjugate gradient descent. Preconditioners for conjugate gradient descent are given for scans close to great circle paths, and the method allows partial sky coverage for these cases by numerically marginalizing over the unobserved, or removed, region.
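
    A toy sketch, assuming the standard constrained-realization equation used in Gibbs sampling of the CMB (the abstract itself gives no formulas), of drawing a Gaussian map with preconditioned conjugate gradients; the diagonal covariances below stand in for realistic signal and noise:

        # Solve (S^-1 + N^-1) x = N^-1 d + S^-1/2 w1 + N^-1/2 w2 by CG,
        # where w1, w2 are white noise; x is then a sample of the signal
        # posterior. S and N are toy diagonal covariances on n pixels.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(1)
        n = 1000
        S = rng.uniform(1.0, 5.0, n)              # signal covariance (toy)
        N = rng.uniform(0.1, 0.5, n)              # noise covariance (toy)
        d = rng.standard_normal(n)                # "observed" map

        A = LinearOperator((n, n), matvec=lambda x: x / S + x / N)
        b = (d / N + rng.standard_normal(n) / np.sqrt(S)
                   + rng.standard_normal(n) / np.sqrt(N))
        # Diagonal preconditioner, playing the role of the great-circle
        # preconditioners discussed in the abstract:
        M = LinearOperator((n, n), matvec=lambda x: x * S * N / (S + N))
        x, info = cg(A, b, M=M)
        print("converged" if info == 0 else f"cg flag {info}")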

    Clustering as an example of optimizing arbitrarily chosen objective functions

    This paper is a reflection on the common practice of solving various types of learning problems by optimizing arbitrarily chosen criteria, in the hope that they are well correlated with the criterion actually used to assess the results. We investigate this issue using clustering as an example, and first propose a unified view of clustering as an optimization problem. It stems from the observation that typical design choices in clustering, such as the number of clusters or the similarity measure, can be, and often are, suboptimal with respect to the quality measures later used for algorithm comparison and ranking. To illustrate this point, we propose a generalized clustering framework and provide a proof of concept on standard benchmark datasets, with two popular clustering methods for comparison.
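
    A minimal sketch, not the paper's generalized framework, of the mismatch the abstract describes, using scikit-learn: k-means optimizes within-cluster variance (inertia), while the results are then ranked by a different measure (here, silhouette).

        # Inertia decreases monotonically with k, so the k that optimizes
        # it is not the k that a silhouette-based assessment prefers.
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs
        from sklearn.metrics import silhouette_score

        X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
        for k in range(2, 8):
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
            print(k, round(km.inertia_), round(silhouette_score(X, km.labels_), 3))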

    Classification of Message Spreading in a Heterogeneous Social Network

    Nowadays, social networks such as Twitter, Facebook and LinkedIn have become increasingly popular. They have introduced new habits and new ways of communicating, and every day they collect a great deal of information from different sources. Most existing research focuses on the analysis of homogeneous social networks, i.e. networks with a single type of node and link. In the real world, however, social networks contain several types of nodes and links. Hence, in order to preserve as much information as possible, it is important to treat social networks as heterogeneous and uncertain. The goal of our paper is to classify a social message based on its spreading in the network, using the theory of belief functions. The proposed classifier interprets the spread of messages over the network, the paths they cross and the types of links. We tested our classifier on a real-world network collected from Twitter, and our experiments show the performance of our belief classifier.
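
    The abstract gives no formulas; as an illustration of one ingredient such a belief classifier could use, the sketch below applies Shafer discounting to weaken a source's mass function by the reliability of a link type the message crossed (all numbers are invented).

        # Shafer discounting: keep a fraction alpha of each mass and move
        # the remainder to the whole frame omega (total ignorance).
        def discount(m, alpha, omega=frozenset({"spam", "news"})):
            out = {omega: m.get(omega, 0.0) * alpha + (1 - alpha)}
            for s, w in m.items():
                if s != omega:
                    out[s] = w * alpha
            return out

        m_follower = {frozenset({"news"}): 0.8, frozenset({"spam", "news"}): 0.2}
        # A retweet link judged 70% reliable weakens the evidence:
        print(discount(m_follower, 0.7))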

    A Meaner King uses Biased Bases

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement on a d-dimensional quantum system, made in one of (d+1) orthonormal bases unknown to Alice at the time of the measurement. Alice has to make this retrodiction on the basis of the classical outcomes of a suitable control measurement including an entangled copy. We show that the existence of a strategy for Alice is equivalent to the existence of an overall joint probability distribution for (d+1) random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, for d=2 the problem is decided by John Bell's classic inequality for three dichotomic variables. For mutually unbiased bases in any dimension Alice has a strategy, but for randomly chosen bases the probability that a strategy exists goes rapidly to zero with increasing d.
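
    The equivalence stated in the abstract turns Alice's problem into a feasibility question; the sketch below, my own illustration rather than the paper's construction, tests whether a joint distribution with prescribed pairwise marginals exists for three dichotomic variables, via linear programming.

        # Feasibility LP: find p over the 8 outcomes of three 0/1 variables
        # whose pairwise marginals match the given 2x2 tables.
        from itertools import product
        import numpy as np
        from scipy.optimize import linprog

        def joint_exists(pair_marginals):
            """pair_marginals: {(i, j): 2x2 array} for pairs of {0, 1, 2}."""
            outcomes = list(product((0, 1), repeat=3))
            A_eq, b_eq = [], []
            for (i, j), table in pair_marginals.items():
                for a, b in product((0, 1), repeat=2):
                    A_eq.append([1.0 if (o[i], o[j]) == (a, b) else 0.0
                                 for o in outcomes])
                    b_eq.append(table[a][b])
            res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, 1)] * 8)
            return res.success

        uniform = np.full((2, 2), 0.25)   # independent fair coins: feasible
        print(joint_exists({(0, 1): uniform, (0, 2): uniform, (1, 2): uniform}))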

    Statistical significance of communities in networks

    Nodes in real-world networks are usually organized in local modules. These groups, called communities, are intuitively defined as subgraphs with a larger density of internal connections than of external links. In this work, we introduce a new measure aimed at quantifying the statistical significance of single communities. Extreme and order statistics are used to predict the statistics associated with individual clusters in random graphs. These distributions allow us to define the significance of a community as the probability that a generic clustering algorithm finds such a group in a random graph. The method is successfully applied to real-world networks for the evaluation of the significance of their communities.
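
    A simplified empirical analogue, not the paper's C-score, of the quantity being defined: how often a random graph with the same degree sequence contains a community with at least as many internal edges as the observed one.

        # Empirical p-value of a community against configuration-model
        # null graphs (node i of the null inherits the degree of node i).
        import networkx as nx
        import numpy as np

        def internal_edges(G, nodes):
            return G.subgraph(nodes).number_of_edges()

        def empirical_pvalue(G, community, trials=200, seed=0):
            rng = np.random.default_rng(seed)
            observed = internal_edges(G, community)
            degrees = [deg for _, deg in G.degree()]
            hits = 0
            for _ in range(trials):
                R = nx.Graph(nx.configuration_model(
                    degrees, seed=int(rng.integers(1 << 30))))
                R.remove_edges_from(nx.selfloop_edges(R))
                if internal_edges(R, community) >= observed:
                    hits += 1
            return hits / trials

        G = nx.karate_club_graph()
        club = [n for n, data in G.nodes(data=True) if data["club"] == "Mr. Hi"]
        print(empirical_pvalue(G, club))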

    Principal Component Analysis with Noisy and/or Missing Data

    We present a method for performing Principal Component Analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that, compared to classic PCA, the resulting eigenvectors are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data are simply the limiting case of weight = 0. The underlying algorithm is a noise-weighted Expectation Maximization (EM) PCA, which has the additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution. We present applications of this method to simulated data and QSO spectra from the Sloan Digital Sky Survey.
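
    A minimal sketch, not the published method or its code, of one noise-weighted EM PCA component on synthetic data; per-element weights, set to zero for missing values, enter both EM steps.

        # Weighted EM PCA for a single component phi: alternate between
        # per-row coefficients c and a weighted update of phi.
        import numpy as np

        rng = np.random.default_rng(2)
        n, p = 200, 50
        true = np.sin(np.linspace(0, 3 * np.pi, p))
        X = np.outer(rng.standard_normal(n), true) + 0.3 * rng.standard_normal((n, p))
        W = np.full_like(X, 1 / 0.3**2)           # inverse-variance weights
        W[rng.random(X.shape) < 0.1] = 0.0        # 10% missing values

        phi = rng.standard_normal(p)
        for _ in range(50):
            # E-step: weighted least-squares coefficient for each row
            c = (W * X) @ phi / np.maximum((W * phi**2).sum(axis=1), 1e-12)
            # M-step: weighted least-squares update of the eigenvector
            phi = (W * X * c[:, None]).sum(axis=0) / \
                  np.maximum((W * c[:, None]**2).sum(axis=0), 1e-12)
            phi /= np.linalg.norm(phi)
        print(abs(phi @ true) / np.linalg.norm(true))  # near 1 if recovered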