
    Polynomial iterative algorithms for coloring and analyzing random graphs

    We study the graph coloring problem over random graphs of finite average connectivity $c$. Given a number $q$ of available colors, we find that graphs with low connectivity almost always admit a proper coloring, whereas graphs with high connectivity are uncolorable. Depending on $q$, we find the precise value of the critical average connectivity $c_q$. Moreover, we show that below $c_q$ there exists a clustering phase $c \in [c_d, c_q]$ in which ground states spontaneously divide into an exponential number of clusters. Furthermore, we extend our considerations to the case of single instances, obtaining consistent results. This leads us to propose a new algorithm able to color random graphs in polynomial time in the hard but colorable region, i.e., when $c \in [c_d, c_q]$.
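
    The abstract does not spell out the algorithm itself; as a frame of reference, the sketch below is a plain greedy $q$-coloring heuristic on an Erdős–Rényi random graph of average connectivity $c$, not the authors' method, useful only to observe empirically how instance hardness varies with $c$.

```python
# Minimal sketch (not the paper's algorithm): greedy q-coloring of a
# sparse random graph G(n, p) with average connectivity c = p * (n - 1).
import random

def random_graph(n, c, seed=0):
    rng = random.Random(seed)
    p = c / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_color(adj, q):
    color = [-1] * len(adj)
    # Color highest-degree vertices first; a common heuristic ordering.
    for v in sorted(range(len(adj)), key=lambda u: -len(adj[u])):
        used = {color[u] for u in adj[v] if color[u] != -1}
        free = [k for k in range(q) if k not in used]
        if not free:
            return None  # greedy failed (the graph may still be colorable)
        color[v] = free[0]
    return color

coloring = greedy_color(random_graph(1000, 3.0), q=3)
print("colored!" if coloring else "greedy failed")
```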

    Search Bias Quantification: Investigating Political Bias in Social Media and Web Search

    Users frequently use search systems on the Web, as well as on online social media, to learn about ongoing events and public opinion on personalities. Prior studies have shown that the top-ranked results returned by these search engines can shape user opinion about the topic (e.g., an event or person) being searched. For polarizing topics like politics, where multiple competing perspectives exist, political bias in the top search results can play a significant role in shaping public opinion towards (or away from) certain perspectives. Given the considerable impact that search bias can have on users, we propose a generalizable search bias quantification framework that not only measures the political bias in the ranked lists output by a search system but also decouples the bias introduced by the two different sources: the input data and the ranking system. We apply our framework to study political bias in searches related to the 2016 US Presidential primaries in Twitter social media search, and find that both the input data and the ranking system matter in determining the final search output bias seen by users. Finally, we use the framework to compare the relative bias of two popular search systems, Twitter social media search and Google web search, for queries related to politicians and political events. We end by discussing potential ways to signal the bias in search results so as to make users more aware of it.
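
    As a rough illustration of the decoupling idea, the sketch below scores a ranked list with rank-discounted weights over per-result political-lean scores in [-1, +1] (assumed to come from some stance classifier), and separates the bias already present in the input data from the extra bias added by the ranking. The discounting scheme and the additive decomposition are illustrative choices, not necessarily the paper's exact metrics.

```python
# Illustrative search-bias decomposition: input-data bias vs. ranking bias.
import math

def list_bias(stances):
    """Rank-weighted mean stance; stances are lean scores in [-1, +1]."""
    weights = [1.0 / math.log2(rank + 2) for rank in range(len(stances))]
    return sum(w * s for w, s in zip(weights, stances)) / sum(weights)

def decompose_bias(input_pool, ranked_output):
    input_bias = sum(input_pool) / len(input_pool)   # bias of the data itself
    output_bias = list_bias(ranked_output)           # bias of what users see
    ranking_bias = output_bias - input_bias          # contribution of ranking
    return input_bias, ranking_bias, output_bias

pool = [-0.8, -0.2, 0.1, 0.4, 0.9, -0.5, 0.3]        # stances of all candidates
ranked = [0.9, 0.4, 0.3, 0.1, -0.2]                  # stances of the top-5 shown
print(decompose_bias(pool, ranked))
```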

    Protein docking refinement by convex underestimation in the low-dimensional subspace of encounter complexes

    We propose a novel stochastic global optimization algorithm with applications to the refinement stage of protein docking prediction methods. Our approach can process conformations sampled from multiple clusters, each roughly corresponding to a different binding energy funnel. These clusters are obtained using a density-based clustering method. In each cluster, we identify a smooth "permissive" subspace which avoids high-energy barriers, and then underestimate the binding energy function using general convex polynomials in this subspace. We use the underestimator to bias sampling towards its global minimum. Sampling and subspace underestimation are repeated several times, and the conformations sampled at the last iteration form a refined ensemble. We report computational results on a comprehensive benchmark of 224 protein complexes, establishing that our refined ensemble significantly improves the quality of the conformations in the original set given to the algorithm. We also devise a method to enhance the ensemble from which near-native models are selected.
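
    To make the underestimation step concrete, one can restrict the "general convex polynomials" to separable convex quadratics $f(x) = \sum_k a_k x_k^2 + b^\top x + c$ with $a_k \ge 0$ and fit the tightest such underestimator to sampled (conformation, energy) pairs by linear programming. This restriction is a simplifying assumption for illustration, not the paper's exact formulation.

```python
# Hedged sketch: fit a separable convex quadratic that underestimates all
# energy samples, then take its minimizer as the point to bias sampling toward.
import numpy as np
from scipy.optimize import linprog

def fit_underestimator(X, E):
    n, d = X.shape
    # Decision variables z = [a (d), b (d), c (1)]; f(x_i) = feats_i . z
    feats = np.hstack([X**2, X, np.ones((n, 1))])
    cost = -feats.sum(axis=0)                            # maximize sum_i f(x_i)
    bounds = [(0, None)] * d + [(None, None)] * (d + 1)  # a_k >= 0 => convexity
    res = linprog(cost, A_ub=feats, b_ub=E, bounds=bounds, method="highs")
    assert res.success
    z = res.x
    return z[:d], z[d:2 * d], z[2 * d]

# Toy data: noisy bowl-shaped "energies" in 2-D.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
E = (X**2).sum(axis=1) + rng.uniform(0, 1, size=200)
a, b, c = fit_underestimator(X, E)
x_min = -b / (2 * a)   # closed-form minimizer, assuming all a_k > 0
print("underestimator minimum near:", x_min)
```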

    A review of Monte Carlo simulations of polymers with PERM

    In this review, we describe applications of the pruned-enriched Rosenbluth method (PERM), a sequential Monte Carlo algorithm with resampling, to various problems in polymer physics. PERM produces samples according to any prescribed weight distribution by growing configurations step by step with controlled bias, and correcting "bad" configurations by "population control". The latter is implemented, in contrast to other population-based algorithms such as genetic algorithms, by depth-first recursion, which avoids storing all members of the population in computer memory at the same time. The problems we discuss all concern single polymers (with one exception), but under various conditions: homopolymers in good solvents and at the $\Theta$ point, semi-stiff polymers, polymers in confining geometries, stretched polymers undergoing a forced globule-linear transition, star polymers, bottle brushes, lattice animals as a model for randomly branched polymers, DNA melting, and finally, as the only system at low temperatures, lattice heteropolymers as simple models for protein folding. For some of these problems PERM is the method of choice, but it can also fail. We discuss how to recognize when a result is reliable, and we also discuss some types of bias that can be crucial in guiding the growth in the right directions.
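
    PERM is easiest to see on the self-avoiding walk, one of the classic applications covered in the review. The sketch below is a minimal depth-first implementation; the pruning and enrichment thresholds (0.3x and 3.0x the running average weight) are simple heuristics chosen here for illustration, not tuned values from the paper.

```python
# Minimal PERM for 2-D self-avoiding walks: grow with Rosenbluth bias,
# clone high-weight walks (enrich), kill low-weight ones (prune).
import random

N_MAX = 30                      # target chain length
Z = [0.0] * (N_MAX + 1)         # running weight sums -> partition-sum estimates
SAMPLES = [0] * (N_MAX + 1)

def grow(pos, walk, weight):
    n = len(walk) - 1
    Z[n] += weight
    SAMPLES[n] += 1
    if n == N_MAX:
        return
    x, y = pos
    free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) not in walk]
    if not free:
        return                              # dead end: the walk dies
    weight *= len(free)                     # Rosenbluth weight for biased step
    avg = Z[n + 1] / max(SAMPLES[n + 1], 1)
    if avg > 0 and weight > 3.0 * avg:      # enrich: two clones, half weight each
        for _ in range(2):
            nxt = random.choice(free)
            grow(nxt, walk | {nxt}, weight / 2.0)
    elif avg > 0 and weight < 0.3 * avg:    # prune: kill half, double survivors
        if random.random() < 0.5:
            nxt = random.choice(free)
            grow(nxt, walk | {nxt}, weight * 2.0)
    else:
        nxt = random.choice(free)
        grow(nxt, walk | {nxt}, weight)

TOURS = 2000
for _ in range(TOURS):
    grow((0, 0), {(0, 0)}, 1.0)
print("estimated number of SAWs of length", N_MAX, "~", Z[N_MAX] / TOURS)
```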

    Local Rule-Based Explanations of Black Box Decision Systems

    Recent years have witnessed the rise of accurate but obscure decision systems, which hide the logic of their internal decision processes from users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. It then derives, from the logic of the local interpretable predictor, a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that would lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of the explanations and in the accuracy of mimicking the black box.
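
    A stripped-down version of the pipeline can be sketched as follows. Note the substitutions: LORE generates the neighborhood with a genetic algorithm and also extracts counterfactual rules, whereas this sketch uses plain Gaussian perturbation and returns only the factual decision rule.

```python
# Hedged sketch of a LORE-style outcome explanation (simplified pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black box

def explain(x, n_samples=1000, scale=0.5):
    # 1. Synthetic neighborhood around x, labeled by the black box
    #    (LORE would use a genetic algorithm here instead).
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    yz = black_box.predict(Z)
    # 2. Local interpretable predictor.
    tree = DecisionTreeClassifier(max_depth=3).fit(Z, yz)
    # 3. Decision rule: conjunction of split conditions along x's path.
    t, node, rule = tree.tree_, 0, []
    while t.children_left[node] != -1:
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            rule.append(f"x[{f}] <= {thr:.2f}")
            node = t.children_left[node]
        else:
            rule.append(f"x[{f}] > {thr:.2f}")
            node = t.children_right[node]
    return " AND ".join(rule)

x = np.array([1.0, 0.5, -0.3])
print("prediction:", black_box.predict([x])[0], "| rule:", explain(x))
```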

    A Bayesian framework for optimal motion planning with uncertainty

    Modeling robot motion planning with uncertainty in a Bayesian framework leads to a computationally intractable stochastic control problem. We seek hypotheses that can justify a separate implementation of control, localization, and planning. In the end, we reduce the stochastic control problem to path planning in the extended space of poses × covariances; the transitions between states are modeled through the use of the Fisher information matrix. In this framework, we consider two problems: minimizing the execution time, and minimizing the final covariance subject to an upper bound on the execution time. Two correct and complete algorithms are presented. The first is a direct extension of classical graph-search algorithms to the extended space. The second is a back-projection algorithm: uncertainty constraints are propagated backward from the goal towards the start state.
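
    The first algorithm's idea, classical graph search lifted to the extended space, can be caricatured on a grid where "covariance" collapses to a single discretized variance; the landmark reset below is a crude stand-in for the Fisher-information transition model, and all names and thresholds are invented for illustration.

```python
# Toy Dijkstra search over the extended state (pose, variance): motion
# accumulates process noise, a landmark cell resets it, and the goal must
# be reached with variance below a bound.
import heapq

GRID = ["....G",
        ".###.",
        "L....",      # 'L' marks a landmark cell where localization helps
        "S.###"]
MAX_VAR = 8           # goal must be reached with variance below this bound

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def plan():
    start = next((r, c) for r, row in enumerate(GRID)
                 for c, ch in enumerate(row) if ch == "S")
    frontier = [(0, start, 0)]           # (time, pose, variance)
    seen = set()
    while frontier:
        t, pose, var = heapq.heappop(frontier)
        if GRID[pose[0]][pose[1]] == "G" and var < MAX_VAR:
            return t                     # minimum execution time
        if (pose, var) in seen:
            continue
        seen.add((pose, var))
        for nxt in neighbors(*pose):
            # Moving adds process noise; a landmark cell resets it.
            nvar = 0 if GRID[nxt[0]][nxt[1]] == "L" else min(var + 1, MAX_VAR)
            heapq.heappush(frontier, (t + 1, nxt, nvar))
    return None

print("minimum execution time:", plan())
```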

    Programmatic and Direct Manipulation, Together at Last

    Direct manipulation interfaces and programmatic systems have distinct and complementary strengths. The former provide intuitive, immediate visual feedback and enable rapid prototyping, whereas the latter enable complex, reusable abstractions. Unfortunately, existing systems typically force users into just one of these two interaction modes. We present a system called Sketch-n-Sketch that integrates programmatic and direct manipulation for the particular domain of Scalable Vector Graphics (SVG). In Sketch-n-Sketch, the user writes a program to generate an output SVG canvas. Then the user may directly manipulate the canvas while the system immediately infers a program update in order to match the changes to the output, a workflow we call live synchronization. To achieve this, we propose (i) a technique called trace-based program synthesis that takes program execution history into account in order to constrain the search space, and (ii) heuristics for dealing with ambiguities. Based on our experience with examples spanning 2,000 lines of code and on the results of a preliminary user study, we believe that Sketch-n-Sketch provides a novel workflow that can augment traditional programming systems. Our approach may serve as the basis for live synchronization in other application domains, as well as a starting point for yet more ambitious ways of combining programmatic and direct manipulation.
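
    A toy rendition of live synchronization: output values carry traces that are linear expressions over the program's named constants, and a direct manipulation is translated into solving the trace equation for one constant. The class and helper names here are invented for illustration, and the real system resolves ambiguity with heuristics rather than a user-supplied solve_for.

```python
# Toy trace-based synthesis: values record how they depend on constants.
class V:
    """A value plus its trace: coeffs maps constant name -> linear coefficient."""
    def __init__(self, value, coeffs=None):
        self.value, self.coeffs = value, coeffs or {}

    def __add__(self, other):
        coeffs = dict(self.coeffs)
        for k, a in other.coeffs.items():
            coeffs[k] = coeffs.get(k, 0) + a
        return V(self.value + other.value, coeffs)

    def __mul__(self, scalar):
        return V(self.value * scalar, {k: a * scalar for k, a in self.coeffs.items()})

def const(env, name):
    return V(env[name], {name: 1})

def run(env):
    """The 'program': three evenly spaced shape x-positions."""
    x0, sep = const(env, "x0"), const(env, "sep")
    return [x0 + sep * i for i in range(3)]

def drag(env, out_value, new_value, solve_for):
    """User drags one output to new_value: solve its linear trace for one constant."""
    residual = new_value - out_value.value
    env[solve_for] += residual / out_value.coeffs[solve_for]

env = {"x0": 10, "sep": 50}
shapes = run(env)
drag(env, shapes[2], 150, solve_for="sep")   # ambiguous: could also solve for x0
print(env, [v.value for v in run(env)])       # program re-run matches the drag
```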