
    Une Ă©pitaphe aux discours d’austĂ©ritĂ©? Une approche expĂ©rimentale des Ă©volutions de l’opinion publique et des dynamiques de classe pendant la crise de la Covid-19

    The Covid-19 pandemic is disrupting the international political economy unlike any event since World War II. As a consequence, the French government has, at least momentarily, reversed decades of fiscal consolidation policies sedimented around austerity narratives by instituting a costly emergency furlough scheme for a third of the workforce. This crisis provides a natural setting to investigate the relations among an emerging “critical juncture” in political economy, public preferences, and the salience of austerity narratives. We collected panel data and administered two experiments to test whether citizens’ viewpoints are sensitive to the trade-off between health and the economy, whether they remain receptive to austerity narratives, and whether support for those narratives is conditioned by socioeconomic status. We find that public viewpoints were highly swayable between health and economic concerns at the first epidemic peak in April 2020, but were no longer influenced by austerity narratives during the phase-out of the lockdown in June, with the exception of the upper class. Overall, public support is shifting in favor of increased social spending, and austerity may no longer inhabit the majority’s “common sense.” We conclude with further implications for the study of class and conflict in a post-pandemic world.

    Lightweight Lempel-Ziv Parsing

    We introduce a new approach to LZ77 factorization that uses O(n/d) words of working space and O(dn) time for any d >= 1 (for polylogarithmic alphabet sizes). We also describe carefully engineered implementations of alternative approaches to lightweight LZ77 factorization. Extensive experiments show that the new algorithm is superior in most cases, particularly at the lowest memory levels and for highly repetitive data. As a part of the algorithm, we describe new methods for computing matching statistics, which may be of independent interest.
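    For orientation, the sketch below computes the LZ77 factorization the paper targets, in the naive way: each phrase is either the longest prefix of the remaining suffix that also occurs starting at an earlier position, or a single fresh symbol. This quadratic-time toy (function name and output representation are illustrative) produces the same factorization the paper's algorithm computes in O(n/d) words of working space.

        def lz77_factorize(s):
            # Naive LZ77 factorization, O(n^2) time, for exposition only.
            # Each factor is (start, length) of an earlier occurrence,
            # or (symbol, 0) for a fresh literal.
            factors, i, n = [], 0, len(s)
            while i < n:
                best_len, best_pos = 0, -1
                for j in range(i):                  # candidate earlier start
                    l = 0
                    while i + l < n and s[j + l] == s[i + l]:
                        l += 1                      # self-overlap is allowed
                    if l > best_len:
                        best_len, best_pos = l, j
                if best_len == 0:
                    factors.append((s[i], 0))       # literal phrase
                    i += 1
                else:
                    factors.append((best_pos, best_len))
                    i += best_len
            return factors

        # 'abababab' -> [('a', 0), ('b', 0), (0, 6)]: the last phrase
        # copies six symbols starting at position 0, overlapping itself.
        print(lz77_factorize('abababab'))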

    The Tree Inclusion Problem: In Linear Space and Faster

    Given two rooted, ordered, and labeled trees P and T, the tree inclusion problem is to determine if P can be obtained from T by deleting nodes in T. This problem has recently been recognized as an important query primitive in XML databases. Kilpeläinen and Mannila [SIAM J. Comput. 1995] presented the first polynomial time algorithm, using quadratic time and space. Since then several improved results have been obtained for special cases when P and T have a small number of leaves or small depth. However, in the worst case these algorithms still use quadratic time and space. Let n_S, l_S, and d_S denote the number of nodes, the number of leaves, and the depth of a tree S ∈ {P, T}. In this paper we show that the tree inclusion problem can be solved in space O(n_T) and time O(min(l_P n_T, l_P l_T log log n_T + n_T, n_P n_T / log n_T + n_T log n_T)). This improves or matches the best known time complexities while using only linear space instead of quadratic. This is particularly important in practical applications, such as XML databases, where space is likely to be a bottleneck.
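    To make the problem concrete, here is a minimal recursive decision procedure on forests, with trees represented as (label, children) pairs (names and representation are illustrative). At each step there are only two possibilities: map the first pattern root onto the first text root, or delete the first text root. This backtracking baseline is exponential in the worst case; the paper's contribution is deciding the same question in linear space and near-linear time.

        def included(F, G):
            # True iff forest F (list of (label, children) trees) can be
            # obtained from forest G by deleting nodes of G.
            # Exponential-time backtracking, for exposition only.
            if not F:
                return True                 # empty pattern embeds anywhere
            if not G:
                return False                # nonempty pattern, empty text
            (pl, pc), (tl, tc) = F[0], G[0]
            # (a) map the first pattern root onto the first text root:
            # its children must embed in tc, the remaining trees in G[1:]
            if pl == tl and included(pc, tc) and included(F[1:], G[1:]):
                return True
            # (b) delete the first text root, splicing its children into
            # the forest in order
            return included(F, tc + G[1:])

        P = ('a', [('b', [])])
        T = ('a', [('c', []), ('x', [('b', [])])])
        print(included([P], [T]))   # True: delete 'c' and 'x' from T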

    Graphs Cannot Be Indexed in Polynomial Time for Sub-quadratic Time String Matching, Unless SETH Fails

    The string matching problem on a node-labeled graph G = (V, E) asks whether a given pattern string P has an occurrence in G, in the form of a path whose concatenation of node labels equals P. This is a basic primitive in various problems in bioinformatics, graph databases, or networks, but only recently proven to have an O(|E||P|)-time lower bound, under the Orthogonal Vectors Hypothesis (OVH). We consider here its indexed version, in which we can index the graph in order to support time-efficient string queries. We show that, under OVH, no polynomial-time indexing scheme of the graph can support querying P in time O(|P| + |E|^ÎŽ |P|^ÎČ), with either ÎŽ < 1 or ÎČ < 1. As a side contribution, we introduce the notion of linear independent-components (lic) reduction, allowing for a simple proof of our result. As another illustration that hardness of indexing follows as a corollary of a lic reduction, we also translate the quadratic conditional lower bound of Backurs and Indyk (STOC 2015) for the problem of matching a query string inside a text, under edit distance. We obtain an analogous tight quadratic lower bound for its indexed version, improving the recent result of Cohen-Addad, Feuilloley and Starikovskaya (SODA 2019), but with a slightly different boundary condition.
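    The unindexed problem admits a classic O(|E||P|) dynamic program, sketched below with an illustrative adjacency-dict representation: keep the set of nodes at which each successive prefix of P can end as a path, and extend it along edges. This is the per-query bound that, by the paper's result, no polynomial-time index can improve polynomially in both |E| and |P| under OVH.

        def occurs_in_graph(labels, adj, P):
            # Does pattern P label some path in the graph? O(|E||P|) time.
            # labels: node -> character; adj: node -> list of successors.
            # frontier = nodes at which the current prefix of P can end
            frontier = {v for v, c in labels.items() if c == P[0]}
            for ch in P[1:]:
                frontier = {w for v in frontier for w in adj.get(v, ())
                            if labels[w] == ch}
                if not frontier:
                    return False
            return bool(frontier)

        # Tiny example: "CAT" is spelled along the path 1 -> 2 -> 3.
        labels = {1: 'C', 2: 'A', 3: 'T', 4: 'G'}
        adj = {1: [2, 4], 2: [3], 4: [3]}
        print(occurs_in_graph(labels, adj, "CAT"))   # True
        print(occurs_in_graph(labels, adj, "CGT"))   # True (1 -> 4 -> 3)
        print(occurs_in_graph(labels, adj, "CTA"))   # False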

    Storage and retrieval of individual genomes

    A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is possible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N / n. We believe the structures developed in this work will provide a fundamental basis for storage and retrieval of individual genomes as they become available due to rapid progress in sequencing technologies.
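    A toy version of the setting (the representation and names are illustrative, not the paper's index structures): each individual genome is stored as a short list of basic edits against one shared base sequence, so total space is roughly n for the base plus space proportional to the number s of variations per copy, which are exactly the parameters the new self-indexes depend on.

        def apply_edits(base, edits):
            # Reconstruct an individual sequence from the shared base and
            # a list of (position, operation, argument) edits, applied
            # right to left so earlier positions stay valid.
            s = list(base)
            for pos, op, arg in sorted(edits, reverse=True):
                if op == 'sub':              # substitute one symbol
                    s[pos] = arg
                elif op == 'ins':            # insert string before position
                    s[pos:pos] = arg
                elif op == 'del':            # delete `arg` symbols
                    del s[pos:pos + arg]
            return ''.join(s)

        base = "ACGTACGTACGT"
        individual = [(3, 'sub', 'A'), (7, 'ins', 'TT'), (10, 'del', 1)]
        print(apply_edits(base, individual))   # ACGAACGTTTACT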

    Run-Length Compressed Indexes Are Superior for Highly Repetitive Sequence Collections

    A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. This paper is devoted to studying ways to store massive sets of highly repetitive sequence collections in a space-efficient manner, so that both retrieval of the sequences and queries on their content can be supported time-efficiently. We show that the state-of-the-art entropy-bounded full-text self-indexes do not yet provide satisfactory space bounds for this specific task. We engineer some new structures that use run-length encoding and give empirical evidence that these structures are superior to the current structures.
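    The intuition behind run-length encoding here, in miniature (the naive transform below is for exposition only; the engineered indexes build their components in compressed space): the Burrows-Wheeler transform of a repetitive collection groups symbols with equal contexts together, so its output falls into few long runs, and the run count grows with the number of variations rather than with the total length N.

        def bwt(s):
            # Naive Burrows-Wheeler transform, O(n^2 log n) time.
            s += '\x00'                          # unique end-of-text marker
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return ''.join(r[-1] for r in rotations)

        def run_length_encode(s):
            runs, i = [], 0
            while i < len(s):
                j = i
                while j < len(s) and s[j] == s[i]:
                    j += 1
                runs.append((s[i], j - i))       # (symbol, run length)
                i = j
            return runs

        # Two near-identical sequences concatenated: the run count stays
        # well below the total length and barely grows per added copy.
        collection = "ACGTACGTACGT" + "ACGTACGTACGA"
        print(len(collection), len(run_length_encode(bwt(collection))))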

    On the Representability of Complete Genomes by Multiple Competing Finite-Context (Markov) Models

    A finite-context (Markov) model of order k yields the probability distribution of the next symbol in a sequence of symbols, given the recent past up to depth k. Markov modeling has long been applied to DNA sequences, for example to find gene-coding regions. With the first studies came the discovery that DNA sequences are non-stationary: distinct regions require distinct model orders. Since then, Markov and hidden Markov models have been extensively used to describe the gene structure of prokaryotes and eukaryotes. However, to our knowledge, a comprehensive study of the potential of Markov models to describe complete genomes is still lacking. We address this gap in this paper. Our approach relies on (i) multiple competing Markov models of different orders, (ii) careful programming techniques that allow orders as large as sixteen, (iii) adequate handling of inverted repeats, and (iv) probability estimates suited to the wide range of context depths used. To measure how well a model fits the data at a particular position in the sequence, we use the negative logarithm of the probability estimate at that position. The measure yields information profiles of the sequence, which are of independent interest. The average over the entire sequence, which amounts to the average number of bits per base needed to describe the sequence, is used as a global performance measure. Our main conclusion is that, from the probabilistic or information-theoretic point of view and according to this performance measure, multiple competing Markov models explain entire genomes almost as well as or even better than state-of-the-art DNA compression methods, such as XM, which rely on very different statistical models. This is surprising because Markov models are local (short-range), in contrast with the statistical models underlying the other methods, which exploit the extensive repetitions in DNA sequences and therefore have a non-local character.
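    A single-model sketch of the measure (the additive smoothing rule and parameter names are assumptions, not the paper's estimator; the paper additionally competes several orders up to sixteen and handles inverted repeats): an adaptively updated order-k model assigns each symbol a probability, its negative log2 is that position's entry in the information profile, and the mean over the sequence is the bits-per-base figure.

        from collections import defaultdict
        from math import log2

        def information_profile(seq, k, alpha=1.0, alphabet="ACGT"):
            # Per-symbol information content -log2 P(x_i | k previous
            # symbols) under an adaptively updated order-k model with
            # additive (Laplace-style) smoothing.
            counts = defaultdict(lambda: defaultdict(int))
            profile = []
            for i in range(k, len(seq)):
                ctx, sym = seq[i - k:i], seq[i]
                total = sum(counts[ctx].values())
                p = (counts[ctx][sym] + alpha) / (total + alpha * len(alphabet))
                profile.append(-log2(p))
                counts[ctx][sym] += 1        # update after predicting
            return profile

        seq = "ACGTACGTACGTACGT"
        prof = information_profile(seq, k=2)
        print(sum(prof) / len(prof))   # average bits per base, this model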
    • 

    corecore