4,047 research outputs found

    Locally Testable Codes and Cayley Graphs

    Full text link
    We give two new characterizations of (F_2-linear) locally testable error-correcting codes in terms of Cayley graphs over F_2^h: (1) A locally testable code is equivalent to a Cayley graph over F_2^h whose set of generators is significantly larger than h and has no short linear dependencies, but yields a shortest-path metric that embeds into ℓ_1 with constant distortion. This extends and gives a converse to a result of Khot and Naor (2006), which showed that codes with large dual distance imply Cayley graphs that have no low-distortion embeddings into ℓ_1. (2) A locally testable code is equivalent to a Cayley graph over F_2^h that has significantly more than h eigenvalues near 1, which have no short linear dependencies among them and which "explain" all of the large eigenvalues. This extends and gives a converse to a recent construction of Barak et al. (2012), which showed that locally testable codes imply Cayley graphs that are small-set expanders but have many large eigenvalues. Comment: 22 pages
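    As a rough illustration of the objects involved (not taken from the paper), the sketch below builds a Cayley graph over F_2^h whose generator set is the columns of a small, hypothetical generator matrix and computes its shortest-path metric by breadth-first search; the paper's actual equivalences concern the distortion of embedding this metric into ℓ_1 and linear-independence conditions on the generators.

```python
# Sketch (not from the paper): build the Cayley graph Cay(F_2^h, S), where S is
# the set of columns of an h x n generator matrix, and compute shortest-path
# distances from the origin by breadth-first search.
from collections import deque
from itertools import product

# Placeholder generator matrix of a small code (h = 3 rows, n = 4 columns).
G = [
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
]
h, n = len(G), len(G[0])

# Generators of the Cayley graph: the columns of G, viewed as vectors in F_2^h.
S = [tuple(G[i][j] for i in range(h)) for j in range(n)]

def add_mod2(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def cayley_distances(generators, h):
    """BFS over F_2^h; there is an edge between x and x + s for each generator s."""
    start = (0,) * h
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        for s in generators:
            y = add_mod2(x, s)
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

dist = cayley_distances(S, h)
for x in product((0, 1), repeat=h):
    print(x, dist.get(x))
```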

    Combinatorial Construction of Locally Testable Codes

    Get PDF
    An error correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. While the best known construction of LTCs by Ben-Sasson and Sudan (STOC 2005) and Dinur (J. ACM 54(3)) achieves very efficient parameters, it relies heavily on algebraic tools and on PCP machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit
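    The construction itself is beyond the scope of the abstract, but the basic notion of a local test can be illustrated with the classical 3-query BLR linearity test for the Hadamard code; the sketch below is that standard textbook example, not the combinatorial construction of this work.

```python
# Sketch of a 3-query local test (BLR linearity test for the Hadamard code),
# illustrating the general notion of local testability; this is NOT the
# combinatorial construction described in the paper.
import random

def blr_test(f, h, trials=100):
    """f maps length-h bit tuples to 0/1; accept iff f(x) + f(y) = f(x + y)
    holds for `trials` random pairs (x, y)."""
    for _ in range(trials):
        x = tuple(random.randint(0, 1) for _ in range(h))
        y = tuple(random.randint(0, 1) for _ in range(h))
        xy = tuple((a + b) % 2 for a, b in zip(x, y))
        if (f(x) + f(y)) % 2 != f(xy):
            return False  # reject: f is likely far from every linear function
    return True  # accept: f passed all sampled 3-query checks

# A linear function (a Hadamard codeword) always passes; a random function usually fails.
linear = lambda x: (x[0] + x[2]) % 2
print(blr_test(linear, h=5))
```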

    The Physics of (good) LDPC Codes I. Gauging and dualities

    Full text link
    Low-density parity check (LDPC) codes are a paradigm of error correction that allow for spatially non-local interactions between (qu)bits, while still enforcing that each (qu)bit interacts only with finitely many others. On expander graphs, they can give rise to "good codes" that combine a finite encoding rate with an optimal scaling of the code distance, which governs the code's robustness against noise. Such codes have garnered much recent attention due to two breakthrough developments: the construction of good quantum LDPC codes and good locally testable classical LDPC codes, using similar methods. Here we explore these developments from a physics lens, establishing connections between LDPC codes and ordered phases of matter defined for systems with non-local interactions and on non-Euclidean geometries. We generalize the physical notions of Kramers-Wannier (KW) dualities and gauge theories to this context, using the notion of chain complexes as an organizing principle. We discuss gauge theories based on generic classical LDPC codes and make a distinction between two classes, based on whether their excitations are point-like or extended. For the former, we describe KW dualities, analogous to the 1D Ising model, and describe the role played by "boundary conditions". For the latter we generalize Wegner's duality to obtain generic quantum LDPC codes within the deconfined phase of a Z_2 gauge theory. We show that all known examples of good quantum LDPC codes are obtained by gauging locally testable classical codes. We also construct cluster Hamiltonians from arbitrary classical codes, related to the Higgs phase of the gauge theory, and formulate generalizations of the Kennedy-Tasaki duality transformation. We use the chain complex language to discuss edge modes and non-local order parameters for these models, initiating the study of SPT phases in non-Euclidean geometries
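    The chain-complex viewpoint mentioned in the abstract can be made concrete with a standard example that is not specific to this paper: a CSS quantum code is specified by two parity-check matrices Hx, Hz with Hx Hz^T = 0 (mod 2), i.e., consecutive boundary maps of a chain complex composing to zero. The sketch below checks this condition for the [7,4,3] Hamming matrix (the Steane code), used purely as a placeholder.

```python
# Sketch: the chain-complex condition behind CSS quantum codes.  A CSS code is
# given by parity-check matrices Hx, Hz with Hx @ Hz.T = 0 (mod 2), i.e. the
# consecutive boundary maps of a chain complex compose to zero.  The example
# matrix is the standard [7,4,3] Hamming check matrix (Steane code), not a
# code constructed in the paper.
import numpy as np

H = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
], dtype=int)

Hx, Hz = H, H  # a self-orthogonal classical check matrix yields a CSS code

def css_condition(Hx, Hz):
    """Return True iff every X-check commutes with every Z-check (mod 2)."""
    return not np.any((Hx @ Hz.T) % 2)

print(css_condition(Hx, Hz))  # True: the boundary maps compose to zero
```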

    Testing mean-variance efficiency in CAPM with possibly non-gaussian errors: an exact simulation-based approach

    Get PDF
    In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and multivariate generalizations of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for. Keywords: capital asset pricing model, CAPM, mean-variance efficiency, nonnormality, multivariate linear regression, uniform linear hypothesis, exact test
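    A minimal sketch of the general simulation-based ("Monte Carlo") testing idea follows; the statistic, sample sizes and error scale are placeholders and do not reproduce the authors' exact procedure, but they illustrate how a null distribution can be simulated under Student-t errors rather than invoked asymptotically.

```python
# Sketch of a simulation-based test of "zero intercepts" in a market-model
# regression: simulate the null distribution of a statistic under a chosen
# error law (here Student-t) instead of relying on asymptotics.  The statistic,
# data and degrees of freedom are placeholders, not the authors' exact setup.
import numpy as np

rng = np.random.default_rng(0)
T, n_assets = 60, 5                      # time periods and test assets (placeholders)
market = rng.normal(0.01, 0.04, size=T)  # placeholder market excess returns

def intercept_stat(returns, market):
    """Sum of squared OLS intercepts across assets (a simple placeholder statistic)."""
    X = np.column_stack([np.ones_like(market), market])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return float(np.sum(coef[0] ** 2))

def mc_pvalue(observed_returns, market, n_sim=999, df=5):
    """Monte Carlo p-value: rank of the observed statistic among statistics
    recomputed on data simulated under the null (zero intercepts, t errors)."""
    X = np.column_stack([np.ones_like(market), market])
    s_obs = intercept_stat(observed_returns, market)
    betas = np.linalg.lstsq(X, observed_returns, rcond=None)[0][1]
    count = 0
    for _ in range(n_sim):
        errors = rng.standard_t(df, size=(len(market), observed_returns.shape[1]))
        null_returns = np.outer(market, betas) + 0.02 * errors  # zero alphas under H0
        if intercept_stat(null_returns, market) >= s_obs:
            count += 1
    return (count + 1) / (n_sim + 1)

returns = (np.outer(market, rng.uniform(0.5, 1.5, size=n_assets))
           + 0.02 * rng.standard_t(5, size=(T, n_assets)))
print(mc_pvalue(returns, market))
```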

    Fast Decoding of Explicit Almost Optimal ε-Balanced q-Ary Codes And Fast Approximation of Expanding k-CSPs

    Get PDF

    Evolution of the genetic code: partial optimization of a random code for robustness to translation error in a rugged fitness landscape

    Get PDF
    Background: The standard genetic code table has a distinctly non-random structure, with similar amino acids often encoded by codon series that differ by a single nucleotide substitution, typically in the third or the first position of the codon. It has been repeatedly argued that this structure of the code results from selective optimization for robustness to translation errors, such that translational misreading has the minimal adverse effect. Indeed, it has been shown in several studies that the standard code is more robust than a substantial majority of random codes. However, it remains unclear how much evolution the standard code underwent, what is the level of optimization, and what is the likely starting point.
    Results: We explored possible evolutionary trajectories of the genetic code within a limited domain of the vast space of possible codes. Only those codes were analyzed for robustness to translation error that possess the same block structure and the same degree of degeneracy as the standard code. This choice of a small part of the vast space of possible codes is based on the notion that the block structure of the standard code is a consequence of the structure of the complex between the cognate tRNA and the codon in mRNA, where the third base of the codon plays a minimal role as a specificity determinant. Within this part of the fitness landscape, a simple evolutionary algorithm, with elementary evolutionary steps comprising swaps of four-codon or two-codon series, was employed to investigate the optimization of codes for the maximum attainable robustness. The properties of the standard code were compared to the properties of four sets of codes, namely, purely random codes, random codes that are more robust than the standard code, and two sets of codes that resulted from optimization of the first two sets. The comparison of these sets of codes with the standard code and its locally optimized version showed that, on average, optimization of random codes yielded evolutionary trajectories that converged at the same level of robustness to translation errors as the optimization path of the standard code; however, the standard code required considerably fewer steps to reach that level than an average random code. When evolution starts from random codes whose fitness is comparable to that of the standard code, they typically reach a much higher level of optimization than the standard code, i.e., the standard code is much closer to its local minimum (fitness peak) than most of the random codes with similar levels of robustness. Thus, the standard genetic code appears to be a point on an evolutionary trajectory from a random point (code) about half the way to the summit of the local peak. The fitness landscape of code evolution appears to be extremely rugged, containing numerous peaks with a broad distribution of heights, and the standard code is relatively unremarkable, being located on the slope of a moderate-height peak.
    Conclusion: The standard code appears to be the result of partial optimization of a random code for robustness to errors of translation. The reason the code is not fully optimized could be the trade-off between the beneficial effect of increasing robustness to translation errors and the deleterious effect of codon series reassignment that becomes increasingly severe with growing complexity of the evolving system. Thus, evolution of the code can be represented as a combination of adaptation and frozen accident.
    Reviewers: This article was reviewed by David Ardell, Allan Drummond (nominated by Laura Landweber), and Rob Knight.
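    The sketch below is a toy version of the kind of cost function and elementary step described above (not the authors' algorithm, code table or amino-acid property scale): robustness is scored by summing squared property differences over all single-nucleotide misreadings, and an evolutionary step swaps the assignments of two codon blocks, kept only if the cost decreases.

```python
# Toy sketch of a robustness cost function and an elementary evolutionary step
# (NOT the authors' exact algorithm, genetic code table or property scale).
# Cost = sum over all single-nucleotide misreadings of the squared difference
# in a hypothetical amino-acid property; a step swaps two codon-block
# assignments and is kept only if the cost drops.
import itertools, random

BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]

# Hypothetical property values for 16 "amino acids"; each four-codon block
# (fixed first two bases) is assigned one of them -- a simplified block structure.
random.seed(1)
PROPERTY = {i: random.uniform(4.0, 13.0) for i in range(16)}
blocks = ["".join(p) for p in itertools.product(BASES, repeat=2)]
assignment = {b: i for i, b in enumerate(blocks)}  # block -> amino acid index

def amino(codon, assignment):
    return assignment[codon[:2]]

def cost(assignment):
    total = 0.0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                neighbor = codon[:pos] + b + codon[pos + 1:]
                d = PROPERTY[amino(codon, assignment)] - PROPERTY[amino(neighbor, assignment)]
                total += d * d
    return total

def hill_climb(assignment, steps=2000):
    best = cost(assignment)
    for _ in range(steps):
        a, b = random.sample(blocks, 2)
        assignment[a], assignment[b] = assignment[b], assignment[a]      # swap two blocks
        c = cost(assignment)
        if c < best:
            best = c
        else:
            assignment[a], assignment[b] = assignment[b], assignment[a]  # revert
    return best

print("initial cost:  ", cost(assignment))
print("optimized cost:", hill_climb(dict(assignment)))
```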

    Dynamic Protocol Reverse Engineering: A Grammatical Inference Approach

    Get PDF
    Round trip engineering of software from source code and reverse engineering of software from binary files have both been extensively studied, and the state of practice has documented tools and techniques. Forward engineering of protocols has also been extensively studied, and there are firmly established techniques for generating correct protocols. While observation of protocol behavior for performance testing has been studied and techniques established, reverse engineering of protocol control flow from observations of protocol behavior has not received the same level of attention. The state of practice in reverse engineering the control flow of computer network protocols consists mostly of ad hoc approaches. We examine state-of-practice tools and techniques used in three open source projects: Pidgin, Samba, and rdesktop. We examine techniques proposed by computational learning researchers for grammatical inference. We propose to extend the state of the art by inferring protocol control flow using grammatical-inference-inspired techniques to reverse engineer automata representations from captured data flows. We present evidence that grammatical inference is applicable to the problem domain under consideration
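    As a hedged illustration of the grammatical-inference building blocks alluded to above (not the specific method of this work), the sketch below constructs a prefix tree acceptor from hypothetical captured message sequences; state-merging inference algorithms typically start from such a tree before generalizing it into a smaller automaton.

```python
# Sketch of a basic grammatical-inference building block: constructing a prefix
# tree acceptor (PTA) from observed message sequences.  State-merging inference
# algorithms start from such a tree; the message traces below are hypothetical
# and not drawn from any of the protocols studied.
from collections import defaultdict

def build_pta(traces):
    """Return (transitions, accepting) for the prefix tree acceptor of `traces`.
    States are integers; transitions[state][symbol] = next_state."""
    transitions = defaultdict(dict)
    accepting = set()
    next_state = 1  # state 0 is the root (empty prefix)
    for trace in traces:
        state = 0
        for symbol in trace:
            if symbol not in transitions[state]:
                transitions[state][symbol] = next_state
                next_state += 1
            state = transitions[state][symbol]
        accepting.add(state)  # end of an observed session
    return dict(transitions), accepting

# Hypothetical captured sessions (message types only).
traces = [
    ["HELLO", "AUTH", "OK", "DATA", "BYE"],
    ["HELLO", "AUTH", "FAIL", "BYE"],
    ["HELLO", "AUTH", "OK", "BYE"],
]
transitions, accepting = build_pta(traces)
print(len(transitions), "states with outgoing edges;", len(accepting), "accepting states")
```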

    How people-centred health systems can reach the grassroots: experiences implementing community-level quality improvement in rural Tanzania and Uganda

    Get PDF
    Background: Quality improvement (QI) methods engage stakeholders in identifying problems, creating strategies called change ideas to address those problems, testing those change ideas and scaling them up where successful. These methods have rarely been used at the community level in low-income country settings. Here we share experiences from rural Tanzania and Uganda, where QI was applied as part of the Expanded Quality Management Using Information Power (EQUIP) intervention with the aim of improving maternal and newborn health. Village volunteers were taught how to generate change ideas to improve health-seeking behaviours and home-based maternal and newborn care practices. Interaction was encouraged between communities and health staff.
    Aim: To describe experiences implementing EQUIP’s QI approach at the community level.
    Methods: A mixed methods process evaluation of community-level QI was conducted in Tanzania and a feasibility study in Uganda. We outlined how village volunteers were trained in and applied QI techniques and examined the interaction between village volunteers and health facilities and, in Tanzania, with the wider community as well.
    Results: Village volunteers had the capacity to learn and apply QI techniques to address local maternal and neonatal health problems. Data collection and presentation was a persistent challenge for village volunteers, overcome through intensive continuous mentoring and coaching. Village volunteers complemented health facility staff, particularly in reinforcing behaviour change around health facility delivery and birth preparedness. There was some evidence of changing social norms around maternal and newborn health, which EQUIP helped to reinforce.
    Conclusions: Community-level QI is a participatory research approach that engaged volunteers in Tanzania and Uganda, putting them in a central position within local health systems to increase health-seeking behaviours and improve preventative maternal and newborn health practices

    Access to Primary Care Physicians Care Services Among African American Children With Asthma in Urban Areas

    Get PDF
    Access to appropriate asthma care may be challenging for low-income African-American parents. Parents’ and caregivers’ perceptions regarding access to primary care services for asthma treatment for their children were explored using a qualitative design. The Anderson behavioral model was the conceptual framework that guided the study. This model helps understand patients’ use of health services. The research questions asked about primary care for asthma treatment, barriers to treatment, and possible facilitators to seeking appropriate care for children with asthma. A general qualitative design was applied, with the thematic analysis used to determine the findings. Ten parents and guardians participated in a one-on-one interview via Zoom. Seven themes and subthemes were discovered. The themes included, for example, (a) Symptoms of Serious Illness in Child Encouraged Parents to Use Primary Care Services and (b) Difficulty in Finding Easily Accessible and Reliable Medical Facilities or Pediatricians was a Barrier. NVivo software helped with data analysis to sort codes and categories to develop overarching themes. The results indicated that more specialists, particularly African-American doctors, are needed to diagnose children rather than using general pediatricians. Health disparities and cultural competence were also noted in the results. Positive social change may be found in recognizing the need for African American children to have access to appropriate diagnosis and treatment for asthma within primary care clinics that include physicians who are also African American

    Fault-tolerant hyperbolic Floquet quantum error correcting codes

    Full text link
    A central goal in quantum error correction is to reduce the overhead of fault-tolerant quantum computing by increasing noise thresholds and reducing the number of physical qubits required to sustain a logical qubit. We introduce a potential path towards this goal based on a family of dynamically generated quantum error correcting codes that we call "hyperbolic Floquet codes." These codes are defined by a specific sequence of non-commuting two-body measurements arranged periodically in time that stabilize a topological code on a hyperbolic manifold with negative curvature. We focus on a family of lattices for n qubits that, according to our prescription that defines the code, provably achieve a finite encoding rate (1/8 + 2/n) and have a depth-3 syndrome extraction circuit. Similar to hyperbolic surface codes, the distance of the code at each time-step scales at most logarithmically in n. The family of lattices we choose indicates that this scaling is achievable in practice. We develop and benchmark an efficient matching-based decoder that provides evidence of a threshold near 0.1% in a phenomenological noise model. Utilizing weight-two check operators and a qubit connectivity of 3, one of our hyperbolic Floquet codes uses 400 physical qubits to encode 52 logical qubits with a code distance of 8, i.e., it is a [[400,52,8]] code. At small error rates, comparable logical error suppression to this code requires 5x as many physical qubits (1924) when using the honeycomb Floquet code with the same noise model and decoder. Comment: 15 pages, 7 figures
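    A quick consistency check of the quoted parameters: an encoding rate of 1/8 + 2/n means k = n/8 + 2 logical qubits, which for n = 400 physical qubits gives the 52 logical qubits of the [[400,52,8]] code.

```python
# Consistency check of the parameters quoted in the abstract:
# encoding rate k/n = 1/8 + 2/n  =>  k = n/8 + 2.
n = 400
k = n // 8 + 2
print(k)  # 52, matching the [[400,52,8]] code
assert k == 52
```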