581 research outputs found

    Marked central nervous system pathology in CD59 knockout rats following passive transfer of Neuromyelitis optica immunoglobulin G.

    Neuromyelitis optica spectrum disorders (herein called NMO) is an inflammatory demyelinating disease of the central nervous system in which pathogenesis involves complement-dependent cytotoxicity (CDC) produced by immunoglobulin G autoantibodies targeting aquaporin-4 (AQP4-IgG) on astrocytes. We reported evidence previously, using CD59-/- mice, that the membrane-associated complement inhibitor CD59 modulates CDC in NMO (Zhang and Verkman, J. Autoimmun. 53:67-77, 2014). Motivated by the observation that rats, unlike mice, have human-like complement activity, here we generated CD59-/- rats to investigate the role of CD59 in NMO and to create NMO pathology by passive transfer of AQP4-IgG under conditions in which minimal pathology is produced in normal rats. CD59-/- rats generated by CRISPR/Cas9 technology showed no overt phenotype at baseline except for mild hemolysis. CDC assays in astrocyte cultures and cerebellar slices from CD59-/- rats showed much greater sensitivity to AQP4-IgG and complement than those from CD59+/+ rats. Intracerebral administration of AQP4-IgG in CD59-/- rats produced marked NMO pathology, with astrocytopathy, inflammation, deposition of activated complement, and demyelination, whereas identically treated CD59+/+ rats showed minimal pathology. A single intracisternal injection of AQP4-IgG in CD59-/- rats produced hindlimb paralysis by 3 days, with inflammation and deposition of activated complement in the spinal cord, optic nerves, and brain periventricular and surface matter, with the most marked astrocyte injury in the cervical spinal cord. These results indicate an important role for CD59 in modulating NMO pathology in rats and demonstrate amplification of AQP4-IgG-induced NMO disease by CD59 knockout.

    Residual-Based Measurement of Peer and Link Lifetimes in Gnutella Networks

    Existing methods of measuring lifetimes in P2P systems usually rely on the so-called create-based method (CBM), which divides a given observation window into two halves and samples users created in the first half every Δ time units until they die or the observation period ends. Despite its frequent use, this approach has no rigorous accuracy or overhead analysis in the literature. To shed more light on its performance, we first derive a model for CBM and show that a small window size or large Δ may lead to highly inaccurate lifetime distributions. We then show that create-based sampling exhibits an inherent tradeoff between overhead and accuracy, which does not allow any fundamental improvement to the method. Instead, we propose a completely different approach for sampling user dynamics that keeps track of only the residual lifetimes of peers and uses a simple renewal-process model to recover the actual lifetimes from the observed residuals. Our analysis indicates that for reasonably large systems, the proposed method can reduce bandwidth consumption by several orders of magnitude compared to prior approaches while simultaneously achieving higher accuracy. We finish the paper by implementing a two-tier Gnutella network crawler equipped with the proposed sampling method and obtain the distribution of ultrapeer lifetimes in a network of 6.4 million users and 60 million links. Our experimental results show that ultrapeer lifetimes are Pareto with shape α ≈ 1.1; however, link lifetimes exhibit much lighter tails with α ≈ 1.9.

    Residual-Based Estimation of Peer and Link Lifetimes in P2P Networks

    Existing methods of measuring lifetimes in P2P systems usually rely on the so-called Create-Based Method (CBM), which divides a given observation window into two halves and samples users "created" in the first half every Δ time units until they die or the observation period ends. Despite its frequent use, this approach has no rigorous accuracy or overhead analysis in the literature. To shed more light on its performance, we first derive a model for CBM and show that a small window size or large Δ may lead to highly inaccurate lifetime distributions. We then show that create-based sampling exhibits an inherent tradeoff between overhead and accuracy, which does not allow any fundamental improvement to the method. Instead, we propose a completely different approach for sampling user dynamics that keeps track of only the residual lifetimes of peers and uses a simple renewal-process model to recover the actual lifetimes from the observed residuals. Our analysis indicates that for reasonably large systems, the proposed method can reduce bandwidth consumption by several orders of magnitude compared to prior approaches while simultaneously achieving higher accuracy. We finish the paper by implementing a two-tier Gnutella network crawler equipped with the proposed sampling method and obtain the distribution of ultrapeer lifetimes in a network of 6.4 million users and 60 million links. Our experimental results show that ultrapeer lifetimes are Pareto with shape α ≈ 1.1; however, link lifetimes exhibit much lighter tails with α ≈ 1.8.
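The renewal-process inversion at the heart of the two abstracts above can be sketched numerically. In a stationary renewal process the residual-lifetime density satisfies f_R(x) = (1 − F(x))/E[T], so the lifetime CDF can be recovered as F(x) = 1 − f_R(x)/f_R(0), with any normalization constant canceling in the ratio. A minimal sketch under illustrative assumptions (Lomax/Pareto-II lifetimes with shape a = 3; the sampler and constants are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 3.0          # illustrative Lomax (Pareto II) lifetime shape; CDF F(x) = 1 - (1+x)^-a
n = 200_000

# Stationary residuals: R = U * T_lb, where T_lb is a length-biased lifetime.
# Length-biased Lomax(a) via rejection: propose T ~ Lomax(a-1), accept w.p. T/(1+T).
samples = []
while len(samples) < n:
    u = rng.random(n)
    t = (1.0 - u) ** (-1.0 / (a - 1.0)) - 1.0   # inverse-CDF draw from Lomax(a-1)
    accept = rng.random(n) < t / (1.0 + t)
    samples.extend(t[accept])
t_lb = np.asarray(samples[:n])
resid = rng.random(n) * t_lb                    # uniform position within the sampled cycle

# Renewal inversion: f_R(x) = (1 - F(x)) / E[T]  =>  F(x) = 1 - f_R(x) / f_R(0).
edges = np.linspace(0.0, 3.0, 61)
f_R, _ = np.histogram(resid, bins=edges, density=True)
F_hat = 1.0 - f_R / f_R[0]
x_mid = 0.5 * (edges[:-1] + edges[1:])
F_true = 1.0 - (1.0 + x_mid) ** (-a)
err = float(np.max(np.abs(F_hat - F_true)))
```

The remaining error is dominated by histogram discretization near x = 0; the crawler in the paper faces the additional complication of estimating residuals from periodic probes rather than exact observation.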

    On Node Isolation under Churn in Unstructured P2P Networks with Heavy-Tailed Lifetimes

    Previous analytical studies [12], [18] of unstructured P2P resilience have assumed exponential user lifetimes and considered only age-independent neighbor replacement. In this paper, we overcome these limitations by introducing a general node-isolation model for heavy-tailed user lifetimes and arbitrary neighbor-selection algorithms. Using this model, we analyze two age-biased neighbor-selection strategies and show that they significantly improve the residual lifetimes of chosen users, which dramatically reduces the probability of user isolation and graph partitioning compared to uniform selection of neighbors. In fact, the second strategy, based on random walks on age-weighted graphs, demonstrates that for lifetimes with infinite variance, the system monotonically increases its resilience as its age and size grow. Specifically, we show that the probability of isolation converges to zero as these two metrics tend to infinity. We finish the paper with simulations in finite-size graphs that demonstrate the effect of this result in practice.
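The benefit of age-biased selection can be illustrated with a back-of-the-envelope calculation. For Lomax (Pareto II) lifetimes with shape α, the stationary age density is g(a) = (α−1)(1+a)^−α and the expected residual lifetime of a peer of age a is (1+a)/(α−1), so conditioning on larger age directly buys longer residuals. A hypothetical sketch comparing uniform selection with picking the oldest of k candidates (α = 3 and k = 8 are illustrative choices, not the paper's strategies):

```python
import numpy as np

alpha, k = 3.0, 8                     # illustrative: Lomax lifetime shape, candidate-set size
a = np.linspace(0.0, 2000.0, 400_001)

g = (alpha - 1.0) * (1.0 + a) ** (-alpha)       # stationary age density
G = 1.0 - (1.0 + a) ** (-(alpha - 1.0))         # its CDF
m = (1.0 + a) / (alpha - 1.0)                   # E[residual | age = a] for Lomax lifetimes

def integrate(f, x):
    """Trapezoidal rule on a uniform grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

uniform = integrate(m * g, a)                   # expected residual, uniform neighbor choice
biased = integrate(m * k * G ** (k - 1) * g, a) # expected residual, oldest of k candidates
```

Under these assumptions the oldest-of-8 rule more than doubles the expected residual lifetime of a chosen neighbor, consistent with the abstract's claim that age-biased selection improves resilience; the heavier the tail, the larger the gain.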

    Enhancing thermoelectric figure-of-merit by low-dimensional electrical transport in phonon-glass crystals

    Low-dimensional electronic transport and glassy phononic transport are two important ingredients of highly efficient thermoelectric materials, from which two branches of thermoelectric research emerge. One focuses on controlling electronic transport in low dimensions, while the other focuses on multiscale phonon engineering in the bulk. Recent work has benefited much from combining these two approaches, e.g., phonon engineering in low-dimensional materials. Here, we propose to employ the low-dimensional electronic structure in a bulk phonon-glass crystal as an alternative way to increase thermoelectric efficiency. Through first-principles electronic structure calculations and classical molecular dynamics simulations, we show that the π-π stacking Bis-Dithienothiophene molecular crystal is a natural candidate for such an approach, as determined by the nature of its chemical bonding. Without any optimization of the material parameters, we obtain a maximum room-temperature figure of merit, ZT, of 1.48 at optimal doping, thus validating our idea. Comment: Nano Lett.201
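The figure of merit quoted above is the standard dimensionless ZT, which ties together the quantities that the two research branches optimize separately: the power factor S²σ through the electronic structure, and the thermal conductivity through phonon engineering:

```latex
ZT = \frac{S^2 \sigma T}{\kappa_e + \kappa_l}
```

where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ_e and κ_l the electronic and lattice thermal conductivities. The "phonon-glass" half of the strategy suppresses κ_l, while the low-dimensional electronic structure enhances S²σ.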

    Security and privacy for data mining of RFID-enabled product supply chains

    The e-pedigree used for verifying the authenticity of products in RFID-enabled product supply chains plays a very important role in product anti-counterfeiting and risk management, but it is also vulnerable to malicious attacks and privacy leakage. While radio frequency identification (RFID) technology offers automatic wireless identification without line-of-sight contact, its security has been one of the main concerns in recent research, for example tag data tampering and cloning. Moreover, privacy leakage among the partners along a supply chain may lead to complete compromise of the whole system, in which case all authenticated products could be replaced by counterfeit ones. Quite different from conventional databases, datasets in supply chain scenarios are temporally correlated, and every party in the system can only be semi-trusted. In this paper, a system that combines the merits of secure multi-party computation and differential privacy is proposed to address these security and privacy issues, focusing on vulnerability analysis of data mining over distributed EPCIS e-pedigree datasets with temporal relations under multiple range and aggregate queries in typical supply chain scenarios, together with the related algorithms. Theoretical analysis shows that the proposed system meets our design goals, while the remaining problems are left for future research.
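The differential-privacy half of such a design can be sketched with the textbook Laplace mechanism for aggregate count queries: a count has sensitivity 1, so adding Laplace(1/ε) noise makes the released answer ε-differentially private. This is a generic mechanism, not the paper's actual algorithm; the function names, the toy EPCIS-style events, and the ε value are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def private_count(records, predicate, eps):
    """Release a count with Laplace(1/eps) noise: eps-DP for sensitivity-1 count queries."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / eps)

# Hypothetical EPCIS-style events: (tag_id, location) pairs.
events = [(i, "warehouse_A" if i % 3 else "warehouse_B") for i in range(300)]

eps = 0.5
one_answer = private_count(events, lambda e: e[1] == "warehouse_B", eps)  # true count: 100

# Sanity-check the noise scale: Laplace(b) has mean absolute value b = 1/eps = 2.
noise = rng.laplace(0.0, 1.0 / eps, size=100_000)
mae = float(np.mean(np.abs(noise)))
```

In a full supply-chain setting the noisy answers would additionally have to account for temporal correlation across queries, which is exactly the vulnerability the paper analyzes; composing many correlated range queries consumes privacy budget much faster than a single count.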

    Generalized Tsirelson Inequalities, Commuting-Operator Provers, and Multi-Prover Interactive Proof Systems

    A central question in quantum information theory and computational complexity is how powerful nonlocal strategies are in cooperative games with imperfect information, such as multi-prover interactive proof systems. This paper develops a new method for proving limits on nonlocal strategies that make use of prior entanglement among players (or provers, in the terminology of multi-prover interactive proofs). Instead of proving the limits for the usual isolated provers who initially share entanglement, this paper proves the limits for "commuting-operator provers", who share a private space but can apply only operators that commute with every operator applied by the other provers. Commuting-operator provers are at least as powerful as the usual isolated but prior-entangled provers, and thus limits for commuting-operator provers immediately give limits for the usual entangled provers. Using this method, we obtain an n-party generalization of the Tsirelson bound for the Clauser-Horne-Shimony-Holt inequality for every n. Our bounds are tight in the sense that, in every n-party case, the equality is achievable by a usual nonlocal strategy with prior entanglement. We also apply our method to a 3-prover 1-round binary interactive proof for NEXP. Combined with the technique developed by Kempe, Kobayashi, Matsumoto, Toner and Vidick to analyze the soundness of the proof system, it is proved to be NP-hard to distinguish whether the entangled value of a 3-prover 1-round binary-answer game is equal to 1 or at most 1-1/p(n) for some polynomial p, where n is the number of questions. This is in contrast to the 2-prover 1-round binary-answer case, where the corresponding problem is efficiently decidable. Alternatively, NEXP has a 3-prover 1-round binary interactive proof system with perfect completeness and soundness 1-2^{-poly}. Comment: 20 pages. v2: An incorrect statement in the abstract about the two-party case is corrected. Relation between this work and a preliminary work by Sun, Yao and Preda is clarified.
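Tsirelson's original two-party bound, which the paper generalizes to n parties, is easy to verify numerically: with the standard optimal observables A0 = Z, A1 = X, B0 = (Z+X)/√2, B1 = (Z−X)/√2, the largest eigenvalue of the CHSH operator equals 2√2, exceeding the classical bound of 2. A quick check (this is the well-known two-party case, not the paper's n-party construction):

```python
import numpy as np

# Pauli matrices.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Optimal CHSH measurement settings.
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# CHSH operator: A0 (x) B0 + A0 (x) B1 + A1 (x) B0 - A1 (x) B1.
chsh = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))

# Its maximum eigenvalue is the quantum (Tsirelson) value, 2*sqrt(2) ~ 2.828.
quantum_value = float(np.linalg.eigvalsh(chsh).max())
```

The classical (local hidden variable) value of the same expression is 2, so the gap between 2 and 2√2 is exactly what the commuting-operator technique bounds from above in the general n-party setting.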