58 research outputs found

    Unitary limit and quantum interference effect in disordered two-dimensional crystals with nearly half-filled bands

    Based on the self-consistent T-matrix approximation, the quantum interference (QI) effect is studied with the diagrammatic technique in weakly disordered two-dimensional crystals with nearly half-filled bands. In addition to the usual 0-mode cooperon and diffuson, a π-mode cooperon and diffuson exist in the unitary limit due to particle-hole symmetry. The diffusive π-modes are gapped by the deviation from the exactly nested Fermi surface. The conductivity diagrams containing the gapped π-mode cooperon or diffuson are found to give rise to unconventional features of the QI effect. Besides inelastic scattering, thermal fluctuation is shown to be another important dephasing mechanism in the QI processes related to the diffusive π-modes. In the proximity of the nesting case, a power-law anti-localization effect appears due to the π-mode diffuson. For large deviations from the nested Fermi surface, this anti-localization effect is suppressed, and the conductivity retains the usual logarithmic weak-localization correction contributed by the 0-mode cooperon. As a result, the dc conductivity in the unitary limit becomes a non-monotonic function of temperature or sample size, quite different from the prediction of the usual weak-localization theory. Comment: 21 pages, 4 figures
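    For reference, the "usual logarithmic weak-localization correction" from the 0-mode cooperon mentioned above is the standard two-dimensional result (a textbook formula, not something derived in this abstract); the π-mode diffuson contribution described in the paper appears on top of it:

```latex
% Standard logarithmic weak-localization correction in 2D (textbook result).
% \tau is the elastic scattering time and \tau_\phi the dephasing time,
% which grows as the temperature is lowered.
\[
  \delta\sigma_{\mathrm{WL}} \simeq -\frac{e^{2}}{2\pi^{2}\hbar}\,
  \ln\!\left(\frac{\tau_{\phi}}{\tau}\right)
\]
```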

    BigDL: A Distributed Deep Learning Framework for Big Data

    This paper presents BigDL, a distributed deep learning framework for Apache Spark that has been used by a variety of users in industry to build deep learning applications on production big data platforms. It allows deep learning applications to run on Apache Hadoop/Spark clusters so as to directly process production data, and to operate as part of the end-to-end data analysis pipeline for deployment and management. Unlike existing deep learning frameworks, BigDL implements distributed, data-parallel training directly on top of the functional compute model (with copy-on-write and coarse-grained operations) of Spark. We also share real-world experience and "war stories" of users that have adopted BigDL to address their challenges (i.e., how to easily build end-to-end data analysis and deep learning pipelines for their production data). Comment: In ACM Symposium on Cloud Computing (SoCC) 2019
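    The data-parallel training pattern on Spark's coarse-grained operations can be illustrated with a minimal sketch. The code below is not BigDL's actual API or implementation; the toy logistic-regression model, dataset, and helper names are assumptions used only to show the pattern of broadcasting parameters, computing gradients per partition, and aggregating them on the driver at each iteration.

```python
# Minimal sketch of data-parallel synchronous training on Spark RDDs,
# in the spirit of BigDL's approach (NOT BigDL's actual API; the model,
# dataset, and helper names here are illustrative assumptions).
import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="data-parallel-training-sketch")

# Toy dataset: (features, label) pairs partitioned across the cluster.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = (X @ rng.normal(size=20) > 0).astype(float)
data = sc.parallelize(list(zip(X, y)), numSlices=8).cache()

w = np.zeros(20)   # parameters of a simple logistic-regression "model"
lr = 0.5           # learning rate

def partition_gradient(iterator, w_bc):
    """Sum of logistic-loss gradients (and sample count) for one partition."""
    grad, n = np.zeros_like(w_bc.value), 0
    for x, label in iterator:
        p = 1.0 / (1.0 + np.exp(-x @ w_bc.value))
        grad += (p - label) * x
        n += 1
    yield grad, n

for step in range(20):
    w_bc = sc.broadcast(w)                      # ship current weights to workers
    grad, n = (data
               .mapPartitions(lambda it: partition_gradient(it, w_bc))
               .reduce(lambda a, b: (a[0] + b[0], a[1] + b[1])))
    w = w - lr * grad / n                       # synchronous update on the driver
    w_bc.unpersist()

sc.stop()
```

    A production framework layers model abstractions and far more efficient parameter synchronization on top of this basic broadcast-and-aggregate structure, but the sketch captures the essential shape of data-parallel training expressed through coarse-grained Spark operations.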

    Contribution of plasma cells and B cells to hidradenitis suppurativa pathogenesis

    Hidradenitis suppurativa (HS) is a debilitating chronic inflammatory skin disease characterized by chronic abscess formation and development of multiple draining sinus tracts in the groin, axillae, and perineum. Using proteomic and transcriptomic approaches, we characterized the inflammatory responses in HS in depth, revealing immune responses centered on IFN-γ, IL-36, and TNF, with a lesser contribution from IL-17A. We further identified B cells and plasma cells, with associated increases in immunoglobulin production and complement activation, as pivotal players in HS pathogenesis, with Bruton’s tyrosine kinase (BTK) and spleen tyrosine kinase (SYK) pathway activation as a central signal transduction network in HS. These data provide preclinical evidence to accelerate the path toward clinical trials targeting BTK and SYK signaling in moderate-to-severe HS.

    Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel

    A major use of the 1000 Genomes Project (1000GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First, the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000GP haplotype reference set for use by the human genetics community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants.
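    The validation above compares imputed calls against trusted genotypes via genotype discordance. The sketch below shows how such a discordance metric is typically computed; the 0/1/2 alternate-allele coding, the use of -1 for missing calls, and the function name are assumptions for illustration, not the 1000GP pipeline itself.

```python
# Minimal sketch of genotype discordance between imputed and validation calls.
# Genotypes are coded as 0/1/2 alt-allele counts, with -1 marking missing data
# (the coding and function name are illustrative assumptions).
import numpy as np

def genotype_discordance(imputed, validation):
    """Fraction of genotype calls, non-missing in both sets, that disagree."""
    imputed = np.asarray(imputed)
    validation = np.asarray(validation)
    called = (imputed >= 0) & (validation >= 0)    # ignore missing calls
    if called.sum() == 0:
        return float("nan")
    return float(np.mean(imputed[called] != validation[called]))

# Example: three samples genotyped at four variant sites.
imp = np.array([[0, 1, 2, 1],
                [0, 0, 2, 2],
                [1, 1, -1, 0]])
val = np.array([[0, 1, 2, 1],
                [0, 1, 2, 2],
                [1, 1, 0, 0]])
print(genotype_discordance(imp, val))   # 1 mismatch / 11 called sites ~= 0.091
```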

    Mapping and characterization of structural variation in 17,795 human genomes

    A key goal of whole-genome sequencing for studies of human genetics is to interrogate all forms of variation, including single-nucleotide variants, small insertion or deletion (indel) variants and structural variants. However, tools and resources for the study of structural variants have lagged behind those for smaller variants. Here we used a scalable pipeline [1] to map and characterize structural variants in 17,795 deeply sequenced human genomes. We publicly release site-frequency data to create the largest, to our knowledge, whole-genome-sequencing-based structural variant resource so far. On average, individuals carry 2.9 rare structural variants that alter coding regions; these variants affect the dosage or structure of 4.2 genes and account for 4.0–11.2% of rare high-impact coding alleles. Using a computational model, we estimate that structural variants account for 17.2% of rare alleles genome-wide, with predicted deleterious effects that are equivalent to loss-of-function coding alleles; approximately 90% of such structural variants are noncoding deletions (mean 19.1 per genome). We report 158,991 ultra-rare structural variants and show that 2% of individuals carry ultra-rare megabase-scale structural variants, nearly half of which are balanced or complex rearrangements. Finally, we infer the dosage sensitivity of genes and noncoding elements, and reveal trends that relate to element class and conservation. This work will help to guide the analysis and interpretation of structural variants in the era of whole-genome sequencing.

    Staffan Bergsten, Den trösterika gåtan. Tio essäer om Tomas Tranströmers lyrik [The Consoling Riddle: Ten Essays on Tomas Tranströmer's Poetry]. FIB:s lyrikklubbs årsbok 1989

    This paper centers on a novel data mining technique we term supervised clustering. Unlike traditional clustering, supervised clustering is applied to classified examples and has the goal of identifying class-uniform clusters that have a high probability density. This paper focuses on how data mining techniques in general, and classification techniques in particular, can benefit from knowledge obtained through supervised clustering. We discuss how better nearest-neighbor classifiers can be constructed with the knowledge generated by supervised clustering, and provide experimental evidence that they are more efficient and more accurate than a traditional 1-nearest-neighbor classifier. Finally, we demonstrate how supervised clustering can be used to enhance simple classifiers.
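    The construction described above, building a nearest-neighbor classifier from class-uniform cluster representatives rather than from the full training set, can be sketched as follows. Per-class k-means is used here as a stand-in for the paper's supervised clustering algorithm, which is an assumption for illustration and not the authors' actual method.

```python
# Minimal sketch: a nearest-neighbor classifier over class-pure cluster
# representatives. Per-class k-means stands in for the paper's supervised
# clustering algorithm (an assumption, not the authors' method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Cluster each class separately, so every cluster is class-uniform
#    and its centroid can serve as a labeled representative.
reps, rep_labels = [], []
for label in np.unique(y_tr):
    km = KMeans(n_clusters=5, n_init=10, random_state=0)
    km.fit(X_tr[y_tr == label])
    reps.append(km.cluster_centers_)
    rep_labels.append(np.full(len(km.cluster_centers_), label))
reps = np.vstack(reps)
rep_labels = np.concatenate(rep_labels)

# 2. A 1-NN classifier over the few representatives replaces 1-NN over the
#    full training set, which is much cheaper at prediction time.
proto_1nn = KNeighborsClassifier(n_neighbors=1).fit(reps, rep_labels)
full_1nn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
print("prototype 1-NN accuracy:", proto_1nn.score(X_te, y_te))
print("full 1-NN accuracy:     ", full_1nn.score(X_te, y_te))
```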

    Personal Name Disambiguation in Web Search Results Based on a Semi-supervised Clustering Approach
