7,859 research outputs found

    Algebraic spin liquid in an exactly solvable spin model

    We have proposed an exactly solvable quantum spin-3/2 model on a square lattice. Its ground state is a quantum spin liquid with a half-integer spin per unit cell. The fermionic excitations are gapless with a linear dispersion, while the topological "vison" excitations are gapped. Moreover, the massless Dirac fermions are stable. Thus, this model is, to the best of our knowledge, the first exactly solvable model of half-integer spins whose ground state is an "algebraic spin liquid." (Comment: 4 pages, 1 figure)
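    As a hedged illustration (a generic form, not taken from the paper), the "linear dispersion" of gapless Dirac fermions near a node at momentum $\mathbf{K}$ can be written as

        E_{\pm}(\mathbf{k}) \approx \pm\, v\, |\mathbf{k} - \mathbf{K}|,

    where the velocity $v$ and the node locations depend on the model's couplings; "stable" here refers to this gapless form being protected against perturbations rather than fine-tuned.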

    Non-Abelian Quantum Hall Effect in Topological Flat Bands

    Inspired by the recent theoretical discovery of robust fractional topological phases without a magnetic field, we search for the non-Abelian quantum Hall effect (NA-QHE) in lattice models with topological flat bands (TFBs). Through extensive numerical studies of the Haldane model with three-body hard-core bosons loaded into a TFB, we find convincing numerical evidence of a stable ν=1 bosonic NA-QHE, with the characteristic three-fold quasi-degeneracy of ground states on a torus, a quantized Chern number, and a robust spectral gap. Moreover, the spectrum for two-quasihole states also shows a finite energy gap, with the number of states in the lower-energy sector satisfying the same counting rule as the Moore-Read Pfaffian state. (Comment: 5 pages, 7 figures)
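    For orientation, the sketch below computes a quantized Chern number for the lower band of the Haldane model with the Fukui-Hatsugai-Suzuki lattice method. This is a single-particle illustration of the "quantized Chern number" diagnostic, not the many-body calculation with three-body hard-core bosons reported in the abstract; the hopping parameters are illustrative assumptions.

        # Hedged sketch: Chern number of the lower Haldane band (single-particle).
        import numpy as np

        t1, t2, phi, M = 1.0, 0.2, np.pi / 2, 0.0   # assumed illustrative parameters

        # Bravais lattice vectors of the honeycomb lattice (nearest-neighbour distance 1)
        A1 = np.array([np.sqrt(3) / 2, 1.5])
        A2 = np.array([-np.sqrt(3) / 2, 1.5])
        # three next-nearest-neighbour translations with a common chirality
        V = [A1, A2 - A1, -A2]
        # reciprocal lattice vectors dual to A1, A2
        G1 = 2 * np.pi * np.array([1 / np.sqrt(3), 1 / 3])
        G2 = 2 * np.pi * np.array([-1 / np.sqrt(3), 1 / 3])

        def bloch_hamiltonian(k):
            """2x2 Bloch Hamiltonian of the Haldane model, exactly periodic in k -> k + G."""
            f = t1 * (1 + np.exp(1j * k @ A1) + np.exp(1j * k @ A2))   # NN hopping A -> B
            gA = M + 2 * t2 * sum(np.cos(k @ v - phi) for v in V)      # NNN on sublattice A
            gB = -M + 2 * t2 * sum(np.cos(k @ v + phi) for v in V)     # NNN on sublattice B
            return np.array([[gA, f], [np.conj(f), gB]])

        N = 30                                       # k-grid resolution
        def lower_band_state(i, j):
            k = (i / N) * G1 + (j / N) * G2
            _, vecs = np.linalg.eigh(bloch_hamiltonian(k))
            return vecs[:, 0]                        # eigenvector of the lower band

        u = [[lower_band_state(i, j) for j in range(N)] for i in range(N)]

        def link(v1, v2):
            ov = np.vdot(v1, v2)                     # overlap <v1|v2>, reduced to a phase
            return ov / abs(ov)

        total_flux = 0.0
        for i in range(N):
            for j in range(N):
                u00, u10 = u[i][j], u[(i + 1) % N][j]
                u11, u01 = u[(i + 1) % N][(j + 1) % N], u[i][(j + 1) % N]
                loop = link(u00, u10) * link(u10, u11) * link(u11, u01) * link(u01, u00)
                total_flux += np.angle(loop)         # Berry flux through one plaquette

        print("Chern number of the lower band:", int(round(total_flux / (2 * np.pi))))

    With the parameters above the gap is well open, so the plaquette fluxes sum to ±2π and the printed Chern number is ±1 (the sign fixed by the chirality convention).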

    Temporal relation discovery between events and temporal expressions identified in clinical narrative

    The automatic detection of temporal relations between events in electronic medical records has the potential to greatly augment the value of such records for understanding disease progression and patients’ responses to treatments. We present a three-step methodology for labeling temporal relations using machine learning and deterministic rules over an annotated corpus provided by the 2012 i2b2 Shared Challenge. We first create an expanded training network of relations by computing the transitive closure over the annotated data; we then apply hand-written rules and machine learning with a feature set that casts a wide net across potentially relevant lexical and syntactic information; finally, we employ a voting mechanism to resolve global contradictions between the local predictions made by the learned classifier. Results over the testing data illustrate the contributions of the initial prediction and conflict-resolution steps.
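    A minimal sketch of the first step described above, expanding the annotated training relations by transitive closure. The BEFORE-only simplification and the toy event identifiers are illustrative assumptions; the i2b2 corpus uses a richer label set (e.g. BEFORE, AFTER, OVERLAP).

        from itertools import product

        def transitive_closure(before_pairs):
            """Close a set of (earlier, later) pairs under BEFORE-transitivity."""
            closed = set(before_pairs)
            changed = True
            while changed:
                changed = False
                for (a, b), (c, d) in product(list(closed), repeat=2):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))   # A BEFORE B and B BEFORE C => A BEFORE C
                        changed = True
            return closed

        annotated = {("admission", "ct_scan"), ("ct_scan", "surgery"), ("surgery", "discharge")}
        print(sorted(transitive_closure(annotated)))
        # the closure adds ("admission", "surgery"), ("admission", "discharge"),
        # and ("ct_scan", "discharge") as extra training examples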

    Construction of minimum-norm fixed points of pseudocontractions in Hilbert spaces

    An iterative algorithm is introduced for the construction of the minimum-norm fixed point of a pseudocontraction on a Hilbert space. The algorithm is proved to be strongly convergent. MSC: 47H05, 47H10, 47H17.
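    The abstract does not state the iteration itself. As a hedged illustration only, a standard Halpern-type template with anchor point 0 (well studied for nonexpansive mappings, and a common starting point for the pseudocontractive case treated here) reads

        x_{n+1} = (1 - \alpha_n)\, T x_n, \qquad \alpha_n \to 0, \quad \sum_{n} \alpha_n = \infty,

    and, under suitable additional conditions on the step sizes, such schemes converge strongly to $P_{\mathrm{Fix}(T)}(0)$, the fixed point of minimum norm.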

    Robust Classification with Convolutional Prototype Learning

    Convolutional neural networks (CNNs) have been widely used for image classification. Despite their high accuracy, CNNs have been shown to be easily fooled by adversarial examples, indicating that they are not robust enough for pattern classification. In this paper, we argue that the lack of robustness of CNNs is caused by the softmax layer, which is a purely discriminative model built on a closed-world assumption (i.e., a fixed number of categories). To improve robustness, we propose a novel learning framework called convolutional prototype learning (CPL). The advantage of using prototypes is that they can handle the open-world recognition problem well and therefore improve robustness. Under the CPL framework, we design multiple classification criteria to train the network. Moreover, a prototype loss (PL) is proposed as a regularization term to improve the intra-class compactness of the feature representation, which can be viewed as a generative model based on a Gaussian assumption for the different classes. Experiments on several datasets demonstrate that CPL can achieve comparable or even better results than traditional CNNs, and from the robustness perspective, CPL shows great advantages for both the rejection and incremental category learning tasks.
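    A hedged sketch of distance-based classification with learnable class prototypes and a prototype loss that pulls features toward their own class prototype. The layer sizes, the single-prototype-per-class choice, the distance-based cross entropy, and the weighting `lam` are illustrative assumptions, not the paper's exact configuration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PrototypeClassifier(nn.Module):
            def __init__(self, feature_dim, num_classes):
                super().__init__()
                # one learnable prototype per class in feature space
                self.prototypes = nn.Parameter(torch.randn(num_classes, feature_dim))

            def forward(self, features):
                # squared Euclidean distance between each feature and each prototype
                dists = torch.cdist(features, self.prototypes) ** 2
                return -dists        # larger score = closer prototype

        def cpl_losses(scores, features, prototypes, labels, lam=0.01):
            dce = F.cross_entropy(scores, labels)                       # softmax over negative distances
            pl = ((features - prototypes[labels]) ** 2).sum(dim=1).mean()  # pull toward own prototype
            return dce + lam * pl

        # usage with a toy feature extractor
        extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 16))
        head = PrototypeClassifier(feature_dim=16, num_classes=10)
        x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
        feats = extractor(x)
        scores = head(feats)
        loss = cpl_losses(scores, feats, head.prototypes, y)
        loss.backward()

    Classifying by distance to prototypes also gives a natural rejection rule (reject when the nearest prototype is still far away), which is the intuition behind the open-world advantage claimed above.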