
    Learning Representations that Support Extrapolation

    Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques. Comment: ICML 2020.
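    As a rough illustration of the idea, here is a minimal sketch of temporal context normalization, assuming it amounts to z-scoring each feature across the items of a single task context (the function name and tensor shapes are ours, not the paper's):

```python
import numpy as np

def temporal_context_normalization(z, eps=1e-8):
    # z: (T, D) activations for the T items in one analogy problem.
    # Each of the D features is z-scored across the T time steps, so
    # downstream layers see only how items differ *within* a context,
    # which is what encourages relational representations.
    mu = z.mean(axis=0, keepdims=True)
    sigma = z.std(axis=0, keepdims=True)
    return (z - mu) / (sigma + eps)
```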

    The Relational Bottleneck as an Inductive Bias for Efficient Abstraction

    A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This effort has often been framed in terms of a dichotomy between empiricist and nativist approaches, most recently embodied by debates concerning deep neural networks and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the relational bottleneck. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
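    A minimal sketch of the bias these models share, assuming the bottleneck is implemented as a pairwise-similarity layer (the simplest variant in this literature); names and shapes are illustrative:

```python
import numpy as np

def relational_bottleneck(embeddings):
    # embeddings: (N, D), one row per perceived object.
    # Downstream processing receives only the (N, N) matrix of
    # inner-product relations between objects, never the object
    # embeddings themselves -- so any abstraction must be learned
    # over relations rather than object features.
    return embeddings @ embeddings.T
```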

    Symptom-based stratification of patients with primary Sjögren's syndrome: multi-dimensional characterisation of international observational cohorts and reanalyses of randomised clinical trials

    Background: Heterogeneity is a major obstacle to developing effective treatments for patients with primary Sjögren's syndrome. We aimed to develop a robust method for stratification, exploiting heterogeneity in patient-reported symptoms, and to relate these differences to pathobiology and therapeutic response.

    Methods: We did hierarchical cluster analysis using five common symptoms associated with primary Sjögren's syndrome (pain, fatigue, dryness, anxiety, and depression), followed by multinomial logistic regression to identify subgroups in the UK Primary Sjögren's Syndrome Registry (UKPSSR). We assessed clinical and biological differences between these subgroups, including transcriptional differences in peripheral blood. Patients from two independent validation cohorts in Norway and France were used to confirm patient stratification. Data from two phase 3 clinical trials were similarly stratified to assess the differences between subgroups in treatment response to hydroxychloroquine and rituximab.

    Findings: In the UKPSSR cohort (n=608), we identified four subgroups: low symptom burden (LSB), high symptom burden (HSB), dryness dominant with fatigue (DDF), and pain dominant with fatigue (PDF). Significant differences in peripheral blood lymphocyte counts, anti-SSA and anti-SSB antibody positivity, as well as serum IgG, κ-free light chain, β2-microglobulin, and CXCL13 concentrations were observed between these subgroups, along with differentially expressed transcriptomic modules in peripheral blood. Similar findings were observed in the independent validation cohorts (n=396). Reanalysis of trial data stratifying patients into these subgroups suggested a treatment effect with hydroxychloroquine in the HSB subgroup and with rituximab in the DDF subgroup compared with placebo.

    Interpretation: Stratification on the basis of patient-reported symptoms of patients with primary Sjögren's syndrome revealed distinct pathobiological endotypes with distinct responses to immunomodulatory treatments. Our data have important implications for clinical management, trial design, and therapeutic development. Similar stratification approaches might be useful for patients with other chronic immune-mediated diseases.

    Funding: UK Medical Research Council, British Sjögren's Syndrome Association, French Ministry of Health, Arthritis Research UK, Foundation for Research in Rheumatology.
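    A minimal sketch of the stratification step, assuming standard agglomerative (Ward) clustering over the five symptom scores; the data here are synthetic placeholders, not the UKPSSR cohort:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in: one row per patient, columns = pain, fatigue,
# dryness, anxiety, depression (e.g. 0-10 visual-analogue scores).
rng = np.random.default_rng(0)
symptoms = rng.uniform(0, 10, size=(608, 5))

Z = linkage(symptoms, method="ward")               # hierarchical clustering
subgroup = fcluster(Z, t=4, criterion="maxclust")  # cut tree into 4 subgroups
```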

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth, and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    No Coincidence, George: Capacity-Limits as the Curse of Compositionality

    There is striking diversity in the capacity of different cognitive processes. In some settings, humans preserve only a few bits of information over computation: for example, tasks involving working memory and attention, perceptual identification, and numerosity estimation are famously limited (Miller, 1956). Other cognitive processes seem essentially unbounded, both in what we could possibly represent (e.g., the meanings of novel sentences in natural language) and in what we can remember once represented (e.g., episodic memory). These strengths and apparent weaknesses are intimately related. We integrate ideas from information theory and the cognitive sciences to argue that, in order to generalize efficiently (a key cognitive strength), processing capacity must be not just finite but profoundly limited (a famous cognitive weakness). A unified computational framework precisely predicts classic error rates and patterns of response times in working memory tasks, explains why only a few items can be enumerated with accuracy and speed ("subitizing"), and why only a few items in a set can be accurately ranked ("absolute identification"). This computational framework suggests that the human mind is optimized for a particular objective: efficient generalization, at the expense of processing capacity.
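    For a sense of the numbers involved, a channel that preserves only b bits can reliably distinguish at most 2^b alternatives; the sketch below (our illustration of that relation, not the paper's model) shows why a capacity of roughly 2.5-3 bits lands near Miller's "magical number seven":

```python
# A b-bit channel can reliably separate at most 2**b alternatives,
# so a ~2.5-3 bit capacity caps absolute identification near 7 items.
for bits in (2.0, 2.5, 3.0):
    print(f"{bits:.1f} bits -> about {2**bits:.1f} distinguishable items")
```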

    An architecture for encoding sentence meaning in left mid-superior temporal cortex
