6 research outputs found

    Superconductivity in a new layered cobalt oxychalcogenide Na6Co3Se6O3 with a 3d^5 triangular lattice

    Unconventional superconductivity in bulk materials under ambient pressure is extremely rare among 3d transition-metal compounds outside the layered cuprates and the iron-based family. It is predominantly linked to highly anisotropic electronic properties and quasi-two-dimensional (2D) Fermi surfaces. To date, the only known example of a Co-based exotic superconductor is the hydrated layered cobaltate, NaxCoO2·yH2O, whose superconductivity is realized in the vicinity of a spin-1/2 Mott state. However, the nature of the superconductivity in these materials is still an active subject of debate, and therefore finding a new class of superconductors will help unravel the mysteries of their unconventional superconductivity. Here we report the discovery of unconventional superconductivity at ∼6.3 K in our newly synthesized layered compound Na6Co3Se6O3, in which edge-shared CoSe6 octahedra form [CoSe2] layers with a perfect triangular lattice of Co ions. It is the first 3d transition-metal oxychalcogenide superconductor, with distinct structural and chemical characteristics. Despite its relatively low Tc, the material exhibits extremely high superconducting upper critical fields, μ0Hc2(0), which far exceed the Pauli paramagnetic limit by a factor of 3–4. First-principles calculations show that Na6Co3Se6O3 is a rare example of a negative charge-transfer superconductor. This new cobalt oxychalcogenide, with geometrical frustration among the Co spins, shows great potential as a highly appealing candidate for the realization of high-Tc and/or unconventional superconductivity beyond the well-established Cu- and Fe-based superconductor families, and opens a new field in the physics and chemistry of low-dimensional superconductors.
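
    For context, the quoted Pauli paramagnetic limit can be checked with the standard weak-coupling Clogston-Chandrasekhar estimate (a back-of-envelope sketch; the gap ratio Δ0 = 1.76 kB Tc is the BCS weak-coupling assumption, which the paper itself may refine):

    \[
      \mu_0 H_{\mathrm{P}} = \frac{\Delta_0}{\sqrt{2}\,\mu_{\mathrm{B}}}
        \approx 1.86\ \mathrm{T\,K^{-1}} \times T_c
        \approx 1.86 \times 6.3 \approx 11.7\ \mathrm{T},
      \qquad \Delta_0 = 1.76\,k_{\mathrm{B}} T_c ,
    \]

    so exceeding this limit by a factor of 3–4 implies

    \[
      \mu_0 H_{c2}(0) \approx 35\text{--}47\ \mathrm{T}.
    \]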

    Superconductivity in a Layered Cobalt Oxychalcogenide Na2CoSe2O with a Triangular Lattice

    Unconventional superconductivity in bulk materials under ambient pressure is extremely rare among the 3d transition metal compounds outside the layered cuprates and iron-based family. It is predominantly linked to highly anisotropic electronic properties and quasi-two-dimensional (2D) Fermi surfaces. To date, the only known example of a Co-based exotic superconductor is the hydrated layered cobaltate, NaxCoO2·yH2O, and its superconductivity is realized in the vicinity of a spin-1/2 Mott state. However, the nature of the superconductivity in these materials is still a subject of intense debate, and therefore, finding a new class of superconductors will help unravel the mysteries of their unconventional superconductivity. Here, we report the discovery of superconductivity at ∼6.3 K in our newly synthesized layered compound Na2CoSe2O, in which the edge-shared CoSe6 octahedra form [CoSe2] layers with a perfect triangular lattice of Co ions. It is the first 3d transition metal oxychalcogenide superconductor with distinct structural and chemical characteristics. Despite its relatively low TC, this material exhibits very high superconducting upper critical fields, μ0HC2(0), which far exceed the Pauli paramagnetic limit by a factor of 3–4. First-principles calculations show that Na2CoSe2O is a rare example of a negative charge transfer superconductor. This cobalt oxychalcogenide with a geometrical frustration among Co spins shows great potential as a highly appealing candidate for the realization of unconventional and/or high-TC superconductivity beyond the well-established Cu- and Fe-based superconductor families and opens a new field in the physics and chemistry of low-dimensional superconductors.

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
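
    As a concrete illustration of the evaluation described above, the sketch below scores a text-in/text-out model by exact match over the simple JSON task layout that many BIG-bench tasks use (the "examples"/"input"/"target" field names follow the public repo; the real harness also handles multiple-choice targets, calibration metrics, and programmatic tasks). The model and helper names here are hypothetical stand-ins, not the benchmark's actual API:

    import json

    def load_examples(path):
        # Load a BIG-bench-style JSON task; assumes the simple layout
        # {"examples": [{"input": ..., "target": ...}, ...]} used by many
        # JSON tasks (programmatic tasks are defined in Python instead).
        with open(path) as f:
            return json.load(f)["examples"]

    def exact_match_accuracy(model_fn, examples):
        # One of the simplest generative-task metrics: exact string match
        # between the model's output and the (possibly multiple) targets.
        correct = 0
        for ex in examples:
            prediction = model_fn(ex["input"]).strip()
            target = ex["target"]
            targets = target if isinstance(target, list) else [target]
            correct += prediction in {t.strip() for t in targets}
        return correct / len(examples)

    if __name__ == "__main__":
        # Toy stand-in for a real language model.
        def dummy_model(prompt):
            return "42" if "6 * 7" in prompt else "unknown"

        toy = [
            {"input": "Q: 6 * 7 = ? A:", "target": "42"},
            {"input": "Q: capital of France? A:", "target": "Paris"},
        ]
        print("exact-match accuracy:", exact_match_accuracy(dummy_model, toy))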

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
    Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench