6 research outputs found
Superconductivity in a new layered cobalt oxychalcogenide Na2CoSe2O with a 3d triangular lattice
Unconventional superconductivity in bulk materials under ambient pressure is
extremely rare among 3d transition-metal compounds outside the layered
cuprates and iron-based family. It is predominantly linked to highly
anisotropic electronic properties and quasi-two-dimensional (2D) Fermi
surfaces. To date, the only known example of a Co-based exotic superconductor
is the hydrated layered cobaltate, NaxCoO2·yH2O, and its superconductivity is
realized in the vicinity of a spin-1/2 Mott state. However, the nature of the
superconductivity in these materials is still an active subject of debate,
and therefore finding a new class of superconductors will help unravel the
mysteries of their unconventional superconductivity. Here we report the
discovery of unconventional superconductivity at ∼6.3 K in our newly
synthesized layered compound Na2CoSe2O, in which the edge-shared CoSe6
octahedra form [CoSe2] layers with a perfect triangular lattice of Co ions.
It is the first 3d transition-metal oxychalcogenide superconductor with
distinct structural and chemical characteristics. Despite its relatively low
TC, the material exhibits extremely high superconducting upper critical
fields, μ0HC2(0), which far exceed the Pauli paramagnetic limit by a factor
of 3–4. First-principles calculations show that Na2CoSe2O is a rare example
of a negative charge transfer superconductor. This new cobalt oxychalcogenide
with geometrical frustration among the Co spins shows great potential as a
highly appealing candidate for the realization of high-TC and/or
unconventional superconductivity beyond the well-established Cu- and Fe-based
superconductor families, and opens a new field in the physics and chemistry
of low-dimensional superconductors.
Superconductivity in a Layered Cobalt Oxychalcogenide Na2CoSe2O with a Triangular Lattice
Unconventional superconductivity in bulk materials under
ambient
pressure is extremely rare among the 3d transition metal compounds
outside the layered cuprates and iron-based family. It is predominantly
linked to highly anisotropic electronic properties and quasi-two-dimensional
(2D) Fermi surfaces. To date, the only known example of a Co-based
exotic superconductor is the hydrated layered cobaltate, NaxCoO2·yH2O, and its superconductivity is realized in the vicinity of a spin-1/2
Mott state. However, the nature of the superconductivity in these
materials is still a subject of intense debate, and therefore, finding
a new class of superconductors will help unravel the mysteries of
their unconventional superconductivity. Here, we report the discovery
of superconductivity at ∼6.3 K in our newly synthesized layered
compound Na2CoSe2O, in which the edge-shared
CoSe6 octahedra form [CoSe2] layers with a perfect
triangular lattice of Co ions. It is the first 3d transition metal
oxychalcogenide superconductor with distinct structural and chemical
characteristics. Despite its relatively low TC, this material exhibits
very high superconducting upper critical fields, μ0HC2(0), which far
exceed the Pauli paramagnetic limit by a factor of 3–4.
First-principles calculations show that Na2CoSe2O is a rare example of a negative charge transfer superconductor.
This cobalt oxychalcogenide with geometrical frustration among the Co
spins shows great potential as a highly appealing candidate for the
realization of unconventional and/or high-TC superconductivity beyond
the well-established Cu- and Fe-based superconductor families, and opens
a new field in the physics and chemistry of low-dimensional
superconductors.
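For context, a back-of-envelope check of the Pauli-limit claim above,
assuming the standard weak-coupling (Clogston–Chandrasekhar) estimate with
the TC ≈ 6.3 K quoted in the abstract; the 1.84 T/K prefactor is the
textbook BCS value, not a number taken from the paper:

    % Weak-coupling Pauli limit (illustrative arithmetic, not paper data)
    \mu_0 H_{\mathrm{P}} \approx 1.84\,\mathrm{T\,K^{-1}} \times T_{\mathrm{C}}
                         \approx 1.84 \times 6.3 \approx 11.6\,\mathrm{T}

Exceeding this limit by a factor of 3–4 would then put \mu_0 H_{\mathrm{C2}}(0)
on the order of 35–46 T.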
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
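Many of the tasks in the BIG-bench repository are plain JSON files whose
examples pair an "input" string with either a "target" string or a
"target_scores" dictionary of scored options. The Python sketch below scores
an arbitrary generate(prompt) callable against such a file; the field names
follow that public JSON layout but should be treated as assumptions here,
and the repository's own Python API remains the authoritative evaluation
path.

    import json

    def score_json_task(task_path, generate):
        """Exact-match accuracy on a BIG-bench-style JSON task.

        `generate` is any callable mapping a prompt string to a completion
        string (an API call, a local model, or a stub for testing).
        """
        with open(task_path) as f:
            examples = json.load(f)["examples"]
        correct = 0
        for ex in examples:
            prediction = generate(ex["input"]).strip()
            if "target_scores" in ex:
                # Multiple-choice style: the highest-scored option counts
                # as the reference answer.
                best = max(ex["target_scores"], key=ex["target_scores"].get)
                correct += prediction == best
            else:
                # Generative style: a single reference string.
                correct += prediction == ex["target"].strip()
        return correct / len(examples)

    # Toy usage with a stub "model" that always answers "42":
    # print(score_json_task("some_task/task.json", lambda prompt: "42"))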
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative
impact, these new capabilities are as yet poorly characterized. In order to
inform future research, prepare for disruptive new model capabilities, and
ameliorate socially harmful effects, it is vital that we understand the present
and near-future capabilities and limitations of language models. To address
this challenge, we introduce the Beyond the Imitation Game benchmark
(BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442
authors across 132 institutions. Task topics are diverse, drawing problems from
linguistics, childhood development, math, common-sense reasoning, biology,
physics, social bias, software development, and beyond. BIG-bench focuses on
tasks that are believed to be beyond the capabilities of current language
models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense
transformer architectures, and Switch-style sparse transformers on BIG-bench,
across model sizes spanning millions to hundreds of billions of parameters. In
addition, a team of human expert raters performed all tasks in order to provide
a strong baseline. Findings include: model performance and calibration both
improve with scale, but are poor in absolute terms (and when compared with
rater performance); performance is remarkably similar across model classes,
though with benefits from sparsity; tasks that improve gradually and
predictably commonly involve a large knowledge or memorization component,
whereas tasks that exhibit "breakthrough" behavior at a critical scale often
involve multiple steps or components, or brittle metrics; social bias typically
increases with scale in settings with ambiguous context, but this can be
improved with prompting.
Comment: 27 pages, 17 figures + references and appendices, repo:
https://github.com/google/BIG-bench
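The contrast between gradually improving and "breakthrough" tasks can be
pictured with a toy model. The Python sketch below is a synthetic
illustration: the functional forms and the 1e10-parameter critical scale are
invented for exposition, not fitted to BIG-bench results.

    import math

    def gradual(n_params, floor=0.10, slope=0.06):
        # Smooth, predictable gain per decade of parameters, as seen in
        # knowledge- and memorization-heavy tasks.
        return min(1.0, floor + slope * math.log10(n_params))

    def breakthrough(n_params, critical=1e10, sharpness=4.0):
        # Logistic in log10(parameters): near-chance below the critical
        # scale, then a rapid jump once it is crossed.
        x = math.log10(n_params) - math.log10(critical)
        return 1.0 / (1.0 + math.exp(-sharpness * x))

    for n in (1e6, 1e8, 1e10, 1e12):
        print(f"{n:.0e} params  gradual={gradual(n):.2f}  "
              f"breakthrough={breakthrough(n):.2f}")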