
    Promoting School Competition Through School Choice: A Market Design Approach

    Get PDF
    We study the effect of different school choice mechanisms on schools' incentives for quality improvement. To do so, we introduce the following criterion: a mechanism respects improvements of school quality if each school becomes weakly better off whenever that school becomes more preferred by students. We first show that no stable mechanism, and no mechanism that is Pareto efficient for students (such as the Boston and top trading cycles mechanisms), respects improvements of school quality. Nevertheless, for large school districts, we demonstrate that any stable mechanism approximately respects improvements of school quality; by contrast, the Boston and top trading cycles mechanisms fail to do so. Thus a stable mechanism may provide better incentives for schools to improve themselves than the Boston and top trading cycles mechanisms.
    Keywords: Matching; School Choice; School Competition; Stability; Efficiency
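    As a concrete illustration of a stable mechanism, here is a minimal sketch of student-proposing deferred acceptance, the canonical stable mechanism. This is not code from the paper; the preference data and function names are hypothetical, chosen only to show how stability is enforced (a school tentatively holds its highest-priority applicants and rejects the rest).

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> ordered list of schools (most preferred first)
    school_prefs:  dict school  -> ordered list of students (highest priority first)
    capacities:    dict school  -> number of seats
    Returns dict school -> list of matched students.
    """
    # Precompute each school's priority rank for each student.
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school to propose to
    held = {s: [] for s in school_prefs}            # tentative assignments
    free = list(student_prefs)                      # unmatched students

    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue  # student has exhausted their preference list
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        # Keep the highest-priority applicants within capacity; reject the rest.
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacities[school]:
            free.append(held[school].pop())  # lowest-priority applicant rejected
    return held
```

    With two students who both prefer school x, and both schools prioritizing student A, the stable outcome assigns A to x and B to y.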

    Heat stress is a potent stimulus for enhancing rescue efficiency of recombinant Borna disease virus.

    Get PDF
    Recently developed vector systems based on Borna disease virus (BDV) hold promise as platforms for efficient and stable gene delivery to the central nervous system (CNS). However, because it currently takes several weeks to rescue recombinant BDV (rBDV), an improved rescue procedure would enhance the utility of this system. Heat stress reportedly enhances the rescue efficiency of other recombinant viruses. Here, heat stress was demonstrated to increase the amount of BDV genome in persistently BDV-infected cells without obvious cytotoxicity. Further analyses suggested that the effect of heat stress on BDV infection is not caused by an increase in the activity of BDV polymerase. More cells in which BDV replication occurs were obtained in the initial phase of rBDV rescue when heat stress was used than when it was not. Thus, heat stress is a useful improvement on the published rescue procedure for rBDV. The present findings may accelerate the practical use of BDV vector systems in basic science and the clinic, and thus enable broader adoption of this viral vector, which is uniquely suited for gene delivery to the CNS.

    A ternary complex model of Sirtuin4-NAD+-Glutamate dehydrogenase

    Get PDF
    Sirtuin4 (Sirt4) is one of the mammalian homologues of Silent information regulator 2 (Sir2), which promotes longevity in yeast, C. elegans, fruit flies and mice. Sirt4 is localized in the mitochondria, where it contributes to preventing the development of cancers and ischemic heart disease by regulating energy metabolism. The ADP-ribosylation of glutamate dehydrogenase (GDH), which is catalyzed by Sirt4, downregulates the TCA cycle. However, this reaction mechanism is obscure because the structure of Sirt4 is unknown. Here, we constructed structural models of Sirt4 by homology modeling and threading, and docked nicotinamide adenine dinucleotide (NAD+) to Sirt4. In addition, a partial GDH structure was docked to the Sirt4-NAD+ complex model. In the ternary complex model of Sirt4-NAD+-GDH, the acetylated lysine 171 of GDH is located close to NAD+. This suggests a possible mechanism underlying the ADP-ribosylation at cysteine 172, which may occur through a transient intermediate with ADP-ribosylation at the acetylated lysine 171. These results may be useful in designing drugs for the treatment of cancers and ischemic heart disease.

    Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text

    Full text link
    While Large Language Models (LLMs) have achieved remarkable performance in many tasks, much about their inner workings remains unclear. In this study, we present novel experimental insights into the resilience of LLMs, particularly GPT-4, when subjected to extensive character-level permutations. To investigate this, we first propose the Scrambled Bench, a suite designed to measure the capacity of LLMs to handle scrambled input, in terms of both recovering scrambled sentences and answering questions given scrambled context. The experimental results indicate that the most powerful LLMs demonstrate a capability akin to typoglycemia, a phenomenon in which humans can understand the meaning of words even when the letters within those words are scrambled, as long as the first and last letters remain in place. More surprisingly, we find that only GPT-4 nearly flawlessly processes inputs with unnatural errors, even under the extreme condition, a task that poses significant challenges for other LLMs and often even for humans. Specifically, GPT-4 can almost perfectly reconstruct the original sentences from scrambled ones, decreasing the edit distance by 95%, even when all letters within each word are entirely scrambled. It is counter-intuitive that LLMs can exhibit such resilience despite the severe disruption to input tokenization caused by scrambled text.
    Comment: EMNLP 2023 (with an additional analysis section in the appendix)
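    The two scrambling regimes described, keeping the first and last letters in place (typoglycemia) versus scrambling every letter in each word, can be sketched as follows. This is a minimal illustration of the input construction, not the Scrambled Bench code itself; the function names and seeding are assumptions.

```python
import random

def scramble_word(word, rng, keep_edges=True):
    """Shuffle a word's letters; optionally pin the first and last letters."""
    if keep_edges:
        if len(word) <= 3:
            return word  # nothing interior to shuffle
        inner = list(word[1:-1])
        rng.shuffle(inner)
        return word[0] + ''.join(inner) + word[-1]
    letters = list(word)
    rng.shuffle(letters)  # extreme condition: all letters scrambled
    return ''.join(letters)

def scramble_sentence(sentence, keep_edges=True, seed=0):
    """Scramble each whitespace-separated word independently."""
    rng = random.Random(seed)
    return ' '.join(scramble_word(w, rng, keep_edges) for w in sentence.split())
```

    Because scrambling is per-word and length-preserving, the recovery task reduces to unscrambling each word in context, which is what makes GPT-4's near-perfect reconstruction under the fully scrambled regime surprising.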

    Lurasidone‐induced hyperosmolar hyperglycemic syndrome: A case report

    Get PDF
    [Introduction] Lurasidone has few metabolic adverse effects and is recommended as an alternative when other antipsychotic drugs considerably increase body weight or blood glucose concentrations. [Case presentation] An 81-year-old man with bipolar disorder developed hyperosmolar hyperglycemic syndrome as a side effect of lurasidone. Routine monitoring of blood glucose concentrations led to the early detection and treatment of this disease, preventing life-threatening complications. [Discussion and conclusion] We describe a rare case of lurasidone-induced hyperosmolar hyperglycemic syndrome. The mortality rate of this syndrome is estimated to be up to 20%, significantly higher than that of diabetic ketoacidosis (currently <2%). Although lurasidone is considered to carry a low risk of raising blood glucose concentrations, symptoms of hyperglycemia must be evaluated and blood glucose concentrations should be monitored regularly.

    Homogeneous and Heterogeneous Photocatalytic Water Oxidation by Persulfate

    Full text link
    Photocatalytic water oxidation by persulfate (Na2S2O8) with [Ru(bpy)3]2+ (bpy = 2,2′-bipyridine) as a photocatalyst provides a standard protocol for studying the catalytic reactivity of water oxidation catalysts. The yield of evolved oxygen per persulfate is regarded as a good index of catalytic reactivity, because oxidation of the bpy of [Ru(bpy)3]2+ and of the organic ligands of catalysts competes with the catalytic water oxidation. A variety of metal complexes act as catalysts in the photocatalytic water oxidation by persulfate with [Ru(bpy)3]2+ as a photocatalyst. Herein, the catalytic mechanisms are discussed for homogeneous water oxidation catalysis. Some metal complexes are converted to metal oxide or hydroxide nanoparticles during the photocatalytic water oxidation by persulfate, acting as precursors for the actual catalysts. The catalytic reactivity of various metal oxides is compared based on the yield of evolved oxygen and turnover frequency. A heteropolynuclear cyanide complex is the best catalyst reported so far for the photocatalytic water oxidation by persulfate and [Ru(bpy)3]2+, affording 100% yield of O2 per persulfate.
    Waterworld: Homogeneous and heterogeneous catalysis and mechanisms of photocatalytic oxidation of water by persulfate with [Ru(bpy)3]2+ are compared and discussed, including the conversion from homogeneous precatalysts to heterogeneous catalysts.

    Large Language Models are Zero-Shot Reasoners

    Full text link
    Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners with task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, has achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the 175B-parameter InstructGPT model, as well as similar magnitudes of improvement with another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
    Comment: Accepted to NeurIPS 2022. Our code is available at https://github.com/kojima-takeshi188/zero_shot_co
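    The Zero-shot-CoT scheme described above works in two stages: a reasoning-extraction prompt appends "Let's think step by step" to the question, and an answer-extraction prompt then conditions on the generated reasoning. A minimal sketch follows; `llm` is a hypothetical callable standing in for any model API, and the answer-format hint is an assumption for numeric tasks:

```python
def zero_shot_cot(question, llm):
    """Two-stage Zero-shot-CoT prompting.

    llm: callable taking a prompt string and returning the model's completion.
    """
    # Stage 1: elicit step-by-step reasoning with the trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = llm(reasoning_prompt)

    # Stage 2: extract the final answer, conditioned on the reasoning.
    answer_prompt = (f"{reasoning_prompt} {reasoning}\n"
                     "Therefore, the answer (arabic numerals) is")
    return llm(answer_prompt).strip()
```

    The key point is that the same single template is reused across all benchmarks; no per-task few-shot exemplars are crafted.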