26 research outputs found

    Making Scalable Meta Learning Practical

    Despite its flexibility to learn diverse inductive biases in machine learning programs, meta learning (i.e., learning to learn) has long been recognized to suffer from poor scalability due to its tremendous compute/memory costs, training instability, and a lack of efficient distributed training support. In this work, we focus on making scalable meta learning practical by introducing SAMA, which combines advances in both implicit differentiation algorithms and systems. Specifically, SAMA is designed to flexibly support a broad range of adaptive optimizers at the base level of meta learning programs, while reducing computational burden by avoiding explicit computation of second-order gradient information and exploiting efficient distributed training techniques implemented for first-order gradients. Evaluated on multiple large-scale meta learning benchmarks, SAMA shows up to a 1.7x/4.8x increase in throughput and a 2.0x/3.8x decrease in memory consumption on single- and multi-GPU setups, respectively, compared to baseline meta learning algorithms. Furthermore, we show that SAMA-based data optimization leads to consistent improvements in text classification accuracy with the BERT and RoBERTa large language models, and achieves state-of-the-art results in both small- and large-scale data pruning on image classification tasks, demonstrating the practical applicability of scalable meta learning across language and vision domains.
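    The abstract's central idea — obtaining a hypergradient for the meta level without explicitly forming second-order terms — can be illustrated on a toy bilevel problem. The sketch below uses a finite-difference hypergradient on an assumed quadratic inner problem; it shows the general technique only, not SAMA's actual algorithm, optimizers, or benchmarks.

```python
import numpy as np

# Toy bilevel problem (assumed for illustration; not from the paper):
#   inner:  w*(lam) = argmin_w  0.5 * (w - lam)**2   ->  w* = lam
#   outer:  minimize over lam   0.5 * (w*(lam) - target)**2
target = 3.0

def inner_solve(lam, steps=50, lr=0.5):
    """Approximate the inner argmin with plain gradient descent."""
    w = 0.0
    for _ in range(steps):
        w -= lr * (w - lam)          # gradient of the inner loss w.r.t. w
    return w

def hypergradient(lam, eps=1e-4):
    """Central-difference hypergradient: avoids explicit second-order
    computation, in the spirit the abstract describes."""
    u = lambda l: 0.5 * (inner_solve(l) - target) ** 2
    return (u(lam + eps) - u(lam - eps)) / (2 * eps)

lam = 0.0
for _ in range(100):
    lam -= 0.2 * hypergradient(lam)

print(lam)   # converges toward target = 3.0
```

    Each outer step re-solves the inner problem, so in practice implicit-differentiation methods amortize this cost; the point here is only that the meta update never touches a Hessian.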

    Recurrent Syncope Triggered by Temporal Lobe Epilepsy: Ictal Bradycardia Syndrome

    Ictal asystole is potentially lethal and is known to originate from the involvement of limbic autonomic regions. Appropriate treatment must include an antiepileptic drug and the implantation of a pacemaker. We report the case of a 54-year-old male with recurrent syncope secondary to ictal asystole triggered by temporal lobe epilepsy. This was confirmed by combined Holter and video-electroencephalogram monitoring.

    Traumatic Entrapment of the Vertebrobasilar Junction Due to a Longitudinal Clival Fracture: A Case Report

    Vertebrobasilar junction entrapment due to a clivus fracture is a rare clinical observation. The present case report describes a 54-year-old man who sustained a major craniofacial injury. The patient displayed a stuporous mental state (Glasgow Coma Scale [GCS]=8) and left hemiparesis (Grade 3). The initial computed tomography (CT) scan revealed a right subdural hemorrhage in the frontotemporal region, with a midline shift and a longitudinal clival fracture. A decompressive craniectomy with removal of the hematoma was performed. Two days after surgery, a follow-up CT scan showed cerebellar and brain stem infarction, and a CT angiogram revealed occlusion of the left vertebral artery and entrapment of the vertebrobasilar junction by the clival fracture. A decompressive suboccipital craniectomy was performed and the patient gradually recovered. This appears to be a rare case of traumatic vertebrobasilar junction entrapment due to a longitudinal clival fracture, including a cerebellar infarction caused by a left vertebral artery occlusion. A literature review is provided.

    Betty: An Automatic Differentiation Library for Multilevel Optimization

    Multilevel optimization has been widely adopted as a mathematical foundation for a myriad of machine learning problems, such as hyperparameter optimization, meta-learning, and reinforcement learning, to name a few. Nonetheless, implementing multilevel optimization programs often requires expertise in both mathematics and programming, stunting research in this field. We take an initial step towards closing this gap by introducing Betty, a high-level software library for gradient-based multilevel optimization. To this end, we develop an automatic differentiation procedure based on a novel interpretation of multilevel optimization as a dataflow graph. We further abstract the main components of multilevel optimization as Python classes, to enable easy, modular, and maintainable programming. We empirically demonstrate that Betty can be used as a high-level programming interface for an array of multilevel optimization programs, while also observing up to an 11% increase in test accuracy, a 14% decrease in GPU memory usage, and a 20% decrease in wall time over existing implementations on multiple benchmarks. The code is available at http://github.com/leopard-ai/betty.
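    The design the abstract describes — each optimization level as a Python class, wired into a dependency graph — can be sketched in miniature. The class and attribute names below are illustrative only, not Betty's actual API, and the numeric-gradient inner loop stands in for real automatic differentiation.

```python
# Hypothetical mini version of the idea: one class per optimization level,
# with levels wired into a dataflow graph. (Names are illustrative.)

class Level:
    def __init__(self, name, params, loss_fn, lr=0.1):
        self.name, self.params, self.loss_fn, self.lr = name, params, loss_fn, lr
        # recorded for the dataflow graph; the loss closures below read the
        # parent's params directly
        self.parents = []

    def step(self):
        # crude forward-difference gradient so the sketch stays dependency-free
        eps = 1e-5
        grads = []
        for i, p in enumerate(self.params):
            bumped = list(self.params)
            bumped[i] = p + eps
            grads.append((self.loss_fn(bumped) - self.loss_fn(self.params)) / eps)
        self.params = [p - self.lr * g for p, g in zip(self.params, grads)]

# two-level example: outer pulls lam toward 2.0, inner fits w to lam
outer = Level("outer", [0.0], lambda ps: (ps[0] - 2.0) ** 2)
inner = Level("inner", [0.0], lambda ps: (ps[0] - outer.params[0]) ** 2)
inner.parents = [outer]

for _ in range(200):
    inner.step()
    outer.step()

print(inner.params[0], outer.params[0])   # both approach 2.0
```

    The point of the class abstraction is that adding a third level is just another `Level` object and an edge in the graph, rather than a hand-derived update rule.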

    Preventive Effect of Glutamate on the Lactate-Induced Decline in Muscle Contractility

    The accumulation of lactate in exercising muscle causes fatigue, with deterioration in muscle performance. Under relatively anaerobic conditions, generating lactate is a normal physiological response that compensates the NAD/NADH balance of the tissue so that ATP production can continue. In such a state, an alternative route of lactate metabolism can be opened via a transamination reaction through pyruvate to alanine. Since LDH and GPT are near-equilibrium enzymes, whose kinetics operate under the simple mass-action law of their substrates, it is possible to modulate lactate metabolism through the supply of glutamate. In the present experiment, we analyzed the effect of lactate perfusion on muscular contractility and the preventive effect of glutamate on the lactate-induced decrease in muscle performance. Maximal twitch tension decreased with lactate infusion, whereas it increased with glutamate infusion and decreased rapidly with alanine infusion. From these results, it can be concluded that the administration of glutamate may improve exercise efficiency by preventing lactate accumulation in exercising muscle tissue.
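    The mass-action argument — GPT coupling pyruvate to alanine so that supplied glutamate pulls the LDH equilibrium away from lactate — can be illustrated numerically. The following is a toy relaxation with made-up rate constants and an equilibrium constant of 1 for both reactions; it is not the paper's measured kinetics.

```python
def relax(lactate, glutamate, k=0.1, steps=2000):
    """Iterate two near-equilibrium reactions toward their fixed point:
       LDH: lactate <-> pyruvate
       GPT: pyruvate + glutamate <-> alanine (+ alpha-ketoglutarate)"""
    pyruvate = alanine = 0.0
    for _ in range(steps):
        # LDH net flux toward equal lactate/pyruvate (Keq taken as 1)
        f1 = k * (lactate - pyruvate)
        # GPT net flux by mass action: forward pyruvate*glutamate, reverse alanine
        f2 = k * (pyruvate * glutamate - alanine)
        lactate -= f1
        pyruvate += f1 - f2
        glutamate -= f2
        alanine += f2
    return lactate

low  = relax(lactate=10.0, glutamate=0.0)   # no transamination sink
high = relax(lactate=10.0, glutamate=5.0)   # glutamate drains pyruvate to alanine

print(low > high)   # more glutamate leaves less lactate at equilibrium
```

    Without glutamate, lactate and pyruvate simply split the pool; with glutamate, the GPT reaction drains pyruvate and, by mass action, lactate follows — the same qualitative effect the abstract attributes to glutamate administration.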