
    FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization

    Post-training quantization (PTQ) has been gaining popularity for deploying deep neural networks on resource-limited devices because, unlike quantization-aware training, it requires neither a full training dataset nor end-to-end training. Since PTQ schemes based on reconstructing each layer or block output have proven effective at enhancing quantized model performance, recent works have developed algorithms to devise and learn a new weight-rounding scheme that better reconstructs each layer or block output. In this work, we propose a simple yet effective new weight-rounding mechanism for PTQ, coined FlexRound, based on element-wise division instead of the typical element-wise addition, so that FlexRound jointly learns a common quantization grid size and a different scale for each pre-trained weight. Thanks to the reciprocal rule of derivatives induced by element-wise division, FlexRound is inherently able to exploit pre-trained weights when updating their corresponding scales and can thus quantize pre-trained weights flexibly depending on their magnitudes. We empirically validate the efficacy of FlexRound on a wide range of models and tasks. To the best of our knowledge, our work is the first to carry out comprehensive experiments on not only image classification and natural language understanding but also natural language generation, assuming a per-tensor uniform PTQ setting. Moreover, we demonstrate, for the first time, that large language models can be efficiently quantized, with only a negligible impact on performance compared to half-precision baselines, by reconstructing the output in a block-by-block manner. Comment: Accepted to ICML 2023.
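
    As a rough illustration of the abstract's mechanism, the PyTorch-style sketch below shows division-based rounding with a learnable per-tensor grid size s and a learnable element-wise scale S; the function name, tensor shapes, and straight-through estimator are our own assumptions, not the authors' implementation. Because the weight is divided by the scale, the gradient of W / (s * S) with respect to S is proportional to -W / (s * S^2), so larger-magnitude weights receive larger scale updates, which is the reciprocal-rule behavior the abstract refers to.

        import torch

        def flexround_style_quantize(W: torch.Tensor, s: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
            # W: pre-trained weight tensor
            # s: learnable common (per-tensor) quantization grid size
            # S: learnable element-wise scale, same shape as W, initialized to ones
            W_scaled = W / (s * S)  # element-wise division instead of an additive rounding offset
            # Straight-through estimator: round() in the forward pass, identity gradient in the backward pass
            W_rounded = W_scaled + (torch.round(W_scaled) - W_scaled).detach()
            return s * W_rounded  # map back onto the common quantization grid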

    Xenopus: An alternative model system for identifying muco-active agents

    The airway epithelium in humans plays a central role as the first line of defense against environmental contaminants. Most respiratory diseases, such as chronic obstructive pulmonary disease (COPD), asthma, and respiratory infections, disturb normal muco-ciliary function by stimulating the hypersecretion of mucus. Several muco-active agents have been used to treat hypersecretion symptoms in patients. Current muco-active reagents control mucus secretion by modulating either airway inflammation or cholinergic parasympathetic nerve activity, or by reducing viscosity through cleaving crosslinks in mucin and digesting DNA in mucus. However, none of the current medications regulates mucus secretion by directly targeting airway goblet cells. The major hurdle for screening potential muco-active agents that directly affect goblet cells is the unavailability of in vivo model systems suitable for high-throughput screening. In this study, we developed a high-throughput in vivo model system for identifying muco-active reagents using Xenopus laevis embryos. We tested mucus secretion under various conditions and developed a screening strategy to identify potential muco-regulators. Using this novel screening technique, we identified narasin as a potential muco-regulator. Narasin treatment of developing Xenopus embryos significantly reduced mucus secretion. Furthermore, the human lung epithelial cell line Calu-3 responded similarly to narasin treatment, validating our technique for discovering muco-active reagents.

    Titanium dioxide induces apoptotic cell death through reactive oxygen species-mediated Fas upregulation and Bax activation

    Background: Titanium dioxide (TiO2) has been widely used in many areas, including biomedicine, cosmetics, and environmental engineering. Recently, it has become evident that some TiO2 particles have a considerable cytotoxic effect in normal human cells. However, the molecular basis for the cytotoxicity of TiO2 has yet to be defined. Methods and results: In this study, we demonstrated that combined treatment with TiO2 nanoparticles sized less than 100 nm and ultraviolet A irradiation induces apoptotic cell death through reactive oxygen species-dependent upregulation of Fas and conformational activation of Bax in normal human cells. Treatment with P25 TiO2 nanoparticles with a hydrodynamic size distribution centered around 70 nm (TiO2P25-70), together with ultraviolet A irradiation, induced caspase-dependent apoptotic cell death accompanied by transcriptional upregulation of the death receptor Fas and conformational activation of Bax. In line with these results, knockdown of either Fas or Bax with specific siRNA significantly inhibited TiO2-induced apoptotic cell death. Moreover, inhibition of reactive oxygen species with the antioxidant N-acetyl-L-cysteine clearly suppressed upregulation of Fas, conformational activation of Bax, and subsequent apoptotic cell death in response to combination treatment with TiO2P25-70 and ultraviolet A irradiation. Conclusion: These results indicate that treatment with sub-100 nm TiO2 under ultraviolet A irradiation induces apoptotic cell death through reactive oxygen species-mediated upregulation of the death receptor Fas and activation of the pro-apoptotic protein Bax. Elucidating the molecular mechanisms by which nanosized particles activate cell death signaling pathways will be critical for developing prevention strategies that minimize the cytotoxicity of nanomaterials. This work was supported by the Korea Ministry of Environment and The Eco-Technopia 21 Project (091-091-081).

    VR/AR head-mounted display system based measurement and evaluation of dynamic visual acuity

    This study evaluated the dynamic visual acuity of participants by implementing a King–Devick (K-D) test chart in a virtual reality head-mounted display (VR HMD) and an augmented reality head-mounted display (AR HMD). Hard-copy K-D (HCKD), VR HMD K-D (VHKD), and AR HMD K-D (AHKD) tests were conducted with 30 male and female participants in their 10s and 20s, and subjective symptom surveys were administered. In the subjective symptom surveys, all but one of the VHKD questionnaire items scored less than 1 point. In the comparison between HCKD and VHKD, HCKD was completed more rapidly than VHKD in all tests. In the comparison between HCKD and AHKD, HCKD was completed more rapidly than AHKD in Tests 1, 2, and 3. In the comparison between VHKD and AHKD, AHKD was completed more rapidly than VHKD in Tests 1, 2, and 3. In the correlation analyses of the test platforms, all platforms were correlated with each other, except for the correlation between HCKD and VHKD in Tests 1 and 2. There was no significant difference in the frequency of errors among Tests 1, 2, and 3 across test platforms. VHKD and AHKD, which require the body to be moved to read the chart, required longer measurement times than HCKD. In the measurements of each platform, AHKD results were closer to those of HCKD than were VHKD results, which may be because the AHKD environment more closely resembles the actual environment than the VHKD environment does. The effectiveness of the VHKD and AHKD proposed in this research was evaluated experimentally. The results suggest that treatment and training could be performed concurrently through clinical testing and content development for VHKD and AHKD.

    Abducens Nerve Palsy Complicated by Inferior Petrosal Sinus Septic Thrombosis Due to Mastoiditis

    We present a very rare case of a 29-month-old boy with acute-onset right abducens nerve palsy complicated by inferior petrosal sinus septic thrombosis due to mastoiditis without petrous apicitis. Four months after mastoidectomy, the patient had fully recovered from an esotropia of 30 prism diopters and an abduction limitation (-4) in his right eye.

    Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization

    Large language models (LLMs) face challenges in fine-tuning and deployment due to their high memory demands and computational costs. While parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage of the optimizer state during fine-tuning, the inherent size of pre-trained LLM weights remains a pressing concern. Even though quantization techniques are widely proposed to ease memory demands and accelerate LLM inference, most of these techniques are geared towards the deployment phase. To bridge this gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation (PEQA), a simple yet effective method that combines the advantages of PEFT with quantized LLMs. By updating solely the quantization scales, PEQA can be directly applied to quantized LLMs, ensuring seamless task transitions. Like existing PEFT methods, PEQA significantly reduces the memory overhead associated with the optimizer state. Furthermore, it leverages the advantages of quantization to substantially reduce model sizes. Even after fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact, allowing for accelerated inference at the deployment stage. We employ PEQA-tuning for task-specific adaptation on LLMs with up to 65 billion parameters. To assess the logical reasoning and language comprehension of PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using an instruction dataset. Our results show that even when LLMs are quantized to below 4-bit precision, their capabilities in language modeling, few-shot in-context learning, and comprehension can be resiliently restored to (or even improved over) their full-precision original performance with PEQA. Comment: Published at NeurIPS 2023; camera-ready version.
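
    As a rough illustration of scale-only adaptation, the PyTorch-style sketch below freezes a weight matrix as low-bit integers and exposes only the per-output-channel quantization scale as a trainable parameter; the class name, symmetric quantization scheme, and 4-bit default are our own assumptions rather than the paper's code. Because the optimizer then tracks only the scales, its state stays small, and the frozen integer weights can be reused directly for low-bit inference after adaptation.

        import torch
        import torch.nn as nn

        class ScaleOnlyQuantLinear(nn.Module):
            # Illustrative sketch: integer weights are frozen buffers,
            # and only the per-output-channel scale is fine-tuned.
            def __init__(self, weight: torch.Tensor, bits: int = 4):
                super().__init__()
                qmax = 2 ** (bits - 1) - 1
                scale = weight.abs().amax(dim=1, keepdim=True) / qmax  # per-row scale from pre-trained weights
                q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
                self.register_buffer("q_weight", q.to(torch.int8))  # frozen low-bit integer weights
                self.scale = nn.Parameter(scale)                     # the only trainable tensor

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Dequantize on the fly; gradients flow only into self.scale.
                return x @ (self.scale * self.q_weight.float()).t()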

    Gianotti-Crosti Syndrome Following Novel Influenza A (H1N1) Vaccination
