
    Inelastic Strength Behavior of Horizontally Curved Composite I-Girder Bridge Structural Systems

    This research investigates the strength behavior of horizontally curved composite I-girder bridge structural systems, and the representation of this behavior by the AASHTO (2004b) LRFD provisions. The primary focus is on the design of a representative curved composite I-girder bridge tested at the FHWA Turner-Fairbank Highway Research Center, interpretation of the results from the testing of this bridge, including correlation with extensive linear and nonlinear finite element analysis solutions, and parametric extension of the test results using finite element models similar to those validated against the physical tests. These studies support the potential liberalization of the AASHTO (2004b) provisions by the use of a plastic-moment-based resistance, reduced by flange lateral bending effects, for composite I-girders in positive bending.
    Ph.D. Committee Chair: Dr. Donald W. White; Committee Member: Dr. Kenneth M. Will; Committee Member: Dr. Olivier Bauchau; Committee Member: Dr. Rami Haj-Ali; Committee Member: Dr. Roberto T. Leo

    FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization

    Post-training quantization (PTQ) has been gaining popularity for the deployment of deep neural networks on resource-limited devices because, unlike quantization-aware training, it requires neither a full training dataset nor end-to-end training. As PTQ schemes based on reconstructing each layer or block output have proven effective at enhancing quantized model performance, recent works have developed algorithms that devise and learn a new weight-rounding scheme to better reconstruct each layer or block output. In this work, we propose a simple yet effective new weight-rounding mechanism for PTQ, coined FlexRound, based on element-wise division instead of the typical element-wise addition, so that FlexRound jointly learns a common quantization grid size and a different scale for each pre-trained weight. Thanks to the reciprocal rule of derivatives induced by element-wise division, FlexRound is inherently able to exploit pre-trained weights when updating their corresponding scales, and thus flexibly quantizes pre-trained weights depending on their magnitudes. We empirically validate the efficacy of FlexRound on a wide range of models and tasks. To the best of our knowledge, our work is the first to carry out comprehensive experiments not only on image classification and natural language understanding but also on natural language generation, assuming a per-tensor uniform PTQ setting. Moreover, we demonstrate, for the first time, that large language models can be efficiently quantized, with only a negligible impact on performance compared to half-precision baselines, achieved by reconstructing the output in a block-by-block manner.
    Comment: Accepted to ICML 202
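    To make the division-based rounding concrete, the following is a minimal PyTorch sketch of the mechanism described above: a frozen pre-trained weight is divided by the product of a learnable per-tensor grid size and a learnable per-weight scale, rounded with a straight-through estimator, and rescaled. The class and parameter names are illustrative assumptions, not the authors' reference implementation, and clipping to the low-bit integer range is omitted for brevity.

```python
import torch
import torch.nn as nn

class FlexRoundSketch(nn.Module):
    """Illustrative sketch of learnable, division-based rounding (not the official code)."""

    def __init__(self, weight: torch.Tensor, init_grid: float = 1e-2):
        super().__init__()
        # Frozen pre-trained weight (no gradient updates).
        self.register_buffer("weight", weight.detach().clone())
        # Common (per-tensor) quantization grid size, learned in log space to stay positive.
        self.log_grid = nn.Parameter(torch.tensor(init_grid).log())
        # Per-weight scale applied by element-wise division, initialized to 1.
        self.log_scale = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self) -> torch.Tensor:
        s = self.log_grid.exp()          # common grid size
        scale = self.log_scale.exp()     # per-weight scale
        # Element-wise division: the gradient w.r.t. the scale depends on the weight itself.
        z = self.weight / (s * scale)
        z_rounded = (z.round() - z).detach() + z   # straight-through estimator
        return s * z_rounded             # simulated quantized weight
```

    In a layer-wise reconstruction loop, one would feed calibration inputs through both the original layer and a copy whose weight is produced by this module, minimizing the output mismatch while updating only log_grid and log_scale.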

    Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models

    The proposed method, Discriminator Guidance, aims to improve sample generation of pre-trained diffusion models. The approach introduces a discriminator that gives explicit supervision to a denoising sample path as to whether it is realistic or not. Unlike GANs, our approach does not require joint training of the score and discriminator networks. Instead, we train the discriminator after score training, making discriminator training stable and fast to converge. For sample generation, we add an auxiliary term to the pre-trained score to deceive the discriminator. At the optimal discriminator, this term corrects the model score toward the data score, which implies that the discriminator improves score estimation in a complementary way. Using our algorithm, we achieve state-of-the-art results on ImageNet 256x256 with FID 1.83 and recall 0.64, similar to the validation data's FID (1.68) and recall (0.66). We release the code at https://github.com/alsdudrla10/DG.
    Comment: International Conference on Machine Learning (ICML23
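    The auxiliary term can be read as the gradient of the discriminator's log density ratio, added to the pre-trained score during sampling. Below is a minimal, hedged PyTorch sketch of that idea; the interfaces (a score_model(x, t) and a discriminator(x, t) returning a realism probability) are assumptions for illustration, not the released code's API.

```python
import torch

def discriminator_correction(x, t, discriminator):
    """Gradient of log(d / (1 - d)), the discriminator's log density ratio (illustrative sketch)."""
    x = x.detach().requires_grad_(True)
    d = discriminator(x, t).clamp(1e-6, 1.0 - 1e-6)  # probability that x is realistic at time t
    log_ratio = torch.log(d) - torch.log1p(-d)
    grad, = torch.autograd.grad(log_ratio.sum(), x)
    return grad

def guided_score(x, t, score_model, discriminator, weight=1.0):
    # Pre-trained score plus the auxiliary correction term used during sampling.
    return score_model(x, t) + weight * discriminator_correction(x, t, discriminator)
```

    A sampler would simply call guided_score wherever it previously evaluated the pre-trained score; both the score network and the discriminator stay frozen at this stage.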

    Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization

    Large language models (LLMs) face challenges in fine-tuning and deployment due to their high memory demands and computational costs. While parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage of the optimizer state during fine-tuning, the inherent size of pre-trained LLM weights remains a pressing concern. Although quantization techniques are widely proposed to ease memory demands and accelerate LLM inference, most of these techniques are geared toward the deployment phase. To bridge this gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation (PEQA), a simple yet effective method that combines the advantages of PEFT with quantized LLMs. By updating only the quantization scales, PEQA can be directly applied to quantized LLMs, ensuring seamless task transitions. Like existing PEFT methods, PEQA significantly reduces the memory overhead associated with the optimizer state. Furthermore, it leverages the advantages of quantization to substantially reduce model sizes. Even after fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact, allowing for accelerated inference at the deployment stage. We employ PEQA-tuning for task-specific adaptation on LLMs with up to 65 billion parameters. To assess the logical reasoning and language comprehension of PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using an instruction dataset. Our results show that even when LLMs are quantized to below 4-bit precision, their capabilities in language modeling, few-shot in-context learning, and comprehension can be resiliently restored to (or even improved over) their full-precision original performances with PEQA.
    Comment: Published at NeurIPS 2023. Camera-ready versio
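    The core idea, updating only the quantization scales while the low-bit integer weights stay frozen, can be sketched as a drop-in linear layer. The sketch below is a simplified reading under assumed shapes (per-output-channel scales), not the paper's released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PEQALinearSketch(nn.Module):
    """Linear layer with frozen low-bit integer weights and trainable scales (illustrative sketch)."""

    def __init__(self, int_weight: torch.Tensor, scale: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        # Frozen integer weights and bias: buffers carry no gradients and no optimizer state.
        self.register_buffer("int_weight", int_weight)   # shape: (out_features, in_features)
        self.register_buffer("bias", bias)
        # Only the per-output-channel scales are updated during fine-tuning.
        self.scale = nn.Parameter(scale)                 # shape: (out_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.scale * self.int_weight.float()         # dequantize on the fly
        return F.linear(x, w, self.bias)
```

    Because the integer weights are buffers, only scale shows up in model.parameters(), so the optimizer state grows with the number of scale entries rather than the full weight matrix, and the quantized structure is preserved after fine-tuning.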

    Protective Effects of Emodin and Chrysophanol Isolated from Marine Fungus Aspergillus sp. on Ethanol-Induced Toxicity in HepG2/CYP2E1 Cells

    Alcohol-induced liver injury progresses from fatty infiltration through harmful inflammation to irreversible damage. In this study, two compounds (emodin and chrysophanol) isolated from the marine fungus Aspergillus sp. were examined for their protective effects against ethanol-induced toxicity in vitro. Ethanol-induced HepG2/CYP2E1 cells were treated with the compounds at various concentrations, and the results showed a dose-dependent decrease in gamma-glutamyl transpeptidase (GGT) activity and an increase in glutathione (GSH) in the culture media, together with an increase in cell viability. Furthermore, the protective effects of the compounds were evaluated through the protein expression levels of GGT, GSH, and CYP2E1 using Western blot. Of the two compounds, emodin counteracted ethanol-induced cytotoxicity more effectively than chrysophanol. These results suggest that emodin isolated from this genus is a potential candidate for attenuating ethanol-induced liver damage, with further industrial applications such as functional foods and pharmaceutical development.