Oscillation-free Quantization for Low-bit Vision Transformers
Weight oscillation is an undesirable side effect of quantization-aware
training, in which quantized weights frequently jump between two quantized
levels, resulting in training instability and a sub-optimal final model. We
discover that the learnable scaling factor, a widely-used setting in
quantization, aggravates weight oscillation. In this study, we
investigate the connection between the learnable scaling factor and quantized
weight oscillation and use ViT as a case driver to illustrate the findings and
remedies. We also found that the interdependence between the quantized
weights in the query and key of a self-attention layer makes
ViT vulnerable to oscillation. We, therefore, propose three techniques
accordingly: statistical weight quantization (StatsQ) to improve
quantization robustness compared to the prevalent learnable-scale-based method;
confidence-guided annealing (CGA) that freezes the weights with high
confidence and calms the oscillating weights; and
query-key reparameterization (QKR) to resolve the
query-key intertwined oscillation and mitigate the resulting gradient
misestimation. Extensive experiments demonstrate that these proposed techniques
successfully abate weight oscillation and consistently achieve substantial
accuracy improvement on ImageNet. Specifically, our 2-bit DeiT-T/DeiT-S
algorithms outperform the previous state-of-the-art by 9.8% and 7.7%,
respectively. Code and models are available at: https://github.com/nbasyl/OFQ. Comment: Proceedings of the 40th International Conference on Machine
Learning, Honolulu, Hawaii, USA. PMLR 202, 2023.
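The oscillation the abstract describes is easy to reproduce in miniature. The toy sketch below is purely illustrative (it is not the paper's code, and the sign-based gradient rule is a stand-in assumption): a latent weight parked near a quantization decision boundary flips between two quantized levels on successive straight-through updates.

```python
# Illustrative toy (not the paper's code): a latent weight near a quantization
# decision boundary oscillates between two levels under straight-through
# quantization-aware training updates.

def quantize(w, scale):
    """Round w to the nearest integer multiple of `scale` (uniform quantizer)."""
    return round(w / scale) * scale

scale = 0.1   # fixed scale for clarity; the abstract argues a learnable scale
              # aggravates exactly this behavior
w = 0.049     # latent (full-precision) weight, just below the 0.05 boundary
lr = 0.004
history = []

for step in range(6):
    q = quantize(w, scale)          # quantized weight used in the forward pass
    history.append(q)
    # Assumed toy gradient: its sign depends on the quantization error, so each
    # update pushes the latent weight back across the rounding boundary.
    grad = 1.0 if q > w else -1.0
    w -= lr * grad                  # straight-through update on the latent weight

print(history)  # the quantized value jumps: [0.0, 0.1, 0.0, 0.1, 0.0, 0.1]
```

The final model keeps whichever level the last flip landed on, which is the "sub-optimal final model" the abstract refers to; freezing or annealing such weights is the intuition behind the proposed remedies.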
Team Quotients, Resilience, and Performance of Software Development Projects
Past studies have examined actions and strategies that software project teams can take to reduce the negative impact of uncertainties, such as changing requirements. Software development project teams often have to be flexible in following pre-defined plans while striving to meet project goals. Sometimes uncertainty becomes extreme, temporarily slowing projects down and pushing project teams into reduced productivity. Project teams should be resilient so that they can recover from this reduced-productivity condition and move forward toward predefined goals. This study focuses on understanding the importance of team resilience for software project teams and exploring its antecedents. Specifically, we investigate the impacts of the intelligence quotient and emotional quotient on team resilience capability, i.e., the extent to which a project team can recover from impediments and move forward. This is a research-in-progress work; a plan for a future empirical test is discussed at the end.
An effective hybrid of hill climbing and genetic algorithm for 2D triangular protein structure prediction
Mechanism of thermal field and electric field in resistive random access memory using the high/low-k side wall structure
In the Internet of things (IoT) era, low-power memory will be a critical issue for further device development. Among the many kinds of next-generation memories, resistive random access memory (RRAM) is considered to have the most potential due to its high performance. To prevent unrecoverable hard breakdown of a RRAM device, the RRAM should be collocated with a transistor for external current compliance. With decreasing device cell size, however, the operating voltage of the transistor becomes smaller and smaller. A previous study determined that the forming voltage of RRAM increases when device cell size is reduced, which is a crucial issue as the device is scaled down. We have proposed a high-k sidewall spacer structure in RRAM to resolve the dilemma of increasing forming voltages under device cell scaling; the design is supported by COMSOL-simulated electric field distributions in the high-k RRAM. In addition, the thermal conductivity of the sidewall spacer influences resistive switching behavior, and a suitable thermal conductivity of the sidewall material can enhance it.
Efficient Quantization-aware Training with Adaptive Coreset Selection
The expanding model size and computation of deep neural networks (DNNs) have
increased the demand for efficient model deployment methods. Quantization-aware
training (QAT) is a representative model compression method to leverage
redundancy in weights and activations. However, most existing QAT methods
require end-to-end training on the entire dataset, which suffers from long
training time and high energy costs. Coreset selection, aiming to improve data
efficiency utilizing the redundancy of training data, has also been widely used
for efficient training. In this work, we propose a new angle: using coreset
selection to improve the training efficiency of quantization-aware training.
Based on the characteristics of QAT, we propose two metrics, the error vector
score and the disagreement score, to quantify the importance of each sample
during training. Guided by these two importance metrics, we propose a
quantization-aware adaptive coreset selection (ACS) method to select the data
for the current training epoch. We evaluate our method on various networks
(ResNet-18, MobileNetV2), datasets (CIFAR-100, ImageNet-1K), and under different
quantization settings. Compared with previous coreset selection methods, our
method significantly improves QAT performance with different dataset fractions.
Our method achieves an accuracy of 68.39% with 4-bit quantized ResNet-18 on
the ImageNet-1K dataset using only a 10% subset, an absolute gain of 4.24%
over the baseline. Comment: Code: https://github.com/HuangOwen/QAT-AC
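The two importance metrics are only named in the abstract; their exact definitions are in the paper and the linked code. As a hedged illustration, the sketch below assumes an EL2N-style form for the error vector score and a KL-divergence form for the disagreement score between the quantized and full-precision models; the function names, formulas, and the additive combination are assumptions, not the authors' definitions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def error_vector_score(logits_q, labels, num_classes):
    """Assumed EL2N-style score: ||softmax(quantized logits) - one_hot(y)||_2."""
    p = softmax(logits_q)
    one_hot = np.eye(num_classes)[labels]
    return np.linalg.norm(p - one_hot, axis=-1)

def disagreement_score(logits_q, logits_fp):
    """Assumed form: KL(p_fp || p_q), how much the quantized model disagrees
    with the full-precision model on this sample."""
    p_fp, p_q = softmax(logits_fp), softmax(logits_q)
    return np.sum(p_fp * (np.log(p_fp + 1e-12) - np.log(p_q + 1e-12)), axis=-1)

def select_coreset(logits_q, logits_fp, labels, num_classes, fraction=0.1):
    """Keep the top `fraction` of samples ranked by the combined score."""
    score = (error_vector_score(logits_q, labels, num_classes)
             + disagreement_score(logits_q, logits_fp))
    k = max(1, int(fraction * len(labels)))
    return np.argsort(score)[::-1][:k]   # indices of the most important samples
```

In an adaptive scheme like the one the abstract describes, such scores would be recomputed each epoch so the selected subset tracks what the quantized model currently finds hard.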
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
We propose LLM-FP4 for quantizing both weights and activations in large
language models (LLMs) down to 4-bit floating-point values, in a post-training
manner. Existing post-training quantization (PTQ) solutions are primarily
integer-based and struggle with bit widths below 8 bits. Compared to integer
quantization, floating-point (FP) quantization is more flexible and can better
handle long-tail or bell-shaped distributions, and it has emerged as a default
choice in many hardware platforms. One characteristic of FP quantization is
that its performance largely depends on the choice of exponent bits and
clipping range. In this regard, we construct a strong FP-PTQ baseline by
searching for the optimal quantization parameters. Furthermore, we observe a
high inter-channel variance and low intra-channel variance pattern in
activation distributions, which adds activation quantization difficulty. We
recognize this pattern to be consistent across a spectrum of transformer models
designed for diverse tasks, such as LLMs, BERT, and Vision Transformer models.
To tackle this, we propose per-channel activation quantization and show that
these additional scaling factors can be reparameterized as exponential biases
of weights, incurring a negligible cost. Our method, for the first time, can
quantize both weights and activations of LLaMA-13B to only 4 bits and
achieves an average score of 63.1 on the common sense zero-shot reasoning
tasks, which is only 5.8 lower than the full-precision model, significantly
outperforming the previous state-of-the-art by 12.7 points. Code is available
at: https://github.com/nbasyl/LLM-FP4. Comment: EMNLP 2023 Main Conference
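To make the FP4 setting concrete, here is a hedged simulation of low-bit floating-point quantization (an illustrative sketch, not the LLM-FP4 implementation): it enumerates every value representable with a sign bit, `e_bits` exponent bits, and `m_bits` mantissa bits under the usual IEEE-style bias and subnormal conventions, then rounds inputs to the nearest grid point. The paper's search over exponent bits and clipping range would sit on top of a quantizer like this.

```python
import numpy as np

def fp_grid(e_bits, m_bits, bias=None):
    """All values representable by a sign bit, e_bits exponent bits, and
    m_bits mantissa bits, with IEEE-style subnormals for exponent code 0."""
    if bias is None:
        bias = 2 ** (e_bits - 1) - 1            # default IEEE-style bias
    values = {0.0}
    for e in range(2 ** e_bits):                # every exponent code
        for m in range(2 ** m_bits):            # every mantissa code
            if e == 0:                          # subnormal: no implicit leading 1
                mag = (m / 2 ** m_bits) * 2.0 ** (1 - bias)
            else:
                mag = (1 + m / 2 ** m_bits) * 2.0 ** (e - bias)
            values.update({mag, -mag})
    return np.array(sorted(values))

def fp_quantize(x, e_bits=2, m_bits=1):
    """Simulated FP quantization: clip to the representable range, then round
    each element to the nearest grid point."""
    grid = fp_grid(e_bits, m_bits)
    x = np.clip(x, grid[0], grid[-1])
    idx = np.abs(x[..., None] - grid).argmin(axis=-1)
    return grid[idx]
```

With `e_bits=2, m_bits=1` (a 4-bit E2M1 format), the grid is {0, ±0.5, ±1, ±1.5, ±2, ±3, ±4, ±6}, which makes the abstract's point visible: the levels cluster near zero and spread out in the tails, matching bell-shaped weight and activation distributions better than a uniform integer grid.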
General Versus Spinal Anesthesia: Which is a Risk Factor for Octogenarian Hip Fracture Repair Patients?
Summary. Background: Most studies have shown no difference between the two types of anesthesia administered to hip fracture patients. This study compared postoperative morbidity and mortality in octogenarian patients who received either general or spinal anesthesia for hip fracture repair. Methods: We retrospectively analyzed the hospital records of 335 octogenarian patients who received hip fracture repair in our teaching hospital between 2002 and 2006. A total of 167 and 168 patients received general and spinal anesthesia, respectively. Morbidity, mortality, and intraoperative and preoperative variables were compared between groups. Results: There were no mortality differences between the spinal and general anesthesia groups. However, overall morbidity was greater in the general anesthesia group than in the spinal anesthesia group (21/167 [12.6%] vs. 9/168 [5.4%]; p = 0.02). Respiratory system-related morbidity was also higher in the general anesthesia group than in the spinal anesthesia group (11/167 [6.6%] vs. 3/168 [1.8%]; p = 0.03). Logistic regression analysis revealed two significant predictors of postoperative morbidity: anesthesia type (general; odds ratio, 2.39) and preexisting respiratory diseases (odds ratio, 3.38). Conclusion: General anesthesia increased the risk of postoperative morbidity in octogenarian patients after hip fracture repair, and patients with preexisting respiratory diseases were especially vulnerable. Spinal anesthesia is strongly recommended in such individuals.
Understanding the Role of Knowledge Co-Production between Users and Developers in ISD Project: An Intellectual Capital Perspective
Information system development (ISD) has long been treated as a process in which system developers craft an artifact to support business operations based on their specialized expertise. However, a significant portion of projects still fail because the developed outcome does not fit users' needs. An emerging internal-service concept indicates that, by treating ISD as a type of service, requirement definition can be viewed as a co-production process in which users and developers integrate their own knowledge. By incorporating this concept into the research design and taking an intellectual capital perspective, this study proposes a model to examine the antecedents and consequences of knowledge co-production between users and developers. Data collected from 267 developers confirmed our hypotheses that knowledge co-production benefits ISD outcomes, and that common knowledge, relational capital, and participative decision-making between the two parties increase the effectiveness of knowledge co-production. Lastly, implications for academics and practitioners are also provided.
Anthropomorphism of AI-based Intelligent Customer Service, and Its Affective and Behavioral Consequences
Recently, as many users turn to social media to interact with service providers, organizations apply artificial intelligence (AI) to improve the efficiency and effectiveness of their operations. This type of customer service system is called intelligent customer service (ICS), and one of its most commonly adopted tools is the chatbot. Since chatbots are AI-empowered, whether such systems can effectively interact with customers and solve their problems is critical. However, the quality of ICS has received significant attention recently, and a lack of systematic study on the outcomes of anthropomorphism leaves this question unanswered in an ICS context. Based on a cognitive-affective-behavioral framework, this study attempts to understand whether anthropomorphism can promote desired behaviors (including usage and citizenship behaviors) by enhancing affective outcomes such as satisfaction and identity. Using data collected from 183 chatbot-ICS users, this study illustrates how anthropomorphism can increase quality and enhance satisfaction and identity. Furthermore, we also show that satisfaction and identity lead to further usage and citizenship behaviors. This highlights the importance of increasing anthropomorphism for chatbot-ICS.