A hybrid adaptive MCMC algorithm in function spaces
The preconditioned Crank-Nicolson (pCN) method is a Markov chain Monte Carlo
(MCMC) scheme designed specifically to perform Bayesian inference in function
spaces. Unlike many standard MCMC algorithms, the pCN method preserves its
sampling efficiency under mesh refinement, a property referred to as
dimension independence. In this work we consider an adaptive strategy to
further improve the efficiency of pCN. In particular, we develop a hybrid
adaptive MCMC method: the algorithm performs an adaptive Metropolis scheme in
a chosen finite-dimensional subspace, and a standard pCN algorithm in the
complement of that subspace. We show that the proposed algorithm satisfies
certain important ergodicity conditions. Finally, with numerical examples we
demonstrate that the proposed method performs competitively with existing
adaptive algorithms.
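
A minimal sketch of the hybrid idea described above, assuming a discretised Gaussian prior that is diagonal in the chosen basis (e.g. Karhunen-Loève coordinates); the negative log-likelihood `phi`, the subspace dimension `r`, and all step parameters are illustrative placeholders rather than the paper's exact formulation:

```python
import numpy as np

def hybrid_adaptive_pcn(phi, prior_std, r, n_iter, beta=0.2, seed=0):
    """Hybrid sampler sketch: adaptive Metropolis on the first r coordinates
    (the chosen finite-dimensional subspace), pCN on the remainder.

    phi       : negative log-likelihood, callable on a state vector
    prior_std : prior standard deviations; the Gaussian prior is assumed
                diagonal in this basis (an assumption of this sketch)
    """
    rng = np.random.default_rng(seed)
    d = prior_std.size
    u = np.zeros(d)                       # start at the prior mean
    chain = np.empty((n_iter, d))
    mean, cov = np.zeros(r), np.eye(r)    # running stats for adaptation
    for k in range(n_iter):
        v = u.copy()
        # Adaptive Metropolis proposal in the subspace: a random walk whose
        # covariance tracks the empirical covariance of the chain so far.
        prop_cov = (2.4**2 / r) * cov + 1e-8 * np.eye(r)
        v[:r] = u[:r] + rng.multivariate_normal(np.zeros(r), prop_cov)
        # pCN proposal in the complement: prior-reversible autoregression.
        xi = prior_std[r:] * rng.standard_normal(d - r)
        v[r:] = np.sqrt(1.0 - beta**2) * u[r:] + beta * xi
        # The pCN part contributes only the likelihood ratio; the random
        # walk on the subspace is not prior-reversible, so it also needs
        # the prior ratio on those coordinates.
        log_prior_ratio = 0.5 * np.sum((u[:r]**2 - v[:r]**2)
                                       / prior_std[:r]**2)
        if np.log(rng.uniform()) < phi(u) - phi(v) + log_prior_ratio:
            u = v
        chain[k] = u
        # Recursive update of the subspace mean and covariance.
        w = 1.0 / (k + 2)
        delta = u[:r] - mean
        mean = mean + w * delta
        cov = (1.0 - w) * cov + w * np.outer(delta, delta)
    return chain
```

Letting the adaptation weight decay as the chain grows, as the recursive update above does, is one common way to satisfy the diminishing-adaptation condition that ergodicity arguments for adaptive MCMC typically require.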
ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats
In the complex domain of large language models (LLMs), striking a balance
between computational efficiency and model quality is a formidable challenge.
Navigating the inherent limitations of uniform quantization, particularly
when dealing with outliers, and motivated by the launch of NVIDIA's H100
hardware, this study delves into the viability of floating-point (FP)
quantization, focusing on FP8 and FP4, as a potential solution. Our
comprehensive investigation reveals that, for LLMs, FP8 activation
consistently outshines its integer (INT8) equivalent, with the performance
edge becoming more noticeable in models with more than one billion
parameters. For weight quantization, our findings indicate that FP4 exhibits
comparable, if not superior, performance to INT4, simplifying deployment on
FP-supported hardware such as the H100. To mitigate the overhead of precision
alignment caused by the disparity between weight and activation formats, we
propose two scaling constraints for weight quantization that have negligible
impact on performance compared to the standard W4A8 model. We additionally
enhance our quantization methods by integrating the Low Rank Compensation
(LoRC) strategy, yielding improvements especially in smaller models. The
results of our investigation emphasize the immense potential of FP
quantization for LLMs, paving the way for high-efficiency deployment in
resource-limited settings.
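
As a concrete illustration of the weight side, below is a minimal sketch of simulated per-channel FP4 quantization. The E2M1 value grid, the power-of-two scale rounding (one plausible way to cheapen weight/activation precision alignment), and the function name are assumptions made for illustration, not the paper's exact method:

```python
import numpy as np

# Positive magnitudes representable in an FP4 E2M1 format (assumed grid,
# for illustration only); the sign is handled separately below.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quantize_fp4(w, power_of_two_scale=True):
    """Simulated FP4 weight quantization with one scale per output channel.

    w : array of shape (out_channels, in_features).
    With power_of_two_scale=True, the per-channel scale is rounded to the
    nearest power of two, which turns rescaling against 8-bit activations
    into a cheap exponent shift (a hedged illustration of a scaling
    constraint, not necessarily one of the paper's two).
    """
    absmax = np.abs(w).max(axis=1, keepdims=True)
    scale = absmax / FP4_GRID[-1]              # map the largest weight to 6.0
    if power_of_two_scale:
        scale = 2.0 ** np.round(np.log2(np.maximum(scale, 1e-12)))
    mag = np.abs(w) / scale                    # scaled magnitudes
    idx = np.abs(mag[..., None] - FP4_GRID).argmin(axis=-1)
    return np.sign(w) * FP4_GRID[idx] * scale  # dequantized weights

# Quick round-trip check on random weights.
w = np.random.default_rng(0).standard_normal((4, 16))
print(np.abs(w - fake_quantize_fp4(w)).mean())
```

Because the FP4 grid is non-uniform, it spends more of its levels near zero where weight mass concentrates, which is the intuition behind floating-point formats coping better with outliers than uniform INT4 grids.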