Sparse Array Enabled Near-Field Communications: Beam Pattern Analysis and Hybrid Beamforming Design
The extremely large-scale array (XL-array) has emerged as a promising technology
to enable near-field communications for achieving enhanced spectrum efficiency
and spatial resolution, by drastically increasing the number of antennas.
However, this also inevitably incurs higher hardware and energy cost, which may
not be affordable in future wireless systems. To address this issue, we propose
in this paper to exploit two types of sparse arrays (SAs) for enabling
near-field communications. Specifically, we first consider the linear sparse
array (LSA) and characterize its near-field beam pattern. It is shown that
despite the achieved beam-focusing gain, the LSA introduces several undesired
grating-lobes, which have comparable beam power with the main-lobe and are
focused on specific regions. An efficient hybrid beamforming design is then
proposed for the LSA to deal with the potential strong inter-user interference
(IUI). Next, we consider another form of SA, called extended coprime array
(ECA), which is composed of two LSA subarrays with different (coprime)
inter-antenna spacing. By characterizing the ECA near-field beam pattern, we
show that compared with the LSA with the same array sparsity, the ECA can
greatly suppress the beam power of near-field grating-lobes thanks to the
offset effect of the two subarrays, albeit with a larger number of
grating-lobes. This thus motivates us to propose a customized two-phase hybrid
beamforming design for the ECA. Finally, numerical results are presented to
demonstrate the rate performance gain of the proposed two SAs over the
conventional uniform linear array (ULA).
Comment: In this paper, we propose to exploit sparse arrays for enabling
near-field communications and characterize their unique beam patterns for
facilitating hybrid beamforming design.
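To make the grating-lobe behavior above concrete, here is a minimal NumPy
sketch (ours, not the paper's code; the carrier frequency, array size, and
sparsity factor eta are assumptions) that evaluates the near-field,
spherical-wavefront beam pattern of an LSA against a half-wavelength ULA:

```python
# Near-field beam-pattern sketch (assumed parameters, not the paper's setup):
# N-antenna linear array, LSA spacing = eta * lambda/2 vs. ULA lambda/2.
import numpy as np

c, fc = 3e8, 30e9          # speed of light, assumed 30 GHz carrier
lam = c / fc               # wavelength
N, eta = 64, 4             # antennas, assumed LSA sparsity factor

def steering(r, theta, spacing):
    """Spherical-wavefront (near-field) steering vector for a linear array."""
    d = (np.arange(N) - (N - 1) / 2) * spacing              # antenna positions
    rn = np.sqrt(r**2 + d**2 - 2 * r * d * np.sin(theta))   # exact distances
    return np.exp(-1j * 2 * np.pi * rn / lam) / np.sqrt(N)

def beam_pattern(r0, th0, r, th, spacing):
    """Beam power at (r, th) for a beam focused on (r0, th0)."""
    return np.abs(steering(r0, th0, spacing).conj() @ steering(r, th, spacing))

r0 = 10.0                                  # focus the beam at 10 m, broadside
ths = np.linspace(-np.pi / 2, np.pi / 2, 2001)
ula = [beam_pattern(r0, 0.0, r0, th, lam / 2) for th in ths]
lsa = [beam_pattern(r0, 0.0, r0, th, eta * lam / 2) for th in ths]
# The LSA curve shows grating lobes near sin(theta) = 2k/eta with power
# comparable to the main lobe, matching the behavior described above.
```

Sweeping r instead of theta in the same sketch exhibits the beam-focusing
gain in the range domain.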
Learning Transferable Self-attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision
Action recognition in videos has attracted a lot of attention in the past
decade. In order to learn robust models, previous methods usually assume videos
are trimmed as short sequences and require ground-truth annotations of each
video frame/sequence, which is quite costly and time-consuming. In this paper,
given only video-level annotations, we propose a novel weakly supervised
framework to simultaneously locate action frames as well as recognize actions
in untrimmed videos. Our proposed framework consists of two major components.
First, for action frame localization, we take advantage of the self-attention
mechanism to weight each frame, such that the influence of background frames
can be effectively eliminated. Second, considering that publicly available
trimmed videos contain useful information to leverage, we present an
additional module to transfer knowledge from trimmed videos
for improving the classification performance in untrimmed ones. Extensive
experiments are conducted on two benchmark datasets (i.e., THUMOS14 and
ActivityNet1.3), and experimental results clearly corroborate the efficacy of
our method.
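As a rough illustration of the first component, the following PyTorch sketch
(our reading of the mechanism, with invented dimensions; not the authors'
code) shows how self-attentive frame weights can suppress background frames
while the model is trained with only a video-level label:

```python
# Self-attentive frame weighting for weakly supervised action recognition
# (illustrative sketch; feature dimension and class count are assumptions).
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        self.scorer = nn.Sequential(              # per-frame attention score
            nn.Linear(feat_dim, 256), nn.Tanh(), nn.Linear(256, 1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):                    # frames: (T, feat_dim)
        scores = self.scorer(frames)              # (T, 1) unnormalized scores
        attn = torch.softmax(scores, dim=0)       # frame weights sum to 1
        video_feat = (attn * frames).sum(0)       # pooling downweights background
        return self.classifier(video_feat), attn.squeeze(-1)

# Training uses only a video-level label; the per-frame weights double as
# action-frame localization scores at test time.
model = AttentivePooling()
logits, frame_weights = model(torch.randn(300, 1024))    # 300-frame video
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))
```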
Resfusion: Prior Residual Noise embedded Denoising Diffusion Probabilistic Models
Recently, Denoising Diffusion Probabilistic Models have been widely used in
image segmentation, by generating segmentation masks conditioned on the input
image. However, previous works cannot seamlessly integrate existing end-to-end
models with denoising diffusion models. Existing research can only choose
acceleration steps empirically rather than deriving them analytically.
Moreover, most methods are limited to small models and
small-scale datasets, unable to generalize to general datasets and a wider
range of tasks. Therefore, we propose Resfusion with a novel resnoise-diffusion
process, which gradually generates segmentation masks or any type of target
image, seamlessly integrating state-of-the-art end-to-end models and denoising
diffusion models. Resfusion bridges the discrepancy between the likelihood
output and the ground truth output through a Markov process. Through the novel
smooth equivalence transformation in the resnoise-diffusion process, we determine
the optimal acceleration step. Experimental results demonstrate that Resfusion
combines the capabilities of existing end-to-end models and denoising diffusion
models, further enhancing performance and achieving outstanding results.
Moreover, Resfusion is not limited to segmentation tasks; it generalizes
readily to general image-generation tasks and exhibits strong
competitiveness.
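The following PyTorch sketch is our hedged reading of the abstract, not the
released implementation: the ground truth is noised as in a standard DDPM,
the "resnoise" target additionally absorbs the residual between the
end-to-end model's output and the ground truth, and the acceleration step is
picked where the two mixing weights coincide (an assumption about the smooth
equivalence transformation):

```python
# Residual-noise ("resnoise") diffusion sketch; schedule and the acceleration
# heuristic are assumptions, not the paper's exact formulation.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # standard DDPM noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def resnoise_target(x_gt, x_coarse, t):
    """Noise the ground truth as usual; rewriting x_t around the end-to-end
    model's output x_coarse makes the 'noise' target (resnoise) also absorb
    the residual x_gt - x_coarse."""
    eps = torch.randn_like(x_gt)
    a, s = alphas_bar[t].sqrt(), (1 - alphas_bar[t]).sqrt()
    x_t = a * x_gt + s * eps                     # = a * x_coarse + s * resnoise
    resnoise = (a * (x_gt - x_coarse) + s * eps) / s
    return x_t, resnoise

def acceleration_step():
    """Assumed heuristic for the optimal acceleration step: start the reverse
    process where the signal and noise mixing weights are equal."""
    return int(torch.argmin((alphas_bar.sqrt() - (1 - alphas_bar).sqrt()).abs()))
```

At inference, the reverse process then starts from the coarse model output at
`acceleration_step()` rather than from pure noise at step T.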
Not All Weights Are Created Equal: Enhancing Energy Efficiency in On-Device Streaming Speech Recognition
Power consumption plays an important role in on-device streaming speech
recognition, as it has a direct impact on the user experience. This study
delves into how weight parameters in speech recognition models influence the
overall power consumption of these models. We discovered that the impact of
weight parameters on power consumption varies with factors such as how often
they are invoked and where they are placed in memory. Armed with this
insight, we developed design guidelines aimed at optimizing on-device speech
recognition models. These guidelines focus on minimizing power use without
substantially affecting accuracy. Our method, which employs targeted
compression based on the varying sensitivities of weight parameters,
demonstrates superior performance compared to state-of-the-art compression
methods. It achieves a reduction in energy usage of up to 47% while maintaining
similar model accuracy and improving the real-time factor.
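A toy sketch of the guideline as we read it (names, scores, and bit budgets
are all invented for illustration): rank weight tensors by how often they are
invoked relative to how sensitive accuracy is to quantizing them, and give
the high-traffic, low-sensitivity tensors the fewest bits:

```python
# Targeted compression sketch: high-traffic, low-sensitivity tensors are
# quantized most aggressively (hypothetical scores, not the paper's recipe).
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization to a given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def assign_bits(invocations, sensitivity, budgets=(4, 6, 8)):
    """Rank tensors by invocation count / sensitivity; the most energy-relevant
    and least sensitive tensors receive the fewest bits."""
    order = sorted(invocations,
                   key=lambda n: invocations[n] / (sensitivity[n] + 1e-9),
                   reverse=True)
    groups = np.array_split(order, len(budgets))
    return {name: bits for grp, bits in zip(groups, budgets) for name in grp}

# Toy example: the encoder runs on every audio frame, the decoder per token.
inv  = {"encoder.w": 1000, "decoder.w": 50, "joiner.w": 200}
sens = {"encoder.w": 0.01, "decoder.w": 0.10, "joiner.w": 0.05}
print(assign_bits(inv, sens))   # encoder.w -> 4 bits, decoder.w -> 8 bits
```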
LLM-QAT: Data-Free Quantization Aware Training for Large Language Models
Several post-training quantization methods have been applied to large
language models (LLMs), and have been shown to perform well down to 8-bits. We
find that these methods break down at lower bit precision, and investigate
quantization aware training for LLMs (LLM-QAT) to push quantization levels even
further. We propose a data-free distillation method that leverages generations
produced by the pre-trained model, which better preserves the original output
distribution and allows quantizing any generative model independent of its
training data, similar to post-training quantization methods. In addition to
quantizing weights and activations, we also quantize the KV cache, which is
critical for increasing throughput and supporting long sequence dependencies at
current model sizes. We experiment with LLaMA models of sizes 7B, 13B, and 30B,
at quantization levels down to 4-bits. We observe large improvements over
training-free methods, especially in low-bit settings.
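The abstract implies the standard QAT building block of fake quantization
with a straight-through estimator applied to weights, activations, and the
KV cache; here is a minimal PyTorch sketch of that block (the per-row scale
granularity and 4-bit width are assumptions, not LLM-QAT's exact choices):

```python
# Fake quantization with a straight-through estimator (STE): quantize in the
# forward pass, pass gradients through unchanged in the backward pass.
import torch

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, bits=4):
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
        return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):        # STE: identity gradient
        return grad_out, None

def qat_linear(x, w, bits=4):
    """Linear layer with fake-quantized activations and weights; the same
    FakeQuant can be applied to cached K/V tensors."""
    return FakeQuant.apply(x, bits) @ FakeQuant.apply(w, bits).t()

x = torch.randn(2, 16, requires_grad=True)
w = torch.randn(8, 16, requires_grad=True)
qat_linear(x, w).sum().backward()       # gradients flow to full-precision w
```

In the data-free setting described above, the training inputs for this block
would be sequences generated by the full-precision model itself rather than
the original training data.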