A High-Accuracy Adaptive Beam Training Algorithm for MmWave Communication
In millimeter wave communications, beam training is an effective way to
achieve beam alignment. The traditional beam training method allocates training
resources equally to each beam in the pre-designed beam training codebook. The
performance of this method is far from satisfactory, because different beams
have different beamforming gains, and thus some beams are more difficult to
distinguish from the optimal beam than others. In this paper, we propose a new
beam training algorithm which adaptively allocates training resources to each
beam. Specifically, the proposed algorithm allocates more training symbols to
the beams with relatively higher beamforming gain, while using fewer resources
to distinguish the beams with relatively lower beamforming gain. Through
theoretical analysis and numerical simulations, we show that in practical
situations the proposed adaptive algorithm asymptotically outperforms the
traditional method in terms of beam training accuracy. Moreover, simulations
also show that this relative performance advantage holds in the non-asymptotic
regime.
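
As a rough illustration of the adaptive-allocation idea (a generic weighted-allocation heuristic written for this listing, not the paper's exact algorithm or analysis), the following Python sketch spends a uniform warm-up round on all beams and then gives more of the remaining training symbols to the beams whose current gain estimates are higher:

import numpy as np

def adaptive_beam_training(true_gains, budget, rounds=4, noise_std=0.5, rng=None):
    # Illustrative heuristic: allocate training symbols in proportion to the
    # current gain estimates, so strong (hard-to-separate) beams get measured more.
    rng = np.random.default_rng() if rng is None else rng
    n = len(true_gains)
    est = np.zeros(n)       # running estimate of each beam's gain
    counts = np.zeros(n)    # training symbols spent on each beam so far
    per_round = budget // rounds
    for r in range(rounds):
        if r == 0:
            alloc = np.full(n, per_round // n)                    # uniform warm-up round
        else:
            w = np.maximum(est, 1e-6)                             # weight by estimated gain
            alloc = np.floor(per_round * w / w.sum()).astype(int)
        for i, k in enumerate(alloc):
            if k == 0:
                continue
            obs = true_gains[i] + noise_std * rng.standard_normal(k)   # k noisy measurements
            est[i] = (est[i] * counts[i] + obs.sum()) / (counts[i] + k)
            counts[i] += k
    return int(np.argmax(est))

# Example: 16-beam codebook, 1024-symbol training budget.
rng = np.random.default_rng(0)
gains = np.sort(rng.rayleigh(1.0, 16))
print(adaptive_beam_training(gains, budget=1024, rng=rng))

Under these assumptions, beams that look weak after the warm-up round receive few further symbols, while the contenders near the top of the codebook are measured more finely.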
Implicit Neural Representation for Cooperative Low-light Image Enhancement
The following three factors restrict the application of existing low-light
image enhancement methods: unpredictable brightness degradation and noise, the
inherent gap between metric-favorable and visually friendly results, and
limited paired training data. To address these limitations, we propose an
implicit Neural Representation method for Cooperative low-light image
enhancement, dubbed NeRCo. It robustly recovers perceptually friendly results in
an unsupervised manner. Concretely, NeRCo unifies the diverse degradation
factors of real-world scenes with a controllable fitting function, leading to
better robustness. In addition, for the output results, we introduce
semantic-oriented supervision with priors from a pre-trained
vision-language model. Instead of merely following reference images, it
encourages the results to meet subjective expectations, yielding more
visually friendly solutions. Further, to ease the reliance on paired data and
reduce solution space, we develop a dual-closed-loop constrained enhancement
module. It is trained cooperatively with other affiliated modules in a
self-supervised manner. Finally, extensive experiments demonstrate the
robustness and superior effectiveness of our proposed NeRCo. Our code is
available at https://github.com/Ysz2022/NeRCo.
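
As a loose illustration of the kind of vision-language prior mentioned above (the prompt wording, the CLIP backbone choice, and the loss form are assumptions made for this sketch, not NeRCo's released implementation), one could score the enhanced output against a "well-lit" versus "dark" text description:

import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.eval()
dtype = next(model.parameters()).dtype  # fp16 on GPU, fp32 on CPU

# One "positive" and one "negative" text prior (prompt wording is an assumption).
prompts = clip.tokenize(["a well-lit, clear photo", "a dark, noisy photo"]).to(device)
with torch.no_grad():
    text_feat = F.normalize(model.encode_text(prompts), dim=-1)        # (2, 512)

def semantic_loss(enhanced):
    """enhanced: (B, 3, H, W) tensor in [0, 1]; lower loss = closer to the 'well-lit' prompt."""
    x = F.interpolate(enhanced, size=(224, 224), mode="bilinear", align_corners=False)
    mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=x.device).view(1, 3, 1, 1)
    std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=x.device).view(1, 3, 1, 1)
    img_feat = F.normalize(model.encode_image(((x - mean) / std).to(dtype)), dim=-1)  # (B, 512)
    logits = 100.0 * img_feat @ text_feat.t()                           # cosine similarities
    target = torch.zeros(len(x), dtype=torch.long, device=x.device)     # index 0 = "well-lit"
    return F.cross_entropy(logits.float(), target)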
Determining Appropriate Lane-Changing Spacing for Off-Ramp Areas of Urban Expressways
Congestion has become a significant issue in recent years and has greatly affected the efficiency of urban traffic operation. Random and disorderly lane-changing behavior greatly reduces traffic capacity and safety. This paper is mainly concerned with the relationship between the lane-changing spacing intervals provided by off-ramp facilities and traffic flow conditions. Through field investigations in Beijing, several typical lane-changing behaviors at off-ramp areas are analyzed. Using field traffic data and actual road geometry parameters, VISSIM-based micro-behavior simulations of off-ramp areas are implemented to obtain traffic flow conditions under different lane-changing spacing intervals and other model parameters, such as traffic volume and the ratio of off-ramp vehicles. The numerical relationships between traffic flow state and model parameters can then be derived. The results show that as traffic volume and the ratio of off-ramp vehicles increase, the lane-changing spacing interval required by vehicles should also increase. For the same ratio of off-ramp vehicles, if the traffic volume increases by 100 pcu/h/lane (pcu stands for passenger car unit, i.e., one standard passenger car), the corresponding lane-changing spacing interval should be increased by 50–100 m to avoid worsening congestion. Based on the results of this paper, smart lane management can be implemented by optimizing lane-changing spacing intervals and lane-changing behaviors to improve traffic capacity.
Document type: Article
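
A back-of-the-envelope reading of the reported trend (only the 50–100 m per additional 100 pcu/h/lane increment comes from the abstract; the baseline volume and baseline spacing below are hypothetical placeholders, not values from the study) could be expressed as:

def recommended_spacing(volume_pcu, base_volume=1000, base_spacing_m=200.0,
                        extra_per_100pcu=(50.0, 100.0)):
    # For every 100 pcu/h/lane above the baseline volume, add 50-100 m of
    # lane-changing spacing; returns a (low, high) range in meters.
    steps = max(volume_pcu - base_volume, 0) / 100.0
    low = base_spacing_m + steps * extra_per_100pcu[0]
    high = base_spacing_m + steps * extra_per_100pcu[1]
    return low, high

print(recommended_spacing(1300))  # -> (350.0, 500.0)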
DDRF: Denoising Diffusion Model for Remote Sensing Image Fusion
The denoising diffusion model, as a generative model, has recently received a
lot of attention in the field of image generation thanks to its powerful
generation capability. However, diffusion models have not yet been sufficiently
studied in the field of image fusion. In this article, we introduce the
diffusion model to the image fusion field, treating the image fusion task as
image-to-image translation and designing two different conditional injection
modulation modules (i.e., style transfer modulation and wavelet modulation) to
inject coarse-grained style information and fine-grained high-frequency and
low-frequency information into the diffusion UNet, thereby generating fused
images. In addition, we also discuss residual learning and the selection of
training objectives for the diffusion model in the image fusion task.
Extensive quantitative and qualitative comparisons with benchmark methods
demonstrate state-of-the-art results and good generalization performance in
image fusion tasks. Finally, we hope that our method can inspire other works
and provide insight into this field so that the diffusion model can be better
applied to image fusion tasks. Code will be released for better
reproducibility.
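
As a rough sketch of how a conditional injection modulation module of this kind might work (a generic FiLM-style scale-and-shift block; the layer sizes and the choice of conditioning input are assumptions, not DDRF's released code):

import torch
import torch.nn as nn

class ConditionalModulation(nn.Module):
    """FiLM-style modulation: predicts per-channel scale/shift from a condition map."""
    def __init__(self, feat_ch, cond_ch):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch * 2, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(feat_ch * 2, feat_ch * 2, kernel_size=1),
        )

    def forward(self, feat, cond):
        # Resize the condition to the feature resolution, then modulate the features.
        cond = nn.functional.interpolate(cond, size=feat.shape[-2:], mode="bilinear",
                                         align_corners=False)
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=1)
        return feat * (1 + scale) + shift

# Example: inject a 6-channel condition (e.g., concatenated source images) into a
# 128-channel UNet feature map.
mod = ConditionalModulation(feat_ch=128, cond_ch=6)
feat = torch.randn(2, 128, 32, 32)
cond = torch.randn(2, 6, 256, 256)
print(mod(feat, cond).shape)  # torch.Size([2, 128, 32, 32])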
Physics-informed Deep Super-resolution for Spatiotemporal Data
High-fidelity simulation of complex physical systems is exorbitantly
expensive and inaccessible across spatiotemporal scales. Recently, there has
been increasing interest in leveraging deep learning to augment scientific
data based on coarse-grained simulations, which is computationally cheap while
retaining satisfactory solution accuracy. However, most existing work focuses
on data-driven approaches that rely on rich training datasets and lack
sufficient physical constraints. To this end, we propose a novel and efficient
spatiotemporal super-resolution framework via physics-informed learning,
inspired by the independence between temporal and spatial derivatives in
partial differential equations (PDEs). The general principle is to leverage
temporal interpolation for flow estimation and then introduce
convolutional-recurrent neural networks to learn temporal refinement.
Furthermore, we employ stacked residual blocks with wide activation and
sub-pixel layers with pixel shuffle for spatial reconstruction, where feature
extraction is conducted in a low-resolution latent space. Moreover, we consider
hard imposition of boundary conditions in the network to improve reconstruction
accuracy. Results demonstrate the superior effectiveness and efficiency of the
proposed method compared with baseline algorithms through extensive numerical
experiments.
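
A simplified sketch of the spatial-reconstruction part described above (wide-activation residual blocks operating in a low-resolution latent space, followed by a sub-pixel pixel-shuffle upsampler; the channel widths, depth, and scale factor are assumptions, not the paper's exact configuration):

import torch
import torch.nn as nn

class WideResBlock(nn.Module):
    def __init__(self, ch, expansion=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch * expansion, 3, padding=1),   # widen before activation
            nn.ReLU(inplace=True),
            nn.Conv2d(ch * expansion, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class SubPixelSR(nn.Module):
    def __init__(self, in_ch=1, ch=32, n_blocks=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[WideResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Sequential(
            nn.Conv2d(ch, in_ch * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a scale x scale pixel grid
        )

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))

lr = torch.randn(1, 1, 32, 32)         # coarse-grained snapshot
print(SubPixelSR()(lr).shape)          # torch.Size([1, 1, 128, 128])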
The activated scaling behavior of quantum Griffiths singularity in two-dimensional superconductors
Quantum Griffiths singularity is characterized by the divergence of the
dynamical critical exponent with the activated scaling law and has been widely
observed in various two-dimensional superconductors. Recently, the direct
activated scaling analysis with the irrelevant correction has been proposed and
successfully used to analyze the experimental data of crystalline PdTe2 and
polycrystalline β-W films, which provides new evidence of quantum
Griffiths singularity. Here we show that the direct activated scaling analysis
is applicable to the experimental data of different superconducting films,
including trilayer Ga films and the LaAlO3/SrTiO3 interface superconductor. When
taking the irrelevant correction into account, we calculate the corrected sheet
resistance at ultralow temperatures. The scaling behavior of the corrected
resistance over a comparatively large temperature regime and the theoretical
fitting of the phase boundary give unambiguous evidence of quantum Griffiths
singularity. Compared with the previous method based on finite-size scaling,
the direct activated scaling analysis provides a more direct and precise way
to analyze the experimental data of quantum Griffiths singularity in diverse
two-dimensional superconductors.
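
For context, a commonly used form of the activated scaling ansatz near an infinite-randomness critical point (written here in generic notation; the constants and exponent values are not taken from this work) is

\[
  R(B,T) \;=\; \Phi\!\left[\,\delta\,\bigl(\ln(T_0/T)\bigr)^{1/(\nu\psi)}\right],
  \qquad \delta = \frac{B - B_c^{*}}{B_c^{*}},
\]

so that the effective dynamical exponent extracted from conventional finite-size scaling diverges on approach to the critical field,

\[
  \nu z_{\mathrm{eff}} \;\propto\; |\delta|^{-\nu\psi}, \qquad \nu\psi \simeq 0.6,
\]

the value expected for the two-dimensional infinite-randomness fixed point.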
Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing
Large language models (LLMs) have made impressive progress in natural
language processing. These models rely on proper human instructions (or
prompts) to generate suitable responses. However, the potential of LLMs is not
fully harnessed by commonly used prompting methods: many human-in-the-loop
algorithms employ ad hoc procedures for prompt selection, while automatic
prompt generation approaches essentially search over all possible prompts
randomly and inefficiently. We propose Evoke, an automatic prompt refinement
framework. In Evoke, there are two instances of the same LLM: one acts as a
reviewer (LLM-Reviewer) and scores the current prompt; the other acts as an
author (LLM-Author) and edits the prompt by considering the edit history and
the reviewer's feedback. Such an author-reviewer feedback loop ensures that the
prompt is refined in each iteration. We further integrate a data selection
approach into Evoke, where only the hard samples are exposed to the LLM. The
hard samples are more important because the LLM can develop a deeper
understanding of the tasks from them, while the model may already know how to
solve the easier
cases. Experimental results show that Evoke significantly outperforms existing
methods. For instance, in the challenging task of logical fallacy detection,
Evoke scores above 80, while all other baseline methods struggle to reach 20
- …
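
A high-level sketch of the author-reviewer loop with hard-sample selection might look like the following (the llm() callable, the 0-10 scoring prompt, the assumption that the reviewer's reply begins with a numeric score, and the hard-sample fraction are all placeholders for illustration, not Evoke's actual prompts or implementation):

def evoke_refine(task_prompt, dev_set, llm, iterations=5, hard_fraction=0.3):
    history = []
    for _ in range(iterations):
        # LLM-Reviewer: score how well the current prompt handles each dev example.
        reviews = [llm(f"Score 0-10 how well this prompt solves the example, then explain.\n"
                       f"Prompt: {task_prompt}\nExample: {ex}") for ex in dev_set]
        # Assumes each review starts with its numeric score.
        scored = sorted(zip(reviews, dev_set), key=lambda r: float(r[0].split()[0]))

        # Data selection: keep only the lowest-scoring ("hard") samples for the author.
        n_hard = max(1, int(hard_fraction * len(dev_set)))
        hard = scored[:n_hard]

        # LLM-Author: rewrite the prompt using the edit history and reviewer feedback.
        feedback = "\n".join(review for review, _ in hard)
        task_prompt = llm(f"Edit history: {history}\nReviewer feedback on hard examples:\n"
                          f"{feedback}\nRewrite the prompt to fix these failures.")
        history.append(task_prompt)
    return task_prompt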