Evaluating Multi-Agent Coordination Abilities in Large Language Models
A pivotal aim in contemporary AI research is to develop agents proficient in
multi-agent coordination, enabling effective collaboration with both humans and
other systems. Large Language Models (LLMs), with their notable ability to
understand, generate, and interpret language in a human-like manner, stand out
as promising candidates for the development of such agents. In this study, we
build and assess the effectiveness of agents crafted using LLMs in various
coordination scenarios. We introduce the LLM-Coordination (LLM-Co) Framework,
specifically designed to enable LLMs to play coordination games. With the
LLM-Co framework, we conduct our evaluation with three game environments and
organize the evaluation into five aspects: Theory of Mind, Situated Reasoning,
Sustained Coordination, Robustness to Partners, and Explicit Assistance. First,
the evaluation of Theory of Mind and Situated Reasoning reveals the
capability of LLMs to infer a partner's intentions and to reason about actions
accordingly. Then, the evaluation of Sustained Coordination and Robustness
to Partners further showcases the ability of LLMs to coordinate with an unknown
partner on complex long-horizon tasks, outperforming Reinforcement Learning
baselines. Lastly, to test Explicit Assistance, which refers to the ability of
an agent to offer help proactively, we introduce two novel layouts into the
Overcooked-AI benchmark, examining whether agents can prioritize helping their
partners, sacrificing time that could have been spent on their own tasks. This
research underscores the promising capabilities of LLMs in sophisticated
coordination environments and reveals their potential for building strong
real-world agents for multi-agent coordination.
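As an illustration of the kind of agent loop such a framework implies, here is a minimal sketch. The prompt format, the action list, and the `query_llm` interface are hypothetical stand-ins, not the framework's actual API:

```python
# Hypothetical sketch of an LLM-driven coordination agent loop.
# The action space and prompt wording below are illustrative inventions.

LEGAL_ACTIONS = ["pick up onion", "cook soup", "deliver soup", "wait"]

def build_prompt(state, partner_action):
    """Serialize the game state and the partner's last action into a prompt,
    so the model can reason about the partner's intention before acting."""
    return (
        f"State: {state}\n"
        f"Partner's last action: {partner_action}\n"
        f"Choose exactly one action from {LEGAL_ACTIONS}.\nAction:"
    )

def choose_action(query_llm, state, partner_action):
    """Query the LLM and fall back to 'wait' if the reply is not a legal action."""
    reply = query_llm(build_prompt(state, partner_action)).strip().lower()
    return reply if reply in LEGAL_ACTIONS else "wait"

# Deterministic stand-in for a real LLM, for illustration only.
mock_llm = lambda prompt: "deliver soup"
print(choose_action(mock_llm, {"pot": "full"}, "cook soup"))  # deliver soup
```

Validating the model's free-form reply against a fixed action list is one simple way to keep a language model inside a game's legal move set.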
ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection
Effective feature fusion of multispectral images plays a crucial role in
multispectral object detection. Previous studies have demonstrated the
effectiveness of feature fusion using convolutional neural networks, but these
methods are sensitive to image misalignment due to the inherent deficiency of
local-range feature interaction, resulting in performance degradation. To
address this issue, a novel feature fusion framework of dual cross-attention
transformers is proposed to model global feature interaction and capture
complementary information across modalities simultaneously. This framework
enhances the discriminability of object features through the query-guided
cross-attention mechanism, leading to improved performance. However, stacking
multiple transformer blocks for feature enhancement incurs a large number of
parameters and high spatial complexity. To handle this, inspired by the human
process of reviewing knowledge, an iterative interaction mechanism is proposed
to share parameters among block-wise multimodal transformers, reducing model
complexity and computation cost. The proposed method is general and effective
to be integrated into different detection frameworks and used with different
backbones. Experimental results on KAIST, FLIR, and VEDAI datasets show that
the proposed method achieves superior performance and faster inference, making
it suitable for various practical scenarios. Code will be available at
https://github.com/chanchanchan97/ICAFusion.
Comment: submitted to Pattern Recognition Journal, minor revision
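The parameter-sharing idea can be illustrated with a toy single-head cross-attention in NumPy. The shapes, the residual connection, and fusion by concatenation are simplifying assumptions made for this sketch; the real layer definitions are in the linked repository:

```python
import numpy as np

# Toy single-head cross-attention with weights REUSED across iterations,
# in the spirit of the iterative interaction mechanism described above.
# Dimensions and fusion scheme are illustrative assumptions.

rng = np.random.default_rng(0)
d = 16  # feature dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    """Query-guided attention: one modality's features attend to the other's."""
    q, k, v = query_feats @ Wq, context_feats @ Wk, context_feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    return query_feats + attn @ v  # residual connection

def iterative_fusion(rgb, thermal, n_iters=3):
    """Apply the same weight matrices every iteration, so the parameter
    count stays constant instead of growing with the number of blocks."""
    for _ in range(n_iters):
        rgb, thermal = cross_attention(rgb, thermal), cross_attention(thermal, rgb)
    return np.concatenate([rgb, thermal], axis=-1)

fused = iterative_fusion(rng.standard_normal((32, d)), rng.standard_normal((32, d)))
print(fused.shape)  # (32, 32)
```

Sharing one set of projection matrices across iterations is what keeps model complexity fixed while still allowing repeated cross-modal refinement.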
Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments
Super-resolution of images captured in ultra-dark environments is a practical
yet challenging problem that has received little attention. Due
to uneven illumination and low signal-to-noise ratio in dark environments, a
multitude of problems such as lack of detail and color distortion may be
magnified in the super-resolution process compared to normal-lighting
environments. Consequently, conventional low-light enhancement or
super-resolution methods, whether applied individually or in a cascaded manner
to this problem, often encounter limitations in recovering luminance, color
fidelity, and intricate details. To address these issues, this paper proposes a
specialized dual-modulated learning framework that, for the first time,
attempts to deeply dissect the nature of the low-light super-resolution task.
Leveraging natural image color characteristics, we introduce a self-regularized
luminance constraint as a prior for addressing uneven lighting. Expanding on
this, we develop Illuminance-Semantic Dual Modulation (ISDM) components to
enhance feature-level preservation of illumination and color details. In addition,
instead of deploying naive up-sampling strategies, we design the
Resolution-Sensitive Merging Up-sampler (RSMU) module that brings together
different sampling modalities as substrates, effectively mitigating the
presence of artifacts and halos. Comprehensive experiments showcase the
applicability and generalizability of our approach to diverse and challenging
ultra-low-light conditions, outperforming state-of-the-art methods with a
notable improvement (i.e., 5% in PSNR and 43% in LPIPS).
Especially noteworthy is the 19-fold increase in the RMSE score, underscoring
our method's exceptional generalization across different darkness levels. The
code will be available online upon publication of the paper.
Comment: 9 pages
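For reference, PSNR, one of the metrics reported above, is defined as 10·log10(peak²/MSE). A minimal implementation:

```python
import numpy as np

# PSNR for images with values in [0, peak]; higher is better.

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1  # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 1))  # 20.0
```

Because the scale is logarithmic, a 5% relative PSNR gain at typical restoration levels corresponds to a fixed additive improvement in dB.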
The K giant stars from the LAMOST survey data I: identification, metallicity, and distance
We present a support vector machine classifier to identify the K giant stars
from the LAMOST survey directly using their spectral line features. The
completeness of the identification is about 75% for tests based on LAMOST
stellar parameters. The contamination in the identified K giant sample is lower
than 2.5%. Applying the classification method to about 2 million LAMOST spectra
observed during the pilot survey and the first year survey, we select 298,036 K
giant candidates. The metallicities of the sample are also estimated, based on
the equivalent widths of the Mg and iron lines. A Bayesian method is then
developed to estimate the
posterior probability of the distance for the K giant stars, based on the
estimated metallicity and 2MASS photometry. The synthetic isochrone-based
distance estimates have been calibrated using 7 globular clusters with a wide
range of metallicities. At the median brightness of the K giant sample, the
uncertainty of the estimated distance modulus is about 0.6 mag, corresponding
to roughly 30% in distance. As a scientific verification
case, the trailing arm of the Sagittarius stream is clearly identified with the
selected K giant sample. Moreover, at about 80 kpc from the Sun, we use our K
giant stars to confirm a detection of stream members near the apo-center of the
trailing tail. These rediscoveries of the features of the Sagittarius stream
illustrate the potential of the LAMOST survey for detecting substructures in
the halo of the Milky Way.
Comment: 24 pages, 20 figures, submitted to Ap
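The standard distance-modulus relations behind such estimates can be sketched as follows. The 16.7 mag example value is illustrative, not the sample's actual median, and the paper's full Bayesian method is considerably more involved:

```python
import math

# Standard relations linking distance modulus to distance, and first-order
# error propagation from a modulus uncertainty to a fractional distance error.

def distance_pc(mu):
    """Distance in parsecs from the distance modulus mu = 5 * log10(d / 10 pc)."""
    return 10 ** (mu / 5.0 + 1.0)

def fractional_distance_error(sigma_mu):
    """First-order propagation: sigma_d / d = ln(10) / 5 * sigma_mu."""
    return math.log(10) / 5.0 * sigma_mu

print(round(distance_pc(16.7) / 1e3, 1))         # 21.9 (kpc, illustrative mu)
print(round(fractional_distance_error(0.6), 2))  # 0.28, i.e. ~28% in distance
```

This is why a 0.6 mag modulus uncertainty maps onto roughly a 30% distance uncertainty, independent of the star's actual distance.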
Effects of dietary guava leaf aqueous extract supplementation on growth, antioxidant capacity, and non-specific immunity in mud crab <em>Scylla paramamosain</em>
Mud crabs (*Scylla paramamosain*) were fed five diets containing different concentrations of guava leaf aqueous extract (0 mg·kg^--1^, 80 mg·kg^--1^, 160 mg·kg^--1^, 320 mg·kg^--1^, and 640 mg·kg^--1^) for 30 days. Mud crabs in the 320 mg·kg^--1^ guava-leaf extract group outperformed the control group in survival rate (SR), weight gain rate (WGR), and specific growth rate (SGR). Compared to the control group, mud crabs in the 320 mg·kg^--1^ guava-leaf extract group also had significantly higher levels of lipase (LPS), pepsin, lysozyme (LZM), superoxide dismutase (SOD), acid phosphatase (ACP), and glutathione (GSH) (*P \< 0.05*). Amylase (AMS) activity was significantly decreased in all experimental groups (*P \< 0.05*). Malondialdehyde (MDA) content in the hepatopancreas of mud crabs in the 160 mg·kg^--1^, 320 mg·kg^--1^, and 640 mg·kg^--1^ guava-leaf extract groups was significantly reduced compared to the control group (*P \< 0.05*). Additionally, real-time PCR results showed that the expression levels of *GPx3*, *CAT*, and *JNK* were all considerably increased in the 80 mg·kg^--1^ guava-leaf extract group compared to the control group (*P \< 0.05*). In the 160 mg·kg^--1^, 320 mg·kg^--1^, and 640 mg·kg^--1^ guava-leaf extract groups, the expression levels of *SOD* genes were considerably greater than in the control (*P \< 0.05*), consistent with the level of SOD activity. *GST* and *P53* gene expression levels were significantly up-regulated in the 80 mg·kg^--1^, 160 mg·kg^--1^, 320 mg·kg^--1^, and 640 mg·kg^--1^ guava-leaf extract groups compared to the control group (*P \< 0.05*). Overall, the addition of 160 mg·kg^--1^ to 320 mg·kg^--1^ guava-leaf extract to the feed of *Scylla paramamosain* promoted growth, enhanced the activities of digestive and antioxidant enzymes, and strengthened immunity.
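For context, WGR and SGR are conventionally defined as follows. These are the standard aquaculture formulas (the paper may use slight variants), and the weights below are made up for illustration:

```python
import math

# Conventional aquaculture growth metrics; example weights are hypothetical.

def weight_gain_rate(w_initial, w_final):
    """WGR (%) = 100 * (Wf - Wi) / Wi."""
    return 100.0 * (w_final - w_initial) / w_initial

def specific_growth_rate(w_initial, w_final, days):
    """SGR (% per day) = 100 * (ln Wf - ln Wi) / days."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

# e.g. a crab growing from 20 g to 35 g over the 30-day trial:
print(round(weight_gain_rate(20.0, 35.0), 1))          # 75.0
print(round(specific_growth_rate(20.0, 35.0, 30), 2))  # 1.87
```

SGR uses the logarithm of the weights because growth over a feeding trial is closer to exponential than linear.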