AI-Generated Network Design: A Diffusion Model-based Learning Approach
Future networks pose intense demands for intelligent and customized
designs to cope with surging network scale, dynamically time-varying
environments, diverse user requirements, and complicated manual configuration.
However, traditional rule-based solutions heavily rely on human efforts and
expertise, while data-driven intelligent algorithms still lack interpretability
and generalization. In this paper, we propose AIGN (AI-Generated Network),
a novel intention-driven paradigm for network design, which allows operators to
quickly generate a variety of customized network solutions and achieve
expert-free problem optimization. Driven by a diffusion model-based learning
approach, AIGN has great potential to learn reward-maximizing trajectories,
automatically satisfy multiple constraints, adapt to different objectives and
scenarios, and even intelligently create novel designs and mechanisms unseen in
existing network environments. Finally, we conduct a use case to demonstrate
that AIGN can effectively guide the design of transmit power allocation in
digital twin-based access networks.
Comment: 7 pages, 3 figures
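As a rough illustration of the diffusion-based design idea, the sketch below runs a toy reverse-diffusion loop over a power-allocation vector, projecting onto per-link power bounds at each step. The function name, the update rule, and the stand-in `toy_model` are all invented for illustration and are not the paper's actual model.

```python
import numpy as np

def denoise_power_allocation(model, n_links, steps=50, p_max=1.0, seed=0):
    """Toy reverse-diffusion loop: start from pure noise and iteratively
    denoise toward a power-allocation vector, clipping to the per-link
    power constraint [0, p_max] after every step."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_links)      # pure noise at t = T
    for t in reversed(range(steps)):
        eps_hat = model(x, t)             # predicted noise (hypothetical model)
        x = x - eps_hat / steps           # crude denoising update
        x = np.clip(x, 0.0, p_max)        # enforce power constraints
    return x

# Stand-in "model" that nudges allocations toward equal power sharing.
toy_model = lambda x, t: x - 0.5
alloc = denoise_power_allocation(toy_model, n_links=4)
```

In a real system the noise predictor would be a trained network conditioned on the operator's intent and the network state; the constraint projection step is what lets the sampler "automatically satisfy multiple constraints".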
Large Language Models for Networking: Applications, Enabling Techniques, and Challenges
The rapid evolution of network technologies and the growing complexity of
network tasks necessitate a paradigm shift in how networks are designed,
configured, and managed. With a wealth of knowledge and expertise, large
language models (LLMs) are among the most promising candidates to drive this
shift. This paper aims to pave the way for constructing domain-adapted LLMs
for networking.
Firstly, we present potential LLM applications for vertical network fields and
showcase the mapping from natural language to network language. Then, several
enabling technologies are investigated, including parameter-efficient
finetuning and prompt engineering. The insight is that language understanding
and tool usage are both required for network LLMs. Driven by the idea of
embodied intelligence, we propose ChatNet, a domain-adapted network LLM
framework with access to various external network tools. ChatNet can
significantly reduce the time required for burdensome network planning tasks,
substantially improving efficiency. Finally, key challenges and future
research directions are highlighted.
Comment: 7 pages, 3 figures, 2 tables
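The "language understanding plus tool usage" idea can be caricatured as a dispatcher that maps a natural-language intent onto an external network tool. The keyword parser and the tool names below are invented stand-ins for an LLM and real network tooling, not part of the paper's framework.

```python
# Hypothetical tool registry: each entry wraps an external network utility.
def ping_tool(host: str) -> str:
    return f"ping {host}: ok"

def capacity_tool(link: str) -> str:
    return f"capacity({link}) = 10 Gbps"

TOOLS = {"ping": ping_tool, "capacity": capacity_tool}

def dispatch(intent: str) -> str:
    """Naive keyword parser standing in for the LLM: the first word selects
    the tool, the remainder becomes its argument."""
    name, _, arg = intent.partition(" ")
    tool = TOOLS.get(name)
    return tool(arg) if tool else f"no tool for intent: {intent!r}"

print(dispatch("ping router-1"))
```

In the ChatNet setting, the LLM replaces the keyword match: it interprets a free-form planning request, chooses among registered tools, and composes their outputs into an answer.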
Improved Feature Distillation via Projector Ensemble
In knowledge distillation, previous feature distillation methods mainly focus
on the design of loss functions and the selection of the distilled layers,
while the effect of the feature projector between the student and the teacher
remains under-explored. In this paper, we first discuss a plausible mechanism
of the projector with empirical evidence and then propose a new feature
distillation method based on a projector ensemble for further performance
improvement. We observe that the student network benefits from a projector even
if the feature dimensions of the student and the teacher are the same. Training
a student backbone without a projector can be considered a multi-task
learning process, namely achieving discriminative feature extraction for
classification and feature matching between the student and the teacher for
distillation at the same time. We hypothesize and empirically verify that
without a projector, the student network tends to overfit the teacher's feature
distributions despite having a different architecture and weight initialization.
This degrades the quality of the student's deep features that
are eventually used in classification. Adding a projector, on the other hand,
disentangles the two learning tasks and helps the student network to focus
better on the main feature extraction task while still being able to utilize
teacher features as guidance through the projector. Motivated by the positive
effect of the projector in feature distillation, we propose an ensemble of
projectors to further improve the quality of student features. Experimental
results on different datasets with a series of teacher-student pairs illustrate
the effectiveness of the proposed method.
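A minimal sketch of the projector-ensemble feature-matching idea, assuming linear projectors and a mean-squared-error matching objective (the paper's projectors are trainable networks; everything here is illustrative):

```python
import numpy as np

def projector_ensemble_loss(f_s, f_t, projectors):
    """Average feature-matching (MSE) loss over an ensemble of linear
    projectors applied to the student features. Each projector maps the
    student dimension d_s into the teacher dimension d_t."""
    losses = []
    for W in projectors:                  # W: (d_s, d_t)
        diff = f_s @ W - f_t              # project student into teacher space
        losses.append(np.mean(diff ** 2))
    return float(np.mean(losses))

rng = np.random.default_rng(0)
f_s = rng.standard_normal((8, 16))        # student features (batch, d_s)
f_t = rng.standard_normal((8, 32))        # teacher features (batch, d_t)
ens = [rng.standard_normal((16, 32)) * 0.1 for _ in range(3)]
loss = projector_ensemble_loss(f_s, f_t, ens)
```

Averaging over several independently initialized projectors is the ensemble component; during training this loss would be added to the ordinary classification loss, with the projectors discarded at inference time.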
Understanding the Effects of Projectors in Knowledge Distillation
Conventionally, during the knowledge distillation process (e.g. feature
distillation), an additional projector is often required to perform feature
transformation due to the dimension mismatch between the teacher and the
student networks. Interestingly, we discovered that even if the student and the
teacher have the same feature dimensions, adding a projector still helps to
improve the distillation performance. In addition, projectors even improve
logit distillation if we add them to the architecture too. Inspired by these
surprising findings and the general lack of understanding of the projectors in
the knowledge distillation process from existing literature, this paper
investigates the implicit role that projectors play but so far have been
overlooked. Our empirical study shows that the student with a projector (1)
obtains a better trade-off between the training accuracy and the testing
accuracy compared to the student without a projector when it has the same
feature dimensions as the teacher, (2) better preserves its similarity to the
teacher beyond shallow and numeric resemblance, from the view of Centered
Kernel Alignment (CKA), and (3) avoids the over-confidence that the teacher
exhibits at the testing phase. Motivated by the positive effects of projectors, we
propose a projector ensemble-based feature distillation method to further
improve distillation performance. Despite the simplicity of the proposed
strategy, empirical results from the evaluation of classification tasks on
benchmark datasets demonstrate the superior classification performance of our
method on a broad range of teacher-student pairs and verify from the aspects of
CKA and model calibration that the student's features are of improved quality
with the projector ensemble design.
Comment: arXiv admin note: text overlap with arXiv:2210.1527
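The CKA similarity used in the study above can be computed as follows; this is the standard linear-kernel formula for Centered Kernel Alignment, not necessarily the exact variant used in the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices
    (rows = examples). Returns a similarity in [0, 1], with 1 meaning the
    representations are identical up to an orthogonal transform and scale."""
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 8))
print(linear_cka(A, A))    # identical features give a value of ~1.0
```

Comparing `linear_cka(student_features, teacher_features)` across layers is a common way to check whether a student preserves similarity to its teacher "beyond shallow and numeric resemblance".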
Effects of Litchi chinensis fruit isolates on prostaglandin E2 and nitric oxide production in J774 murine macrophage cells
Background: Litchi chinensis is regarded as one of the 'heating' fruits in China, as it can cause serious inflammatory symptoms in people.
Methods: In the current study, the effects of litchi isolates on prostaglandin E2 (PGE2) and nitric oxide (NO) production in J774 murine macrophage cells were investigated.
Results: The AcOEt extract (EAE) of litchi effectively stimulated PGE2 production, and three compounds, benzyl alcohol, hydrobenzoin and 5-hydroxymethyl-2-furaldehyde (5-HMF), were isolated and identified from the EAE. Benzyl alcohol caused a marked, dose-dependent increase in PGE2 and NO production, with lipopolysaccharide (LPS) as a positive control. Hydrobenzoin and 5-HMF were found in litchi for the first time, and both stimulated PGE2 and NO production moderately in a dose-dependent manner. In addition, regulation of cyclooxygenase-2 (COX-2) and inducible nitric oxide synthase (iNOS) mRNA expression and NF-κB (p50) activation might be involved in the mechanism of this stimulation.
Conclusion: The study showed that some small-molecule compounds in litchi exert inflammatory effects in humans.
MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response
Large Language Models (LLMs) have shown immense potential in multimodal
applications, yet the convergence of textual and musical domains remains
relatively unexplored. To address this gap, we present MusiLingo, a novel
system for music caption generation and music-related query responses.
MusiLingo employs a single projection layer to align music representations from
the pre-trained frozen music audio model MERT with the frozen LLaMA language
model, bridging the gap between music audio and textual contexts. We train it
on an extensive music caption dataset and fine-tune it with instructional data.
Due to the scarcity of high-quality music Q&A datasets, we created the
MusicInstruct (MI) dataset from MusicCaps, tailored for open-ended music
inquiries. Empirical evaluations demonstrate its competitive performance in
generating music captions and composing music-related Q&A pairs. Our introduced
dataset enables notable advancements beyond previous ones.
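The single projection layer that bridges the frozen music encoder and the frozen LLM can be sketched as one linear map from the audio embedding space to the LLM hidden size; the dimensions below are illustrative, not the real MERT/LLaMA sizes.

```python
import numpy as np

def project_music_tokens(music_emb, W, b):
    """Single linear projection mapping frozen music-encoder embeddings
    (shape: seq x d_audio) into the language model's hidden size d_llm.
    In a MusiLingo-style setup this layer is the only trainable bridge;
    both the audio encoder and the LLM stay frozen."""
    return music_emb @ W + b               # (seq, d_audio) -> (seq, d_llm)

rng = np.random.default_rng(0)
d_audio, d_llm, seq = 768, 4096, 10        # illustrative dimensions
W = rng.standard_normal((d_audio, d_llm)) * 0.01
b = np.zeros(d_llm)
tokens = project_music_tokens(rng.standard_normal((seq, d_audio)), W, b)
```

The projected vectors are then treated as ordinary token embeddings and prepended to the text prompt, so the LLM can attend to the music content without any change to its own weights.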