News Recommendation with Attention Mechanism
This paper explores the area of news recommendation, a key component of
online information sharing. Initially, we provide a clear introduction to news
recommendation, defining the core problem and summarizing current methods and
notable recent algorithms. We then present our work on implementing the NRAM
(News Recommendation with Attention Mechanism), an attention-based approach for
news recommendation, and assess its effectiveness. Our evaluation shows that
NRAM has the potential to significantly improve how news content is
personalized for users on digital news platforms.
Comment: 7 pages, Journal of Industrial Engineering and Applied Science
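The abstract does not spell out NRAM's architecture, so the following is only an illustrative sketch, under assumed names and dimensions, of the general idea behind attention-based news recommendation: a user's clicked-news embeddings are weighted by attention against a candidate article and pooled into a user vector whose similarity to the candidate gives a relevance score.

```python
import numpy as np

def attention_user_vector(clicked_news, query):
    """Aggregate clicked-news embeddings into one user vector
    via dot-product attention (illustrative only, not NRAM itself)."""
    scores = clicked_news @ query              # (n_clicked,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    return weights @ clicked_news              # weighted sum -> user vector

rng = np.random.default_rng(0)
clicked = rng.normal(size=(10, 64))            # 10 previously read articles, 64-d embeddings
candidate = rng.normal(size=64)                # embedding of a candidate article

user_vec = attention_user_vector(clicked, candidate)
click_score = float(user_vec @ candidate)      # higher score -> recommend more strongly
print(f"relevance score: {click_score:.3f}")
```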
Switch as a Verifier: Toward Scalable Data Plane Checking via Distributed, On-Device Verification
Data plane verification (DPV) is important for finding network errors.
Current DPV tools employ a centralized architecture, where a server collects
the data planes of all devices and verifies them. Despite substantial efforts
on accelerating DPV, this centralized architecture is inherently unscalable. In
this paper, to tackle the scalability challenge of DPV, we circumvent the
scalability bottleneck of centralized design and design Coral, a distributed,
on-device DPV framework. The key insight of Coral is that DPV can be
transformed into a counting problem on a directed acyclic graph, which can be
naturally decomposed into lightweight tasks executed at network devices,
enabling scalability. Coral consists of (1) a declarative requirement
specification language, (2) a planner that employs a novel data structure DVNet
to systematically decompose global verification into on-device counting tasks,
and (3) a distributed verification (DV) protocol that specifies how on-device
verifiers communicate task results efficiently to collaboratively verify the
requirements. We implement a prototype of Coral. Extensive experiments with
real-world datasets (WAN/LAN/DC) show that Coral consistently achieves scalable
DPV across various networks and DPV scenarios, i.e., up to a 1250x speedup in
the burst-update scenario and up to a 202x speedup at the 80th percentile of
incremental verification compared with state-of-the-art DPV tools, with little
overhead on commodity network devices.
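As a rough illustration of the key insight that verification reduces to counting on a directed acyclic graph, the sketch below (with a made-up topology, not Coral's DVNet or DV protocol) shows how each node's count depends only on its successors' counts, which is what makes per-device, distributed computation possible.

```python
# Minimal sketch of the "counting on a DAG" idea: each node's count depends
# only on its direct successors, so a device can compute its own count once
# its downstream neighbours report theirs.  The topology and requirement here
# are invented for illustration.
edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
destination = "D"

counts = {}
def paths_to_dest(node):
    """Number of forwarding paths from `node` that reach the destination."""
    if node in counts:
        return counts[node]
    if node == destination:
        counts[node] = 1
    else:
        counts[node] = sum(paths_to_dest(nxt) for nxt in edges[node])
    return counts[node]

print(paths_to_dest("A"))  # 2 paths: A->B->D and A->C->D
# Requirements like "packets from A can reach D" then reduce to checks on these
# counts, which each device can evaluate locally from its neighbours' reports.
```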
Particle Filter SLAM for Vehicle Localization
Simultaneous Localization and Mapping (SLAM) presents a formidable challenge
in robotics, involving the dynamic construction of a map while concurrently
determining the precise location of the robotic agent within an unfamiliar
environment. This intricate task is further compounded by the inherent
"chicken-and-egg" dilemma, where accurate mapping relies on a dependable
estimation of the robot's location, and vice versa. Moreover, the computational
intensity of SLAM adds an additional layer of complexity, making it a crucial
yet demanding topic in the field. In our research, we address the challenges of
SLAM by adopting the Particle Filter SLAM method. Our approach leverages
encoded data and fiber optic gyro (FOG) information to enable precise
estimation of vehicle motion, while lidar technology contributes to
environmental perception by providing detailed insights into surrounding
obstacles. The integration of these data streams culminates in the
establishment of a Particle Filter SLAM framework, representing a key endeavor
in this paper to effectively navigate and overcome the complexities associated
with simultaneous localization and mapping in robotic systems.
Comment: 6 pages, Journal of Industrial Engineering and Applied Science
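For readers unfamiliar with the method, the following minimal sketch (not the authors' pipeline; the motion noise and measurement likelihoods are stand-ins) shows the predict / re-weight / resample loop of a particle filter, where encoder speed and FOG yaw increments drive the motion update and a lidar map-match score would supply the particle likelihoods.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                                    # number of particles
particles = np.zeros((N, 3))               # state per particle: [x, y, yaw]
weights = np.full(N, 1.0 / N)

def predict(particles, v, dyaw, dt, noise=(0.05, 0.01)):
    """Motion update from encoder speed v and FOG yaw increment dyaw."""
    yaw = particles[:, 2] + dyaw + rng.normal(0, noise[1], N)
    x = particles[:, 0] + v * dt * np.cos(yaw) + rng.normal(0, noise[0], N)
    y = particles[:, 1] + v * dt * np.sin(yaw) + rng.normal(0, noise[0], N)
    return np.column_stack([x, y, yaw])

def update(weights, likelihoods):
    """Re-weight particles by their lidar scan-match likelihood."""
    w = weights * likelihoods
    return w / w.sum()

def resample(particles, weights):
    """Resample particles in proportion to their weights."""
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

# One illustrative filter step with made-up measurements.
particles = predict(particles, v=1.0, dyaw=0.02, dt=0.1)
likelihoods = rng.uniform(0.1, 1.0, N)     # placeholder for a real lidar map-match score
weights = update(weights, likelihoods)
particles, weights = resample(particles, weights)
print(particles[:, :2].mean(axis=0))       # rough pose estimate
```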
Enhanced heterologous protein productivity by genome reduction in Lactococcus lactis NZ9000
Background: The implementation of novel chassis organisms to be used as microbial cell factories in industrial applications is an intensive research field. Lactococcus lactis, one of the most extensively studied model organisms, exhibits a superior ability to serve as an engineered host for fermentation of desirable products. However, few studies have reported on genome reduction of L. lactis as a clean background for functional genomic studies and as a model chassis for fermentation of desirable products.
Results: Four large nonessential DNA regions, accounting for 2.83% of the L. lactis NZ9000 (L. lactis 9 k) genome (2,530,294 bp), were deleted using the Cre-loxP deletion system as the first steps toward a minimized genome in this study. The mutants were compared with the parental strain in several physiological traits and evaluated as microbial cell factories for heterologous protein production (intracellular and secretory expression) with the red fluorescent protein (RFP) and the bacteriocin leucocin C (LecC) as reporters. The four mutants grew faster, yielded more biomass, achieved higher adenosine triphosphate content, and showed diminished maintenance demands compared with the wild strain in the two media tested. In particular, L. lactis 9 k-4, carrying the largest deletion, was identified as the optimal candidate host for recombinant protein production. With nisin induction, not only the transcriptional efficiency but also the production levels of the expressed reporters were improved approximately three- to fourfold compared with the wild strain. The expression of the lecC gene controlled by the strong constitutive promoters P5 and P8 in L. lactis 9 k-4 was also significantly improved.
Conclusions: The genome-streamlined L. lactis 9 k-4 outcompeted the parental strain in the physiological traits assessed. Moreover, L. lactis 9 k-4 exhibited good properties as a platform organism for protein production. In future work, the genome of L. lactis will be maximally reduced using our specific design to provide an even cleaner background for functional genomics studies than L. lactis 9 k-4 constructed in this study. Furthermore, such an improved background will potentially be available for use in biotechnology.
Rumor Detection with a novel graph neural network approach
The wide spread of rumors on social media has caused a negative impact on
people's daily life, leading to potential panic, fear, and mental health
problems for the public. How to debunk rumors as early as possible remains a
challenging problem. Existing studies mainly leverage the information propagation
structure to detect rumors, while very few works focus on the correlation among
users, who may coordinate to spread rumors in order to gain wide
popularity. In this paper, we propose a new detection model that jointly
learns both the representations of user correlation and information propagation
to detect rumors on social media. Specifically, we leverage graph neural
networks to learn the representations of user correlation from a bipartite
graph that describes the correlations between users and source tweets, and the
representations of information propagation with a tree structure. Then we
combine the learned representations from these two modules to classify the
rumors. Since malicious users intend to subvert our model after deployment, we
further develop a greedy attack scheme to analyze the cost of three adversarial
attacks: graph attack, comment attack, and joint attack. Evaluation results on
two public datasets illustrate that the proposed model outperforms the
state-of-the-art rumor detection models. We also demonstrate our method
performs well for early rumor detection. Moreover, the proposed detection
method is more robust to adversarial attacks compared to the best existing
method. Importantly, we show that it requires a high cost for attackers to
subvert the user correlation pattern, demonstrating the importance of considering
user correlation for rumor detection.
Comment: 10 pages, 5 figures
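The abstract leaves the network details unspecified; the snippet below is only a hedged sketch of the fusion step it describes, in which embeddings from a user-correlation encoder and a propagation-tree encoder (both stubbed here with random tensors) are concatenated and passed to a classifier. Layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class RumorClassifier(nn.Module):
    """Illustrative fusion head: concatenate a user-correlation embedding and a
    propagation-tree embedding, then classify.  The GNN encoders that would
    produce these embeddings are out of scope in this sketch."""
    def __init__(self, d_user=64, d_prop=64, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(d_user + d_prop, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, user_repr, prop_repr):
        return self.head(torch.cat([user_repr, prop_repr], dim=-1))

model = RumorClassifier()
user_repr = torch.randn(8, 64)   # e.g. pooled output of a GNN over the user-tweet bipartite graph
prop_repr = torch.randn(8, 64)   # e.g. pooled output of an encoder over retweet propagation trees
logits = model(user_repr, prop_repr)
print(logits.shape)              # torch.Size([8, 2])
```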
Image Captioning in news report scenario
Image captioning strives to generate pertinent captions for specified images,
situating itself at the crossroads of Computer Vision (CV) and Natural Language
Processing (NLP). This endeavor is of paramount importance with far-reaching
applications in recommendation systems, news outlets, social media, and beyond.
Particularly within the realm of news reporting, captions are expected to
encompass detailed information, such as the identities of celebrities captured
in the images. However, much of the existing body of work primarily centers
around understanding scenes and actions. In this paper, we explore the realm of
image captioning specifically tailored for celebrity photographs, illustrating
its broad potential for enhancing news industry practices. This exploration
aims to augment automated news content generation, thereby facilitating a more
nuanced dissemination of information. Our endeavor opens a broader horizon,
enriching the narrative in news reporting through a more intuitive image
captioning framework.
Comment: 10 pages, 4 figures
Identification of Natural Compound Carnosol as a Novel TRPA1 Receptor Agonist
The transient receptor potential ankyrin 1 (TRPA1) cation channel is one of the well-known targets for pain therapy. Herbal medicine is a rich source of new drugs and potentially useful therapeutic agents. To discover novel natural TRPA1 agonists, compounds isolated from Chinese herbs were screened using a cell-based calcium mobilization assay. Out of 158 natural compounds derived from traditional Chinese herbal medicines, carnosol was identified as a novel agonist of TRPA1 with an EC50 value of 12.46 µM. The agonistic effect of carnosol on TRPA1 could be blocked by A-967079, a selective TRPA1 antagonist. Furthermore, the specificity of carnosol was verified, as it showed no significant effects on two other typical members of the TRP family: TRPM8 and TRPV3. Carnosol exhibited anti-inflammatory and anti-nociceptive properties; the activation of TRPA1 might be responsible for the modulation of inflammatory nociceptive transmission. Collectively, our findings indicate that carnosol is a new anti-nociceptive agent targeting TRPA1 whose biological role in pain therapy merits further exploration.
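As background on how an EC50 such as the reported 12.46 µM is typically obtained from a calcium mobilization assay, the sketch below fits a four-parameter Hill (logistic) curve to made-up dose-response data; the concentrations and responses are illustrative stand-ins, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Made-up normalized calcium-signal readout at several agonist concentrations (µM);
# a real EC50 estimate would come from the actual assay data.
conc = np.array([1, 3, 10, 30, 100], dtype=float)
resp = np.array([0.08, 0.22, 0.45, 0.78, 0.95])

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 10.0, 1.0])
print(f"fitted EC50 ~ {params[2]:.2f} µM")
```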
DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model
Text-to-image (T2I) generative models have attracted significant attention
and found extensive applications within and beyond academic research. For
example, the Civitai community, a platform for T2I innovation, currently hosts
an impressive array of 74,492 distinct models. However, this diversity presents
a formidable challenge in selecting the most appropriate model and parameters,
a process that typically requires numerous trials. Drawing inspiration from the
tool usage research of large language models (LLMs), we introduce DiffAgent, an
LLM agent designed to screen for an accurate model selection in seconds via API calls.
DiffAgent leverages a novel two-stage training framework, SFTA, enabling it to
accurately align T2I API responses with user input in accordance with human
preferences. To train and evaluate DiffAgent's capabilities, we present
DABench, a comprehensive dataset encompassing an extensive range of T2I APIs
from the community. Our evaluations reveal that DiffAgent not only excels in
identifying the appropriate T2I API but also underscores the effectiveness of
the SFTA training framework. Codes are available at
https://github.com/OpenGVLab/DiffAgent.
Comment: Published as a conference paper at CVPR 2024
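DiffAgent itself is an SFTA-fine-tuned LLM, which the abstract does not detail; the sketch below only illustrates the shape of the selection step, asking a generic LLM (stubbed out) to route a user request to one of several candidate T2I APIs. The catalog format, prompt wording, and function names are assumptions, not DABench's or DiffAgent's.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend; DiffAgent itself is an
    SFTA-fine-tuned LLM, which this stub does not reproduce."""
    raise NotImplementedError("plug in an LLM client here")

def select_t2i_api(user_request: str, api_catalog: list[dict]) -> dict:
    """Ask the LLM to pick the best-matching T2I API and its parameters.
    The catalog schema and prompt wording are illustrative assumptions."""
    catalog_text = "\n".join(
        f"- {api['name']}: {api['description']}" for api in api_catalog
    )
    prompt = (
        "You are an assistant that routes text-to-image requests.\n"
        f"Available APIs:\n{catalog_text}\n\n"
        f"User request: {user_request}\n"
        'Reply with JSON: {"api": <name>, "params": {...}}'
    )
    return json.loads(call_llm(prompt))

# Example call shape (raises until call_llm is implemented):
# choice = select_t2i_api("a watercolor fox at sunset",
#                         [{"name": "anime-v3", "description": "stylized anime renders"},
#                          {"name": "photoreal-xl", "description": "photorealistic scenes"}])
```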
Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review
This systematic literature review comprehensively examines the application of
Large Language Models (LLMs) in forecasting and anomaly detection, highlighting
the current state of research, inherent challenges, and prospective future
directions. LLMs have demonstrated significant potential in parsing and
analyzing extensive datasets to identify patterns, predict future events, and
detect anomalous behavior across various domains. However, this review
identifies several critical challenges that impede their broader adoption and
effectiveness, including the reliance on vast historical datasets, issues with
generalizability across different contexts, the phenomenon of model
hallucinations, limitations within the models' knowledge boundaries, and the
substantial computational resources required. Through detailed analysis, this
review discusses potential solutions and strategies to overcome these
obstacles, such as integrating multimodal data, advancements in learning
methodologies, and emphasizing model explainability and computational
efficiency. Moreover, this review outlines critical trends that are likely to
shape the evolution of LLMs in these fields, including the push toward
real-time processing, the importance of sustainable modeling practices, and the
value of interdisciplinary collaboration. Conclusively, this review underscores
the transformative impact LLMs could have on forecasting and anomaly detection
while emphasizing the need for continuous innovation, ethical considerations,
and practical solutions to realize their full potential.