130 research outputs found
Solitary wave fission and fusion in the (2+1)-dimensional generalized Broer–Kaup system
Via a special Painlevé–Bäcklund transformation and the linear superposition theorem, we derive the general variable separation solution of the (2+1)-dimensional generalized Broer–Kaup system. Based on this general variable separation solution and suitable choices of the separated variable functions, new types of V-shaped and A-shaped solitary wave fusion and Y-shaped solitary wave fission phenomena are reported.
Research on the Application of Online and Offline Mixed Teaching Mode of Marketing Course Based on the BOPPPS Model
BOPPPS teaching fully integrates the advantages of online self-study and offline classroom courses. It has been widely used in college education and has been shown to improve students' ability to solve problems. It also has a significant effect on students' sense of self-efficacy, stimulates their interest in learning, and strengthens their ability to learn independently in practice. During the study, the team explored and practiced an online and offline mixed teaching mode for the marketing course on the Wisdom Tree (Zhihuishu) teaching platform and built teaching resources for students to study and discuss on their own, providing a reference for future blended online teaching.
Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling
Transformer-based models have achieved great success on sentence pair
modeling tasks, such as answer selection and natural language inference (NLI).
These models generally perform cross-attention over input pairs, leading to
prohibitive computational costs. Recent studies propose dual-encoder and late
interaction architectures for faster computation. However, the trade-off between
the expressiveness of cross-attention and the computational speedup still needs
better coordination. To this end, this paper introduces MixEncoder, a novel paradigm for
efficient sentence pair modeling. MixEncoder involves a light-weight
cross-attention mechanism. It conducts query encoding only once while modeling
the query-candidate interaction in parallel. Extensive experiments conducted on
four tasks demonstrate that our MixEncoder can speed up sentence pairing by
over 113x while achieving comparable performance as the more expensive
cross-attention models.
Comment: Accepted to EMNLP 202
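The once-only query encoding plus parallel candidate interaction described above can be sketched in NumPy. This is an illustrative simplification, not the paper's actual architecture: the real MixEncoder builds on a transformer encoder, while `encode_query_once` here is only a stand-in linear projection and the shapes are assumed for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_query_once(query_tokens, d=16, seed=0):
    # Stand-in for the expensive encoder pass over the query:
    # in MixEncoder this runs only ONCE per query.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((query_tokens.shape[-1], d))
    return query_tokens @ W  # (q_len, d)

def light_cross_attention(query_enc, candidate_encs):
    # Every candidate attends to the SAME pre-computed query encoding,
    # so scoring n candidates is one batched matmul rather than n full
    # cross-attention passes over the concatenated pair.
    scores = candidate_encs @ query_enc.T / np.sqrt(query_enc.shape[-1])
    attn = softmax(scores, axis=-1)       # (n, c_len, q_len)
    return attn @ query_enc               # (n, c_len, d)
```

Because the query pass runs once and candidates are processed as one batch, the per-pair cost drops from a full joint encoding to a single lightweight attention step, which is the source of the reported speedup.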
When Less is Enough: Positive and Unlabeled Learning Model for Vulnerability Detection
Automated code vulnerability detection has gained increasing attention in
recent years. The deep learning (DL)-based methods, which implicitly learn
vulnerable code patterns, have proven effective in vulnerability detection. The
performance of DL-based methods usually relies on the quantity and quality of
labeled data. However, the current labeled data are generally automatically
collected, such as crawled from human-generated commits, making it hard to
ensure the quality of the labels. Prior studies have demonstrated that the
non-vulnerable code (i.e., negative labels) tends to be unreliable in
commonly-used datasets, while vulnerable code (i.e., positive labels) is more
determined. Considering the large numbers of unlabeled data in practice, it is
necessary and worth exploring to leverage the positive data and large numbers
of unlabeled data for more accurate vulnerability detection.
In this paper, we focus on the Positive and Unlabeled (PU) learning problem
for vulnerability detection and propose a novel model named PILOT, i.e.,
PositIve and unlabeled Learning mOdel for vulnerability deTection. PILOT only
learns from positive and unlabeled data for vulnerability detection. It mainly
contains two modules: (1) A distance-aware label selection module, aiming at
generating pseudo-labels for selected unlabeled data, which involves the
inter-class distance prototype and progressive fine-tuning; (2) A
mixed-supervision representation learning module to further alleviate the
influence of noise and enhance the discrimination of representations.
Comment: This paper is accepted by ASE 202
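As an illustration only, a distance-aware pseudo-label selection step might look like the sketch below. The single positive prototype, Euclidean distance, and quantile thresholds are assumptions made for exposition; PILOT's actual module uses an inter-class distance prototype combined with progressive fine-tuning.

```python
import numpy as np

def select_pseudo_labels(pos_embs, unl_embs, pos_quantile=0.25, neg_quantile=0.75):
    # Prototype of the (reliable) positive class: mean of positive embeddings.
    proto = pos_embs.mean(axis=0)
    # Distance of each unlabeled example to the positive prototype.
    dists = np.linalg.norm(unl_embs - proto, axis=1)
    lo, hi = np.quantile(dists, [pos_quantile, neg_quantile])
    labels = np.full(len(unl_embs), -1)   # -1: left unlabeled this round
    labels[dists <= lo] = 1               # near the prototype: pseudo-positive
    labels[dists >= hi] = 0               # far from the prototype: pseudo-negative
    return labels, dists
```

In a progressive scheme, only the confidently pseudo-labeled examples would join the next fine-tuning round, after which embeddings, the prototype, and the selection are recomputed.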
Towards Modeling Software Quality of Virtual Reality Applications from Users' Perspectives
Virtual Reality (VR) technology has become increasingly popular in recent
years as a key enabler of the Metaverse. VR applications have unique
characteristics, including revolutionized human-computer interaction
mechanisms, that distinguish them from traditional software. Hence, user
expectations for the software quality of VR applications diverge from those for
traditional software. Investigating these quality expectations is crucial for
the effective development and maintenance of VR applications, which remains an
under-explored area in prior research.
To bridge the gap, we conduct the first large-scale empirical study to model
the software quality of VR applications from users' perspectives. To this end,
we analyze 1,132,056 user reviews of 14,150 VR applications across seven app
stores through a semiautomatic review mining approach. We construct a taxonomy
of 12 software quality attributes that are of major concern to VR users. Our
analysis reveals that VR-specific quality attributes, closely related to the
most distinctive properties of VR applications such as revolutionized
interaction mechanisms and immersive experiences, are of utmost importance to
users. Our examination of relevant user complaints reveals the major
factors impacting user satisfaction with VR-specific quality attributes. We
identify that poor design or implementation of the movement mechanisms, control
mechanisms, multimedia systems, and physics can significantly degrade the user
experience. Moreover, we discuss the implications of VR quality assurance for
both developers and researchers to shed light on future work. For instance, we
suggest developers implement sufficient accessibility and comfort options for
users with mobility limitations, sensory impairments, and other specific needs
to customize the interaction mechanisms. Our datasets and results will be
released to facilitate follow-up studies.
On the Feasibility of Specialized Ability Stealing for Large Language Code Models
Recent progress in large language code models (LLCMs) has led to a dramatic
surge in their use in software development. Nevertheless, it is widely known
that training a well-performing LLCM requires substantial human effort for data
collection and high-quality annotation. Additionally, the training dataset may be
proprietary (or partially open source to the public), and the training process
is often conducted on a large-scale cluster of GPUs with high costs. Inspired
by the recent success of imitation attacks in stealing computer vision and
natural language models, this work launches the first imitation attack on
LLCMs: by querying a target LLCM with carefully-designed queries and collecting
the outputs, the adversary can train an imitation model that closely mimics
the behavior of the target LLCM. We systematically investigate the effectiveness
of launching imitation attacks under different query schemes and different LLCM
tasks. We also design novel methods to polish the LLCM outputs, resulting in an
effective imitation training process. We summarize our findings and provide
lessons harvested in this study that can help better depict the attack surface
of LLCMs. Our research contributes to the growing body of knowledge on
imitation attacks and defenses in deep neural models, particularly in the
domain of code-related tasks.
Comment: 11 page
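At a high level, the query-collect-train loop behind such an imitation attack can be sketched as follows. Here `target_model`, `train_step`, and the `polish` filter are hypothetical placeholders; the paper's actual query schemes and output-polishing methods are considerably more elaborate.

```python
def imitate(target_model, queries, train_step, polish=lambda s: s.strip()):
    """Generic imitation-attack loop: query, polish, filter, fine-tune."""
    # 1) Query the target LLCM with crafted inputs and collect its outputs.
    corpus = [(q, polish(target_model(q))) for q in queries]
    # 2) Drop degenerate (empty) outputs -- a crude stand-in for the
    #    output-polishing step described in the abstract.
    corpus = [(q, out) for q, out in corpus if out]
    # 3) Fine-tune the imitation model on the distilled query/output pairs.
    for q, out in corpus:
        train_step(q, out)
    return corpus
```

The attack surface this loop exposes is exactly what the study measures: how much of the target's specialized ability transfers through nothing but black-box query/response pairs.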