Collaboration based Multi-Label Learning
It is well known that exploiting label correlations is crucial to
multi-label learning. Most existing approaches take label correlations
as prior knowledge, which may not correctly characterize the real relationships
among labels. Besides, label correlations are normally used to regularize the
hypothesis space, while the final predictions are not explicitly correlated. In
this paper, we suggest that for each individual label, the final prediction
involves the collaboration between its own prediction and the predictions of
other labels. Based on this assumption, we first propose a novel method to
learn the label correlations via sparse reconstruction in the label space.
Then, by seamlessly integrating the learned label correlations into model
training, we propose a novel multi-label learning approach that aims to
explicitly account for the correlated predictions of labels while training the
desired model simultaneously. Extensive experimental results show that our
approach outperforms state-of-the-art counterparts.
Comment: Accepted by AAAI-1
EnPAC: Petri Net Model Checking for Linear Temporal Logic
State generation and exploration (counterexample search) are the two cores of
explicit-state Petri net model checking for linear temporal logic (LTL).
Traditional state generation updates a structure to reduce the computation of
all transitions and frequently encodes/decodes to read each encoded state. We
present an optimized on-demand calculation of enabled transitions via a dynamic
fireset, avoiding such a structure. We also propose a direct read/write (DRW)
operation on encoded markings without decoding and re-encoding to make state
generation faster and reduce memory consumption. To search counterexamples more
quickly under an on-the-fly framework, we add heuristic information to the
Büchi automaton to guide the exploration toward accepting states.
The above strategies can optimize existing methods for LTL model checking. We
implement these optimization strategies in a Petri net model-checking tool
called EnPAC (Enhanced Petri-net Analyser and Checker) for linear temporal
logic. Then, we evaluate it on the MCC (Model Checking Contest) benchmarks;
the results show a drastic improvement over the existing methods.
Comment: 11 pages, 5 figure
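As a rough illustration of the enabled-transition computation that EnPAC optimizes, the sketch below checks enabledness and fires transitions over a plain dict-based marking. This is a textbook Petri-net kernel, not the paper's dynamic fireset or its DRW operation on encoded markings, and all names are illustrative.

```python
# A Petri net as: transitions = {name: (pre, post)}, where pre/post map
# each place to the tokens consumed/produced by that transition.

def enabled(marking, transitions):
    """Return the transitions enabled in `marking` (dict: place -> tokens)."""
    return [t for t, (pre, _post) in transitions.items()
            if all(marking.get(p, 0) >= n for p, n in pre.items())]

def fire(marking, transitions, t):
    """Fire transition t and return the successor marking."""
    pre, post = transitions[t]
    m = dict(marking)
    for p, n in pre.items():       # consume input tokens
        m[p] = m[p] - n
    for p, n in post.items():      # produce output tokens
        m[p] = m.get(p, 0) + n
    return m
```

Explicit-state model checking repeatedly interleaves these two steps; EnPAC's contribution is making exactly this inner loop cheaper on encoded states.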
A Novel Neural Network-based Multi-objective Evolution Lower Upper Bound Estimation Method for Electricity Load Interval Forecast
Currently, an interval prediction model, lower and upper bound estimation (LUBE), which constructs prediction intervals (PIs) from the double outputs of a neural network (NN), is growing in popularity. However, existing LUBE research has two problems. One is that the applied NNs are flawed: a feedforward NN (FNN) cannot map the dynamic relationships in the data, and a recurrent NN (RNN) is computationally expensive. The other is that most LUBE models are built under a single-objective frame in which the uncertainty cannot be fully quantified. In this article, a novel wavelet NN (WNN) with direct input–output links (DLWNN) is proposed to obtain PIs in a multi-objective LUBE frame. Different from a WNN, the proposed DLWNN adds direct links from the input layer to the output layer, which makes full use of the information in time-series data. Besides, a niched differential evolution nondominated fast sort genetic algorithm (NDENSGA) is proposed to optimize the prediction model, so as to achieve a balance between estimation accuracy and the average width of the PIs. NDENSGA modifies the traditional population renewal mechanism to increase population diversity and adopts a new elite selection strategy to obtain more extensive and uniform solutions. The effectiveness of DLWNN and NDENSGA is evaluated through a series of experiments with real electricity load data sets. The results show that the proposed model outperforms others in terms of convergence and diversity of the obtained nondominated solutions.
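A multi-objective LUBE frame trades interval coverage off against interval width. A minimal sketch of the two standard objectives, coverage probability (PICP) and a normalized average width (commonly called PINAW), might look as follows; the paper's exact formulation may differ, and the function name is my own.

```python
import numpy as np

def pi_metrics(y, lower, upper):
    """Prediction-interval quality: PICP (fraction of targets inside the
    interval) and PINAW (average width normalized by the target range) --
    the two competing objectives a multi-objective LUBE frame balances."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()
    pinaw = (upper - lower).mean() / (y.max() - y.min())
    return picp, pinaw
```

Widening every interval pushes PICP toward 1 but inflates PINAW, which is why a Pareto front of nondominated solutions, rather than a single model, is the natural output here.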
A Preliminary Exploration of YouTubers' Use of Generative-AI in Content Creation
Content creators increasingly utilize generative artificial intelligence
(Gen-AI) on platforms such as YouTube, TikTok, Instagram, and various blogging
sites to produce imaginative images, AI-generated videos, and articles using
Large Language Models (LLMs). Despite its growing popularity, there remains an
underexplored area concerning the specific domains where AI-generated content
is being applied, and the methodologies content creators employ with Gen-AI
tools during the creation process. This study initially explores this emerging
area through a qualitative analysis of 68 YouTube videos demonstrating Gen-AI
usage. Our research focuses on identifying the content domains, the variety of
tools used, the activities performed, and the nature of the final products
generated by Gen-AI in the context of user-generated content.
Comment: Accepted at CHI LBW 202
Deep Koopman Operator-Informed Safety Command Governor for Autonomous Vehicles
Modeling nonlinear behaviors with physics-based models poses challenges.
The Koopman operator, however, maps the original nonlinear system into an
infinite-dimensional linear space, achieving global linearization of the
nonlinear system from input and output data and yielding an exactly
equivalent linear representation of the original state space. Since the
infinite-dimensional Koopman operator cannot be implemented directly,
finite-dimensional kernel functions are selected as an approximation. Given its
flexible structure and high accuracy, deep learning is first employed to
extract kernel functions from data and acquire a linear evolution dynamic of
the autonomous vehicle in the lifted space. Additionally, the control barrier
function (CBF) converts the state constraints into constraints on the input
to enforce the safety property. Then, to ensure the lateral stability of the
in-wheel-motor-driven vehicle, the CBF conditions are incorporated with the
learned deep Koopman model. Owing to the linear form of the deep Koopman
model, a quadratic programming problem is formulated to generate the applied
driving torque with minimal perturbation to the original driving torque,
acting as a safety command governor. Finally, to validate the fidelity of the
deep Koopman model against other mainstream approaches and to demonstrate the
lateral-stability improvement achieved by the proposed safety command governor,
data collection and safety testing scenarios are conducted on a
hardware-in-the-loop platform.
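For a scalar driving-torque input, a CBF-constrained quadratic program of the kind described above reduces to a closed-form clamp of the desired input. The sketch below assumes a generic linear (lifted) model dz/dt = A z + B u and a linear barrier h(z) = c·z; the function name, the linear barrier, and the continuous-time condition are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def cbf_filter(u_des, z, A, B, c, alpha):
    """One-step CBF-QP safety filter for a scalar input:
        min (u - u_des)^2   s.t.   c@(A z + B u) + alpha * (c@z) >= 0
    With one input, the QP's feasible set is a half-line in u, so the
    minimizer is u_des clamped to that half-line."""
    g = float(c @ B)                                  # constraint gradient in u
    r = -alpha * float(c @ z) - float(c @ (A @ z))    # need g * u >= r
    if g > 0:
        return max(u_des, r / g)
    if g < 0:
        return min(u_des, r / g)
    return u_des  # input cannot affect h; constraint is inactive (or infeasible)
```

When the desired torque already satisfies the barrier condition it passes through unchanged, which is exactly the "minimal perturbation" behavior a command governor is meant to provide.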
Large-scale Dataset Pruning with Dynamic Uncertainty
The state of the art of many learning tasks, e.g., image classification, is
advanced by collecting larger datasets and then training larger models on them.
As a result, the increasing computational cost is becoming unaffordable. In
this paper, we investigate how to prune large-scale datasets and thus
produce an informative subset for training sophisticated deep models with
negligible performance drop. We propose a simple yet effective dataset pruning
method by exploring both the prediction uncertainty and training dynamics. To
our knowledge, this is the first work to study dataset pruning on large-scale
datasets, i.e., ImageNet-1K and ImageNet-21K, and advanced models, i.e., Swin
Transformer and ConvNeXt. Extensive experimental results indicate that our
method outperforms the state of the art and achieves a 75% lossless compression
ratio on both ImageNet-1K and ImageNet-21K. The code and pruned datasets are
available at https://github.com/BAAI-DCAI/Dataset-Pruning
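One plausible reading of combining "prediction uncertainty and training dynamics" is to score each sample by how much its predicted true-class probability fluctuates across training checkpoints and to keep the most uncertain fraction. The sketch below is an assumption-laden illustration of that idea, not the authors' exact criterion; see their repository for the real method.

```python
import numpy as np

def prune_by_dynamic_uncertainty(probs_over_epochs, keep_frac):
    """probs_over_epochs: (n_epochs, n_samples) array holding each sample's
    predicted probability for its true class at a series of checkpoints.
    Samples whose prediction fluctuates most across training are treated as
    the most informative and kept; confidently stable samples are pruned."""
    scores = probs_over_epochs.std(axis=0)        # per-sample dynamic uncertainty
    n_keep = int(round(keep_frac * scores.size))
    return np.argsort(scores)[::-1][:n_keep]      # indices of samples to keep
```

Training on the returned subset is then expected to approximate full-dataset accuracy at a fraction of the cost, which is the sense in which the pruning is "lossless".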