K-BMPC: Derivative-based Koopman Bilinear Model Predictive Control For Tractor-trailer Trajectory Tracking With Unknown Parameters
Nonlinear dynamics bring difficulties to controller design for control-affine
systems such as tractor-trailer vehicles, especially when the parameters in
dynamics are unknown. To address this difficulty, we propose a derivative-based
lifting function construction method and show that the corresponding
infinite-dimensional Koopman bilinear model over the lifting functions is
equivalent to the original control-affine system. Further, we analyze the
propagation and bounds of the state prediction errors caused by truncation of
the derivative order. The identified finite-dimensional Koopman bilinear model
then serves as the predictive model. Koopman Bilinear Model Predictive Control
(K-BMPC) is proposed to solve the trajectory tracking problem. We linearize the
bilinear model around an estimate of the lifted state and control input. The
bilinear model predictive control problem is then approximated by a quadratic
programming problem. The estimate is updated at each iteration until
convergence is reached. Moreover, we implement our
algorithm on a tractor-trailer dynamic system, taking into account the
longitudinal and side-slip effects. Open-loop simulation shows that the
proposed Koopman bilinear model captures the dynamics with unknown parameters
and has good prediction performance. Closed-loop tracking results show that
K-BMPC achieves high tracking accuracy together with good computational
efficiency. The experimental results demonstrate the feasibility of the
proposed method.
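The successive-linearization idea in the abstract above can be illustrated with a minimal, hypothetical sketch for a scalar bilinear lifted model: predict with the bilinear dynamics, linearize around the current guess, solve the resulting (here unconstrained, one-step) quadratic problem in closed form, and repeat until the guess stops changing. All names (`A`, `B`, `one_step_tracking_control`) and the one-step objective are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of the successive-linearization step in a Koopman
# bilinear MPC, for a scalar lifted state z and a single input u.
# Bilinear lifted dynamics: z_next = A*z + u * (B*z)

def bilinear_step(A, B, z, u):
    # Exact bilinear prediction of the next lifted state
    return A * z + u * B * z

def linearized_step(A, B, z_bar, u_bar, z, u):
    # First-order expansion of bilinear_step around (z_bar, u_bar)
    f0 = A * z_bar + u_bar * B * z_bar
    return f0 + (A + u_bar * B) * (z - z_bar) + B * z_bar * (u - u_bar)

def one_step_tracking_control(A, B, z, z_ref, u_init=0.0, iters=10, tol=1e-10):
    # Successively linearize and minimize the scalar one-step tracking
    # cost (z_next - z_ref)^2 in closed form, updating the linearization
    # point until the control estimate converges.
    u = u_init
    for _ in range(iters):
        g = B * z                       # sensitivity of z_next w.r.t. u
        if abs(g) < 1e-12:
            break                       # input has no effect; give up
        u_new = u + (z_ref - (A * z + u * g)) / g
        if abs(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u
```

In the full method, this closed-form scalar update would be replaced by a constrained quadratic program over the whole prediction horizon; the iteration structure (linearize, solve QP, update the linearization point) is the same.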
Learnable Graph Matching: A Practical Paradigm for Data Association
Data association is at the core of many computer vision tasks, e.g., multiple
object tracking, image matching, and point cloud registration. Existing methods
usually solve the data association problem by network flow optimization,
bipartite matching, or end-to-end learning directly. Despite their popularity,
we find two main defects in current solutions: they mostly ignore intra-view
context information, and they either train deep association models end-to-end,
hardly utilizing the advantages of optimization-based assignment methods, or
merely use an off-the-shelf neural network to extract features. In this paper,
we propose a general learnable graph matching method to address these issues.
Specifically, we model the
intra-view relationships as an undirected graph. Then data association turns
into a general graph matching problem between graphs. Furthermore, to make the
optimization end-to-end differentiable, we relax the original graph matching
problem into a continuous quadratic program and then incorporate training into
a deep graph neural network via the KKT conditions and the implicit function
theorem. On the MOT task, our method achieves state-of-the-art performance on
several MOT datasets. For image matching, our method outperforms
state-of-the-art methods with half the training data and iterations on a
popular indoor dataset, ScanNet. Code will be available at
https://github.com/jiaweihe1996/GMTracker
Comment: Submitted to TPAMI on Mar 21, 2022. arXiv admin note: substantial
text overlap with arXiv:2103.1617
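The quadratic-programming relaxation mentioned in the abstract above can be illustrated very loosely (without the paper's KKT/implicit-function differentiation machinery) by solving a tiny relaxed matching with projected gradient ascent: the binary assignment matrix is relaxed to rows on the probability simplex, and a quadratic affinity objective is ascended. The affinity matrix, step size, and all names here are assumptions for this sketch.

```python
# Toy sketch of a relaxed graph-matching QP: maximize x^T M x over
# row-stochastic assignments x (flattened n x n matrix) by projected
# gradient ascent. Not the paper's differentiable-solver implementation.

def proj_simplex(v):
    # Euclidean projection of v onto the probability simplex
    # {x : x >= 0, sum(x) = 1} (standard sort-based algorithm)
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def qp_matching(M, n, steps=200, lr=0.1):
    # Start from the uniform relaxed assignment
    x = [1.0 / n] * (n * n)
    for _ in range(steps):
        # Gradient of the quadratic objective x^T M x (M assumed symmetric)
        g = [sum(M[i][j] * x[j] for j in range(n * n)) for i in range(n * n)]
        x = [x[i] + lr * g[i] for i in range(n * n)]
        # Project each row of the assignment matrix back onto the simplex
        out = []
        for r in range(n):
            out.extend(proj_simplex(x[r * n:(r + 1) * n]))
        x = out
    return x
```

With an affinity matrix that rewards matching node 0 to node 0 and node 1 to node 1, the relaxed solution concentrates on that assignment; in the paper's setting the relaxed solver is additionally differentiated through so the affinities themselves can be learned.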
Task-Oriented Conversation Generation Using Heterogeneous Memory Networks
How to incorporate external knowledge into a neural dialogue model is
critically important for dialogue systems to behave like real humans. Memory
networks are a popular and promising way to handle this problem. However,
existing memory networks do not perform well when leveraging heterogeneous
information from different sources. In this paper, we propose a novel and
versatile external memory architecture, Heterogeneous Memory Networks (HMNs),
to simultaneously utilize user utterances, dialogue history,
and background knowledge tuples. In our method, historical sequential dialogues
are encoded and stored in the context-aware memory, enhanced by a gating
mechanism, while grounding knowledge tuples are encoded and stored in the
context-free memory. During decoding, the decoder augmented with HMNs
recurrently selects each word in one response utterance from these two memories
and a general vocabulary. Experimental results on multiple real-world datasets
show that HMNs significantly outperform the state-of-the-art data-driven
task-oriented dialogue models in most domains.
Comment: Accepted as a long paper at EMNLP-IJCNLP 201
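The decoding step the abstract above describes, selecting each output word from two memories and a general vocabulary, can be sketched as a gated mixture of distributions: a vocabulary softmax plus attention distributions over the two memories, whose entries are copied onto the vocabulary ids they point to. The gate values, score shapes, and function names here are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of decoding-time word selection with two external memories:
# mix a vocabulary distribution with copy distributions from a context-aware
# memory and a context-free (knowledge) memory.
import math

def softmax(scores):
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [x / z for x in e]

def mixture_over_vocab(vocab_scores, ctx_scores, ctx_ids,
                       kb_scores, kb_ids, gates):
    # gates = (g_vocab, g_ctx, g_kb); assumed non-negative, summing to 1
    g_vocab, g_ctx, g_kb = gates
    p = [g_vocab * pv for pv in softmax(vocab_scores)]
    # Copy probability mass from each memory entry onto the vocabulary
    # id that the entry points to
    for a, i in zip(softmax(ctx_scores), ctx_ids):
        p[i] += g_ctx * a
    for a, i in zip(softmax(kb_scores), kb_ids):
        p[i] += g_kb * a
    return p
```

Because the gates sum to one and each component is a proper distribution, the mixture is itself a distribution over the vocabulary; a word stored in either memory gets extra probability mass proportional to its attention weight and its memory's gate.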
GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis
With the ever-growing popularity of Graph Neural Networks (GNNs), efficient
GNN inference is gaining tremendous attention. Field-Programmable Gate Arrays
(FPGAs) are a promising execution platform due to their fine-grained
parallelism, low-power consumption, reconfigurability, and concurrent
execution. Moreover, High-Level Synthesis (HLS) tools bridge the gap between
the non-trivial FPGA development effort and the rapid emergence of new GNN
models.
In this paper, we propose GNNHLS, an open-source framework to comprehensively
evaluate GNN inference acceleration on FPGAs via HLS, containing a software
stack for data generation and baseline deployment, and FPGA implementations of
6 well-tuned GNN HLS kernels. We evaluate GNNHLS on 4 graph datasets with
distinct topologies and scales. The results show that GNNHLS achieves up to
50.8x speedup and 423x energy reduction relative to the CPU baselines. Compared
with the GPU baselines, GNNHLS achieves up to 5.16x speedup and 74.5x energy
reduction.