36 research outputs found
Invariant analysis and explicit solutions of the time fractional nonlinear perturbed Burgers equation
The Lie group analysis method is applied to the nonlinear perturbed Burgers equation and the time fractional nonlinear perturbed Burgers equation. All point symmetries of the equations are obtained, and from these the corresponding vector fields are constructed. Subsequently, the symmetry reductions are investigated. In particular, some novel exact and explicit solutions are obtained.
Group analysis and conservation laws of an integrable Kadomtsev–Petviashvili equation
In this paper, an integrable KP equation is studied by means of symmetry analysis and conservation laws. First, on the basis of various cases of the coefficients, we construct the infinitesimal generators. For a special case, we obtain the corresponding geometric vector fields, and from known soliton solutions we derive new soliton solutions. In addition, explicit power series solutions are derived. Finally, nonlinear self-adjointness is established and conservation laws are constructed from the symmetries.
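For orientation only (the abstract does not reproduce the particular coefficient cases studied in the paper), the standard KP equation in a common KP-II normalization, together with its Hirota tau-function one-soliton, reads:

```latex
% Standard KP-II equation, shown purely for reference; the paper studies
% an integrable variant with various cases of coefficients.
\left(u_t + 6uu_x + u_{xxx}\right)_x + 3u_{yy} = 0,
\qquad u = 2\,\partial_x^2 \ln \tau,
\qquad \tau = 1 + e^{\,kx + my + \omega t},
\qquad \omega = -\left(k^3 + \tfrac{3m^2}{k}\right).
```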
Selective-Stereo: Adaptive Frequency Information Selection for Stereo Matching
Stereo matching methods based on iterative optimization, like RAFT-Stereo and
IGEV-Stereo, have evolved into a cornerstone in the field of stereo matching.
However, these methods struggle to simultaneously capture high-frequency
information in edges and low-frequency information in smooth regions due to the
fixed receptive field. As a result, they tend to lose details, blur edges, and
produce false matches in textureless areas. In this paper, we propose Selective
Recurrent Unit (SRU), a novel iterative update operator for stereo matching.
The SRU module can adaptively fuse hidden disparity information at multiple
frequencies for edge and smooth regions. To perform adaptive fusion, we
introduce a new Contextual Spatial Attention (CSA) module to generate attention
maps as fusion weights. The SRU empowers the network to aggregate hidden
disparity information across multiple frequencies, mitigating the risk of vital
hidden disparity information loss during iterative processes. To verify SRU's
universality, we apply it to representative iterative stereo matching methods,
collectively referred to as Selective-Stereo. Our Selective-Stereo ranks 1st
on the KITTI 2012, KITTI 2015, ETH3D, and Middlebury leaderboards among
all published methods. Code is available at
https://github.com/Windsrain/Selective-Stereo.
Comment: Accepted to CVPR 2024
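The core fusion idea can be sketched as follows. This is a toy numpy sketch under stated assumptions, not the paper's implementation: the real CSA module is a learned convolutional network, and the function names, random projection, and sigmoid here are illustrative stand-ins.

```python
import numpy as np

def contextual_spatial_attention(context, rng):
    """Toy stand-in for the CSA module: map per-pixel context features
    to an attention map in (0, 1) via a random projection + sigmoid.
    (The real CSA is a learned network; this only shows the data flow.)"""
    w = rng.standard_normal(context.shape[-1])
    logits = context @ w                 # (H, W, C) -> (H, W)
    return 1.0 / (1.0 + np.exp(-logits))

def selective_fusion(hidden_high, hidden_low, attention):
    """Fuse high-frequency (edge) and low-frequency (smooth-region)
    hidden disparity states, weighted per pixel by the attention map,
    as in the SRU's adaptive fusion."""
    a = attention[..., None]             # broadcast over channels
    return a * hidden_high + (1.0 - a) * hidden_low
```

With attention close to 1 the fused state follows the high-frequency branch (edges); close to 0 it follows the low-frequency branch (smooth regions).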
Accurate and Efficient Stereo Matching via Attention Concatenation Volume
Stereo matching is a fundamental building block for many vision and robotics
applications. An informative and concise cost volume representation is vital
for stereo matching of high accuracy and efficiency. In this paper, we present
a novel cost volume construction method, named attention concatenation volume
(ACV), which generates attention weights from correlation clues to suppress
redundant information and enhance matching-related information in the
concatenation volume. The ACV can be seamlessly embedded into most stereo
matching networks; the resulting networks can use a more lightweight
aggregation network while achieving higher accuracy. We further design a
fast version of ACV, named Fast-ACV, to enable real-time performance; it
generates high-likelihood disparity hypotheses and the corresponding attention
weights from low-resolution correlation clues, significantly reducing
computational and memory cost while maintaining satisfactory accuracy.
The core idea of our Fast-ACV is volume attention propagation (VAP) which can
automatically select accurate correlation values from an upsampled correlation
volume and propagate these accurate values to the surrounding pixels with
ambiguous correlation clues. Furthermore, we design a highly accurate network
ACVNet and a real-time network Fast-ACVNet based on our ACV and Fast-ACV
respectively, which achieve state-of-the-art performance on several
benchmarks (our ACVNet ranks 2nd on KITTI 2015 and Scene Flow, and
3rd on KITTI 2012 and ETH3D, among all published methods; our
Fast-ACVNet outperforms almost all state-of-the-art real-time methods on Scene
Flow, KITTI 2012, and KITTI 2015, while also having better generalization ability).
Comment: Accepted to TPAMI 2023. arXiv admin note: substantial text overlap with arXiv:2203.0214
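The volume-construction idea can be sketched as follows. This is a minimal numpy sketch assuming simple per-disparity feature shifting and a softmax over disparities; the actual ACV uses learned feature extractors and learned attention, so the function name and internals here are illustrative only.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_concatenation_volume(feat_left, feat_right, max_disp):
    """Toy sketch of the ACV idea: build a correlation volume over candidate
    disparities, turn it into attention weights, and use those weights to
    suppress unlikely disparities in the concatenation volume.
    feat_*: (H, W, C) feature maps from a (hypothetical) feature extractor."""
    H, W, C = feat_left.shape
    corr = np.zeros((max_disp, H, W))
    concat = np.zeros((max_disp, H, W, 2 * C))
    for d in range(max_disp):
        # Shift right-view features by disparity d (zeros at the border).
        shifted = np.zeros_like(feat_right)
        shifted[:, d:, :] = feat_right[:, : W - d, :]
        corr[d] = (feat_left * shifted).sum(-1) / np.sqrt(C)
        concat[d] = np.concatenate([feat_left, shifted], axis=-1)
    attn = softmax(corr, axis=0)      # attention weights from correlation clues
    return attn[..., None] * concat   # attention-weighted concatenation volume
```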
Group analysis, nonlinear self-adjointness, conservation laws, and soliton solutions for the mKdV systems
We study the symmetry groups, conservation laws, solitons, and singular solitary waves of several versions of the modified KdV systems.
New Initiation Modes for Directed Carbonylative C-C Bond Activation: Rhodium-Catalyzed (3+1+2) Cycloadditions of Aminomethylcyclopropanes
Under carbonylative conditions, neutral Rh(I) systems modified
with weak donor ligands (AsPh3 or 1,4-oxathiane) undergo
N-Cbz-, N-benzoyl-, or N-Ts-directed insertion into the proximal C–C
bond of aminomethylcyclopropanes to generate rhodacyclopentanone
intermediates. These are trapped by N-tethered alkenes to provide
complex perhydroisoindoles.
A (2+1)-dimensional sine-Gordon and sinh-Gordon equations with symmetries and kink wave solutions
In this paper, a (2+1)-dimensional sine-Gordon equation and a sinh-Gordon equation are derived from the well-known AKNS system. Based on the Hirota bilinear method and Lie symmetry analysis, kink wave solutions and traveling wave solutions of the (2+1)-dimensional sine-Gordon equation are constructed. The traveling wave solutions of the (2+1)-dimensional sinh-Gordon equation can be obtained in a similar manner. Meanwhile, conservation laws are derived.
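For reference (the equations derived from the AKNS system in the paper may differ in coefficients), a commonly used (2+1)-dimensional sine-Gordon equation and its kink-type traveling wave are:

```latex
% A standard (2+1)-dimensional sine-Gordon equation (reference form only):
u_{tt} - u_{xx} - u_{yy} + \sin u = 0.
% Kink-type traveling wave in \xi = kx + ly - \omega t,
% valid when k^2 + l^2 > \omega^2:
u(\xi) = 4\arctan\!\left[\exp\!\left(\frac{\xi}{\sqrt{k^2 + l^2 - \omega^2}}\right)\right].
```

Substituting the ansatz reduces the PDE to $(k^2 + l^2 - \omega^2)\,u_{\xi\xi} = \sin u$, the classical static kink equation up to rescaling.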
Continual Learning in Predictive Autoscaling
Predictive Autoscaling is used to forecast the workloads of servers and
prepare the resources in advance to ensure service level objectives (SLOs) in
dynamic cloud environments. In practice, however, its prediction task often
suffers from performance degradation under abnormal traffic caused by external
events (such as sales promotions and application re-configurations). A common
solution is to re-train the model on data from a long historical period, but
at the expense of high computational and storage costs. To better address this
problem, we propose a replay-based
continual learning method, i.e., Density-based Memory Selection and Hint-based
Network Learning Model (DMSHM), using only a small part of the historical log
to achieve accurate predictions. First, we identify the phenomenon of sample
overlap that arises when replay-based continual learning is applied to
prediction tasks. To surmount this challenge and effectively integrate the new
sample distribution, we propose a density-based sample selection strategy that
uses kernel density estimation to compute each sample's density, derives
sample weights from these densities, and employs weighted sampling to
construct a new memory set. Then we implement hint-based network learning, using hint representations
to optimize the parameters. Finally, we conduct experiments on public and
industrial datasets to demonstrate that our proposed method outperforms
state-of-the-art continual learning methods in terms of memory capacity and
prediction accuracy. Furthermore, we demonstrate the remarkable practicability
of DMSHM in real industrial applications.
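The density-based selection step can be sketched as follows. This is a 1-D numpy sketch under one plausible reading of the abstract (overlapping, dense regions are down-weighted so the memory set stays diverse); the bandwidth, the inverse-density weighting, and the function names are assumptions, not DMSHM's actual design.

```python
import numpy as np

def gaussian_kde_density(samples, bandwidth=0.5):
    """Estimate each sample's density with a Gaussian kernel (1-D case)."""
    diffs = samples[:, None] - samples[None, :]
    kernels = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    return kernels.mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

def select_memory(samples, memory_size, rng):
    """Turn densities into sample weights (here: inverse density, so samples
    in crowded regions are down-weighted) and draw a memory set by
    weighted sampling without replacement."""
    density = gaussian_kde_density(samples)
    weights = 1.0 / density
    probs = weights / weights.sum()
    idx = rng.choice(len(samples), size=memory_size, replace=False, p=probs)
    return samples[idx]
```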
Prompt-augmented Temporal Point Process for Streaming Event Sequence
Neural Temporal Point Processes (TPPs) are the prevalent paradigm for
modeling continuous-time event sequences, such as user activities on the web
and financial transactions. In real-world applications, event data is typically
received in a streaming manner, where the distribution of patterns may
shift over time. Additionally, privacy and memory constraints are
commonly observed in practical scenarios, further compounding the challenges.
Therefore, the continuous monitoring of a TPP to learn the streaming event
sequence is an important yet under-explored problem. Our paper addresses
this challenge by adopting Continual Learning (CL), which makes the model
capable of continuously learning a sequence of tasks without catastrophic
forgetting under realistic constraints. Correspondingly, we propose a simple
yet effective framework, PromptTPP (code available at
https://github.com/yanyanSann/PromptTPP), by integrating the base TPP
with a continuous-time retrieval prompt pool. The prompts, small learnable
parameters, are stored in a memory space and jointly optimized with the base
TPP, ensuring that the model learns event streams sequentially without
buffering past examples or task-specific attributes. We present a novel and
realistic experimental setup for modeling event streams, where PromptTPP
consistently achieves state-of-the-art performance across three real user
behavior datasets.
Comment: NeurIPS 2023 camera-ready version
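The retrieval step from a prompt pool can be sketched as follows. This is a toy numpy sketch: the cosine-similarity scoring, the top-k selection, and all names are illustrative assumptions; in PromptTPP the keys and prompt values are learnable parameters optimized jointly with the base TPP.

```python
import numpy as np

def retrieve_prompts(query, prompt_keys, prompt_values, top_k=2):
    """Toy sketch of retrieval from a prompt pool: score each learnable key
    against a query (e.g. an encoding of recent events) by cosine
    similarity and return the top-k prompt values, which would be
    prepended to the model input."""
    q = query / np.linalg.norm(query)
    k = prompt_keys / np.linalg.norm(prompt_keys, axis=1, keepdims=True)
    scores = k @ q                       # cosine similarity per key
    top = np.argsort(-scores)[:top_k]    # indices of the best-matching keys
    return prompt_values[top], top
```

Because only the small retrieved prompts carry task information, the model can adapt to shifting event streams without buffering past examples.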