Incomplete Augmented Lagrangian Preconditioner for Steady Incompressible Navier-Stokes Equations
An incomplete augmented Lagrangian preconditioner is proposed for the steady incompressible Navier-Stokes equations discretized by stable finite elements. The eigenvalues of the preconditioned matrix are analyzed. Numerical experiments show that the proposed incomplete augmented Lagrangian-based preconditioner is very robust and performs well under both Picard and Newton linearization, over a wide range of viscosity values and on both uniform and stretched grids.
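The classical augmented Lagrangian approach that the incomplete variant builds on can be sketched on a toy saddle-point system. The matrices below are random stand-ins (not a finite-element discretization), and the Schur complement is approximated by -(1/gamma)W as in the standard AL preconditioner; the paper's incomplete variant modifies this further.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 40, 10                       # toy velocity / pressure problem sizes

# Toy stand-ins for the FE matrices: A ~ nonsymmetric convection-diffusion
# block, B ~ discrete divergence, W ~ pressure mass matrix (here identity).
A = sp.eye(n) + 0.2 * sp.random(n, n, density=0.1, random_state=0)
B = sp.random(m, n, density=0.4, random_state=1)
W_inv = sp.eye(m)
gamma = 100.0                       # augmentation parameter

# Augmented (1,1) block: A_gamma = A + gamma * B^T W^{-1} B
A_gamma = (A + gamma * (B.T @ W_inv @ B)).tocsc()
K = sp.bmat([[A_gamma, B.T], [B, None]]).tocsc()   # saddle-point matrix
lu = spla.splu(A_gamma)

def apply_P_inv(r):
    """Block upper-triangular AL preconditioner with the Schur complement
    approximated by -(1/gamma) W, as in the classical AL approach."""
    ru, rp = r[:n], r[n:]
    p = -gamma * (W_inv @ rp)       # apply S_hat^{-1} = -gamma W^{-1}
    u = lu.solve(ru - B.T @ p)      # back-substitute through the (1,1) block
    return np.concatenate([u, p])

M = spla.LinearOperator(K.shape, matvec=apply_P_inv)
b = rng.standard_normal(n + m)
x, info = spla.gmres(K, b, M=M)
```

For large gamma the preconditioned spectrum clusters, which is why preconditioned GMRES converges quickly here; the trade-off is that A_gamma becomes harder to solve, which motivates incomplete variants.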
Scale Attention for Learning Deep Face Representation: A Study Against Visual Scale Variation
Human face images usually appear over a wide range of visual scales. Existing
face representations handle scale variation via multi-scale schemes that
assemble a finite series of predefined scales. Such multi-shot schemes add
inference burden, and the predefined scales inevitably deviate from real data.
Instead, learning scale parameters from data and using them for one-shot
feature inference is a better solution. To this end, we reform the conv layer
by resorting to scale-space theory and obtain two benefits: 1) the conv layer
learns a set of scales from the real data distribution, each realized by a
conv kernel; 2) the layer automatically highlights the feature at the channel
and location corresponding to the scale of the input pattern and its presence.
We then achieve hierarchical scale attention by stacking the reformed layers,
building a novel architecture named SCale AttentioN Conv Neural Network
(\textbf{SCAN-CNN}). We apply SCAN-CNN to the face recognition task and push
the frontier of state-of-the-art performance. The accuracy gain is more
evident when the face images are blurry. Meanwhile, as a single-shot scheme,
inference is more efficient than multi-shot fusion. A set of tools ensures
fast training of SCAN-CNN and zero increase in inference cost compared with
the plain CNN.
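The scale-attention idea can be illustrated with a toy scale-space version: filter the input at several candidate scales, then softmax-weight the per-pixel responses so the best-matching scale dominates at each location. The scales below are fixed for illustration, whereas SCAN-CNN learns them from data and realizes each one as a conv kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_attention(image, sigmas):
    """Toy analogue of the reformed conv layer: build a scale-space stack of
    responses, then softmax over the scale axis so that, per pixel, the scale
    whose response is strongest receives the largest attention weight."""
    responses = np.stack([gaussian_filter(image, sigma=s) for s in sigmas])
    logits = np.abs(responses)                     # simple saliency logits
    weights = np.exp(logits - logits.max(axis=0, keepdims=True))
    weights /= weights.sum(axis=0, keepdims=True)  # softmax over scales
    return (weights * responses).sum(axis=0), weights

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
out, w = scale_attention(img, sigmas=[0.5, 1.0, 2.0])
```

Because the output is a single attention-weighted map rather than a stack of per-scale maps, this is a single-shot scheme: one forward pass covers all scales.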
In situ epicatechin-loaded hydrogel implants for local drug delivery to spinal column for effective management of post-traumatic spinal injuries
Purpose: To prepare hydrogels loaded with epicatechin, a strong antioxidant, anti-inflammatory, and neuroprotective tea flavonoid, and characterise them in situ as a vehicle for prolonged and safer drug delivery in patients with post-traumatic spinal cord injury.
Methods: Five in situ gel formulations were prepared using chitosan and evaluated in terms of visual appearance, clarity, pH, viscosity, and in vitro drug release. In vivo anti-inflammatory activity was determined and compared with 2 % piroxicam gel as standard. Motor function activity in a rat model of spinal injury was examined comparatively with i.v. methylprednisolone as standard.
Results: The N-methyl pyrrolidone solution (containing 1 % w/w epicatechin with 2 to 10 % w/w chitosan) of the in situ gel formulation had a uniform pH in the range of 4.01 ± 0.12 to 4.27 ± 0.02. High and uniform drug loading, ranging from 94.48 ± 1.28 to 98.08 ± 1.24 %, and good in vitro drug release (79.48 ± 2.84 to 96.48 ± 1.02 % after 7 days) were achieved. The in situ gel prepared from 1 % epicatechin and 2 % chitosan (E5) showed the greatest in vivo anti-inflammatory activity (60.58 % inhibition of paw oedema in the standard carrageenan-induced hind rat paw oedema model, compared with 48.08 % for the standard). The gels showed significant therapeutic effectiveness against post-trauma-induced spinal injury in rats. E5 elicited maximum motor activity (horizontal bar test) in the spinal injury rat model; rats that received E5 treatment produced an activity score of 3.62 ± 0.02 at the end of 7 days, compared with 5.0 ± 0.20 following treatment with the standard.
Conclusion: In situ epicatechin-loaded gel exhibits significant neuroprotective and anti-inflammatory effects, and can therefore potentially be used for prolonged and safe drug delivery in patients with traumatic spinal cord injury.
Keywords: Epicatechin, In situ gel, Chitosan, Spinal injury, Post-traumatic, Motor activity, Anti-inflammatory
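The inhibition percentages above presumably follow the usual paw-oedema convention (relative reduction versus the untreated control); the abstract does not spell the formula out, so this is an assumption, and the example volumes below are hypothetical.

```python
def percent_inhibition(control_oedema, treated_oedema):
    """Percent inhibition of paw oedema: relative reduction in oedema volume
    in the treated group versus the untreated control (standard convention,
    assumed here; the abstract does not state the formula)."""
    return 100.0 * (control_oedema - treated_oedema) / control_oedema

# Hypothetical volumes: a treated oedema of 0.3942 mL against a 1.0 mL
# control would correspond to the reported 60.58 % inhibition.
inhibition = percent_inhibition(1.0, 0.3942)
```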
Adaptive loose optimization for robust question answering
Question answering methods are well-known for leveraging data bias, such as
the language prior in visual question answering and the position bias in
machine reading comprehension (extractive question answering). Current
debiasing methods often come at the cost of significant in-distribution
performance to achieve favorable out-of-distribution generalizability, while
non-debiasing methods sacrifice a considerable amount of out-of-distribution
performance in order to obtain high in-distribution performance. It is
therefore challenging for either family to cope with complicated, changing
real-world situations. In this paper, we propose a simple yet effective loss
function with adaptive loose optimization, which seeks to make the best of
both worlds for question answering. Our main technical contribution is to reduce the
loss adaptively according to the ratio between the previous and current
optimization state on mini-batch training data. This loose optimization can be
used to prevent non-debiasing methods from overlearning data bias while
enabling debiasing methods to maintain slight bias learning. Experiments on the
visual question answering datasets, including VQA v2, VQA-CP v1, VQA-CP v2,
GQA-OOD, and the extractive question answering dataset SQuAD demonstrate that
our approach enables QA methods to obtain state-of-the-art in- and
out-of-distribution performance in most cases. The source code has been
released publicly at \url{https://github.com/reml-group/ALO}.
Comment: 13 pages, 8 figures
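The core mechanism, shrinking the mini-batch loss according to the ratio between the previous and current optimization state, can be sketched as below. The direction of the loosening (reduce the loss when it is dropping fast, i.e. when bias may be over-learned) and the exponent are assumptions here; the released code may use a different rule.

```python
def loose_loss(curr_loss, prev_loss, power=0.5, eps=1e-8):
    """Adaptive loose optimization (sketch, not the released implementation):
    scale the current mini-batch loss by a factor in (0, 1] derived from the
    current/previous loss ratio. When the loss is dropping quickly (a sign a
    non-debiasing model may be over-learning data bias), the factor is < 1
    and optimization is loosened; otherwise the loss is left intact."""
    ratio = min(curr_loss / max(prev_loss, eps), 1.0)
    return (ratio ** power) * curr_loss

# Loss halved since the last batch -> loosen; loss increased -> leave as-is.
loosened = loose_loss(1.0, prev_loss=2.0)
unchanged = loose_loss(2.0, prev_loss=1.0)
```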
A splitting preconditioner for the incompressible Navier–Stokes equations
In this paper, a splitting preconditioner for the incompressible Navier–Stokes equations is presented, based on the relaxed dimensional factorization (RDF) preconditioner and the modified augmented Lagrangian (MAL) preconditioner. The preconditioned matrix is analyzed, and results similar to those for the RDF and MAL preconditioners are obtained; the corresponding details of the spectral analysis are given. Finally, the three preconditioners are compared in numerical experiments performed with the IFISS package.
Deep Time-Stream Framework for Click-Through Rate Prediction by Tracking Interest Evolution
Click-through rate (CTR) prediction is an essential task in industrial
applications such as video recommendation. Recently, deep learning models have
been proposed to learn the representation of users' overall interests, while
ignoring the fact that interests may dynamically change over time. We argue
that it is necessary to consider the continuous-time information in CTR models
to track user interest trend from rich historical behaviors. In this paper, we
propose a novel Deep Time-Stream framework (DTS) which introduces time
information via an ordinary differential equation (ODE). DTS continuously
models the evolution of interests using a neural network, and thus is able to
tackle the challenge of dynamically representing users' interests based on
their historical behaviors. In addition, our framework can be seamlessly
applied to any existing deep CTR models by leveraging the additional
Time-Stream Module, while no changes are made to the original CTR models.
Experiments on a public dataset as well as a real industry dataset with
billions of samples demonstrate the effectiveness of the proposed approach,
which achieves superior performance compared with existing methods.
Comment: 8 pages. arXiv admin note: text overlap with arXiv:1809.03672 by
other authors
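The time-stream idea, evolving a latent interest state between behavior timestamps with an ODE, can be sketched with a fixed-step Euler integrator. DTS uses a learned neural ODE; the dynamics `tanh(W h)` and the random `W` below are stand-ins for illustration only.

```python
import numpy as np

def evolve_interest(h, t0, t1, W, steps=20):
    """Evolve the interest state h from time t0 to t1 under dh/dt = tanh(W h),
    a stand-in for DTS's learned dynamics, using fixed-step Euler integration.
    Because the integration horizon is (t1 - t0), irregular time gaps between
    user behaviors are handled naturally, unlike discrete-step RNN updates."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * np.tanh(W @ h)   # one explicit Euler step
    return h

rng = np.random.default_rng(0)
d = 8                                  # interest embedding size (toy)
W = 0.1 * rng.standard_normal((d, d))  # stand-in for the learned dynamics
h0 = rng.standard_normal(d)
h1 = evolve_interest(h0, t0=0.0, t1=1.0, W=W)
```

A module like this can sit alongside an existing CTR model, updating the state between observed behaviors without changing the base model, which mirrors how the Time-Stream Module is described as a drop-in addition.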