On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
The proximal point algorithm (PPA) has been well studied in the literature.
In particular, its linear convergence rate has been studied by Rockafellar in
1976 under a certain condition. We consider a generalized PPA in the generic
setting of finding a zero point of a maximal monotone operator, and show that
the condition proposed by Rockafellar can also sufficiently ensure the linear
convergence rate for this generalized PPA. Indeed we show that these linear
convergence rates are optimal. Both the exact and inexact versions of this
generalized PPA are discussed. The motivation to consider this generalized PPA
is that it includes as special cases the relaxed versions of some splitting
methods that originate from the PPA. Thus, linear convergence results of this
generalized PPA can be used to better understand the convergence of some widely
used algorithms in the literature. We focus on the particular convex
minimization context and specify Rockafellar's condition to see how to ensure
the linear convergence rate for some efficient numerical schemes, including the
classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and
its relaxed version, the original alternating direction method of multipliers
(ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the
generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions
weaker than existing ones are proposed in these particular contexts.
(Comment: 22 pages, 1 figure)
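The abstract does not state the update rule, but the relaxed (generalized) PPA it refers to can be sketched as follows. This is a hypothetical illustration for the special monotone operator T(x) = Ax with A symmetric positive definite, where the resolvent (I + cT)^{-1} reduces to a linear solve; the names `generalized_ppa`, `c`, and `rho` are illustrative choices, not notation from the paper.

```python
import numpy as np

def generalized_ppa(A, x0, c=1.0, rho=1.5, iters=50):
    """Relaxed PPA sketch for finding a zero of T(x) = A x.

    Each step applies the resolvent (I + c*T)^{-1} and then takes a
    relaxed combination with the current iterate (rho = 1 recovers the
    classical PPA; rho in (0, 2) is the generalized/relaxed variant).
    """
    n = len(x0)
    resolvent = np.linalg.inv(np.eye(n) + c * A)  # (I + cT)^{-1} as a matrix
    x = x0.astype(float)
    for _ in range(iters):
        # relaxed update: x <- (1 - rho) x + rho * J_{cT}(x)
        x = (1 - rho) * x + rho * (resolvent @ x)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])  # strongly monotone => linear rate
x = generalized_ppa(A, np.array([1.0, 1.0]))
```

For this linear operator, each eigenvalue a of A contracts by the factor (1 - rho) + rho/(1 + c*a), which makes the linear convergence rate discussed in the abstract explicit.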
A Finite Element Splitting Extrapolation for Second Order Hyperbolic Equations
Splitting extrapolation is an efficient technique for solving large scale scientific and engineering problems in parallel. This article discusses a finite element splitting extrapolation for second order hyperbolic equations with time-dependent coefficients. This method possesses a higher degree of parallelism, less computational complexity, and more flexibility than Richardson extrapolation while achieving the same accuracy. By means of domain decomposition and isoparametric mapping, some grid parameters are chosen according to the problem. The multiparameter asymptotic expansion of the d-quadratic finite element error is also established. The splitting extrapolation formulas are developed from this expansion. An approximation with higher accuracy on a globally fine grid can be computed by solving a set of smaller discrete subproblems on different coarser grids in parallel. Some a posteriori error estimates are also provided. Numerical examples show that this method is efficient for solving discontinuous problems and nonlinear hyperbolic equations.
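The Richardson extrapolation that splitting extrapolation is compared against works by combining two discretizations to cancel the leading error term. As a minimal one-parameter sketch (using a central difference quotient rather than the paper's finite element setting, so the function names and step sizes here are illustrative assumptions):

```python
import math

def d(f, x, h):
    # central difference: error expansion f'(x) + C*h**2 + O(h**4)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # combine the h and h/2 approximations so the h**2 terms cancel,
    # leaving an O(h**4) approximation of f'(x)
    return (4 * d(f, x, h / 2) - d(f, x, h)) / 3

approx = richardson(math.sin, 1.0, 0.1)  # approximates cos(1)
```

Splitting extrapolation generalizes this idea to several independent grid parameters, which is what allows the subproblems on different coarse grids to be solved in parallel.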
Development of braided rope seals for hypersonic engine applications. Part 2: Flow modeling
Two models based on the Kozeny-Carman equation were developed to analyze the fluid flow through a new class of braided rope seals under development for advanced hypersonic engines. A hybrid seal geometry consisting of a braided sleeve and a substantial amount of longitudinal fibers with high packing density was selected for development based on its low leakage rates. The models developed allow prediction of the gas leakage rate as a function of fiber diameter, fiber packing density, gas properties, and pressure drop across the seal.
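The report's flow models are not reproduced in the abstract, but the Kozeny-Carman relation it builds on can be sketched. The sketch below assumes the common packed-bed form of the equation with Darcy flow through the fiber bundle; the constant 180 and the helper names are textbook conventions, not values taken from the report.

```python
def kozeny_carman_permeability(d_fiber, packing_density):
    """Permeability of a fiber bundle via the Kozeny-Carman relation.

    porosity eps = 1 - packing density; K = d^2 * eps^3 / (180 * (1 - eps)^2)
    """
    eps = 1.0 - packing_density
    return d_fiber**2 * eps**3 / (180.0 * (1.0 - eps) ** 2)

def darcy_leakage(perm, area, dp, mu, length):
    # volumetric leakage rate from Darcy's law: Q = K * A * dP / (mu * L)
    return perm * area * dp / (mu * length)

# higher packing density => lower porosity => lower permeability
k_loose = kozeny_carman_permeability(1e-5, 0.6)
k_dense = kozeny_carman_permeability(1e-5, 0.8)
```

This captures the qualitative dependence the abstract names: leakage grows with fiber diameter and pressure drop, and falls as packing density increases.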
Learning Social Image Embedding with Deep Multimodal Attention Networks
Learning social media data embedding with deep models has attracted extensive
research interest and has enabled many applications, such as link
prediction, classification, and cross-modal search. However, for social images
which contain both link information and multimodal contents (e.g., text
description, and visual content), simply employing the embedding learnt from
network structure or data content results in sub-optimal social image
representation. In this paper, we propose a novel social image embedding
approach called Deep Multimodal Attention Networks (DMAN), which employs a deep
model to jointly embed multimodal contents and link information. Specifically,
to effectively capture the correlations between multimodal contents, we propose
a multimodal attention network to encode the fine-granularity relation between
image regions and textual words. To leverage the network structure for
embedding learning, a novel Siamese-Triplet neural network is proposed to model
the links among images. With the joint deep model, the learnt embedding can
capture both the multimodal contents and the nonlinear network information.
Extensive experiments are conducted to investigate the effectiveness of our
approach in the applications of multi-label classification and cross-modal
search. Compared to state-of-the-art image embeddings, our proposed DMAN
achieves significant improvement in the tasks of multi-label classification and
cross-modal search.
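The Siamese-triplet component described above is a standard triplet ranking objective: embeddings of linked images are pulled together and unlinked images pushed apart. A minimal numpy sketch of such a loss (the margin value and squared-distance choice are generic assumptions, not details from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.

    anchor/positive are embeddings of linked images, negative is an
    unlinked image; the loss is zero once the linked pair is closer
    than the unlinked pair by at least `margin`.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to linked image
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to unlinked image
    return max(0.0, d_pos - d_neg + margin)
```

In DMAN this signal would be combined with the multimodal attention encoding so the final embedding reflects both content and link structure.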
Meta Federated Reinforcement Learning for Distributed Resource Allocation
In cellular networks, resource allocation is usually performed in a
centralized way, which brings huge computation complexity to the base station
(BS) and high transmission overhead. This paper explores a distributed resource
allocation method that aims to maximize energy efficiency (EE) while ensuring
the quality of service (QoS) for users. Specifically, to cope with varying
wireless channel conditions, we propose a robust meta federated reinforcement
learning (\textit{MFRL}) framework that allows local users to optimize transmit
power and assign channels using locally trained neural network models, so as to
offload computational burden from the cloud server to the local users, reducing
transmission overhead associated with local channel state information. The BS
performs the meta learning procedure to initialize a general global model,
enabling rapid adaptation to different environments with improved EE
performance. The federated learning technique, based on decentralized
reinforcement learning, promotes collaboration and mutual benefits among users.
Analysis and numerical results demonstrate that the proposed \textit{MFRL}
framework accelerates the reinforcement learning process, decreases
transmission overhead, and offloads computation, while outperforming the
conventional decentralized reinforcement learning algorithm in terms of
convergence speed and EE performance across various scenarios.
(Comment: Submitted to TW)
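The federated step described above, where the BS initializes a global model from users' locally trained models, can be sketched as a plain weighted parameter average (FedAvg-style). This is a generic illustration under that assumption; the abstract does not specify the aggregation rule, and the function name is hypothetical.

```python
import numpy as np

def federated_average(local_params, weights=None):
    """BS-side aggregation: weighted mean of users' parameter vectors.

    local_params: list of equally shaped numpy arrays, one per user.
    weights: optional per-user weights (e.g., local sample counts);
    uniform averaging when omitted.
    """
    return np.average(np.stack(local_params), axis=0, weights=weights)

# two users upload locally trained parameter vectors
global_params = federated_average([np.array([1.0, 2.0]), np.array([3.0, 4.0])])
```

In the MFRL setting this aggregate would serve as the meta-initialization that each user then rapidly adapts to its own channel environment.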