Tracing planet-induced structures in circumstellar disks using molecular lines
Circumstellar disks are considered to be the birthplace of planets. Specific
structures like spiral arms, gaps, and cavities are characteristic indicators
of planet-disk interaction. Investigating these structures can provide insights
into the growth of protoplanets and the physical properties of the disk. We
investigate the feasibility of using molecular lines to trace planet-induced
structures in circumstellar disks. Based on 3D hydrodynamic simulations of
planet-disk interactions, we perform self-consistent temperature calculations
and produce N-LTE molecular line velocity-channel maps and spectra of these
disks using our new N-LTE line radiative transfer code Mol3D. Subsequently, we
simulate ALMA observations using the CASA simulator. We consider two nearly
face-on inclinations, 5 disk masses, 7 disk radii, and 2 different typical
pre-main-sequence host stars (T Tauri, Herbig Ae). We calculate up to 141
individual velocity-channel maps for five molecules/isotopologues in a total of
32 rotational transitions to investigate the frequency dependence of the
structures indicated above. We find that the majority of protoplanetary disks
in our parameter space could be detected in the molecular lines considered.
However, unlike the continuum case, gap detection is not straightforward in
lines. For example, gaps do not appear as symmetric rings but are masked by the
pattern caused by the global (Keplerian) velocity field. We identify specific
regions in the velocity-channel maps that are characteristic of planet-induced
structures. Simulations of high angular resolution molecular line observations
demonstrate the potential of ALMA to provide complementary information about
the planet-disk interaction as compared to continuum observations. In
particular, the detection of planet-induced gaps is possible under certain
conditions. (abridged)
Comment: 19 pages, 19 figures, accepted for publication in A&A
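As a toy illustration of why the global Keplerian velocity field imprints a pattern on velocity-channel maps, the line-of-sight velocity of a nearly face-on thin disk can be computed directly; this is a hand-rolled sketch, not the Mol3D or CASA machinery, and the stellar mass, grid extent, inclination, and channel width are assumed values.

```python
import numpy as np

# Illustrative sketch: projected Keplerian velocity field of a thin disk.
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
AU = 1.496e11          # astronomical unit [m]

def los_velocity(x_au, y_au, m_star=0.5, incl_deg=5.0):
    """Line-of-sight velocity [km/s] of a thin Keplerian disk.

    x_au, y_au : disk-plane coordinates in AU
    m_star     : stellar mass in solar masses (T Tauri-like, assumed)
    incl_deg   : inclination (nearly face-on, as in the abstract)
    """
    r = np.hypot(x_au, y_au) * AU
    v_kep = np.sqrt(G * m_star * M_SUN / r)   # Keplerian speed [m/s]
    phi = np.arctan2(y_au, x_au)              # azimuth in the disk plane
    # Only the azimuthal motion projects onto the line of sight:
    return v_kep * np.cos(phi) * np.sin(np.radians(incl_deg)) / 1e3

# Which pixels emit inside one velocity channel (0.1 km/s wide, at +0.2 km/s):
x, y = np.meshgrid(np.linspace(-100, 100, 256), np.linspace(-100, 100, 256))
v = los_velocity(x, y)
channel = np.abs(v - 0.2) < 0.05
```

The resulting channel mask traces the characteristic "butterfly" isovelocity lobes rather than circular rings, which is why a planet-carved gap is not seen as a symmetric dark ring in individual channels.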
On Primal-Dual Approach for Distributed Stochastic Convex Optimization over Networks
We introduce a primal-dual stochastic gradient oracle method for distributed
convex optimization problems over networks. We show that the proposed method is
optimal in terms of communication steps. Additionally, we propose a new
analysis method for the rate of convergence in terms of duality gap and
probability of large deviations. This analysis is based on a new technique that
allows us to bound the distance between the iteration sequence and the optimal
point. With a proper choice of batch size, we can guarantee that this distance
equals, up to a constant, the distance between the starting point and the
solution.
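The primal-dual idea can be sketched on a toy consensus problem, where the network constraint is encoded through the graph Laplacian; the quadratic node objectives, ring topology, step sizes, and noise model below are illustrative assumptions, not the paper's oracle or its optimal parameter choices.

```python
import numpy as np

# Sketch: primal-dual stochastic gradient for consensus optimization,
#   min_x sum_i f_i(x)  rewritten as  min_{x_1..x_n} sum_i f_i(x_i)
#   subject to the Laplacian consensus constraint  L x = 0.
rng = np.random.default_rng(0)
n, d = 5, 3                          # nodes, variable dimension
targets = rng.normal(size=(n, d))    # f_i(x) = 0.5 * ||x - targets[i]||^2

# Ring-graph Laplacian: the communication pattern of the network
Lap = 2 * np.eye(n)
for i in range(n):
    Lap[i, (i + 1) % n] = Lap[i, (i - 1) % n] = -1.0

x = np.zeros((n, d))                 # primal iterates, one row per node
lam = np.zeros((n, d))               # dual iterates
eta, beta = 0.2, 0.2                 # primal / dual step sizes (assumed)
for _ in range(500):
    # Stochastic gradient at each node (small noise emulates sampling)
    grad = (x - targets) + 0.01 * rng.normal(size=(n, d))
    x = x - eta * (grad + Lap @ lam)     # primal descent step
    lam = lam + beta * (Lap @ x)         # dual ascent on the constraint

consensus_error = np.abs(x - x.mean(axis=0)).max()
optimum = targets.mean(axis=0)           # minimizer of sum_i f_i
```

Each iteration uses only neighbor-to-neighbor communication (multiplication by the Laplacian), which is why communication steps, not gradient evaluations, are the natural complexity measure in this setting.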
Accelerating Incremental Gradient Optimization with Curvature Information
This paper studies an acceleration technique for incremental aggregated
gradient ({\sf IAG}) method through the use of \emph{curvature} information for
solving strongly convex finite sum optimization problems. These optimization
problems of interest arise in large-scale learning applications. Our technique
utilizes a curvature-aided gradient tracking step to produce accurate gradient
estimates incrementally using Hessian information. We propose and analyze two
methods utilizing the new technique, the curvature-aided IAG ({\sf CIAG})
method and the accelerated CIAG ({\sf A-CIAG}) method, which are analogous to
gradient method and Nesterov's accelerated gradient method, respectively.
Letting $\kappa$ be the condition number of the objective function, we prove
linear convergence rates for the {\sf CIAG} method and the {\sf A-CIAG} method,
with rate constants that are inversely proportional to the distance between the
initial point and the optimal solution. When the
initial iterate is close to the optimal solution, the linear convergence
rates match with the gradient and accelerated gradient method, albeit {\sf
CIAG} and {\sf A-CIAG} operate in an incremental setting with strictly lower
computation complexity. Numerical experiments confirm our findings. The source
codes used for this paper can be found on
\url{http://github.com/hoitowai/ciag/}.
Comment: 22 pages, 3 figures, 3 tables. Accepted by Computational Optimization
and Applications, to appear.
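The curvature-aided gradient tracking step can be sketched on a quadratic finite sum, where the Hessian-based Taylor correction of each stale component gradient is exact; the component functions, step size, and cyclic component ordering are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Sketch of curvature-aided incremental aggregated gradient tracking:
# maintain b_k = sum_i [ grad f_i(tau_i) + H_i(tau_i) (x_k - tau_i) ],
# where tau_i is the point at which component i was last visited.
rng = np.random.default_rng(1)
n, d = 10, 4
A = [np.eye(d) + 0.1 * np.diag(rng.random(d)) for _ in range(n)]  # Hessians
b = [rng.normal(size=d) for _ in range(n)]
# f_i(x) = 0.5 x^T A_i x - b_i^T x, so the minimizer of F = sum_i f_i
# solves (sum_i A_i) x = sum_i b_i:
x_star = np.linalg.solve(sum(A), sum(b[i] for i in range(n)))

x = np.zeros(d)
tau = [x.copy() for _ in range(n)]             # last visit point per component
g0 = sum(A[i] @ tau[i] - b[i] for i in range(n))  # sum of stored gradients
H = sum(A)                                     # sum of stored Hessians
hx = sum(A[i] @ tau[i] for i in range(n))      # sum of Hessian * visit point
eta = 1.0 / (2 * n)                            # step size (assumed)
for k in range(30 * n):
    grad_est = g0 + H @ x - hx                 # curvature-aided estimate
    x = x - eta * grad_est
    i = k % n                                  # refresh one component per step
    g0 += (A[i] @ x - b[i]) - (A[i] @ tau[i] - b[i])
    hx += A[i] @ (x - tau[i])                  # H unchanged: Hessians constant
    tau[i] = x.copy()
```

Because the objectives here are quadratic, the Taylor correction makes the incremental estimate equal the exact full gradient, so the iteration behaves like full gradient descent while touching only one component per step.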
Fast Convergence Rates for Distributed Non-Bayesian Learning
We consider the problem of distributed learning, where a network of agents
collectively aim to agree on a hypothesis that best explains a set of
distributed observations of conditionally independent random processes. We
propose a distributed algorithm and establish consistency, as well as a
non-asymptotic, explicit and geometric convergence rate for the concentration
of the beliefs around the set of optimal hypotheses. Additionally, if the
agents interact over static networks, we provide an improved learning protocol
with better scalability with respect to the number of nodes in the network.
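A standard rule from this family of distributed non-Bayesian learning (geometric averaging of neighbors' beliefs followed by a local Bayesian update) can be sketched as follows; the network, mixing weights, hypotheses, and coin-flip observation model are illustrative assumptions, not the specific protocol of the paper.

```python
import numpy as np

# Sketch: log-linear consensus + local Bayes update. Each agent observes a
# biased coin; the hypotheses are candidate biases, and beliefs should
# concentrate on the bias that best explains the joint observations.
rng = np.random.default_rng(2)
n, m = 4, 3                       # agents, hypotheses
true_theta = 1
biases = np.array([0.2, 0.5, 0.8])

# Doubly stochastic mixing matrix for a 4-cycle (lazy Metropolis weights)
Wmat = np.array([[0.50, 0.25, 0.00, 0.25],
                 [0.25, 0.50, 0.25, 0.00],
                 [0.00, 0.25, 0.50, 0.25],
                 [0.25, 0.00, 0.25, 0.50]])

beliefs = np.full((n, m), 1.0 / m)            # uniform priors
for _ in range(300):
    obs = rng.random(n) < biases[true_theta]  # conditionally i.i.d. flips
    like = np.where(obs[:, None], biases[None, :], 1 - biases[None, :])
    # Geometric (log-linear) averaging of neighbor beliefs, then a
    # multiplicative Bayesian update with the local likelihood:
    log_b = Wmat @ np.log(beliefs) + np.log(like)
    beliefs = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)
```

The beliefs of all agents concentrate geometrically on the true hypothesis, with the exponent governed by the averaged KL divergence between the true and alternative observation models, mirroring the geometric concentration rates established in the abstract.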
