Approximate Euclidean shortest paths in polygonal domains
Given a set $\mathcal{P}$ of $h$ pairwise disjoint simple polygonal obstacles in the plane defined with $n$ vertices, we compute a sketch $\Omega$ of $\mathcal{P}$ whose size is independent of $n$, depending only on $h$ and the input parameter $\varepsilon$. We utilize $\Omega$ to compute a $(1+\varepsilon)$-approximate geodesic shortest path between the two given points in time near-linear in $n$. Here, $\varepsilon$ is a user parameter, and $\delta$ is a small positive constant appearing in the running time (resulting from the time for triangulating the free space of $\mathcal{P}$ using the algorithm in \cite{journals/ijcga/Bar-YehudaC94}). Moreover, we devise a $(1+\varepsilon)$-approximation algorithm to answer two-point Euclidean distance queries for the case of convex polygonal obstacles.
Comment: a few updates; accepted to ISAAC 201
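The geodesic shortest-path setting above can be illustrated with a standard exact baseline that sketch-based methods like this one approximate: build a visibility graph over the obstacle vertices and the two query points, then run Dijkstra. The square obstacle, endpoints, and helper names below are hypothetical illustrations; this is the classical visibility-graph approach, not the paper's algorithm.

```python
import heapq
import math

def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a)."""
    v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (v > 1e-12) - (v < -1e-12)

def properly_cross(p, q, r, s):
    """True if segments pq and rs cross at a single interior point."""
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def inside_convex(poly, pt):
    """True if pt lies strictly inside a CCW convex polygon."""
    n = len(poly)
    return all(orient(poly[i], poly[(i+1) % n], pt) > 0 for i in range(n))

def visible(a, b, poly):
    """Segment ab is obstacle-free: no proper edge crossing, midpoint outside."""
    n = len(poly)
    if any(properly_cross(a, b, poly[i], poly[(i+1) % n]) for i in range(n)):
        return False
    mid = ((a[0]+b[0]) / 2, (a[1]+b[1]) / 2)
    return not inside_convex(poly, mid)

def shortest_path_length(start, goal, poly):
    """Dijkstra over the visibility graph of {start, goal} + obstacle vertices."""
    nodes = [start, goal] + poly
    dist = {0: 0.0}
    pq = [(0.0, 0)]
    while pq:
        d, i = heapq.heappop(pq)
        if i == 1:
            return d
        if d > dist.get(i, math.inf):
            continue
        for j in range(len(nodes)):
            if j != i and visible(nodes[i], nodes[j], poly):
                nd = d + math.dist(nodes[i], nodes[j])
                if nd < dist.get(j, math.inf):
                    dist[j] = nd
                    heapq.heappush(pq, (nd, j))
    return math.inf

# A unit square blocks the straight line between the two query points,
# so the geodesic must bend around a pair of its corners.
square = [(1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (1.0, 1.0)]  # CCW
L = shortest_path_length((0.0, 0.5), (3.0, 0.5), square)
```

The visibility graph has $O(n^2)$ candidate edges, which is exactly the cost that a sketch of size independent of $n$ avoids.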
Relativistic MHD Simulations of Jets with Toroidal Magnetic Fields
This paper presents an application of the recent relativistic HLLC
approximate Riemann solver by Mignone & Bodo to magnetized flows with vanishing
normal component of the magnetic field.
The numerical scheme is validated in two dimensions by investigating the
propagation of axisymmetric jets with toroidal magnetic fields.
The selected jet models show that the HLLC solver yields sharper resolution
of contact and shear waves and better convergence properties than the
traditional HLL approach.
Comment: 12 pages, 5 figures
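For context, the simpler HLL flux that HLLC improves upon can be sketched in a few lines for the 1D non-relativistic, unmagnetized Euler equations. The grid size, Sod-type initial data, and Davis wave-speed estimates below are illustrative assumptions, not the paper's relativistic MHD setup; HLL's averaging over the entire Riemann fan is what smears contact waves, which the restored middle wave in HLLC resolves.

```python
import numpy as np

GAMMA = 1.4

def flux(U):
    """Physical flux of the 1D Euler equations; U rows = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux with Davis wave-speed estimates."""
    uL, uR = UL[1] / UL[0], UR[1] / UR[0]
    pL = (GAMMA - 1.0) * (UL[2] - 0.5 * UL[0] * uL**2)
    pR = (GAMMA - 1.0) * (UR[2] - 0.5 * UR[0] * uR**2)
    cL, cR = np.sqrt(GAMMA * pL / UL[0]), np.sqrt(GAMMA * pR / UR[0])
    SL = np.minimum(uL - cL, uR - cR)   # leftmost wave speed
    SR = np.maximum(uL + cL, uR + cR)   # rightmost wave speed
    FL, FR = flux(UL), flux(UR)
    Fhll = (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)
    return np.where(SL >= 0, FL, np.where(SR <= 0, FR, Fhll))

# Sod shock tube on [0, 1], first-order Godunov update, transmissive BCs.
N, t_end = 200, 0.1
x = (np.arange(N) + 0.5) / N
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(N), p / (GAMMA - 1.0)])

t, dx = 0.0, 1.0 / N
mass0 = U[0].sum() * dx
while t < t_end - 1e-12:
    u = U[1] / U[0]
    c = np.sqrt(GAMMA * (GAMMA - 1.0) * (U[2] - 0.5 * U[1] * u) / U[0])
    dt = min(0.4 * dx / np.max(np.abs(u) + c), t_end - t)   # CFL condition
    Ug = np.concatenate([U[:, :1], U, U[:, -1:]], axis=1)   # ghost cells
    F = hll_flux(Ug[:, :-1], Ug[:, 1:])                     # interface fluxes
    U = U - (dt / dx) * (F[:, 1:] - F[:, :-1])
    t += dt
```

Because the scheme is written in flux-difference form, total mass is conserved to rounding error as long as no wave has reached the boundary.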
Reducing the size and number of linear programs in a dynamic Gr\"obner basis algorithm
The dynamic algorithm to compute a Gr\"obner basis is nearly twenty years
old, yet it seems to have arrived stillborn; aside from two initial
publications, there have been no published follow-ups. One reason for this may
be that, at first glance, the added overhead seems to outweigh the benefit: the
algorithm must solve many linear programs with many linear constraints. This
paper describes two methods that reduce this cost substantially, effectively
resolving the problem.
Comment: 11 figures, of which half are algorithms; submitted to journal for refereeing, December 201
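The linear programs in question decide, for a candidate leading monomial, whether some positive weight vector $w$ makes it beat every other monomial of a polynomial under the induced weight order. A minimal sketch of that feasibility test, using a brute-force angular scan as a stand-in for an actual LP solver and restricted to two variables (all names here are illustrative):

```python
import math

def leading_weight(candidate, others, steps=1000):
    """Search for a positive weight vector w with w.candidate > w.m for
    every other exponent vector m; return w, or None if infeasible.
    A brute-force direction scan standing in for a real LP solve."""
    for k in range(1, steps):
        theta = math.pi / 2 * k / steps            # keeps both weights positive
        w = (math.cos(theta), math.sin(theta))
        if all(w[0] * (candidate[0] - m[0]) + w[1] * (candidate[1] - m[1]) > 1e-9
               for m in others):
            return w
    return None

# Exponent vectors of the monomials x^3, x*y, y^2 of some polynomial.
w_x3 = leading_weight((3, 0), [(1, 1), (0, 2)])   # feasible, e.g. w near (1, 1)
w_xy = leading_weight((1, 1), [(3, 0), (0, 2)])   # infeasible: needs w2 > 2*w1 and w1 > w2
```

Each monomial of each new polynomial triggers one such feasibility question, which is why shrinking the size and number of these programs matters.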
Block Factor-width-two Matrices and Their Applications to Semidefinite and Sum-of-squares Optimization
Semidefinite and sum-of-squares (SOS) optimization are fundamental
computational tools in many areas, including linear and nonlinear systems
theory. However, the scale of problems that can be addressed reliably and
efficiently is still limited. In this paper, we introduce a new notion of
\emph{block factor-width-two matrices} and build a new hierarchy of inner and
outer approximations of the cone of positive semidefinite (PSD) matrices. This
notion is a block extension of the standard factor-width-two matrices, and
allows for an improved inner-approximation of the PSD cone. In the context of
SOS optimization, this leads to a block extension of the \emph{scaled
diagonally dominant sum-of-squares (SDSOS)} polynomials. By varying the matrix
partition, the notion of block factor-width-two matrices can balance the
trade-off between computational scalability and solution quality when solving
semidefinite and SOS optimization problems. Numerical experiments on large-scale
instances confirm our theoretical findings.
Comment: 26 pages, 5 figures. Added a new section on the approximation quality
analysis using block factor-width-two matrices. Code is available through
https://github.com/zhengy09/SDPf
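A factor-width-two matrix is a sum of PSD matrices each supported on a single $2\times 2$ principal submatrix. As a minimal illustration (plain Python, names hypothetical), any symmetric diagonally dominant matrix with nonnegative diagonal admits such a decomposition explicitly: one $2\times 2$ PSD term per off-diagonal entry plus a nonnegative diagonal remainder. The block notion in the paper generalizes these $2\times 2$ supports to larger partition blocks.

```python
def fw2_decomposition(A):
    """Split a symmetric diagonally dominant matrix with nonnegative
    diagonal into 2x2-supported PSD terms plus a nonnegative diagonal
    remainder, certifying that A has factor-width at most two."""
    n = len(A)
    terms = []  # each term: (i, j, 2x2 matrix) placed on rows/cols i, j
    for i in range(n):
        for j in range(i + 1, n):
            if A[i][j] != 0:
                a = abs(A[i][j])
                # [[a, A_ij], [A_ij, a]] has eigenvalues 0 and 2a: PSD.
                terms.append((i, j, [[a, A[i][j]], [A[i][j], a]]))
    remainder = [A[i][i] - sum(abs(A[i][j]) for j in range(n) if j != i)
                 for i in range(n)]
    return terms, remainder

def reassemble(terms, remainder):
    """Sum the 2x2 terms and the diagonal remainder back into a matrix."""
    n = len(remainder)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        S[i][i] = remainder[i]
    for i, j, B in terms:
        S[i][i] += B[0][0]; S[i][j] += B[0][1]
        S[j][i] += B[1][0]; S[j][j] += B[1][1]
    return S

A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]   # symmetric, diagonally dominant
terms, rem = fw2_decomposition(A)
```

Replacing the PSD constraint by such a decomposition turns one large conic constraint into many tiny ones, which is the source of the scalability gain, at the cost of conservatism that the block extension mitigates.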
A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map Super-Resolution
High-resolution depth maps can be inferred from low-resolution depth
measurements and an additional high-resolution intensity image of the same
scene. To that end, we introduce a bimodal co-sparse analysis model, which is
able to capture the interdependency of registered intensity and depth
information. This model is based on the assumption that the co-supports of
corresponding bimodal image structures are aligned when computed by a suitable
pair of analysis operators. No analytic form of such operators exists, and we
propose a method for learning them from a set of registered training signals.
This learning process is done offline and returns a bimodal analysis operator
that is universally applicable to natural scenes. We use the learned operator
to apply the bimodal co-sparse analysis model as a prior for solving inverse
problems, which leads to an efficient algorithm for depth map super-resolution.
Comment: 13 pages, 4 figures
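The idea of coupling depth and intensity through aligned co-supports can be caricatured in 1D: penalize depth gradients everywhere except where the guide (intensity) signal itself has an edge. The quadratic energy, hand-set weights, and sizes below are illustrative stand-ins, not the learned analysis operators of the paper.

```python
# 1D toy: recover a high-res depth profile from 4x-downsampled samples,
# using intensity edges to decide where depth jumps are allowed.
n, stride, lam, lr, iters = 16, 4, 1.0, 0.05, 4000

intensity = [0.0] * 8 + [1.0] * 8                    # guide signal, one edge
samples = {i: (0.0 if i < 8 else 1.0) for i in range(0, n, stride)}

# Smoothness weight per gradient: tiny where the intensity has an edge,
# mimicking a shared (co-)support of depth and intensity edges.
w = [0.01 if abs(intensity[i + 1] - intensity[i]) > 0.5 else 1.0
     for i in range(n - 1)]

# Minimize sum_i (x_i - y_i)^2 over samples + lam * sum_i w_i (x_{i+1}-x_i)^2
# by plain gradient descent on the convex quadratic.
x = [0.5] * n                                        # initial depth estimate
for _ in range(iters):
    g = [0.0] * n
    for i, yi in samples.items():                    # data-fidelity term
        g[i] += 2.0 * (x[i] - yi)
    for i in range(n - 1):                           # weighted gradient penalty
        d = 2.0 * lam * w[i] * (x[i + 1] - x[i])
        g[i] -= d
        g[i + 1] += d
    x = [xi - lr * gi for xi, gi in zip(x, g)]
```

The reconstruction stays flat where the guide is flat and jumps only at the guide's edge, the qualitative behavior the learned bimodal operator provides for real scenes.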
The effect of massive neutrinos on the Sunyaev-Zeldovich and X-ray observables of galaxy clusters
Massive neutrinos are expected to influence the formation of the large-scale
structure of the Universe, depending on the value of their total mass,
$\Sigma m_\nu$. In particular, Planck data indicate that a non-zero
$\Sigma m_\nu$ may help to reconcile CMB data with Sunyaev-Zel'dovich (SZ)
cluster surveys. In order to study the impact of neutrinos on the SZ and X-ray
cluster properties we run a set of six very large cosmological simulations
(8 $h^{-3}$ Gpc$^3$ comoving volume) that include a massive neutrino particle
component: we consider the values of $\Sigma m_\nu$ = (0, 0.17, 0.34) eV in two
cosmological scenarios to test possible degeneracies. Using the halo catalogues
extracted from their outputs we produce 50 mock light-cones and, assuming
suitable scaling relations, we determine how massive neutrinos affect SZ and
X-ray cluster counts, the $y$-parameter and its power spectrum. We provide
forecasts for the South Pole Telescope (SPT) and eROSITA cluster surveys,
showing that the number of expected detections is reduced by 40 per cent when
assuming $\Sigma m_\nu$ = 0.34 eV with respect to a model with massless
neutrinos. However, the degeneracy with $\sigma_8$ and $\Omega_m$ is strong, in
particular for X-ray data, requiring the use of additional probes to break it.
The $y$-parameter properties are also highly influenced by the neutrino mass
fraction $f_\nu$: considering the cluster component only, both the mean
$y$-parameter and the normalization of the SZ power spectrum decrease as
$f_\nu$ increases. Comparing our findings with SPT and Atacama Cosmology
Telescope measurements at $\ell$ = 3000 indicates that, when Planck
cosmological parameters are assumed, a non-zero value of $\Sigma m_\nu$ is
required to fit the data.
Comment: 13 pages, 10 figures, 3 tables. Accepted for publication by MNRAS. Substantial revisions after reviewer's comments
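The neutrino mass fraction driving these effects follows from the standard relation $\Omega_\nu h^2 = \Sigma m_\nu / 93.14\,\mathrm{eV}$, and in linear theory the small-scale matter power is suppressed by roughly $\Delta P/P \approx -8 f_\nu$. A quick sketch (the $\Omega_m$ and $h$ values are illustrative defaults, not the paper's exact cosmologies):

```python
def neutrino_fraction(sum_mnu_eV, omega_m=0.3, h=0.7):
    """f_nu = Omega_nu / Omega_m, with Omega_nu h^2 = sum(m_nu) / 93.14 eV."""
    omega_nu = sum_mnu_eV / 93.14 / h**2
    return omega_nu / omega_m

# The three neutrino masses simulated in the paper.
for mnu in (0.0, 0.17, 0.34):
    f = neutrino_fraction(mnu)
    print(f"sum m_nu = {mnu:4.2f} eV -> f_nu = {f:.4f}, "
          f"approx. small-scale linear power suppression ~ {8 * f:.1%}")
```

A few-percent mass fraction thus suppresses small-scale power at the tens-of-percent level, which is why cluster counts and the SZ power spectrum respond so strongly to $\Sigma m_\nu$.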
Interior Point Decoding for Linear Vector Channels
In this paper, a novel decoding algorithm for low-density parity-check (LDPC)
codes based on convex optimization is presented. The decoding algorithm, called
interior point decoding, is designed for linear vector channels. Linear
vector channels include many practically important channels, such as
intersymbol interference channels and partial response channels. It is shown
that the maximum likelihood decoding (MLD) rule for a linear vector channel can
be relaxed to a convex optimization problem, called the relaxed MLD problem.
The proposed decoding algorithm is based on a numerical optimization technique,
the interior point method with barrier functions. Approximate variants of the
gradient descent and Newton methods are used to solve the convex optimization
problem. During decoding, the search point always lies in the fundamental
polytope defined by the low-density parity-check matrix. Compared with a
conventional joint message-passing decoder, the proposed decoding algorithm
achieves better BER performance with lower complexity in many partial response
channel settings.
Comment: 18 pages, 17 figures. The paper has been submitted to IEEE Transactions on Information Theory
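The interior point idea can be sketched for a toy single-parity-check code: relax ML decoding over the fundamental polytope (here just the parity polytope of one check intersected with the unit box) and descend a barrier-augmented objective. The AWGN/BPSK model, code length 3, step sizes, and barrier weight below are ad-hoc choices, a caricature of the approach rather than the paper's actual algorithm.

```python
import math
from itertools import combinations

n = 3
# Odd-size subsets S of the single check {0,1,2} give the fundamental-polytope
# inequalities: sum_{i in S} x_i - sum_{i not in S} x_i <= |S| - 1.
odd_subsets = [S for k in (1, 3) for S in combinations(range(n), k)]

def constraints(x):
    """Slack of every polytope and box inequality (must stay positive)."""
    slacks = []
    for S in odd_subsets:
        g = sum(x[i] if i in S else -x[i] for i in range(n))
        slacks.append(len(S) - 1 - g)
    slacks += [xi for xi in x] + [1.0 - xi for xi in x]
    return slacks

def objective(x, y, mu):
    """Relaxed ML cost ||y - (1 - 2x)||^2 plus log-barrier."""
    cost = sum((yi - (1.0 - 2.0 * xi)) ** 2 for xi, yi in zip(x, y))
    return cost - mu * sum(math.log(s) for s in constraints(x))

def grad(x, y, mu, eps=1e-6):
    """Central-difference gradient (keeps the sketch short)."""
    g = []
    for i in range(n):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((objective(xp, y, mu) - objective(xm, y, mu)) / (2 * eps))
    return g

y = [0.6, 0.7, 0.65]     # noisy BPSK observation of codeword (0,0,0) -> (+1,+1,+1)
x, mu = [0.5] * n, 0.01  # start at the polytope's interior
for _ in range(500):
    g = grad(x, y, mu)
    step = 0.05
    cand = [xi - step * gi for xi, gi in zip(x, g)]
    while min(constraints(cand)) <= 0:     # backtrack to stay strictly interior
        step /= 2
        cand = [xi - step * gi for xi, gi in zip(x, g)]
    x = cand
bits = [int(xi > 0.5) for xi in x]
```

The backtracking line keeps the search point strictly inside the fundamental polytope throughout, which is the defining property of the interior point decoder.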
ROBAST: Development of a ROOT-Based Ray-Tracing Library for Cosmic-Ray Telescopes and its Applications in the Cherenkov Telescope Array
We have developed a non-sequential ray-tracing simulation library, the
ROOT-based simulator for ray tracing (ROBAST), which is intended for wide use
in optical simulations of cosmic-ray (CR) and gamma-ray telescopes. The library
is written in C++ and makes full use of the geometry library of the ROOT
framework. Despite the importance of optics simulations in CR experiments, no
open-source ray-tracing software that can be widely used in the community had
previously existed. To spare different research groups the redundant effort of
developing their own ray-tracing simulators, we have successfully used ROBAST
for many years to perform optics simulations for the Cherenkov Telescope Array
(CTA). Among the six proposed telescope designs for CTA, ROBAST is currently
used for three: a Schwarzschild-Couder (SC) medium-sized telescope, one of the
SC small-sized telescopes, and a large-sized telescope (LST). ROBAST is also
used for the simulation and development of hexagonal light concentrators
proposed for the LST focal plane. Combining the ROOT geometry library with
additional ROBAST classes, we are able to build the complex optics geometries
typically used in CR experiments and ground-based gamma-ray telescopes. We
introduce ROBAST and its features developed for CR experiments, and show
several successful applications for CTA.
Comment: Accepted for publication in Astroparticle Physics. 11 pages, 10 figures, 4 tables
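The core loop of any such ray tracer, intersect, compute the surface normal, reflect, can be shown in miniature with a 2D parabolic mirror $z = x^2/(4f)$, which focuses vertical on-axis rays exactly at height $f$. A sketch (the focal length and ray positions are arbitrary; real ROBAST geometries are built from ROOT solids in C++):

```python
import math

def reflect(d, nvec):
    """Reflect direction d about the unit normal nvec: d - 2 (d.n) n."""
    dot = d[0] * nvec[0] + d[1] * nvec[1]
    return (d[0] - 2 * dot * nvec[0], d[1] - 2 * dot * nvec[1])

def focus_crossing(x0, f):
    """Trace a vertical downward ray hitting the mirror z = x^2/(4f) at
    abscissa x0; return the height at which the reflected ray crosses
    the optical axis x = 0."""
    z0 = x0 * x0 / (4 * f)                 # intersection with the mirror
    nx, nz = -x0 / (2 * f), 1.0            # gradient of z - x^2/(4f)
    norm = math.hypot(nx, nz)
    nvec = (nx / norm, nz / norm)          # unit surface normal
    rx, rz = reflect((0.0, -1.0), nvec)    # reflected direction
    t = -x0 / rx                           # parameter where the ray hits x = 0
    return z0 + t * rz

f = 2.5
heights = [focus_crossing(x0, f) for x0 in (0.1, 0.5, 1.0, 2.0)]
```

Every ray, regardless of its distance from the axis, crosses the axis at exactly $z = f$, the textbook property a ray-tracing library can be validated against before tackling segmented Schwarzschild-Couder optics.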