
    Approximate Euclidean shortest paths in polygonal domains

    Given a set $\mathcal{P}$ of $h$ pairwise disjoint simple polygonal obstacles in $\mathbb{R}^2$ defined with $n$ vertices, we compute a sketch $\Omega$ of $\mathcal{P}$ whose size is independent of $n$, depending only on $h$ and the input parameter $\epsilon$. We use $\Omega$ to compute a $(1+\epsilon)$-approximate geodesic shortest path between two given points in $O(n + h(\lg n + (\lg h)^{1+\delta} + \frac{1}{\epsilon}\lg\frac{h}{\epsilon}))$ time. Here, $\epsilon$ is a user parameter and $\delta$ is a small positive constant (arising from the time needed to triangulate the free space of $\mathcal{P}$ using the algorithm in \cite{journals/ijcga/Bar-YehudaC94}). Moreover, we devise a $(2+\epsilon)$-approximation algorithm to answer two-point Euclidean distance queries for the case of convex polygonal obstacles.
    Comment: a few updates; accepted to ISAAC 201
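    For context, below is a minimal sketch of the classical baseline that results like this improve on: an exact shortest path computed over the visibility graph with Dijkstra's algorithm, whose cost grows with $n$ rather than with $h$ and $\epsilon$. This is an assumed illustration, not the paper's algorithm; the helpers (visible, shortest_path) are our own names, and the midpoint-in-polygon test used to reject chords is only adequate for convex obstacles.

    # Visibility-graph shortest path among polygonal obstacles (illustrative
    # baseline only; the paper's sketch-based algorithm avoids this O(n^2)-ish
    # dependence on the vertex count).
    import heapq
    import math

    def ccw(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def segments_cross(p, q, a, b):
        # Proper crossing test; shared endpoints do not count as crossings.
        if p in (a, b) or q in (a, b):
            return False
        d1, d2 = ccw(a, b, p), ccw(a, b, q)
        d3, d4 = ccw(p, q, a), ccw(p, q, b)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    def inside(poly, pt):
        # Ray-casting point-in-polygon test.
        n, c = len(poly), False
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if (y1 > pt[1]) != (y2 > pt[1]) and \
               pt[0] < (x2 - x1) * (pt[1] - y1) / (y2 - y1) + x1:
                c = not c
        return c

    def visible(p, q, obstacles):
        # p sees q iff segment pq crosses no obstacle edge and is not a chord
        # through an obstacle's interior (midpoint test: convex obstacles).
        for poly in obstacles:
            n = len(poly)
            if any(segments_cross(p, q, poly[i], poly[(i + 1) % n]) for i in range(n)):
                return False
            if inside(poly, ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)):
                return False
        return True

    def shortest_path(s, t, obstacles):
        # Dijkstra over the visibility graph on {s, t} + obstacle vertices.
        nodes = [s, t] + [v for poly in obstacles for v in poly]
        dist = {v: math.inf for v in nodes}
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist[u]:
                continue
            for v in nodes:
                if v != u and visible(u, v, obstacles):
                    nd = d + math.dist(u, v)
                    if nd < dist[v]:
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
        return math.inf

    # One square obstacle between s and t; the path must bend around a corner.
    obstacles = [[(1.0, -1.0), (2.0, -1.0), (2.0, 1.0), (1.0, 1.0)]]
    print(shortest_path((0.0, 0.0), (3.0, 0.0), obstacles))  # 1 + 2*sqrt(2)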

    Relativistic MHD Simulations of Jets with Toroidal Magnetic Fields

    This paper presents an application of the recent relativistic HLLC approximate Riemann solver by Mignone & Bodo to magnetized flows with a vanishing normal component of the magnetic field. The numerical scheme is validated in two dimensions by investigating the propagation of axisymmetric jets with toroidal magnetic fields. The selected jet models show that the HLLC solver yields sharper resolution of contact and shear waves and better convergence properties than the traditional HLL approach.
    Comment: 12 pages, 5 figures
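    As background, the standard single-state HLL flux (textbook notation assumed here, not taken from the paper) averages over the whole Riemann fan between the fastest left- and right-going signal speeds $S_L$ and $S_R$:

    \[
    F^{\mathrm{HLL}} =
    \begin{cases}
    F_L, & 0 \le S_L,\\
    \dfrac{S_R F_L - S_L F_R + S_L S_R\,(U_R - U_L)}{S_R - S_L}, & S_L < 0 < S_R,\\
    F_R, & S_R \le 0.
    \end{cases}
    \]

    The HLLC variant restores the middle (contact) wave by inserting an intermediate speed $S^*$ that splits the fan into two states $U^*_L$ and $U^*_R$, which is what produces the sharper resolution of contact and shear waves reported above.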

    Reducing the size and number of linear programs in a dynamic Gr\"obner basis algorithm

    The dynamic algorithm to compute a Gr\"obner basis is nearly twenty years old, yet it seems to have arrived stillborn; aside from two initial publications, there have been no published follow-ups. One reason for this may be that, at first glance, the added overhead seems to outweigh the benefit: the algorithm must solve many linear programs with many linear constraints. This paper describes two methods that reduce this cost substantially.
    Comment: 11 figures, of which half are algorithms; submitted to journal for refereeing, December 201
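    A hedged sketch of the kind of linear program involved (our formulation of the standard dynamic-Buchberger subproblem, not necessarily the paper's reduced version): to make the monomial with exponent vector $a$ the leading term of a polynomial, find a strictly positive weight vector $w$ with $\langle w, a\rangle > \langle w, b\rangle$ for every other exponent vector $b$.

    # Feasibility LP for choosing a term-order weight vector (illustrative
    # formulation; the paper is about shrinking exactly these LPs).
    from scipy.optimize import linprog

    def weight_vector_for_leading_term(a, others):
        n = len(a)
        # <w, b - a> <= -1 encodes the strict inequality with a unit margin,
        # and w_i >= 1 keeps the weights strictly positive.
        A_ub = [[b[i] - a[i] for i in range(n)] for b in others]
        b_ub = [-1.0] * len(others)
        res = linprog(c=[1.0] * n,  # any feasible w works; minimize its total
                      A_ub=A_ub, b_ub=b_ub, bounds=[(1.0, None)] * n)
        return list(res.x) if res.success else None

    # Make x^2*y beat x*y^2 and y^3: needs x weighted more heavily than y.
    print(weight_vector_for_leading_term((2, 1), [(1, 2), (0, 3)]))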

    Block Factor-width-two Matrices and Their Applications to Semidefinite and Sum-of-squares Optimization

    Semidefinite and sum-of-squares (SOS) optimization are fundamental computational tools in many areas, including linear and nonlinear systems theory. However, the scale of problems that can be addressed reliably and efficiently is still limited. In this paper, we introduce a new notion of \emph{block factor-width-two matrices} and build a new hierarchy of inner and outer approximations of the cone of positive semidefinite (PSD) matrices. This notion is a block extension of the standard factor-width-two matrices and allows for an improved inner approximation of the PSD cone. In the context of SOS optimization, it leads to a block extension of the \emph{scaled diagonally dominant sum-of-squares (SDSOS)} polynomials. By varying the matrix partition, the notion of block factor-width-two matrices can balance the trade-off between computational scalability and solution quality when solving semidefinite and SOS optimization problems. Numerical experiments on large-scale instances confirm our theoretical findings.
    Comment: 26 pages, 5 figures. Added a new section on the approximation quality analysis using block factor-width-two matrices. Code is available through https://github.com/zhengy09/SDPf
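    The following sketch shows how such a constraint might be imposed in practice, assuming the natural definition of the block factor-width-two cone (a sum of PSD matrices, each supported on one pair of partition blocks); the helper name block_fw2_constraints and the toy instance are ours, not the paper's code.

    import cvxpy as cp
    import numpy as np

    def block_fw2_constraints(M, sizes):
        """Force M into the block factor-width-two cone for this partition."""
        n = sum(sizes)
        off = np.cumsum([0] + sizes)
        terms = []
        for i in range(len(sizes)):
            for j in range(i + 1, len(sizes)):
                k = sizes[i] + sizes[j]
                # E_ij selects blocks i and j; X_ij is PSD on those blocks only.
                E = np.zeros((k, n))
                E[:sizes[i], off[i]:off[i] + sizes[i]] = np.eye(sizes[i])
                E[sizes[i]:, off[j]:off[j] + sizes[j]] = np.eye(sizes[j])
                X = cp.Variable((k, k), PSD=True)
                terms.append(E.T @ X @ E)
        return [M == sum(terms)]

    # Feasibility check for a diagonally dominant 4x4 matrix, partition 2+1+1.
    A = np.array([[4., 1., 0., 1.],
                  [1., 4., 1., 0.],
                  [0., 1., 4., 1.],
                  [1., 0., 1., 4.]])
    prob = cp.Problem(cp.Minimize(0), block_fw2_constraints(A, [2, 1, 1]))
    prob.solve()
    print(prob.status)  # expected: optimal (A is SDD, hence in the cone)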

    A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map Super-Resolution

    High-resolution depth maps can be inferred from low-resolution depth measurements and an additional high-resolution intensity image of the same scene. To that end, we introduce a bimodal co-sparse analysis model, which is able to capture the interdependency of registered intensity and depth information. This model is based on the assumption that the co-supports of corresponding bimodal image structures are aligned when computed by a suitable pair of analysis operators. No analytic form of such operators exists, so we propose a method for learning them from a set of registered training signals. This learning process is done offline and returns a bimodal analysis operator that is universally applicable to natural scenes. We then exploit the bimodal co-sparse analysis model as a prior for solving inverse problems, which leads to an efficient algorithm for depth map super-resolution.
    Comment: 13 pages, 4 figures
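    Schematically (our notation, patterned on the standard co-sparse analysis literature rather than copied from the paper), the model assumes that for a suitable operator pair $(\Omega_d, \Omega_I)$ the co-supports of $\Omega_d u$ and $\Omega_I I$ are aligned for registered depth $u$ and intensity $I$, which suggests recovering the high-resolution depth map as

    \[
    \hat{u} = \arg\min_{u}\; \|A u - y\|_2^2 + \lambda\, g(\Omega_d u,\ \Omega_I I),
    \]

    where $A$ is the downsampling operator, $y$ the low-resolution depth measurements, and $g$ a penalty that jointly promotes sparsity of $\Omega_d u$ and alignment of its support with that of $\Omega_I I$.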

    The effect of massive neutrinos on the Sunyaev-Zeldovich and X-ray observables of galaxy clusters

    Massive neutrinos are expected to influence the formation of the large-scale structure of the Universe, depending on the value of their total mass, $\Sigma m_\nu$. In particular, Planck data indicate that a non-zero $\Sigma m_\nu$ may help to reconcile CMB data with Sunyaev-Zel'dovich (SZ) cluster surveys. In order to study the impact of neutrinos on the SZ and X-ray cluster properties, we run a set of six very large cosmological simulations ($8\,h^{-3}\,\mathrm{Gpc}^3$ comoving volume) that include a massive neutrino particle component: we consider the values $\Sigma m_\nu = (0, 0.17, 0.34)$ eV in two cosmological scenarios to test possible degeneracies. Using the halo catalogues extracted from their outputs, we produce 50 mock light-cones and, assuming suitable scaling relations, we determine how massive neutrinos affect SZ and X-ray cluster counts, the $y$-parameter and its power spectrum. We provide forecasts for the South Pole Telescope (SPT) and eROSITA cluster surveys, showing that the number of expected detections is reduced by 40 per cent when assuming $\Sigma m_\nu = 0.34$ eV with respect to a model with massless neutrinos. However, the degeneracy with $\sigma_8$ and $\Omega_m$ is strong, in particular for X-ray data, requiring the use of additional probes to break it. The $y$-parameter properties are also highly influenced by the neutrino mass fraction, $f_\nu$, scaling as $\propto(1-f_\nu)^{20}$ when considering the cluster component only, while the normalization of the SZ power spectrum is proportional to $(1-f_\nu)^{25-30}$. Comparing our findings with SPT and Atacama Cosmology Telescope measurements at $\ell = 3000$ indicates that, when Planck cosmological parameters are assumed, a value of $\Sigma m_\nu \simeq 0.34$ eV is required to fit the data.
    Comment: 13 pages, 10 figures, 3 tables. Accepted for publication by MNRAS. Substantial revisions after reviewer's comments
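    As a back-of-envelope check of the quoted scalings (assuming the standard relation $\Omega_\nu h^2 \simeq \Sigma m_\nu / 93.14\,\mathrm{eV}$ and Planck-like values $\Omega_m \approx 0.31$, $h \approx 0.67$, which are our assumptions rather than the paper's exact parameters):

    # Neutrino mass fraction and the suppression factors quoted above.
    sum_mnu = 0.34                      # eV
    h, omega_m = 0.67, 0.31
    f_nu = (sum_mnu / 93.14) / h**2 / omega_m
    print(f"f_nu = {f_nu:.4f}")                              # ~0.026
    print(f"(1 - f_nu)^20    = {(1 - f_nu)**20:.2f}")        # ~0.59
    print(f"(1 - f_nu)^25-30 = {(1 - f_nu)**25:.2f} .. {(1 - f_nu)**30:.2f}")

    A suppression factor of roughly 0.6 for $\Sigma m_\nu = 0.34$ eV is of the same order as the 40 per cent reduction in expected detections quoted above.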

    Interior Point Decoding for Linear Vector Channels

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. Linear vector channels include many practically important channels, such as intersymbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, called a relaxed MLD problem. The proposed decoding algorithm is based on a numerical optimization technique, the interior point method with a barrier function. Approximate variants of the gradient descent and Newton methods are used to solve the convex optimization problem. Throughout a decoding process of the proposed algorithm, the search point always lies in the fundamental polytope defined by the low-density parity-check matrix. Compared with a conventional joint message-passing decoder, the proposed decoding algorithm achieves better BER performance with less complexity for partial response channels in many cases.
    Comment: 18 pages, 17 figures. The paper has been submitted to IEEE Transactions on Information Theory
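    A hedged sketch of the underlying relaxation for the memoryless AWGN special case: Feldman-style LP decoding over the fundamental polytope of a tiny parity-check code, solved here with an off-the-shelf LP solver rather than the paper's tailored interior point iterations, and omitting the linear vector channel part.

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def lp_decode(llr):
        # Fundamental polytope: for each check and each odd-sized subset S of
        # its neighborhood N, require sum_{S} x_i - sum_{N\S} x_i <= |S| - 1.
        n = H.shape[1]
        A_ub, b_ub = [], []
        for row in H:
            nbrs = np.flatnonzero(row)
            for k in range(1, len(nbrs) + 1, 2):
                for S in combinations(nbrs, k):
                    a = np.zeros(n)
                    a[list(nbrs)] = -1.0
                    a[list(S)] = 1.0
                    A_ub.append(a)
                    b_ub.append(len(S) - 1.0)
        res = linprog(llr, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * n)
        return res.x

    # All-zero codeword over AWGN; a positive LLR favors bit = 0.
    llr = np.array([2.1, -0.4, 1.7, 0.9, 1.2, 0.3])
    print(np.round(lp_decode(llr), 3))  # decodes to the all-zero codeword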

    ROBAST: Development of a ROOT-Based Ray-Tracing Library for Cosmic-Ray Telescopes and its Applications in the Cherenkov Telescope Array

    We have developed a non-sequential ray-tracing simulation library, ROOT-based simulator for ray tracing (ROBAST), which is intended to be widely applicable to optical simulations of cosmic-ray (CR) and gamma-ray telescopes. The library is written in C++ and fully utilizes the geometry library of the ROOT framework. Despite the importance of optics simulations in CR experiments, no open-source ray-tracing software that can be widely used in the community had been available. To spare different research groups the redundant effort of developing their own ray-tracing simulators, we have successfully used ROBAST for many years to perform optics simulations for the Cherenkov Telescope Array (CTA). Among the six proposed telescope designs for CTA, ROBAST is currently used for three: a Schwarzschild-Couder (SC) medium-sized telescope, one of the SC small-sized telescope designs, and a large-sized telescope (LST). ROBAST is also used for the simulation and development of hexagonal light concentrators proposed for the LST focal plane. Making full use of the ROOT geometry library together with additional ROBAST classes, we are able to build the complex optics geometries typically used in CR experiments and ground-based gamma-ray telescopes. We introduce ROBAST and its features developed for CR experiments, and show several successful applications for CTA.
    Comment: Accepted for publication in Astroparticle Physics. 11 pages, 10 figures, 4 tables
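    To illustrate what "non-sequential" means here (every surface is intersection-tested for every ray and the nearest hit wins, rather than rays being forced through surfaces in a fixed order), a toy sketch follows; it is unrelated to ROBAST's actual C++/ROOT API, and the surface list and names are invented for the example.

    import numpy as np

    def ray_sphere(origin, direction, center, radius):
        """Smallest positive t with origin + t*direction on the sphere, else None."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)      # direction assumed unit-length
        c = np.dot(oc, oc) - radius**2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        for t in ((-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0):
            if t > 1e-9:
                return t
        return None

    def trace(origin, direction, surfaces):
        # Non-sequential search: test every surface, keep the nearest hit.
        hits = [(t, s) for s in surfaces
                if (t := ray_sphere(origin, direction, s["center"], s["radius"])) is not None]
        return min(hits, default=None, key=lambda h: h[0])

    surfaces = [{"name": "primary", "center": np.array([0., 0., 10.]), "radius": 4.0},
                {"name": "baffle", "center": np.array([0., 0., 3.]), "radius": 1.0}]
    hit = trace(np.array([0., 0., 0.]), np.array([0., 0., 1.]), surfaces)
    print(hit[1]["name"], round(hit[0], 3))  # the nearer "baffle" wins at t = 2.0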