
    PURIFY: a new approach to radio-interferometric imaging

    In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a minimization problem for image reconstruction. This approach was shown, in theory and through simulations in a simple discrete visibility setting, to have the potential to significantly outperform CLEAN and its evolutions. In this work, we leverage the versatility of convex optimization in solving minimization problems both to handle realistic continuous visibilities and to offer a highly parallelizable structure, paving the way to significant acceleration of the reconstruction and to high-dimensional data scalability. The new algorithmic structure relies on the simultaneous-direction method of multipliers (SDMM), and contrasts with the current major-minor cycle structure of CLEAN and its evolutions, which in particular cannot handle the state-of-the-art minimization problems under consideration, where neither the regularization term nor the data term is a differentiable function. We release a beta version of an SDMM-based imaging software written in C and dubbed PURIFY (http://basp-group.github.io/purify/) that handles various sparsity priors, including our recent average sparsity approach SARA. We evaluate the performance of different priors through simulations in the continuous visibility setting, confirming the superiority of SARA.
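
    To make concrete how a sparsity prior enters such a reconstruction, the minimal sketch below solves a simplified, unconstrained surrogate of the problem (ℓ1-regularised least squares via iterative soft-thresholding). It is not the paper's SDMM solver; the measurement operator `Phi`, its adjoint `PhiT`, and the step size are placeholder assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 for real-valued arrays."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_reconstruct(y, Phi, PhiT, n_pix, lam=1e-2, step=1.0, n_iter=200):
    """Toy ISTA loop for min_x 0.5*||y - Phi(x)||^2 + lam*||x||_1.
    Phi / PhiT are callables implementing a (real-valued) measurement operator
    and its adjoint; `step` should be below 1 / ||Phi||^2 (assumed known)."""
    x = np.zeros(n_pix)
    for _ in range(n_iter):
        grad = PhiT(Phi(x) - y)                          # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)  # shrinkage step (prox of the l1 prior)
    return x
```

    In the paper's own setting neither the data term nor the regularizer is differentiable, which is precisely why a fully proximal splitting method such as SDMM is used there instead of a gradient-based scheme like the one sketched above.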

    CONTENT RATING SYSTEM FOR NETWORK-BASED SOLUTIONS

    Constraints (for example, data privacy limitations) may limit the ability to measure the effectiveness of a piece of content (for example, a Uniform Resource Locator (URL)) in solving a (e.g., network-related) problem. Current analytics solutions focus on providing data on how many times a site is visited, but there is a lack of data that correlates a site's visibility with its effectiveness at solving a problem. To address challenges of these types, techniques are presented herein that support a method for developing and safely exposing resolution ratings of internal and external URLs – based on, possibly among other things, information in ticket management or helpdesk systems – across various organizations through (e.g., open source) web analytics platforms. Aspects of the presented techniques may encompass, among other things, a URL Miner (which may tag URLs and create a specific user rating) and a Sanitized URL Rating (SUR) function.
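
    As a rough, purely hypothetical illustration of what such a rating pipeline might look like (the ticket schema, field names, and sanitisation rule below are assumptions, not the techniques actually presented), one could aggregate a per-URL resolution rate from ticket records and strip query strings before exposing it:

```python
from collections import defaultdict
from urllib.parse import urlsplit

def resolution_ratings(tickets):
    """Crude per-URL resolution rate from ticket records.
    Each ticket is assumed to look like {"urls": [...], "resolved": bool}."""
    hits, wins = defaultdict(int), defaultdict(int)
    for ticket in tickets:
        for url in ticket.get("urls", []):
            hits[url] += 1
            wins[url] += int(ticket.get("resolved", False))
    return {url: wins[url] / hits[url] for url in hits}

def sanitize_url(url):
    """Drop query string and fragment before exposing a rating externally."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}{parts.path}"
```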

    Voltage Profile Amplification and Attenuation of Real Power Loss by Using New Cuttlefish Algorithm

    This paper presents an algorithm for solving the multi-objective reactive power dispatch problem in a power system. Modal analysis of the system is used for static voltage stability assessment. Loss minimization and maximization of the voltage stability margin are taken as the objectives. Generator terminal voltages, reactive power generation of the capacitor banks and tap-changing transformer settings are taken as the optimization variables. Global optimization methods play an important role in solving many real-world problems. However, single methods are excessively restrictive for high-dimensional and nonlinear problems, especially in terms of the accuracy of the best solutions found and of convergence speed. In this paper, a New Cuttlefish Algorithm (NCFA) is proposed to solve the reactive power dispatch problem. The algorithm imitates the colour-changing behaviour of the cuttlefish. The patterns and colours seen in cuttlefish are produced by light reflected from different layers of cells (chromatophores, leucophores and iridophores) stacked together, and it is the combination of certain cells at once that allows the cuttlefish to produce such a wide array of patterns and colours. The proposed algorithm considers two key processes: reflection and visibility. The reflection process mimics the light-reflection mechanism used by these three layers, while the visibility process mimics the visibility of the matching pattern used by the cuttlefish. These two processes are used as a search strategy to find the global optimal solution. The proposed NCFA has been tested on the standard IEEE 30-bus test system, and the simulation results clearly show the better performance of the proposed algorithm in enhancing voltage stability and reducing real power loss. Keywords: Optimal Reactive Power, Transmission loss, Cuttlefish algorithm, Reflection, Visibility, Optimization, Chromatophores, Iridophores, Leucophores.
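
    The abstract does not reproduce the NCFA update equations, but a common single-group formulation of cuttlefish-style search combines a reflection term and a visibility term; the sketch below follows that generic pattern, with parameter ranges chosen purely for illustration rather than taken from the paper.

```python
import numpy as np

def cuttlefish_search(objective, dim, bounds, n_cells=30, n_iter=200, seed=0):
    """Simplified cuttlefish-style search: each candidate is updated as
    reflection + visibility, mirroring the two processes named in the abstract.
    The ranges of the reflection and visibility coefficients are assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    cells = rng.uniform(lo, hi, size=(n_cells, dim))
    best = min(cells, key=objective).copy()
    for _ in range(n_iter):
        R = rng.uniform(-1.0, 1.0, size=(n_cells, 1))    # reflection degree
        V = rng.uniform(-0.5, 0.5, size=(n_cells, 1))    # visibility degree
        candidates = R * cells + V * (best - cells)      # reflection + visibility
        candidates = np.clip(candidates, lo, hi)
        improved = [c for c in candidates if objective(c) < objective(best)]
        if improved:
            best = min(improved, key=objective).copy()
        cells = candidates
    return best
```

    For example, `cuttlefish_search(lambda x: float(np.sum(x**2)), dim=5, bounds=(-10, 10))` drives the candidates towards the origin; in the reactive power dispatch setting the objective would instead evaluate losses and voltage stability for a candidate set of control variables.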

    Light in Power: A General and Parameter-free Algorithm for Caustic Design

    We present in this paper a generic and parameter-free algorithm to efficiently build a wide variety of optical components, such as mirrors or lenses, that satisfy some light energy constraints. In all of our problems, one is given a collimated or point light source and a desired illumination after reflection or refraction, and the goal is to design the geometry of a mirror or lens which transports the light emitted by the source exactly onto the target. We first propose a general framework and show that eight different optical component design problems amount to solving a light energy conservation equation that involves the computation of visibility diagrams. We then show that these diagrams all have the same structure and can be obtained by intersecting a 3D Power diagram with a planar or spherical domain. This allows us to propose an efficient and fully generic algorithm capable of solving these eight optical component design problems. The support of the prescribed target illumination can be a set of directions or a set of points located at a finite distance. Our solutions satisfy design constraints such as convexity or concavity. We show the effectiveness of our algorithm on simulated and fabricated examples.
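
    To make the role of the Power diagram concrete, the snippet below shows only the power-distance assignment rule: each point goes to the site minimising ||x − p_i||² − w_i, which reduces to an ordinary Voronoi assignment when all weights are zero. The paper's actual construction, intersecting a 3D Power diagram with a planar or spherical domain, is not reproduced here.

```python
import numpy as np

def power_cell_labels(points, sites, weights):
    """Assign each query point to its power cell:
    argmin_i ||x - p_i||^2 - w_i  (an ordinary Voronoi labelling when weights == 0).
    points: (m, d) array, sites: (n, d) array, weights: (n,) array."""
    d2 = ((points[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    return np.argmin(d2 - weights[None, :], axis=1)
```

    Adjusting the weights moves the cell boundaries without moving the sites, which is what lets a transport-type energy conservation equation be solved over a fixed set of sites.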

    Solving Large-Scale Optimization Problems Related to Bell's Theorem

    The impossibility of finding local realistic models for quantum correlations due to entanglement is an important fact in the foundations of quantum physics, now gaining new applications in quantum information theory. We present an in-depth description of a method of testing the existence of such models, which involves two levels of optimization: a higher-level non-linear task and a lower-level linear programming (LP) task. The article compares the performance of the existing implementation of the method, where the LPs are solved with the simplex method, and our new implementation, where the LPs are solved with a matrix-free interior point method. We describe in detail how the latter can be applied to our problem, and discuss the basic scenario, possible improvements, and how they impact overall performance. A significant performance advantage of the matrix-free interior point method over the simplex method is confirmed by extensive computational results. The new method is able to solve problems which are orders of magnitude larger. Consequently, the noise resistance of the non-classicality of correlations of several types of quantum states, which has never been computed before, can now be efficiently determined. An extensive set of data in the form of tables and graphics is presented and discussed. The article is intended for all audiences; no quantum-mechanical background is necessary. Comment: 19 pages, 7 tables, 1 figure.
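
    As a toy illustration of the lower-level LP, the sketch below computes the largest visibility v for which the mixture v·c + (1 − v)·noise still admits a local model, given a matrix D whose columns are deterministic local behaviours. The matrices, and the use of SciPy's HiGHS backend rather than the paper's matrix-free interior point solver, are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def max_local_visibility(D, c, noise):
    """LP: maximise v in [0, 1] such that v*c + (1-v)*noise = D @ lam,
    with lam a probability vector over deterministic local strategies (columns of D)."""
    n_strat = D.shape[1]
    cost = np.zeros(n_strat + 1)
    cost[-1] = -1.0                                   # maximise v  <=>  minimise -v
    # Equalities: D @ lam - v*(c - noise) = noise, and sum(lam) = 1
    A_eq = np.hstack([D, -(c - noise)[:, None]])
    A_eq = np.vstack([A_eq, np.append(np.ones(n_strat), 0.0)])
    b_eq = np.append(noise, 1.0)
    bounds = [(0, None)] * n_strat + [(0, 1)]
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[-1] if res.success else None
```

    The higher-level non-linear task (e.g., optimising over measurement settings) would repeatedly call an LP of this form, which is why the speed and memory footprint of the LP solver dominate the overall cost.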

    Engineering Art Galleries

    The Art Gallery Problem is one of the most well-known problems in Computational Geometry, with a rich history in the study of algorithms, complexity, and variants. Recently there has been a surge in experimental work on the problem. In this survey, we describe this work, show the chronology of developments, and compare current algorithms, including two unpublished versions, in an exhaustive experiment. Furthermore, we show what core algorithmic ingredients have led to recent successes.

    Phaseless VLBI mapping of compact extragalactic radio sources

    The problem of phaseless aperture synthesis is of current interest in phase-unstable VLBI with a small number of elements, when either the use of closure phases is not possible (a two-element interferometer) or their quality and number are not sufficient for acceptable image reconstruction by standard adaptive calibration methods. We therefore discuss the problem of uniquely reconstructing an image only from the spectrum magnitude of a source. We suggest an efficient method for phaseless VLBI mapping of compact extragalactic radio sources. This method is based on reconstructing the spectrum magnitude of the source on the entire UV plane from the visibility magnitude measured on a limited set of points, and then reconstructing the sought-for image of the source by Fienup's method from the spectrum magnitude obtained at the first stage. We present the results of our mapping of the extragalactic radio source 2200+420 using astrometric and geodetic observations on a global VLBI array. Particular attention is given to studying the capabilities of a two-element interferometer in connection with the commissioning of a Russian-made radio interferometer based on Quasar RT-32 radio telescopes. Comment: 21 pages, 6 figures.
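
    The second stage, image reconstruction from a known spectrum magnitude, can be illustrated with the classic Fienup error-reduction loop sketched below, which alternately enforces the measured Fourier magnitude and image-domain support/non-negativity. The first stage (interpolating the visibility magnitude over the full UV plane) is not shown, and the constraints used here are generic assumptions rather than the paper's exact settings.

```python
import numpy as np

def fienup_error_reduction(magnitude, support, n_iter=500, seed=0):
    """Toy Fienup error-reduction loop.
    magnitude: known |FT(image)| on a full grid; support: boolean image-domain mask."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=magnitude.shape)
    spectrum = magnitude * np.exp(1j * phase)            # start from random phases
    for _ in range(n_iter):
        image = np.real(np.fft.ifft2(spectrum))
        image[~support] = 0.0                             # support constraint
        image[image < 0] = 0.0                            # non-negativity (brightness) constraint
        spectrum = np.fft.fft2(image)
        spectrum = magnitude * np.exp(1j * np.angle(spectrum))  # re-impose measured magnitude
    return image
```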

    Exploring multiple viewshed analysis using terrain features and optimisation techniques

    The calculation of viewsheds is a routine operation in geographic information systems and is used in a wide range of applications. Many of these involve the siting of features, such as radio masts, which are part of a network, and yet the selection of sites is normally done separately for each feature. The selection of a series of locations which collectively maximise the visual coverage of an area is a combinatorial problem and as such cannot be solved directly except in trivial cases. In this paper, two strategies for tackling this problem are explored. The first is to restrict the search to key topographic points in the landscape such as peaks, pits and passes. The second is to use heuristics which have been applied to other maximal-coverage spatial problems such as location-allocation. The results show that the use of these two strategies reduces the computing time required by two orders of magnitude, but at the cost of a 10% loss in the area viewed. Three different heuristics were used, of which simulated annealing produced the best results; however, its improvement over a much simpler fast-descent swap heuristic was very slight and came at the cost of greatly increased running times.
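
    A minimal sketch of a simulated-annealing swap heuristic of the kind compared here might look as follows; the viewshed representation (a set of visible cells per candidate site) and the cooling schedule are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def coverage(selected, viewsheds):
    """Number of terrain cells visible from at least one selected site.
    `viewsheds` maps each candidate site to the set of cells it can see."""
    seen = set()
    for site in selected:
        seen |= viewsheds[site]
    return len(seen)

def anneal_sites(viewsheds, k, n_iter=10000, t0=5.0, cooling=0.999, seed=0):
    """Simulated annealing with swap moves: replace one chosen site by an unused
    candidate, accepting worse configurations with a temperature-dependent probability."""
    rng = random.Random(seed)
    candidates = list(viewsheds)
    current = rng.sample(candidates, k)
    cur_val = coverage(current, viewsheds)
    best, best_val = list(current), cur_val
    t = t0
    for _ in range(n_iter):
        trial = list(current)
        trial[rng.randrange(k)] = rng.choice([c for c in candidates if c not in current])
        val = coverage(trial, viewsheds)
        # always accept improvements; accept worse moves with Boltzmann probability
        if val >= cur_val or rng.random() < math.exp((val - cur_val) / t):
            current, cur_val = trial, val
            if cur_val > best_val:
                best, best_val = list(current), cur_val
        t *= cooling
    return best, best_val
```

    A fast-descent variant of the same loop simply rejects all worsening swaps, which, as the abstract notes, performs almost as well here at a fraction of the running time.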