
    Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation

    A crucial limitation of current high-resolution 3D photoacoustic tomography (PAT) devices that employ sequential scanning is their long acquisition time. In previous work, we demonstrated how to use compressed sensing techniques to improve upon this: images with good spatial resolution and contrast can be obtained from suitably sub-sampled PAT data acquired by novel acoustic scanning systems if sparsity-constrained image reconstruction techniques such as total variation regularization are used. Now, we show how a further increase in image quality can be achieved for imaging dynamic processes in living tissue (4D PAT). The key idea is to exploit the additional temporal redundancy of the data by coupling the previously used spatial image reconstruction models with sparsity-constrained motion estimation models. While simulated data from a two-dimensional numerical phantom will be used to illustrate the main properties of this recently developed joint image-reconstruction-and-motion-estimation framework, measured data from a dynamic experimental phantom will also be used to demonstrate its potential for challenging, large-scale, real-world, three-dimensional scenarios. The latter only becomes feasible if a carefully designed combination of tailored optimization schemes is employed, which we describe and examine in more detail.
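    The spatial half of the joint model can be sketched in one dimension: a total-variation (TV) regularized reconstruction minimized by gradient descent on a smoothed TV term. Everything below (signal, weights, step sizes) is an illustrative toy, not the paper's actual solver.

    ```python
    # Minimal 1-D sketch of sparsity-constrained reconstruction with a
    # total-variation (TV) penalty, the spatial model the abstract couples
    # with motion estimation. All names and parameters are illustrative.

    def tv_reconstruct(y, lam=0.8, eps=1e-2, steps=500, lr=0.05):
        """Minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]| (smoothed)."""
        x = list(y)
        n = len(x)
        for _ in range(steps):
            grad = [x[i] - y[i] for i in range(n)]        # data-fidelity term
            for i in range(n - 1):
                d = x[i + 1] - x[i]
                g = lam * d / ((d * d + eps) ** 0.5)      # smoothed |.| slope
                grad[i] -= g
                grad[i + 1] += g
            x = [x[i] - lr * grad[i] for i in range(n)]
        return x

    # Piecewise-constant signal corrupted by a single spike.
    noisy = [1.0] * 5 + [3.0] + [1.0] * 4
    clean = tv_reconstruct(noisy)
    ```

    With a small smoothing parameter `eps` the penalty approximates true TV: the isolated spike is pulled toward its piecewise-constant neighbors while the flat regions stay nearly unchanged, which is the behavior the sparsity prior provides under sub-sampling.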

    Consensus-based approach to peer-to-peer electricity markets with product differentiation

    With the sustained deployment of distributed generation capacities and the more proactive role of consumers, power systems and their operation are drifting away from a conventional top-down hierarchical structure. Electricity market structures, however, have not yet embraced that evolution. Respecting the high-dimensional, distributed and dynamic nature of modern power systems would translate to designing peer-to-peer markets or, at least, to using such an underlying decentralized structure to enable a bottom-up approach to future electricity markets. A peer-to-peer market structure based on a Multi-Bilateral Economic Dispatch (MBED) formulation is introduced, allowing for multi-bilateral trading with product differentiation, for instance based on consumer preferences. A Relaxed Consensus+Innovation (RCI) approach is described to solve the MBED in a fully decentralized manner. A set of realistic case studies and their analysis allows us to show that such peer-to-peer market structures can effectively yield market outcomes that differ from those of centralized market structures and are optimal in terms of respecting consumer preferences while maximizing social welfare. Additionally, the RCI solving approach allows for a fully decentralized market clearing which converges with a negligible optimality gap, with a limited amount of information being shared.
    Comment: Accepted for publication in IEEE Transactions on Power Systems
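    The consensus+innovation idea can be sketched for a single bilateral trade: each peer best-responds to the current trade price, and the price is nudged by the quantity mismatch until buyer and seller agree. The quadratic cost/utility coefficients and step size below are illustrative assumptions, not the paper's RCI formulation.

    ```python
    # Toy price-consensus sketch for one bilateral trade: the "innovation"
    # step moves the price in proportion to the quantity mismatch.

    def bilateral_clear(c_prod=1.0, c_cons=1.0, util=10.0, rho=0.2, iters=200):
        """Drive buyer and seller quantities to agreement via a trade price."""
        price = 0.0
        for _ in range(iters):
            q_sell = price / c_prod            # producer's best response
            q_buy = (util - price) / c_cons    # consumer's best response
            price += rho * (q_buy - q_sell)    # innovation: quantity mismatch
        return price, q_sell, q_buy

    price, q_sell, q_buy = bilateral_clear()
    ```

    At the fixed point the mismatch vanishes, mirroring the consensus condition that matched bilateral quantities agree; with symmetric unit coefficients and `util = 10` the price converges to 5. Only the price and quantities cross the interface, which is the "limited information sharing" property the abstract highlights.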

    RAMA: A Rapid Multicut Algorithm on GPU

    We propose a highly parallel primal-dual algorithm for the multicut (a.k.a. correlation clustering) problem, a classical graph clustering problem widely used in machine learning and computer vision. Our algorithm consists of three steps executed recursively: (1) finding conflicted cycles that correspond to violated inequalities of the underlying multicut relaxation, (2) performing message passing between the edges and cycles to optimize the Lagrange relaxation coming from the found violated cycles, producing reduced costs, and (3) contracting edges with high reduced costs through matrix-matrix multiplications. Our algorithm produces primal solutions and dual lower bounds that estimate the distance to optimum. We implement our algorithm on GPUs and show resulting improvements in execution speed of one to two orders of magnitude, without sacrificing solution quality, compared to traditional serial algorithms that run on CPUs. We can solve very large scale benchmark problems with up to 10^8 variables in a few seconds with small primal-dual gaps. We make our code available at https://github.com/pawelswoboda/RAMA
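    Step (3), contracting edges with high attractive costs, can be illustrated with a serial, CPU-only toy: repeatedly merge the pair of components joined by the largest positive aggregated cost, stopping when only repulsive costs remain. This greedy sketch is a simplification of the paper's GPU contraction via matrix-matrix multiplication, with union-find standing in for the contracted graph.

    ```python
    # Toy greedy edge contraction for multicut: positive costs attract,
    # negative costs repel. Illustrative simplification, not RAMA itself.

    def multicut_contract(n, edges):
        """edges: dict mapping (u, v) -> cost. Returns a component label per node."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        while True:
            # Aggregate costs between current components (contracted graph).
            agg = {}
            for (u, v), c in edges.items():
                ru, rv = find(u), find(v)
                if ru != rv:
                    key = (min(ru, rv), max(ru, rv))
                    agg[key] = agg.get(key, 0.0) + c
            best = max(agg.items(), key=lambda kv: kv[1], default=None)
            if best is None or best[1] <= 0:
                break                          # only repulsive edges remain
            (ru, rv), _ = best
            parent[rv] = ru                    # contract the most attractive edge
        return [find(i) for i in range(n)]

    # Two attractive triangles joined by one strongly repulsive edge.
    E = {(0, 1): 2.0, (1, 2): 2.0, (0, 2): 2.0,
         (3, 4): 2.0, (4, 5): 2.0, (3, 5): 2.0,
         (2, 3): -5.0}
    labels = multicut_contract(6, E)
    ```

    On this instance contraction merges each triangle internally but never crosses the repulsive edge, so the final labeling cuts exactly that edge.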

    Risk-based security-constrained optimal power flow: Mathematical fundamentals, computational strategies, validation, and use within electricity markets

    This dissertation develops the mathematical fundamentals and computational strategies of risk-based security-constrained optimal power flow (RB-SCOPF) and validates its application in electricity markets. The RB-SCOPF enforces three types of flow-related constraints: normal state deterministic flow limits, contingency state deterministic flow limits (the N-1 criterion), and contingency state system risk, which depends only on contingency states but not the normal state. Each constraint group is scaled by a single parameter, allowing tradeoffs between deterministic constraints and system risk. Relative to the security-constrained optimal power flow (SCOPF) used in industry today, the RB-SCOPF finds operating conditions that are more secure and more economic. It does this by obtaining solutions that achieve a better balance between post-contingency flows on individual circuits and overall system risk. The method exploits the fact that, in a SCOPF solution, some post-contingency circuit flows which exceed their limits impose little risk while other post-contingency circuit flows which are within their limits impose significant risk. The RB-SCOPF softens constraints for the former and hardens constraints for the latter, thus achieving simultaneous improvement in both security and economy. Although the RB-SCOPF is more time-intensive to solve than the SCOPF, we have developed efficient algorithms that allow the RB-SCOPF to be solved quickly enough for use in real-time electricity markets. In contrast to the SCOPF, which motivates market behavior to offload circuit flows exceeding rated flows, the use of RB-SCOPF provides price signals that motivate market behavior to offload circuit flows and to enhance system-wide security levels. Voltage stability testing has demonstrated that the dispatch result based on RB-SCOPF has higher reactive margins in the normal state and after a contingency, and thus better static voltage-stability performance.
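    The core observation, that a within-limit flow in a likely contingency can carry more risk than an overloaded flow in a rare one, can be sketched with a probability-weighted severity measure. The piecewise-linear severity function and every number below are illustrative assumptions, not the dissertation's formulation.

    ```python
    # Toy severity/risk sketch behind the RB-SCOPF idea: each post-contingency
    # circuit flow contributes a severity that rises once loading passes a
    # fraction of the rating; system risk is the probability-weighted sum.

    def severity(flow, limit, start=0.9):
        """Piecewise-linear severity: zero below start*limit, rising above it."""
        loading = abs(flow) / limit
        return max(0.0, loading - start) / (1.0 - start)

    def system_risk(contingencies):
        """contingencies: list of (probability, [(flow, limit), ...])."""
        return sum(p * sum(severity(f, l) for f, l in circuits)
                   for p, circuits in contingencies)

    # Rare contingency with a 5% overload vs. likely contingency near the limit.
    risk_rare_overload = system_risk([(0.001, [(105.0, 100.0)])])
    risk_likely_loaded = system_risk([(0.10, [(98.0, 100.0)])])
    ```

    Here the within-limit but likely case carries far more risk than the rare overload, which is exactly why the RB-SCOPF softens the former kind of deterministic constraint and hardens the latter.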

    Revisiting Tardos's framework for linear programming: faster exact solutions using approximate solvers

    In breakthrough work, Tardos (Oper. Res. ’86) gave a proximity based framework for solving linear programming (LP) in time depending only on the constraint matrix in the bit complexity model. In Tardos’s framework, one reduces solving the LP min ⟨c, x⟩, Ax = b, x ≥ 0, A ∈ Z^{m×n}, to solving O(nm) LPs in A having small integer coefficient objectives and right-hand sides using any exact LP algorithm. This gives rise to an LP algorithm in time poly(n, m log ∆_A), where ∆_A is the largest subdeterminant of A. A significant extension to the real model of computation was given by Vavasis and Ye (Math. Prog. ’96), giving a specialized interior point method that runs in time poly(n, m, log χ̄_A), depending on Stewart’s χ̄_A, a well-studied condition number. In this work, we extend Tardos’s original framework to obtain such a running time dependence. In particular, we replace the exact LP solves with approximate ones, enabling us to directly leverage the tremendous recent algorithmic progress for approximate linear programming. More precisely, we show that the fundamental “accuracy” needed to exactly solve any LP in A is inverse polynomial in n and log χ̄_A. Plugging in the recent algorithm of van den Brand (SODA ’20), our method computes an optimal primal and dual solution using O(mn^{ω+1+o(1)} log(χ̄_A + n)) arithmetic operations, outperforming the specialized interior point method of Vavasis and Ye and its recent improvement by Dadush et al. (STOC ’20). By applying the preprocessing algorithm of the latter paper, the dependence can also be reduced from χ̄_A to χ̄*_A, the minimum value of χ̄_{AD} attainable via column rescalings. Our framework is applicable to achieve the poly(n, m, log χ̄*_A) bound using essentially any weakly polynomial LP algorithm, such as the ellipsoid method.
    At a technical level, our framework combines together approximate LP solutions to compute exact ones, making use of constructive proximity theorems—which bound the distance between solutions of “nearby” LPs—to keep the required accuracy low.
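    One way a proximity bound turns an approximate solution into structural information: if an exact optimum is known to lie within ℓ∞-distance R of the approximate point, any coordinate larger than R must remain strictly positive in that optimum, while small coordinates stay undecided and are handled in later rounds. The function below is a purely illustrative sketch of this deduction, not the paper's algorithm.

    ```python
    # Toy use of a proximity bound: given ||x_exact - x_apx||_inf <= R for
    # some exact optimum x_exact, coordinates of x_apx exceeding R are
    # provably positive in x_exact; the rest remain undecided.

    def split_support(x_apx, R):
        """Partition coordinates into provably-positive and undecided sets."""
        positive = [i for i, v in enumerate(x_apx) if v > R]
        undecided = [i for i, v in enumerate(x_apx) if v <= R]
        return positive, undecided

    # An approximate LP solution with two clearly positive coordinates.
    pos, und = split_support([5.0, 0.02, 3.1, 0.0], R=0.1)
    ```

    Coordinates certified positive pin down complementary-slackness structure, shrinking the residual problem that the next (again approximate) solve must handle; this is the sense in which approximate solves are "combined" into an exact solution.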
