
    Reconstruction of Initial Beam Conditions at the Exit of the DARHT II Accelerator

    We consider a technique to determine the initial beam conditions at the exit of the DARHT II accelerator by measuring the beam size under three different magnetic transport settings. The measurement may be time gated to resolve the parameters as a function of time within the 2000 ns pulse. The technique yields three equations in three unknowns whose solution gives the accelerator exit beam radius, tilt, and emittance. We find that systematic errors cancel and so do not hinder unfolding the initial beam conditions. Random, uncorrelated shot-to-shot errors can be managed by one of three strategies: 1) make the transport system optically de-magnifying; 2) average over many individual shots; or 3) make the shot-to-shot errors sufficiently small. The high power of the DARHT II beam requires that the transport system leading to a radius-measuring apparatus be optically magnifying, so the shot-to-shot random errors must either be made small (less than about 1%) or each of the three beam radius determinations must be averaged over many individual shots.
    Comment: 3 pages, 3 figures, LINAC2000 paper TUB1
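
    As a rough illustration of the three-setting unfolding (a hedged sketch assuming a standard transfer-matrix model, not the paper's actual code; the function name, transport numbers, and measured sizes below are invented for the example), the three measured sizes give a 3x3 linear system for the exit second moments, from which radius, tilt, and emittance follow:

        # Minimal sketch: unfold exit-beam radius, tilt, and emittance from three
        # beam-size measurements, assuming the linear transfer-matrix relation
        #   s_i^2 = R11_i^2*s11 + 2*R11_i*R12_i*s12 + R12_i^2*s22
        # with (s11, s12, s22) the unknown second moments at the accelerator exit
        # and (R11_i, R12_i) describing the i-th magnetic transport setting.
        import numpy as np

        def unfold_beam_moments(R, sizes):
            """R: (3, 2) array of (R11, R12) per transport setting.
            sizes: (3,) array of measured RMS beam sizes at the measurement plane."""
            A = np.column_stack([R[:, 0]**2, 2 * R[:, 0] * R[:, 1], R[:, 1]**2])
            s11, s12, s22 = np.linalg.solve(A, sizes**2)   # three equations, three unknowns
            radius = np.sqrt(s11)                          # exit beam radius
            tilt = s12 / radius                            # envelope slope dR/dz
            emittance = np.sqrt(s11 * s22 - s12**2)        # RMS emittance
            return radius, tilt, emittance

        # Illustrative (made-up) transport settings and measurements:
        R = np.array([[1.0, 0.5], [0.8, 1.0], [0.5, 1.5]])
        sizes = np.array([1.2e-3, 1.8e-3, 2.5e-3])         # metres
        print(unfold_beam_moments(R, sizes))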

    Caustics and wave front propagations (Singularity theory of smooth maps and related geometry)


    Deep Proximal Learning for High-Resolution Plane Wave Compounding

    Plane wave imaging enables many applications that require high frame rates, including localisation microscopy, shear wave elastography, and ultra-sensitive Doppler. To alleviate the degradation of image quality with respect to conventional focused acquisition, multiple acquisitions from distinctly steered plane waves are typically compounded coherently (i.e. after time-of-flight correction) into a single image. This poses a trade-off between image quality and achievable frame rate. To address it, we propose a new deep learning approach, derived by formulating plane wave compounding as a linear inverse problem, that attains high-resolution, high-contrast images from just 3 plane wave transmissions. Our solution unfolds the iterations of a proximal gradient descent algorithm as a deep network, thereby directly embedding the physics-based generative acquisition model in the neural network design. We train our network in a greedy manner, i.e. layer by layer, using a combination of pixel, temporal, and distribution (adversarial) losses to achieve both perceptual fidelity and data consistency. Through this strong model-based inductive bias, the proposed architecture outperforms several standard benchmark architectures in terms of image quality, with a low computational and memory footprint.
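
    To make the unfolding idea concrete (a minimal sketch under assumed shapes and a random toy operator, not the authors' network; the class name, layer count, and the small MLP "prox" are illustrative choices), one can unroll proximal gradient iterations for a linear model y = A x, with a learned proximal step per layer:

        # Sketch: unfolded proximal gradient descent for a linear inverse problem.
        # Each layer takes a gradient step on 0.5*||A x - y||^2 and then applies a
        # learned proximal operator (here a tiny MLP standing in for a learned prox).
        import torch
        import torch.nn as nn

        class UnfoldedProxGrad(nn.Module):
            def __init__(self, A: torch.Tensor, n_layers: int = 5):
                super().__init__()
                self.register_buffer("A", A)                           # fixed acquisition model
                self.step = nn.Parameter(0.1 * torch.ones(n_layers))   # learned step sizes
                n = A.shape[1]
                self.prox = nn.ModuleList(
                    nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))
                    for _ in range(n_layers)
                )

            def forward(self, y: torch.Tensor) -> torch.Tensor:
                x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
                for k, prox_k in enumerate(self.prox):
                    grad = (x @ self.A.T - y) @ self.A                 # data-fidelity gradient
                    x = prox_k(x - self.step[k] * grad)                # learned proximal step
                return x

        # Toy usage (shapes only, random data rather than ultrasound channel data):
        A = torch.randn(32, 64)          # 64-pixel image -> 32 measurements
        model = UnfoldedProxGrad(A)
        y = torch.randn(8, 32)           # batch of 8 measurement vectors
        x_hat = model(y)                 # reconstructions, shape (8, 64)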

    On Pitts' Relational Properties of Domains

    Andrew Pitts' framework of relational properties of domains is a powerful method for defining predicates or relations on domains, with applications ranging from reasoning principles for program equivalence to proofs of adequacy connecting denotational and operational semantics. Its main appeal is handling recursive definitions that are not obviously well-founded: as long as the corresponding domain is also defined recursively, and its recursion pattern lines up appropriately with the definition of the relations, the framework can guarantee their existence. Pitts' original development used the Knaster-Tarski fixed-point theorem as a key ingredient. In these notes, I show how his construction can be seen as an instance of other key fixed-point theorems: the inverse limit construction, the Banach fixed-point theorem, and the Kleene fixed-point theorem. The connection underscores how Pitts' construction is intimately tied to the methods for constructing the base recursive domains themselves, and also to techniques based on guarded recursion, or step-indexing, that have become popular in the last two decades.
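
    To make the Kleene fixed-point ingredient concrete (a toy sketch on a finite lattice of sets, far simpler than the domain-theoretic setting of the notes; the function kleene_lfp and the small graph are invented for illustration), iterating a monotone function from the bottom element converges to its least fixed point:

        # Kleene iteration in miniature: least fixed point of a monotone function
        # on a finite lattice, computed by iterating from the bottom element.
        def kleene_lfp(f, bottom=frozenset()):
            """Iterate f from bottom until a fixed point is reached."""
            x = bottom
            while True:
                nxt = f(x)
                if nxt == x:
                    return x
                x = nxt

        # Monotone step: node 0 plus everything reachable in one more edge hop.
        edges = {0: {1}, 1: {2}, 2: {0, 3}, 3: set()}
        step = lambda s: frozenset({0}) | frozenset(n for m in s for n in edges[m])
        print(kleene_lfp(step))   # frozenset({0, 1, 2, 3}) -- reachability from node 0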