
    Robust sparse image reconstruction of radio interferometric observations with purify

    Next-generation radio interferometers, such as the Square Kilometre Array (SKA), will revolutionise our understanding of the universe through their unprecedented sensitivity and resolution. However, to realise these goals, significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and in scalability to big data. In this work we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers (P-ADMM) algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions, while providing a computational saving and an analytic form. Second, we apply PURIFY to real interferometric observations from the Very Large Array (VLA) and the Australia Telescope Compact Array (ATCA) and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Third, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with the developments presented in this work, is made publicly available. Comment: 22 pages, 10 figures; PURIFY code available at http://basp-group.github.io/purif
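
    Since the analytic form of the Kaiser-Bessel kernel is cited as an advantage over prolate spheroidal wave functions, a minimal sketch of that kernel follows. The support W and shape parameter beta are illustrative defaults from the gridding literature, not necessarily the values used by PURIFY.

        import numpy as np
        from scipy.special import i0  # modified Bessel function of the first kind, order 0

        def kaiser_bessel(u, W=6, beta=None):
            # Analytic Kaiser-Bessel gridding kernel, normalised so k(0) = 1.
            # W (support in grid cells) and beta are illustrative defaults;
            # beta = 2.34 * W is a common heuristic for 2x oversampling.
            if beta is None:
                beta = 2.34 * W
            u = np.asarray(u, dtype=float)
            k = np.zeros_like(u)
            inside = np.abs(u) <= W / 2
            k[inside] = i0(beta * np.sqrt(1 - (2 * u[inside] / W) ** 2)) / i0(beta)
            return k

        offsets = np.linspace(-3, 3, 7)   # sample the kernel across its support
        print(kaiser_bessel(offsets))

    Unlike the prolate spheroidal wave functions, which must be tabulated from an eigenvalue problem, this closed form can be evaluated on the fly at arbitrary offsets, which is where the computational saving mentioned above comes from.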

    High-speed, high-frequency ultrasound, in utero vector-flow imaging of mouse embryos

    Real-time imaging of the embryonic murine cardiovascular system is challenging due to the small size of the mouse embryo and its rapid heart rate. High-frequency, linear-array ultrasound systems designed for small-animal imaging provide high-frame-rate and Doppler modes but are limited in the field of view that can be imaged at fine temporal and spatial resolution. Here, a plane-wave imaging method was used to obtain high-speed image data from in utero mouse embryos, and multi-angle vector-flow algorithms were applied to the data to provide information on blood-flow patterns in major organs. An 18-MHz linear array was used to acquire plane-wave data at absolute frame rates ≥10 kHz using a set of fixed transmission angles. After beamforming, vector-flow processing and image compounding, effective frame rates were on the order of 2 kHz. Data were acquired from the embryonic liver, heart and umbilical cord. Vector-flow results clearly revealed the complex nature of blood-flow patterns in the embryo with fine temporal and spatial resolution.
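
    The multi-angle vector-flow idea reduces to a small least-squares problem: each steered transmit measures the projection of the local velocity onto a different direction, and the 2-D velocity vector is recovered from those projections. The sketch below uses made-up angles and a deliberately simplified measurement model that ignores the separate transmit and receive geometry, so it illustrates the principle rather than the authors' processing chain. The quoted rates are consistent with such a scheme: compounding about five angled transmits acquired at 10 kHz yields an effective rate on the order of 2 kHz.

        import numpy as np

        # Steering angles of the plane-wave transmits (illustrative values).
        angles = np.deg2rad([-10.0, -5.0, 0.0, 5.0, 10.0])

        # Simplified model: transmit i measures the projection of the velocity
        # v = (vx, vz) onto a unit vector tilted by angles[i] from the beam axis.
        A = np.column_stack([np.sin(angles), np.cos(angles)])

        true_v = np.array([12.0, -35.0])                       # mm/s, made-up ground truth
        measured = A @ true_v + np.random.normal(0.0, 0.5, angles.size)  # noisy projections

        v_est, *_ = np.linalg.lstsq(A, measured, rcond=None)   # least-squares velocity estimate
        print("estimated (vx, vz):", v_est)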

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system renders complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.

    Body MRI artifacts in clinical practice: a physicist's and radiologist's perspective.

    The high information content of MRI exams brings with it unintended effects, which we call artifacts. The purpose of this review is to promote understanding of these artifacts, so they can be prevented or properly interpreted to optimize diagnostic effectiveness. We begin by addressing static magnetic field uniformity, which is essential for many techniques, such as fat saturation. Eddy currents, resulting from imperfect gradient pulses, are especially problematic for new techniques that depend on high-performance gradient switching. Nonuniformity of the transmit radiofrequency system constitutes another source of artifacts, which are increasingly important as magnetic field strength increases. Defects in the receive portion of the radiofrequency system have become a more complex source of problems as the number of radiofrequency coils, and the sophistication of the analysis of their received signals, has increased. Unwanted signals and noise spikes have many causes, often manifesting as zipper or banding artifacts. These image alterations become particularly severe and complex when they are combined with aliasing effects. Aliasing is one of several phenomena addressed in our final section, on artifacts that derive from encoding the MR signals to produce images, also including those related to parallel imaging, chemical shift, motion, and image subtraction.
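
    Aliasing, the wrap-around phenomenon mentioned above, is simple to demonstrate numerically: sampling only every other phase-encode line of k-space halves the field of view, so signal from outside it folds back into the image. A minimal 1-D sketch with a synthetic object rather than real MR data:

        import numpy as np

        n = 256
        obj = np.zeros(n)
        obj[96:160] = 1.0                 # synthetic 1-D "object" in image space

        k = np.fft.fft(obj)               # fully sampled k-space
        k_under = k[::2]                  # keep every other phase-encode line

        # Reconstructing from the undersampled data halves the field of view:
        # pixel m now contains obj[m] + obj[m + 128], i.e. wrap-around aliasing.
        alias = np.fft.ifft(k_under)
        print(np.round(np.abs(alias[:8]), 3))   # signal from obj[128:136] folded to the start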

    Optimal prefilters for display enhancement

    Creating images from a set of discrete samples is arguably the most common operation in computer graphics and image processing, lying, for example, at the heart of rendering and image downscaling techniques. Traditional tools for this task are based on classic sampling theory and are modeled under mathematical conditions which are, in most cases, unrealistic; for example, sinc reconstruction – required by the Shannon theorem in order to recover a signal exactly – is impossible to achieve in practice, because LCD displays perform a box-like interpolation of the samples. Moreover, when an image is made for a human to look at, it will necessarily undergo some modifications due to the human optical system and all the neural processes involved in vision. Finally, image processing practitioners have noticed that sinc prefiltering – also required by the Shannon theorem – often leads to visually unpleasant images. From these facts, we can deduce that we cannot guarantee, via classic sampling theory, that the signal we see on a display is the best representation of the original image we had in the first place. In this work, we propose a novel family of image prefilters based on modern sampling theory and on a simple model of how the human visual system perceives an image on a display. The use of modern sampling theory guarantees that the perceived image, under this model, is indeed the best representation possible, at virtually no computational overhead. We analyze the spectral properties of these prefilters, showing that they offer the possibility of trading off aliasing and ringing while guaranteeing that images look sharper than those generated with both classic and state-of-the-art filters. Finally, we compare our prefilters against other solutions in a selection of applications, including Monte Carlo rendering and image downscaling, and give directions on how to apply them in different contexts.
    Displaying images from a discrete set of samples is certainly one of the most common operations in computer graphics and image processing. Traditional tools for this task are based on the Shannon theorem and are modeled under mathematical conditions that are, in most cases, unrealistic; for example, sinc reconstruction, required by the Shannon theorem to recover a signal exactly, is impossible in practice, since LCD displays perform a reconstruction closer to interpolation with a box kernel. Furthermore, image processing practitioners have noticed that sinc prefiltering, also required by the Shannon theorem, generally leads to visually unpleasant images due to the phenomenon of ringing: oscillations near discontinuities in the image. From these facts, we deduce that it is not possible to guarantee, with traditional sampling and reconstruction tools, that the image we observe on a digital display is the best representation of the original image. In this work, we propose a family of prefilters based on generalized sampling theory and on a model of how the optical system of the human eye modifies an image. Proposed by Unser and Aldroubi (1994), generalized sampling theory is more general than the Shannon theorem and shows how signals can be prefiltered and reconstructed using kernels other than the sinc. We model the optical system of the eye as a camera with a finite aperture and a thin lens, which, although simple, is sufficient for our purposes. Besides guaranteeing optimal approximation when the samples are reconstructed by a display and the image is filtered with the model of the human optical system, generalized sampling theory guarantees that these operations are extremely efficient, all of them linear in the number of input pixels. We also analyze the spectral properties of these filters and of similar techniques in the literature, showing that a good trade-off between aliasing and ringing (the main artifacts in image sampling and reconstruction) can be obtained while guaranteeing that the final images are sharper than those generated by existing techniques. Finally, we show some applications of our technique to image enhancement, adaptation to different viewing distances, image downscaling and the rendering of synthetic images by Monte Carlo methods.
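
    One concrete consequence of the generalized-sampling view is worth a sketch: when the display's reconstruction kernel is the box, the integer-shifted boxes are orthonormal, so the L2-optimal prefilter reduces to plain area averaging before sampling. The snippet below illustrates that special case for integer-factor downscaling; it is a minimal sketch of the classical L2 result, not the authors' full perceptual pipeline, which additionally models the eye's optics.

        import numpy as np

        def box_prefilter_downscale(img, factor):
            # For a display that reconstructs with a box kernel, the shifted boxes
            # are orthonormal, so averaging each factor-by-factor block is the
            # L2-optimal prefilter before sampling.
            h, w = img.shape
            h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
            blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
            return blocks.mean(axis=(1, 3))

        img = np.random.rand(256, 256)                     # stand-in for a real image
        print(box_prefilter_downscale(img, 4).shape)       # (64, 64)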

    The non-coplanar baselines effect in radio interferometry: The W-Projection algorithm

    We consider a troublesome form of non-isoplanatism in synthesis radio telescopes: non-coplanar baselines. We present a novel interpretation of the non-coplanar baselines effect as being due to differential Fresnel diffraction in the neighborhood of the array antennas. We have developed a new algorithm to deal with this effect. Our new algorithm, which we call "W-projection", has markedly superior performance compared to existing algorithms. At roughly equivalent levels of accuracy, W-projection can be up to an order of magnitude faster than the corresponding facet-based algorithms. Furthermore, the precision of the result is not tightly coupled to computing time. W-projection has important consequences for the design and operation of the new generation of radio telescopes operating at centimeter and longer wavelengths. Comment: Accepted for publication in "IEEE Journal of Selected Topics in Signal Processing".
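
    To make the algorithm concrete: for a visibility at height w, the non-coplanar effect multiplies the sky by the chirp exp(-2πi w(√(1-l²-m²)-1)), and W-projection applies the Fourier transform of that chirp as a convolution kernel on the uv-grid. A minimal sketch of constructing such a kernel follows; the grid size, field of view and value of w are illustrative, and production implementations truncate and oversample the kernel.

        import numpy as np

        def w_kernel(w, n=256, fov=0.05):
            # The w-term multiplies the sky by the chirp
            # exp(-2*pi*i*w*(sqrt(1 - l^2 - m^2) - 1)); its 2-D FFT is the kernel
            # that W-projection convolves onto the uv-grid for visibilities at this w.
            l = np.linspace(-fov / 2, fov / 2, n)          # direction cosines (l, m)
            L, M = np.meshgrid(l, l)
            chirp = np.exp(-2j * np.pi * w * (np.sqrt(1 - L**2 - M**2) - 1))
            return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(chirp)))

        kern = w_kernel(w=1000.0)                          # w in wavelengths, illustrative
        print(kern.shape, np.abs(kern).argmax())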

    Status and performance of the Gemini Planet Imager adaptive optics system

    The Gemini Planet Imager is a high-contrast near-infrared instrument specifically designed to image exoplanets and circumstellar disks over a narrow field of view. We use science data and AO telemetry taken during the first 1.5 yr of the GPI Exoplanet Survey to quantify the performance of the AO system. In a typical 60 s H-band exposure, GPI achieves a 5σ raw contrast of 10⁻⁴ at 0.4"; typical final 5σ contrasts for full 1 hr sequences are more than 10 times better than raw contrasts. We find that contrast is limited by bandwidth wavefront error over much of the PSF. Preliminary exploratory factor analysis can explain 60-70% of the variance in raw contrasts with combinations of seeing and wavefront-error metrics. We also examine the effect of higher loop gains on contrast by comparing wavefront-error maps reconstructed from AO telemetry to concurrent IFS images. These results point to several ways that GPI performance could be improved in software or hardware. Comment: 15 pages, 11 figures.
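
    As an aside on the metric: a 5σ raw-contrast curve like the one quoted is conventionally computed from the standard deviation of the pixels in an annulus at each angular separation, normalised by the unocculted stellar peak. A minimal sketch on synthetic data follows; it omits the calibrations and small-sample corrections a real pipeline such as GPI's would apply.

        import numpy as np

        def contrast_curve(img, center, radii, width=2.0, peak=1.0):
            # 5-sigma contrast vs. separation: the standard deviation of the pixels
            # in an annulus at each radius, scaled by 5 and normalised by the
            # (unocculted) stellar peak.
            y, x = np.indices(img.shape)
            r = np.hypot(x - center[0], y - center[1])
            return [5 * img[(r >= ri) & (r < ri + width)].std() / peak for ri in radii]

        speckles = np.random.normal(0.0, 1e-5, (281, 281))   # synthetic residual-speckle field
        print(contrast_curve(speckles, center=(140, 140), radii=[10, 20, 40]))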

    Validating Stereoscopic Volume Rendering

    The evaluation of stereoscopic displays for surface-based renderings is well established in terms of accurate depth perception and tasks that require an understanding of the spatial layout of the scene. In comparison, direct volume rendering (DVR), which typically produces images with many low-opacity, overlapping features, is only beginning to be critically studied on stereoscopic displays. The properties of the resulting images and the choice of parameters for DVR algorithms make assessing the effectiveness of stereoscopic displays for DVR particularly challenging; as a result, the existing literature is sparse and its results inconclusive. In this thesis, stereoscopic volume rendering is analysed for tasks that require depth perception, including stereo-acuity tasks, spatial-search tasks and observer preference ratings. The evaluations focus on aspects of the DVR rendering pipeline and assess how the parameters of volume resolution, reconstruction filter and transfer function may alter task performance and the perceived quality of the produced images. The results of the evaluations suggest that the transfer function and the choice of reconstruction filter can affect performance on tasks with stereoscopic displays when all other parameters are kept consistent. Further, these were found to affect the sensitivity and bias of the participants' responses. The studies also show that properties of the reconstruction filters such as post-aliasing and smoothing do not correlate well with either task performance or quality ratings. Included in the contributions are guidelines and recommendations on the choice of parameters for increased task performance and quality scores, as well as image-based methods of analysing stereoscopic DVR images.
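
    For readers outside the field, it helps to see where the two parameters the studies vary, the reconstruction filter and the transfer function, enter the DVR pipeline: samples along each viewing ray are reconstructed from the volume by the filter, mapped through the transfer function to colour and opacity, and composited front to back. A minimal emission-absorption sketch, with an arbitrary example transfer function:

        import numpy as np

        def transfer_function(s):
            # Map a scalar sample in [0, 1] to colour and opacity -- an arbitrary
            # example TF with the low opacities typical of DVR.
            alpha = 0.1 * np.clip((s - 0.3) * 2.0, 0.0, 1.0)
            color = np.array([s, 0.5 * s, 1.0 - s])
            return color, alpha

        def composite_ray(samples):
            # Front-to-back emission-absorption compositing along one ray.
            C, A = np.zeros(3), 0.0
            for s in samples:                 # samples ordered front to back
                c, a = transfer_function(s)
                C += (1.0 - A) * a * c
                A += (1.0 - A) * a
                if A > 0.99:                  # early ray termination
                    break
            return C, A

        ray = np.linspace(0.0, 1.0, 64)       # stand-in for reconstructed (filtered) samples
        print(composite_ray(ray))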