
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Numerical methods for coupled reconstruction and registration in digital breast tomosynthesis.

    Digital Breast Tomosynthesis (DBT) provides insight into the fine details of normal fibroglandular tissues and abnormal lesions by reconstructing a pseudo-3D image of the breast. In this respect, DBT overcomes a major limitation of conventional X-ray mammography by reducing the confounding effects caused by the superposition of breast tissue. In a breast cancer screening or diagnostic context, a radiologist is interested in detecting change, which might be indicative of malignant disease. To help automate this task, image registration is required to establish spatial correspondence between time points. Typically, images, such as MRI or CT, are first reconstructed and then registered. This approach can be effective when the reconstruction uses a complete set of data. However, for ill-posed, limited-angle problems such as DBT, estimating the deformation is complicated by the significant artefacts associated with the reconstruction, leading to severe inaccuracies in the registration. This paper presents a mathematical framework which couples the two tasks and jointly estimates both image intensities and the parameters of a transformation. Under this framework, we compare an iterative method and a simultaneous method, both of which tackle the problem of comparing DBT data by combining the reconstruction of a pair of temporal volumes with their registration. We evaluate our methods using various computational digital phantoms, uncompressed breast MR images, and in-vivo DBT simulations. Firstly, we compare both the iterative and simultaneous methods to the conventional, sequential method using an affine transformation model. We show that jointly estimating image intensities and parametric transformations gives superior results with respect to reconstruction fidelity and registration accuracy. We also incorporate a non-rigid B-spline transformation model into our simultaneous method. The results demonstrate a visually plausible recovery of the deformation with preservation of the reconstruction fidelity.
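    To make the coupling concrete, the sketch below illustrates one simplified reading of the joint estimation idea: a single set of image intensities and a set of affine motion parameters are optimised together against two undersampled measurement sets, with the second volume modelled as a warp of the first. This is a toy illustration, not the paper's method: the random matrix standing in for the limited-angle DBT projector, the Tikhonov regulariser, and all function names are assumptions made for this example.

    # Toy sketch of joint intensity/motion estimation (not the paper's implementation).
    import numpy as np
    from scipy.ndimage import affine_transform
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N = 12                                 # tiny 2-D "volume" for illustration
    M = 60                                 # fewer measurements than unknowns (ill-posed)
    A = rng.standard_normal((M, N * N))    # stand-in for a limited-angle projector

    def warp(img, theta):
        # theta = [a11, a12, a21, a22, t_row, t_col]: affine motion between visits
        # (scipy's affine_transform uses the pull-back convention).
        return affine_transform(img, theta[:4].reshape(2, 2), offset=theta[4:], order=1)

    def joint_cost(x, y1, y2, lam=1e-2):
        img = x[:N * N].reshape(N, N)            # shared intensity estimate
        theta = x[N * N:]                        # affine registration parameters
        r1 = A @ img.ravel() - y1                # fit first-visit data
        r2 = A @ warp(img, theta).ravel() - y2   # fit warped second-visit data
        return r1 @ r1 + r2 @ r2 + lam * (x[:N * N] @ x[:N * N])  # Tikhonov term

    # Simulated "visits": a block phantom and a slightly shifted copy of it.
    phantom = np.zeros((N, N)); phantom[3:8, 3:8] = 1.0
    theta_true = np.array([1, 0, 0, 1, 1.0, 0.5])
    y1 = A @ phantom.ravel()
    y2 = A @ warp(phantom, theta_true).ravel()

    x0 = np.concatenate([np.zeros(N * N), [1, 0, 0, 1, 0, 0]])
    res = minimize(joint_cost, x0, args=(y1, y2), method="L-BFGS-B",
                   options={"maxiter": 50})      # a few iterations, just to illustrate
    print("estimated motion parameters:", np.round(res.x[N * N:], 2))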

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary to simulate an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bézier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
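    As one concrete illustration of the geometry involved, the hedged sketch below shows how a target position could be recovered from sensor positions and pointing (line-of-sight) directions by least-squares intersection of the rays. It is not the dissertation's full pipeline, which estimates the pointing directions from simulated imagery; the function names and toy numbers here are assumptions.

    # Least-squares ray intersection: a common triangulation step, sketched for illustration.
    import numpy as np

    def triangulate(positions, directions):
        """Point closest (in least squares) to a set of rays.

        positions  : (k, 3) sensor positions
        directions : (k, 3) line-of-sight vectors toward the target
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(positions, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the ray
            A += P
            b += P @ p
        return np.linalg.solve(A, b)

    # Toy example: two orbiting sensors observing the same target, noise-free pointing.
    target = np.array([100.0, 50.0, 10.0])
    sensors = np.array([[7000.0, 0.0, 0.0], [0.0, 7000.0, 500.0]])
    los = target - sensors                    # ideal line-of-sight vectors
    print(triangulate(sensors, los))          # ~ [100. 50. 10.]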

    Machine learning for flow field measurements: a perspective

    Advancements in machine-learning (ML) techniques are driving a paradigm shift in image processing, and flow diagnostics with optical techniques is no exception. Considering the existing and foreseeable disruptive developments in flow field measurement techniques, we offer this perspective with a particular focus on particle image velocimetry. The driving forces behind recent advancements in ML methods for flow field measurements are reviewed in terms of image preprocessing, data treatment and conditioning. Finally, possible routes for further developments are highlighted.
    Stefano Discetti acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949085). Yingzheng Liu acknowledges financial support from the National Natural Science Foundation of China (11725209).

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems. Consequently, image quality remains a barrier that curbs the full potential of pCLE. Enhancing the image quality of pCLE in real time remains a challenge. The research in this thesis is a response to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that relies on mapping from the low- to the high-resolution space. Due to the lack of high-resolution pCLE data, I proposed to simulate high-resolution data and use it as a ground truth model based on the pCLE acquisition physics. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach by deploying unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: a robust image quality assessment and a perceptual quality test assessed by clinical experts. The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
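    The trainable layer mentioned above builds on Nadaraya-Watson regression; the minimal NumPy sketch below shows the underlying idea of interpolating irregularly placed fibre-core signals onto a regular pixel grid with a Gaussian kernel. It illustrates only the regression itself, on assumed toy data; the thesis embeds it as a differentiable layer inside a CNN, which this sketch does not reproduce.

    # Nadaraya-Watson kernel regression from irregular fibre cores to a regular grid.
    import numpy as np

    def nadaraya_watson(core_xy, core_vals, grid_xy, bandwidth=2.0):
        """Gaussian-kernel weighted average of fibre signals at each grid pixel."""
        # Pairwise squared distances between grid pixels and fibre cores.
        d2 = ((grid_xy[:, None, :] - core_xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)            # Gaussian kernel weights
        return (w @ core_vals) / (w.sum(axis=1) + 1e-12)  # normalised weighted mean

    # Toy data: 500 randomly placed "fibre cores" sampling a smooth pattern.
    rng = np.random.default_rng(0)
    core_xy = rng.uniform(0, 64, size=(500, 2))
    core_vals = np.sin(core_xy[:, 0] / 8.0) * np.cos(core_xy[:, 1] / 8.0)

    # Interpolate onto a regular 64x64 pixel grid.
    yy, xx = np.mgrid[0:64, 0:64]
    grid_xy = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    recon = nadaraya_watson(core_xy, core_vals, grid_xy).reshape(64, 64)
    print(recon.shape)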