
    Single Image Super-Resolution Using Multi-Scale Deep Encoder-Decoder with Phase Congruency Edge Map Guidance

    This paper presents an end-to-end multi-scale deep encoder (convolution) and decoder (deconvolution) network for single image super-resolution (SISR) guided by a phase congruency (PC) edge map. Our system starts with a single-scale symmetric encoder-decoder structure for SISR, which is extended to a multi-scale model by integrating wavelet multi-resolution analysis into the network. The multi-scale system combines the low-resolution (LR) input with its PC edge map to precisely predict multi-scale super-resolved edge details under the guidance of the high-resolution (HR) PC edge map. In this way, the proposed deep model accounts for both the reconstruction of image pixel intensities and the recovery of multi-scale edge details within the same framework. We evaluate the proposed model on benchmark datasets covering different data scenarios: Set14 and BSD100 (natural images), and Middlebury and New Tsukuba (depth images). Evaluations based on both PSNR and visual perception show that the proposed model is superior to state-of-the-art methods.
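    The core fusion idea can be illustrated with a minimal single-scale sketch, assuming PyTorch; the class name, layer sizes, and residual formulation are illustrative, and the paper's wavelet multi-scale extension and the PC edge map computation itself are omitted. The LR image and its PC edge map are concatenated channel-wise and passed through a symmetric convolution/deconvolution pair.

        import torch
        import torch.nn as nn

        class PCGuidedSISR(nn.Module):
            """Single-scale encoder-decoder sketch: LR image plus PC edge map in,
            HR detail prediction out (names and sizes are illustrative)."""
            def __init__(self, feats=64):
                super().__init__()
                # Encoder: two input channels (grayscale LR image, its PC edge map).
                self.encoder = nn.Sequential(
                    nn.Conv2d(2, feats, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
                )
                # Decoder: deconvolutions mirror the encoder back to one channel.
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
                    nn.ConvTranspose2d(feats, 1, 3, padding=1),
                )

            def forward(self, lr, pc_edge):
                x = torch.cat([lr, pc_edge], dim=1)         # fuse intensity and edge guidance
                return lr + self.decoder(self.encoder(x))   # residual prediction of HR details

        model = PCGuidedSISR()
        lr = torch.rand(1, 1, 64, 64)   # bicubic-upsampled LR input (illustrative)
        pc = torch.rand(1, 1, 64, 64)   # its phase congruency edge map
        sr = model(lr, pc)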

    Image Super-Resolution Based on Sparse Coding with Multi-Class Dictionaries

    Sparse coding-based single image super-resolution has attracted much interest. In this paper, we propose a super-resolution reconstruction algorithm based on sparse coding with multi-class dictionaries. We introduce a novel method for image patch classification using phase congruency information, and a sub-dictionary is learned from the patches in each category. For a given image patch, the sub-dictionary belonging to the same category is selected adaptively; since the given patch shares a similar pattern with the selected sub-dictionary, it can be better represented. Finally, iterative back-projection is used to enforce the global reconstruction constraint. Experiments demonstrate that our approach produces super-resolution results comparable to or better than several existing algorithms, in both subjective visual quality and numerical measures.
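    A minimal sketch of the per-class dictionary selection, assuming NumPy and scikit-learn: the threshold-based PC classifier, the coupled LR/HR dictionary pair, and all parameter values are stand-ins for the paper's actual classification rule and learned dictionaries, and the iterative back-projection step is omitted.

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def classify_patch(pc_patch, thresholds=(0.2, 0.5)):
            """Illustrative stand-in: bin a patch by its mean phase congruency response."""
            m = pc_patch.mean()
            return 0 if m < thresholds[0] else (1 if m < thresholds[1] else 2)

        def reconstruct_patch(lr_patch, pc_patch, lr_dicts, hr_dicts, n_nonzero=3):
            """Sparse-code an LR patch over its class sub-dictionary, then map the
            codes to the coupled HR sub-dictionary. Patches are assumed taken from a
            bicubic-upsampled grid, so LR and HR patches share dimensions."""
            c = classify_patch(pc_patch)
            D_lr, D_hr = lr_dicts[c], hr_dicts[c]               # class-specific sub-dictionaries
            alpha = orthogonal_mp(D_lr, lr_patch.ravel(),
                                  n_nonzero_coefs=n_nonzero)    # sparse code via OMP
            return (D_hr @ alpha).reshape(lr_patch.shape)       # HR patch estimate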

    Model order reduction for left ventricular mechanics via congruency training

    Computational models of the cardiovascular system, and specifically of heart function, are currently being investigated as analytic tools to assist medical practice and clinical trials. To achieve clinical utility, models should be able to assimilate the diagnostic multi-modality data available for each patient and generate consistent representations of the underlying cardiovascular physiology. While finite element models of the heart can naturally account for patient-specific anatomies reconstructed from medical images, optimizing the many other parameters driving simulated cardiac function is challenging due to computational complexity. With the goal of streamlining parameter adaptation, in this paper we present a novel multifidelity strategy for model order reduction of 3-D finite element models of ventricular mechanics. Our approach is centered on well-established findings on the similarity between contraction of an isolated muscle and of the whole ventricle. Specifically, we demonstrate that simple linear transformations between sarcomere strain (tension) and ventricular volume (pressure) are sufficient to reproduce the global pressure-volume outputs of 3-D finite element models, even with a reduced model comprising just a single myocyte unit. We further develop a procedure for congruency training of a surrogate low-order model from multiscale finite elements, and we construct an example of parameter optimization based on medical images. We discuss how the presented approach might be employed to process large datasets of medical images as well as databases of echocardiographic reports, paving the way towards application of heart mechanics models in clinical practice. © 2020 Di Achille et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. SK and OS were funded by RSF (http://www.rscf.ru/en/). Part of this work was carried out within the framework of the IIF UrB RAS government assignment and was partially supported by the UrFU Competitiveness Enhancement Program (agreement 02.A03.21.0006) as well as RSF grant No. 19-14-00134. The Uran supercomputer at IMM UrB RAS was used for part of the model calculations. IBM provided support in the form of salaries for authors PA, JP, JK and VG but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the "author contributions" section.
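    The central claim, that affine maps between myocyte-level and ventricle-level quantities suffice to reproduce global pressure-volume behavior, can be sketched in a few lines. This is a minimal sketch assuming NumPy; the traces below are synthetic stand-ins for simulator outputs, and the function name is illustrative.

        import numpy as np

        def fit_congruency_map(cell_trace, fe_trace):
            """Fit an affine map y ~ a*x + b between a myocyte-level trace (e.g.
            sarcomere strain or tension) and the matching finite element output
            (ventricular volume or pressure) via linear least squares."""
            A = np.column_stack([cell_trace, np.ones_like(cell_trace)])
            (a, b), *_ = np.linalg.lstsq(A, fe_trace, rcond=None)
            return a, b

        # Illustrative synthetic traces standing in for simulator outputs.
        t = np.linspace(0.0, 0.8, 200)             # one cardiac cycle, seconds
        strain = -0.1 * np.sin(np.pi * t / 0.8)    # sarcomere shortening (stand-in)
        volume = 120.0 + 500.0 * strain            # "FE" ventricular volume, mL (stand-in)

        a, b = fit_congruency_map(strain, volume)
        print(a, b)   # recovers ~500 and ~120; the reduced model then maps strain to volume

    Once fitted, the reduced model only needs the cheap single-myocyte simulation plus the affine map, which is what makes large-scale parameter optimization tractable.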

    Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications

    This article discusses a method that uses a small number (e.g. five) of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are identified automatically, and sparse 3D landmark points of each bone are reconstructed automatically by pairing the 2D X-ray images. The distribution of the reconstructed landmark points on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then fitted to the reconstructed optimal landmarks to reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
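    The two geometric steps can be sketched as follows, assuming NumPy and calibrated 3x4 projection matrices for the X-ray views; DLT triangulation and a linear shape-coefficient fit are standard techniques standing in for the paper's full pipeline, and all names are illustrative.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """DLT triangulation: recover a 3D landmark from its 2D projections in
            two calibrated views with 3x4 projection matrices P1 and P2."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]              # dehomogenise

        def fit_shape_model(mean_shape, modes, landmarks, idx):
            """Least-squares fit of shape-model coefficients b so that
            mean_shape + modes @ b agrees with sparse landmarks at vertex indices idx.
            mean_shape: (3N,) stacked vertex coordinates; modes: (3N, k) shape modes."""
            rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in idx])
            b, *_ = np.linalg.lstsq(modes[rows],
                                    landmarks.ravel() - mean_shape[rows], rcond=None)
            return mean_shape + modes @ b    # dense reconstructed surface, (3N,)

    In this sketch each triangulated landmark constrains three rows of the model, so a handful of well-spread landmarks suffices to pin down the low-dimensional shape coefficients of each bone.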