
    On stable reconstructions from nonuniform Fourier measurements

    We consider the problem of recovering a compactly supported function from a finite collection of pointwise samples of its Fourier transform taken nonuniformly. First, we show that under suitable conditions on the sampling frequencies - specifically, their density and bandwidth - it is possible to recover any such function f in a stable and accurate manner in any given finite-dimensional subspace; in particular, one which is well suited for approximating f. In practice, this is carried out using so-called nonuniform generalized sampling (NUGS). Second, we consider approximation spaces in one dimension consisting of compactly supported wavelets. We prove that a linear scaling of the dimension of the space with the sampling bandwidth is both necessary and sufficient for stable and accurate recovery. Thus wavelets are, up to constant factors, optimal spaces for reconstruction.
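    As a rough illustration of the generalized-sampling idea summarised above, the sketch below recovers a 1-D function from jittered nonuniform Fourier samples by weighted least squares in a piecewise-constant (Haar scaling) space. The sampling points, density-compensation weights and test function are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def box_ft(omega, a, b):
    """Fourier transform of the indicator of [a, b], evaluated at frequencies omega."""
    omega = np.asarray(omega, dtype=float)
    out = np.empty(omega.shape, dtype=complex)
    small = np.abs(omega) < 1e-12
    out[small] = b - a
    w = omega[~small]
    out[~small] = (np.exp(-1j * w * a) - np.exp(-1j * w * b)) / (1j * w)
    return out

rng = np.random.default_rng(0)

# Jittered (nonuniform) sampling frequencies within bandwidth K.
K, m = 64.0, 257
omega = np.linspace(-K, K, m) + rng.uniform(-0.3, 0.3, m)

# Simple density-compensation weights: local gap between neighbouring frequencies.
order = np.argsort(omega)
weights = np.empty(m)
weights[order] = np.gradient(omega[order])

# Reconstruction space: n piecewise-constant elements on [0, 1]; the abstract's
# result suggests n may grow linearly with the sampling bandwidth K.
n = 32
edges = np.linspace(0.0, 1.0, n + 1)
A = np.stack([box_ft(omega, edges[k], edges[k + 1]) for k in range(n)], axis=1)

# Fourier samples of a synthetic target f(x) = sin(2*pi*x) on [0, 1],
# approximated by simple quadrature.
x = np.linspace(0.0, 1.0, 4096)
f = np.sin(2 * np.pi * x)
samples = (f[None, :] * np.exp(-1j * np.outer(omega, x))).mean(axis=1)

# Weighted least squares: minimise sum_j weights_j * |samples_j - (A c)_j|^2.
Wh = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(Wh[:, None] * A, Wh * samples, rcond=None)
print("first recovered coefficients:", np.round(coef.real[:5], 3))
```

    The weighted least-squares step is the standard computational form of generalized sampling; swapping the piecewise-constant elements for compactly supported wavelets would follow the same pattern.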

    Shape basis interpretation for monocular deformable 3D reconstruction

    In this paper, we propose a novel interpretable shape model to encode object non-rigidity. We first use the initial frames of a monocular video to recover a rest shape, which is later used to compute a dissimilarity measure based on a distance matrix. Spectral analysis is then applied to this matrix to obtain a reduced shape basis that, in contrast to existing approaches, can be physically interpreted. In turn, these pre-computed shape bases are used to linearly span the deformation of a wide variety of objects. We introduce the low-rank basis into a sequential approach to recover both camera motion and non-rigid shape from the monocular video, by simply optimizing the weights of the linear combination using bundle adjustment. Since the number of parameters to optimize per frame is relatively small, especially when physical priors are considered, our approach is fast and can potentially run in real time. Validation is done on a wide variety of real-world objects undergoing both inextensible and extensible deformations. Our approach achieves remarkable robustness to artifacts such as noisy and missing measurements and shows improved performance compared to competing methods.
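    A minimal sketch of the kind of low-rank shape basis described above: spectral analysis is applied to a distance matrix built from a rest shape, and the leading eigenvectors span per-point deformations. The Gaussian affinity kernel, basis size K and deformation model below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
rest = rng.standard_normal((200, 3))            # rest shape: N 3-D points

# Pairwise distance matrix of the rest shape.
D = np.linalg.norm(rest[:, None, :] - rest[None, :, :], axis=-1)

# Turn distances into an affinity and eigendecompose (spectral analysis).
sigma = np.median(D)
A = np.exp(-(D / sigma) ** 2)
evals, evecs = np.linalg.eigh(A)

# Keep the K leading eigenvectors as a per-point deformation basis.
K = 10
basis = evecs[:, -K:]                           # shape (N, K)

# Deformed shape = rest shape + basis-weighted per-axis displacements, so only
# 3*K weights per frame would need to be optimised (e.g. inside bundle
# adjustment together with camera motion).
weights = 0.05 * rng.standard_normal((K, 3))    # per-frame unknowns
deformed = rest + basis @ weights
print(deformed.shape)                           # (200, 3)
```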

    Weighted frames of exponentials and stable recovery of multidimensional functions from nonuniform Fourier samples

    In this paper, we consider the problem of recovering a compactly supported multivariate function from a collection of pointwise samples of its Fourier transform taken nonuniformly. We do this by using the concept of weighted Fourier frames. A seminal result of Beurling shows that sample points give rise to a classical Fourier frame provided they are relatively separated and of sufficient density. However, this result does not allow for arbitrary clustering of sample points, as is often the case in practice. Whilst keeping the density condition sharp and dimension independent, our first result removes the separation condition and shows that density alone suffices. However, this result does not lead to estimates for the frame bounds. A known result of Groechenig provides explicit estimates, but only subject to a density condition that deteriorates linearly with dimension. In our second result we improve these bounds by reducing the dimension dependence. In particular, we provide explicit frame bounds which are dimensionless for functions having compact support contained in a sphere. Next, we demonstrate how our two main results give new insight into a reconstruction algorithm, based on the existing generalized sampling framework, that allows for stable and quasi-optimal reconstruction in any particular basis from a finite collection of samples. Finally, we construct sufficiently dense sampling schemes that are often used in practice (jittered, radial and spiral schemes) and provide several examples illustrating the effectiveness of our approach when tested on these schemes.
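    The sketch below generates the three sampling schemes named in the abstract (jittered, radial and spiral) together with simple density-compensation weights of the kind used in weighted Fourier frames. The bandwidth, point counts and weight heuristics are illustrative assumptions, not the paper's constructions or bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 32.0                                    # sampling bandwidth

def jittered(n_per_axis=65, jitter=0.4):
    """Perturbed Cartesian grid in [-K, K]^2; weights ~ grid cell area."""
    g = np.linspace(-K, K, n_per_axis)
    gx, gy = np.meshgrid(g, g)
    h = g[1] - g[0]
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    pts += rng.uniform(-jitter * h, jitter * h, pts.shape)
    return pts, np.full(len(pts), h * h)

def radial(n_spokes=64, n_per_spoke=65):
    """Equally spaced spokes; weights grow with |r| (samples cluster at the origin)."""
    theta = np.linspace(0, np.pi, n_spokes, endpoint=False)
    r = np.linspace(-K, K, n_per_spoke)
    pts = np.stack([np.outer(r, np.cos(theta)).ravel(),
                    np.outer(r, np.sin(theta)).ravel()], axis=1)
    w = np.abs(np.outer(r, np.ones(n_spokes))).ravel() + 1e-3
    return pts, w

def spiral(n=4096, turns=16):
    """Archimedean spiral from the origin out to radius K."""
    t = np.linspace(0, 1, n)
    r, phi = K * t, 2 * np.pi * turns * t
    pts = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
    return pts, r + 1e-3                    # weights again ~ local sparsity

for name, (pts, w) in {"jittered": jittered(),
                       "radial": radial(),
                       "spiral": spiral()}.items():
    print(f"{name:8s}: {len(pts)} samples, weight range "
          f"[{w.min():.3f}, {w.max():.3f}]")
```

    In a reconstruction, these weights would multiply the residuals of the least-squares system, exactly as in the 1-D sketch shown after the first abstract.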

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis including population-based studies. However, such volume reconstructions require information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently from the mother, so external trackers such as electromagnetic or optical tracking systems cannot capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net, and the output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground-truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness. Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image analysis (PIPPI), 201
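    A toy end-to-end sketch of the segment-track-compound structure described above, on synthetic volumes: an intensity threshold stands in for the Residual 3D U-Net, a centroid-based translation estimate stands in for the SLAM module, and the aligned volumes are averaged into one compound. Everything below is an illustrative assumption rather than the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment(vol, thr=0.5):
    """Placeholder for the learned segmentation: binary mask of 'fetal' voxels."""
    return vol > thr

def estimate_translation(mask_a, mask_b):
    """Placeholder for SLAM tracking: shift between the two mask centroids."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return cb - ca

# Synthetic sweep: a bright blob drifting through consecutive US volumes.
shape, n_frames = (32, 32, 32), 5
zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")

def blob(center):
    return np.exp(-((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2) / 20.0)

centers = [np.array([12.0, 12.0, 12.0]) + 2.0 * k for k in range(n_frames)]
volumes = [blob(c) + 0.05 * rng.standard_normal(shape) for c in centers]

# Track cumulative motion and compound all volumes in the first frame's space.
pose = np.zeros(3)
compound = volumes[0].copy()
for prev, cur in zip(volumes[:-1], volumes[1:]):
    pose += estimate_translation(segment(prev), segment(cur))
    shift = np.round(pose).astype(int)
    compound += np.roll(cur, tuple(-shift), axis=(0, 1, 2))
compound /= n_frames
print("estimated cumulative shift:", np.round(pose, 2))
```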

    Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light

    One solution for depth imaging of a moving scene is to project a static pattern onto the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we encode multiple projection patterns into each single captured image to realize temporal super-resolution of the depth image sequences. With our method, multiple patterns are projected onto the object at a higher fps than is possible with a camera. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information about the scene from each single image. The decoding process is realized using a learning-based approach in which no geometric calibration is needed. Experiments confirm the effectiveness of our method, with sequential shapes reconstructed from a single image. Both quantitative evaluations and comparisons with recent techniques were also conducted. Comment: 9 pages, published at the International Conference on Computer Vision (ICCV 2017).
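    A minimal 1-D simulation of the image-formation model behind this idea: the projector switches between several patterns within one camera exposure, and the object's disparity changes during that exposure, so the single captured image mixes temporal information. The patterns, the toy disparity model and the sizes are illustrative assumptions; the paper's learning-based decoder is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
width, n_patterns = 256, 4

# High-fps binary stripe patterns (one image row each), projected in sequence.
patterns = (rng.random((n_patterns, width)) > 0.5).astype(float)

# Depth of the moving object at each sub-exposure, mapped to a pixel disparity.
depths = np.linspace(1.0, 1.3, n_patterns)          # object moving away
disparity = np.round(40.0 / depths).astype(int)     # toy triangulation model

# Captured image = average of the patterns, each shifted by its disparity,
# i.e. one camera frame integrates several projector frames.
captured = np.mean([np.roll(p, d) for p, d in zip(patterns, disparity)], axis=0)
print("captured row:", np.round(captured[:8], 2))
```

    A learned decoder would then map such mixed intensities back to a short per-pixel sequence of depths, which is what gives the temporal super-resolution.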