
    Towards Benchmarking Scene Background Initialization

    Given a set of images of a scene taken at different times, the availability of an initial background model that describes the scene without foreground objects is a prerequisite for a wide range of applications, from video surveillance to computational photography. Even though several methods have been proposed for scene background initialization, the lack of a common ground-truthed dataset and of a common set of metrics makes it difficult to compare their performance. To take first steps towards an easy and fair comparison of these methods, we assembled a dataset of sequences frequently adopted for background initialization, selected or created ground truths for quantitative evaluation through a selected suite of metrics, and compared results obtained by some existing methods, making all the material publicly available. (Comment: 6 pages, SBI dataset, SBMI2015 Workshop)
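    A common baseline for the background-initialization task the abstract describes is the per-pixel temporal median over the frame stack; a minimal sketch (not one of the benchmarked methods, just an illustration of the problem setup):

    ```python
    import numpy as np

    def estimate_background(frames):
        """Estimate a foreground-free background as the per-pixel temporal median.

        frames: array of shape (T, H, W) -- a stack of grayscale frames.
        Assumes each pixel shows the true background in most frames, so the
        median rejects transient foreground objects.
        """
        return np.median(frames, axis=0)

    # Toy scene: constant background with a moving "object" in each frame.
    background = np.full((8, 8), 100.0)
    frames = np.stack([background.copy() for _ in range(9)])
    for t in range(9):
        frames[t, t % 8, :] = 255.0  # a bright foreground stripe moves each frame

    recovered = estimate_background(frames)
    print(np.allclose(recovered, background))  # prints True: the median removes the stripe
    ```

    The median fails when foreground dominates a pixel for most of the sequence, which is exactly the regime the benchmarked methods aim to handle.
    
    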

    Compressive sensing based Q-space resampling for handling fast bulk motion in HARDI acquisitions

    Diffusion-weighted (DW) MRI has become a widely adopted imaging modality to reveal the underlying brain connectivity. Long acquisition times and/or non-cooperative patients increase the chances of motion-related artifacts. Whereas slow bulk motion results in inter-gradient misalignment, which can be handled via retrospective motion correction algorithms, fast bulk motion usually affects data during the application of a single diffusion gradient, causing signal dropout artifacts. Common practice is to discard gradients bearing signal attenuation because of the difficulty of their retrospective correction, at the cost of losing full gradients for further processing. Nonetheless, such attenuation might only affect a limited number of slices within a gradient volume. Q-space resampling has recently been proposed to recover corrupted slices while saving gradients for subsequent reconstruction. However, it implicitly assumes only a few corrupted gradients, which might not hold when scanning unsedated infants or patients in pain. In this paper, we propose to adopt recent advances in compressive sensing based reconstruction of the diffusion orientation distribution functions (ODF) from undersampled measurements to resample corrupted slices. We make use of Simple Harmonic Oscillator based Reconstruction and Estimation (SHORE) basis functions, which can analytically model the ODF from arbitrarily sampled signals. We demonstrate the impact of the proposed resampling strategy compared to state-of-the-art resampling and gradient exclusion on simulated intra-gradient motion as well as on samples from real DWI data.
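    The core idea, fitting an analytic basis to the surviving measurements and evaluating it at the corrupted locations, can be sketched in a simplified 1-D analogue. A truncated Fourier basis stands in for the SHORE basis, and plain least squares stands in for the compressive-sensing reconstruction; the actual method operates on Q-space samples with a sparsity prior:

    ```python
    import numpy as np

    def design_matrix(x, n_harmonics):
        """Truncated Fourier design matrix: a stand-in for an analytic
        signal basis (the paper uses SHORE on Q-space samples)."""
        cols = [np.ones_like(x)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * x), np.sin(k * x)]
        return np.stack(cols, axis=1)

    # Smooth "signal" sampled at 64 locations; drop every third sample to
    # mimic corrupted slices within a gradient volume.
    x_full = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    signal = 1.0 + 0.5 * np.cos(x_full) + 0.2 * np.sin(2 * x_full)
    keep = np.arange(64) % 3 != 0

    # Fit basis coefficients from the surviving samples only ...
    A = design_matrix(x_full[keep], n_harmonics=3)
    coef, *_ = np.linalg.lstsq(A, signal[keep], rcond=None)

    # ... then resample the full signal, including the corrupted locations.
    resampled = design_matrix(x_full, n_harmonics=3) @ coef
    print(np.max(np.abs(resampled - signal)) < 1e-8)  # prints True
    ```

    The recovery is exact here because the toy signal lies in the span of the basis; real DW signals only do so approximately, which is where the choice of basis and the sampling pattern matter.
    
    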

    Skeletal Shape Correspondence Through Entropy

    We present a novel approach for improving the shape statistics of medical image objects by generating correspondence of skeletal points. Each object's interior is modeled by an s-rep, i.e., by a sampled, folded, two-sided skeletal sheet with spoke vectors proceeding from the skeletal sheet to the boundary. The skeleton is divided into three parts: the up side, the down side, and the fold curve. The spokes on each part are treated separately and, using spoke interpolation, are shifted along that skeleton in each training sample so as to tighten the probability distribution on those spokes' geometric properties while sampling the object interior regularly. As with the surface/boundary-based correspondence method of Cates et al., entropy is used to measure both the probability distribution tightness and the sampling regularity, here of the spokes' geometric properties. Evaluation on synthetic and real-world lateral ventricle and hippocampus datasets demonstrates improvement in the performance of statistics using the resulting probability distributions. This improvement is greater than that achieved by an entropy-based correspondence method on the boundary points.
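    The "distribution tightness" term the abstract mentions is typically the differential entropy of a Gaussian fitted to the corresponding features across the training population; a hedged sketch of that objective (the feature layout and the ridge term are illustrative, not the paper's exact formulation):

    ```python
    import numpy as np

    def gaussian_entropy(samples, eps=1e-6):
        """Differential entropy of a Gaussian fit to the rows of `samples`.

        samples: (n_subjects, n_features) matrix of corresponding geometric
        properties (e.g. stacked spoke positions/directions per training case).
        A small ridge `eps` keeps the covariance non-singular, as is common
        when n_subjects is smaller than n_features.
        """
        d = samples.shape[1]
        cov = np.cov(samples, rowvar=False) + eps * np.eye(d)
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

    rng = np.random.default_rng(1)
    loose = rng.normal(scale=2.0, size=(50, 4))  # poor correspondence: high spread
    tight = rng.normal(scale=0.5, size=(50, 4))  # good correspondence: low spread
    print(gaussian_entropy(tight) < gaussian_entropy(loose))  # prints True
    ```

    Shifting spokes to minimize this entropy (plus a regularity term on the interior sampling) is what drives the correspondence optimization described above.
    
    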

    Shape‐from‐shading using sensor and physical object characteristics applied to human teeth surface reconstruction

    Image formation involves understanding the sensor's characteristics and the object's reflectance. In dentistry, for example, an accurate three‐dimensional (3D) representation of the human jaw may be used for diagnostic and treatment purposes. Photogrammetry can offer a flexible, cost‐effective solution in that regard. Nonetheless, there are several challenges, such as the unfriendly image acquisition environment inside the human mouth, problems with lighting (specularity effects because of saliva, gum discolouration, and occlusion because of the tongue in the lower jaw), and errors because of the data acquisition sensors (e.g. camera calibration errors, lens distortion and so on). In this study, the authors focus on the 3D surface reconstruction aspect of human jaw modelling based on physical surface characteristics and sensor properties. Owing to the apparent lens distortion imposed by near‐field imaging, the authors propose a new flexible calibration for lens radial distortion based on a single image of a sphere. The authors also propose a non‐Lambertian shape‐from‐shading (SFS) algorithm under perspective projection which benefits from the camera calibration parameters. Their experiments provide quantitative metric results for the proposed approach. The reflectance of the tooth surface is modelled by the Oren–Nayar reflectance model for rough surfaces, whose roughness parameter is physically computed from optical surface profiler measurements. Compared to state‐of‐the‐art SFS approaches, their approach is able to recover geometric details of the tooth occlusal surface. This work is fundamental for establishing an optical‐based approach for reconstructing the human jaw that is inexpensive and does not use ionising radiation.
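    The Oren–Nayar model referred to above has a widely used first-order closed form; a minimal sketch, assuming the standard A/B parameterisation and with an arbitrary roughness value in place of the paper's physically measured one:

    ```python
    import numpy as np

    def oren_nayar(theta_i, theta_r, phi_diff, albedo, sigma):
        """First-order Oren-Nayar reflected radiance for a rough diffuse surface.

        theta_i / theta_r: incidence / viewing zenith angles (radians);
        phi_diff: azimuth difference between light and view directions;
        sigma: surface roughness parameter (radians).
        sigma = 0 reduces the model to Lambertian reflectance.
        """
        s2 = sigma ** 2
        A = 1.0 - 0.5 * s2 / (s2 + 0.33)
        B = 0.45 * s2 / (s2 + 0.09)
        alpha = np.maximum(theta_i, theta_r)
        beta = np.minimum(theta_i, theta_r)
        return (albedo / np.pi) * np.cos(theta_i) * (
            A + B * np.maximum(0.0, np.cos(phi_diff)) * np.sin(alpha) * np.tan(beta)
        )

    # Sanity check: zero roughness recovers the Lambertian model rho/pi * cos(theta_i).
    L = oren_nayar(0.3, 0.4, 0.0, albedo=0.8, sigma=0.0)
    print(np.isclose(L, 0.8 / np.pi * np.cos(0.3)))  # prints True
    ```

    In an SFS pipeline, this radiance expression replaces the Lambertian image-irradiance term, which is what lets the reconstruction capture the rough occlusal surface of teeth.
    
    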