
    3D scanning, modelling and printing of ultra-thin nacreous shells from Jericho: a case study of small finds documentation in archaeology

    This paper springs from a collaborative project jointly carried out by the FabLab Saperi&Co and the Museum of Near East, Egypt and Mediterranean of Sapienza University of Rome, focused on producing replicas of ultra-thin archaeological finds with sub-millimetric precision. The main technological challenge of this project was to produce models through 3D optical scanning (photogrammetry) and to print faithful replicas with additive manufacturing. The objects chosen for the trial were five extremely fragile and ultra-thin nacreous shells retrieved at Tell es-Sultan/ancient Jericho by the Italian-Palestinian Expedition in spring 2017, temporarily on exhibit in the Museum. The experiment proved successful, and the scanning, modelling and printing of the shells also allowed some observations on their possible uses in research and museum activities.

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstructions require information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently from the mother, making it impossible for external trackers, such as electromagnetic or optical tracking systems, to capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net and the output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness. Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image analysis (PIPPI), 201

    Efficient illumination independent appearance-based face tracking

    One of the major challenges that visual tracking algorithms face nowadays is coping with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training: we only require two image sequences. In one sequence, a single facial expression is subject to all possible illuminations; in the other, the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker's Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur's, which worsens convergence. Our approach differs from Hager and Belhumeur's additive and Matthews and Baker's compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted, we show that the model introduced accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used for tracking a human face at standard video frame rates on an average personal computer.
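The additive two-subspace model described in this abstract can be sketched numerically. The following is a hedged illustration, not the authors' code: the bases, dimensions, and random data are placeholder assumptions, chosen only to show why fitting the additive model reduces to a single linear least-squares problem over the stacked basis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64 * 64           # vectorised face image (illustrative size)
k_expr, k_illum = 5, 3       # subspace dimensions (assumed values)

mu = rng.standard_normal(n_pixels)                  # mean appearance
B_expr = rng.standard_normal((n_pixels, k_expr))    # expression basis (stand-in)
B_illum = rng.standard_normal((n_pixels, k_illum))  # illumination basis (stand-in)

def synthesise(c_expr, c_illum):
    """Additive model: image = mean + expression term + illumination term."""
    return mu + B_expr @ c_expr + B_illum @ c_illum

img = synthesise(rng.standard_normal(k_expr), rng.standard_normal(k_illum))

# Because the model is additive, fitting a new image is one linear
# least-squares solve over the stacked (expression | illumination) basis.
B = np.hstack([B_expr, B_illum])
coeffs, *_ = np.linalg.lstsq(B, img - mu, rcond=None)
recon = mu + B @ coeffs
print(np.allclose(recon, img))  # True: the stacked basis spans the residual here
```

In this noise-free sketch the recovered coefficients reproduce the image exactly; in practice the least-squares fit would be computed against a real target frame during tracking.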

    Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information.
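The forward model behind the multi-scale decomposition — a data matrix expressed as a sum of block-wise low rank components with growing block sizes — can be sketched as follows. This is a hedged illustration with assumed matrix size, block scales, and ranks, not the paper's convex recovery algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16  # data matrix is n x n (illustrative size)

def blockwise_lowrank(n, block, rank, rng):
    """Matrix whose (block x block) tiles each have the given low rank."""
    X = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            U = rng.standard_normal((block, rank))
            V = rng.standard_normal((rank, block))
            X[i:i+block, j:j+block] = U @ V  # rank-`rank` tile
    return X

# Increasing scales of block sizes: 4x4 tiles, 8x8 tiles, the full matrix.
components = [
    blockwise_lowrank(n, 4, 1, rng),
    blockwise_lowrank(n, 8, 1, rng),
    blockwise_lowrank(n, 16, 2, rng),
]
Y = sum(components)  # the observed data matrix is the sum of all scales

# Sanity check: every 4x4 tile of the finest component really is rank 1.
ranks = [np.linalg.matrix_rank(components[0][i:i+4, j:j+4])
         for i in range(0, n, 4) for j in range(0, n, 4)]
print(set(ranks))  # {1}
```

The inverse problem the paper studies is recovering `components` from `Y` alone, which it formulates as a convex program with a nuclear-norm-like penalty per scale.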

    Bessel beam illumination reduces random and systematic errors in quantitative functional studies using light-sheet microscopy

    Light-sheet microscopy (LSM), in combination with intrinsically transparent zebrafish larvae, is a choice method to observe brain function with high frame rates at cellular resolution. Inherently to LSM, however, residual opaque objects cause stripe artifacts, which obscure features of interest and, during functional imaging, modulate fluorescence variations related to neuronal activity. Here, we report how Bessel beams reduce streaking artifacts and produce high-fidelity quantitative data, demonstrating a fivefold increase in sensitivity to calcium transients and a 20-fold increase in accuracy in the detection of activity correlations in functional imaging. Furthermore, using principal component analysis, we show that measurements obtained with Bessel beams are clean enough to reveal, in one-shot experiments, correlations that cannot be averaged over trials following stimuli, as is the case when studying spontaneous activity. Our results not only demonstrate the contamination of data by systematic and random errors under conventional Gaussian illumination but, furthermore, quantify the increase in fidelity of such data when using Bessel beams.

    Burst Denoising with Kernel Prediction Networks

    We present a technique for jointly denoising bursts of images taken from a handheld camera. In particular, we propose a convolutional neural network architecture for predicting spatially varying kernels that can both align and denoise frames, a synthetic data generation approach based on a realistic noise formation model, and an optimization guided by an annealed loss function to avoid undesirable local minima. Our model matches or outperforms the state-of-the-art across a wide range of noise levels on both real and synthetic data. Comment: To appear in CVPR 2018 (spotlight). Project page: http://people.eecs.berkeley.edu/~bmild/kpn
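The core mechanism — applying a predicted, spatially varying kernel per pixel and per frame, then summing over the burst — can be sketched without the network itself. This is a hedged illustration: the random kernels below stand in for the network's output, and the sizes are placeholder assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W, K = 4, 8, 8, 3   # burst length, image size, kernel size (assumed)

burst = rng.standard_normal((T, H, W))

# Stand-in for the network output: one KxK kernel per pixel per frame,
# normalised so each output pixel's weights sum to one across the burst.
kernels = rng.random((T, H, W, K, K))
kernels /= kernels.sum(axis=(0, 3, 4), keepdims=True)

pad = K // 2
padded = np.pad(burst, ((0, 0), (pad, pad), (pad, pad)), mode="edge")

# Each output pixel is a weighted sum of a KxK patch from every frame:
# the per-frame kernels can shift content (alignment) and average it (denoising).
denoised = np.zeros((H, W))
for t in range(T):
    for y in range(H):
        for x in range(W):
            patch = padded[t, y:y+K, x:x+K]
            denoised[y, x] += np.sum(kernels[t, y, x] * patch)
```

Because alignment and averaging are folded into one linear operation per pixel, the network can trade them off jointly, which is the central design choice of kernel prediction.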

    Feasibility of dual-energy CBCT material decomposition in the human torso with 2D anti-scatter grids and grid-based scatter sampling

    Background: Dual-energy (DE) imaging techniques in cone-beam computed tomography (CBCT) have potential clinical applications, including material quantification and improved tissue visualization. However, the performance of DE CBCT is limited by the effects of scattered radiation, which restricts its use to small-object imaging. Purpose: This study investigates the feasibility of DE CBCT material decomposition by reducing scatter with a 2D anti-scatter grid and a measurement-based scatter correction method. Methods: A 2D anti-scatter grid prototype was utilized with a residual scatter correction method in a linac-mounted CBCT system to investigate the effects of robust scatter suppression in DE CBCT. Scans were acquired at 90 and 140 kVp using phantoms that mimic head, thorax, and abdomen/pelvis anatomies. The effect of a 2D anti-scatter grid, with and without residual scatter correction, on iodine concentration quantification and contrast visualization in virtual monoenergetic (VME) images was evaluated. Results: A 2D grid, with or without scatter correction, enabled differentiation of iodine and water after DE processing in human torso-sized phantom images. However, iodine quantification errors were up to 10 mg/ml in pelvis phantoms when only the 2D grid was used. Adding scatter correction to 2D-grid CBCT reduced iodine quantification errors below 1.5 mg/ml in pelvis phantoms, comparable to iodine quantification errors in multidetector CT. Conclusions: This study indicates that accurate DE decomposition is potentially feasible in DE CBCT of the human torso if robust scatter suppression is achieved with 2D anti-scatter grids and residual scatter correction. This approach can potentially enable better contrast visualization and tissue and contrast agent quantification in various CBCT applications.
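The two-material decomposition underlying iodine quantification can be sketched as a small linear inversion. This is a hedged, noise-free illustration of the standard basis-material formulation, not the study's implementation; the attenuation coefficients below are placeholder values, not measured data.

```python
import numpy as np

# Illustrative (assumed) mass attenuation coefficients, cm^2/g:
mu_90  = {"water": 0.20, "iodine": 4.0}   # at ~90 kVp
mu_140 = {"water": 0.15, "iodine": 2.0}   # at ~140 kVp

A = np.array([[mu_90["water"],  mu_90["iodine"]],
              [mu_140["water"], mu_140["iodine"]]])

# Forward model: the measured attenuation at each energy is a weighted sum
# of the basis materials' densities; decomposition inverts this 2x2 system.
true_density = np.array([1.0, 0.010])   # g/cm^3 water, 10 mg/ml iodine
measured = A @ true_density

recovered = np.linalg.solve(A, measured)
print(recovered)  # recovers [1.0, 0.010] in this noise-free sketch
```

In practice, residual scatter perturbs `measured` differently at the two energies, which is why scatter bias propagates directly into the recovered iodine concentration and why the study's scatter suppression matters.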