
    Computer Vision and Graphics for Heritage Preservation and Digital Archaeology

    The goal of this work is to provide attendees with a survey of topics related to Heritage Preservation and Digital Archaeology, which are challenging and motivating subjects for both the computer vision and graphics communities. These issues have been gaining increasing attention and priority within the scientific community, and among funding agencies and development organizations, in recent years. This work is motivated by recent efforts in the digital preservation of cultural heritage objects and sites before degradation or damage caused by environmental factors or human development. One of the main focuses of this research is the development of new techniques for realistic 3D model building from images, preserving as much information as possible. We intend to introduce and discuss several emerging topics in computer vision and graphics related to the proposed theme, while highlighting the major contributions and advances in these fields.

    Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy

    Neural circuits can be reconstructed from brain images acquired by serial section electron microscopy. Image analysis has been performed by manual labor for half a century, and efforts at automation date back almost as far. Convolutional nets were first applied to neuronal boundary detection a dozen years ago, and have now achieved impressive accuracy on clean images. Robust handling of image defects is a major outstanding challenge. Convolutional nets are also being employed for other tasks in neural circuit reconstruction: finding synapses and identifying synaptic partners, extending or pruning neuronal reconstructions, and aligning serial section images to create a 3D image stack. Computational systems are being engineered to handle petavoxel images of cubic millimeter brain volumes.

    Exploiting flow dynamics for super-resolution in contrast-enhanced ultrasound

    Ultrasound localization microscopy offers new radiation-free diagnostic tools for vascular imaging deep within the tissue. Sequential localization of echoes returned from inert microbubbles at low concentration within the bloodstream reveals the vasculature with capillary resolution. Despite its high spatial resolution, low microbubble concentrations dictate the acquisition of tens of thousands of images, over the course of several seconds to tens of seconds, to produce a single super-resolved image, since each echo must be well separated from adjacent microbubbles. Such long acquisition times and stringent constraints on microbubble concentration are undesirable in many clinical scenarios. To address these restrictions, sparsity-based approaches have recently been developed. These methods reduce the total acquisition time dramatically, while maintaining good spatial resolution in settings with considerable microbubble overlap. Here, we further improve sparsity-based super-resolution ultrasound imaging by exploiting the inherent flow of microbubbles and utilizing their motion kinematics. In doing so, we also provide quantitative measurements of microbubble velocities. Our method relies on simultaneous tracking and super-localization of individual microbubbles in a frame-by-frame manner, and as such may be suitable for real-time implementation. We demonstrate the effectiveness of the proposed approach on both simulations and in-vivo contrast-enhanced human prostate scans, acquired with a clinically approved scanner.
    Comment: 11 pages, 9 figures
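The frame-by-frame super-localization and velocity-tracking pipeline described in this abstract can be illustrated with a minimal sketch. All names here are hypothetical, and the 3x3 centre-of-mass peak refinement plus greedy nearest-neighbour linking are simplified stand-ins for the paper's sparsity-based recovery and kinematic model:

```python
import numpy as np

def localize(frame, thresh):
    """Sub-pixel localization of isolated microbubble echoes.

    Peaks are local maxima above `thresh`, refined with a 3x3
    centre-of-mass estimate (a simplified stand-in for the
    sparsity-based recovery used in the paper).
    """
    peaks = []
    H, W = frame.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] >= thresh and frame[y, x] == patch.max():
                w = patch / patch.sum()
                dy = (w * np.arange(-1, 2)[:, None]).sum()
                dx = (w * np.arange(-1, 2)[None, :]).sum()
                peaks.append((y + dy, x + dx))
    return np.array(peaks)

def link_tracks(prev_pts, cur_pts, max_disp):
    """Greedy nearest-neighbour linking between consecutive frames;
    returns one velocity vector (pixels per frame) per linked bubble."""
    vels = []
    for p in prev_pts:
        d = np.linalg.norm(cur_pts - p, axis=1)
        j = d.argmin()
        if d[j] <= max_disp:
            vels.append(cur_pts[j] - p)
    return np.array(vels)
```

Linking localizations across frames is what yields the quantitative velocity estimates mentioned in the abstract; both steps operate per frame, which is why a real-time implementation is plausible.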

    New point matching algorithm for panoramic reflectance images


    Toward automated earned value tracking using 3D imaging tools


    Multimodal Three Dimensional Scene Reconstruction, The Gaussian Fields Framework

    The focus of this research is on building 3D representations of real-world scenes and objects using different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry, and multi-spectral image sequences, including visual and thermal IR images, that provide additional scene characteristics. The crucial technical challenge that we addressed is the automatic point-set registration task. In this context our main contribution is the development of an optimization-based method at the core of which lies a unified criterion that solves simultaneously for the dense point correspondence and transformation recovery problems. The new criterion has a straightforward expression in terms of the datasets and the alignment parameters and was used primarily for 3D rigid registration of point-sets. However, it also proved useful for feature-based multimodal image alignment. We derived our method from simple Boolean matching principles by approximation and relaxation. One of the main advantages of the proposed approach, as compared to the widely used class of Iterative Closest Point (ICP) algorithms, is convexity in the neighborhood of the registration parameters and continuous differentiability, allowing for the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted in terms of a Gaussian force field exerted by one point-set on the other. Such a formulation proved useful for controlling and increasing the region of convergence, and hence allowing for more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. In addition, we also introduced a new local feature descriptor that was derived from visual saliency principles and which significantly enhanced the performance of the registration algorithm.
    The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in the field of 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any type of data that can be represented as N-dimensional point-sets, the scope of the method reaches many more pattern analysis applications.
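The Gaussian force-field criterion described in this abstract can be sketched in a few lines. This is a didactic brute-force version under simplifying assumptions: it fits only a translation by gradient ascent, whereas the thesis recovers full rigid transforms and evaluates the field in linear time with the Fast Gauss Transform:

```python
import numpy as np

def gaussian_field_energy(P, Q, sigma):
    """Gaussian Fields criterion: the sum of Gaussian affinities over all
    point pairs drawn from the moving set P and the fixed set Q.  Unlike
    the ICP cost, it is smooth and differentiable in the alignment
    parameters, so standard gradient methods apply."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2).sum()

def register_translation(P, Q, sigma=1.0, lr=0.05, iters=500):
    """Recover a pure translation by gradient ascent on the criterion
    (a minimal sketch, not the thesis' full rigid-registration solver)."""
    t = np.zeros(P.shape[1])
    for _ in range(iters):
        diff = (P + t)[:, None, :] - Q[None, :, :]   # pairwise offsets
        w = np.exp(-(diff ** 2).sum(-1) / sigma ** 2)
        grad = -2.0 / sigma ** 2 * (w[..., None] * diff).sum((0, 1))
        t += lr * grad                               # ascend the energy
    return t
```

The kernel width `sigma` plays the role described in the abstract: a larger value widens the region of convergence at the cost of a flatter, less selective criterion.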

    Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation

    Accounting for 26% of all new cancer cases worldwide, breast cancer remains the most common form of cancer in women. Although early breast cancer has a favourable long-term prognosis, roughly a third of patients suffer from a suboptimal aesthetic outcome despite breast-conserving cancer treatment. Clinical-quality 3D modelling of the breast surface therefore assumes an increasingly important role in advancing treatment planning, prediction and evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive and either infrastructure-heavy or subject to motion artefacts. In this paper, we employ a single consumer-grade RGBD camera with an ICP-based registration approach to jointly align all points from a sequence of depth images non-rigidly. Subtle body deformation due to postural sway and respiration is successfully mitigated, leading to higher geometric accuracy through regularised locally affine transformations. We present results from 6 clinical cases where our method compares well with the gold standard and outperforms a previous approach. We show that our method produces better reconstructions, qualitatively by visual assessment and quantitatively by consistently obtaining lower landmark error scores and yielding more accurate breast volume estimates.
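The paper's registration is nonrigid, with regularised locally affine transformations aligning a whole sequence of depth frames jointly. The rigid ICP core that such methods extend can be sketched as follows (a minimal illustration using the standard Kabsch fit, not the authors' implementation):

```python
import numpy as np

def closest_points(P, Q):
    """ICP correspondence step: for each point of P, its nearest
    neighbour in Q (brute force; KD-trees are used in practice)."""
    d = np.linalg.norm(P[:, None] - Q[None], axis=2)
    return Q[d.argmin(1)]

def best_rigid(P, X):
    """Least-squares rotation and translation mapping P onto X
    (the Kabsch / Procrustes solution)."""
    cp, cx = P.mean(0), X.mean(0)
    H = (P - cp).T @ (X - cx)
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [s])
    R = Vt.T @ D @ U.T
    return R, cx - R @ cp

def icp(P, Q, iters=20):
    """Alternate correspondence search and rigid fitting; returns the
    aligned points and the accumulated rotation and translation."""
    R_tot, t_tot = np.eye(P.shape[1]), np.zeros(P.shape[1])
    for _ in range(iters):
        X = closest_points(P, Q)
        R, t = best_rigid(P, X)
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return P, R_tot, t_tot
```

The nonrigid variant in the paper replaces the single rigid fit with per-point locally affine transformations plus a regulariser that penalises differences between neighbouring transforms, which is what absorbs postural sway and respiration.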