
    Creation of virtual worlds from 3D models retrieved from content aware networks based on sketch and image queries

    The recent emergence of user-generated content requires new content creation tools that are both easy to learn and easy to use. These tools should enable users to construct high-quality content with minimum effort; it is essential to allow existing multimedia content to be reused as building blocks when creating new content. In this work we present a new tool for automatically constructing virtual worlds with minimum user intervention. Users can create these worlds by drawing a simple sketch, or by using interactively segmented 2D objects taken from larger images. The system receives the sketch or segmented image as a query and uses it to find similar 3D models stored in a Content Centric Network. The user selects a suitable model from the retrieved results, and the system uses it to automatically construct a virtual 3D world.

    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly minimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency. Comment: 11 pages

    Vessel tractography using an intensity based tensor model with branch detection

    In this paper, we present a tubular structure segmentation method that utilizes a second-order tensor constructed from directional intensity measurements, inspired by diffusion tensor imaging (DTI) modeling. The constructed anisotropic tensor, fit inside a vessel, drives the segmentation analogously to a tractography approach in DTI. Our model is initialized at a single seed point and is capable of capturing whole vessel trees through an automatic branch detection algorithm developed in the same framework. Both the centerline of the vessel and its thickness are extracted. Performance results within the Rotterdam Coronary Artery Algorithm Evaluation framework are provided for comparison with existing techniques. A 96.4% average overlap with ground truth delineated by experts is obtained, in addition to other measures reported in the paper. Moreover, we demonstrate further quantitative results on synthetic vascular datasets, provide quantitative experiments for branch detection on patient Computed Tomography Angiography (CTA) volumes, and present qualitative evaluations on the same CTA datasets via visual scores from an expert cardiologist.
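    The tensor-driven tracking step the abstract describes can be sketched roughly as follows. This is a hypothetical simplification, not the paper's exact formulation: the (direction, intensity) sampling scheme and the tensor fit are assumptions for illustration.

    ```python
    import numpy as np

    def intensity_tensor(samples):
        """Build a second-order tensor from directional intensity measurements.

        `samples` is a list of (direction, intensity) pairs; each unit
        direction is weighted by the intensity measured along it.
        """
        T = np.zeros((3, 3))
        for d, w in samples:
            d = np.asarray(d, dtype=float)
            d = d / np.linalg.norm(d)
            T += w * np.outer(d, d)
        return T / len(samples)

    def principal_direction(T):
        """Leading eigenvector of the tensor, taken as the local vessel
        direction -- analogous to the principal diffusion direction that
        drives tractography in DTI."""
        vals, vecs = np.linalg.eigh(T)  # eigenvalues in ascending order
        return vecs[:, np.argmax(vals)]
    ```

    An anisotropic tensor fit inside a tube yields a dominant eigenvalue along the tube's axis, so stepping repeatedly along the principal direction traces the centerline, in the spirit of DTI streamline tractography.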

    Extracting Tree-structures in CT data by Tracking Multiple Statistically Ranked Hypotheses

    In this work, we adapt a method based on multiple hypothesis tracking (MHT), which has been shown to give state-of-the-art vessel segmentation results in interactive settings, for the purpose of extracting trees. Regularly spaced tubular templates are fit to image data, forming local hypotheses. These local hypotheses are used to construct the MHT tree, which is then traversed to make segmentation decisions. However, some critical parameters in this method are scale-dependent and have an adverse effect when tracking structures of varying dimensions. We propose to use statistical ranking of local hypotheses in constructing the MHT tree, which yields a probabilistic interpretation of scores across scales and helps alleviate the scale-dependence of the MHT parameters. This enables our method to track trees starting from a single seed point. Our method is evaluated on chest CT data to extract airway trees and coronary arteries. In both cases, we show that our method performs significantly better than the original MHT method. Comment: Accepted for publication at the International Journal of Medical Physics and Practice
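    The core of the statistical-ranking idea, making raw template-fit scores comparable across scales, can be sketched as an empirical percentile rank. The reference-population mechanism and the score definition below are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def rank_scores(scores, reference):
        """Empirical percentile rank of each raw template-fit score against
        a reference population of scores collected at the same scale.

        Ranks lie in [0, 1], so hypotheses fitted at different scales become
        directly comparable instead of depending on scale-specific score
        thresholds.
        """
        reference = np.sort(np.asarray(reference, dtype=float))
        scores = np.asarray(scores, dtype=float)
        # fraction of the reference population each candidate score exceeds
        return np.searchsorted(reference, scores, side="right") / len(reference)
    ```

    With per-scale score distributions normalized this way, a single rank threshold can govern tree construction at every branch radius, which is what removes the scale-dependence of the original MHT parameters.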

    Innovative strategies for 3D visualisation using photogrammetry and 3D scanning for mobile phones

    3D model generation through photogrammetry overlays digital information representing real-world objects onto a virtual world. The immediate scope of this study is to generate 3D models from imagery while overcoming the challenge of acquiring accurate 3D meshes. This research aims to find optimised ways to document raw 3D representations of real-life objects on mobile phones and then convert them into retopologised, textured, usable data. Augmented Reality (AR) is a projected combination of real and virtual objects. Much work has gone into market-dependent AR applications that let customers view products before purchasing them; what is needed is a product-independent photogrammetry-to-AR pipeline, freely available, for creating independent 3D augmented models. For the purposes of this paper, the aim is to compare and analyse different open-source SDKs and libraries for producing optimised 3D meshes through photogrammetry/3D scanning, which will form the main skeleton of the 3D-AR pipeline. Natural disasters, global political crises, terrorist attacks and other catastrophes have led researchers worldwide to capture monuments using photogrammetry and laser scans. Some of these objects of "global importance" are processed by organisations including CyArk (Cyber Archives) and UNESCO's World Heritage Centre, which work against time to preserve historical monuments before they are damaged or, in some cases, completely destroyed. There is also a need to consider the significance of preserving objects and monuments that are of value locally to a city or town, and to ask what is being done to preserve them. This research will develop pipelines for collecting and processing 3D data so that local communities can contribute to restoring endangered sites and objects using their smartphones, making these objects viewable in location-based AR.
    Some companies charge relatively large amounts for local scanning projects. This research will instead contribute a non-profit project that could later be used in school curricula, visitor attractions and historical preservation organisations around the globe at no cost. The scope is not limited to furniture, museums or marketing; the pipeline could serve personal digital archiving as well. This research will capture and process virtual objects using mobile phones, comparing computer vision methodologies from data conversion on the phone through 3D generation, texturing and retopologising. The outcomes will serve as input for generating AR that is independent of any industry or product.

    Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier

    Coronary artery centerline extraction in cardiac CT angiography (CCTA) images is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We propose an algorithm that extracts coronary artery centerlines in CCTA using a convolutional neural network (CNN). A 3D dilated CNN is trained to predict the most likely direction and radius of an artery at any given point in a CCTA image based on a local image patch. Starting from a single seed point placed manually or automatically anywhere in a coronary artery, a tracker follows the vessel centerline in two directions using the predictions of the CNN. Tracking is terminated when no direction can be identified with high certainty. The CNN was trained using 32 manually annotated centerlines in a training set consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08 challenge showed that extracted centerlines had an average overlap of 93.7% with 96 manually annotated reference centerlines. Extracted centerline points were highly accurate, with an average distance of 0.21 mm to reference centerline points. In a second test set consisting of 50 CCTA scans, 5,448 markers in the coronary arteries were used as seed points to extract single centerlines. This showed strong correspondence between extracted centerlines and manually placed markers. In a third test set containing 36 CCTA scans, fully automatic seeding and centerline extraction led to extraction of on average 92% of clinically relevant coronary artery segments. The proposed method is able to accurately and efficiently determine the direction and radius of coronary arteries. The method can be trained with limited training data, and once trained allows fast automatic or interactive extraction of coronary artery trees from CCTA images. Comment: Accepted in Medical Image Analysis
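    The bidirectional tracking loop the abstract outlines can be sketched as follows. The `predict` callback stands in for the paper's 3D dilated CNN; its `(direction, radius, confidence)` signature, and the step and confidence parameters, are assumptions made for this sketch.

    ```python
    import numpy as np

    def track_centerline(seed, predict, step=0.5, conf_min=0.8, max_steps=1000):
        """Track a vessel centerline in two directions from a single seed.

        `predict(point)` must return (direction, radius, confidence) for a
        local patch around `point`. Tracking in each direction stops when
        the confidence drops below `conf_min`, mirroring the termination
        criterion described in the abstract.
        """
        halves = []
        for sign in (+1.0, -1.0):
            pts, p, prev = [], np.asarray(seed, dtype=float), None
            for _ in range(max_steps):
                d, r, conf = predict(p)
                if conf < conf_min:
                    break
                d = sign * np.asarray(d, dtype=float)
                # keep the heading consistent with the previous step
                if prev is not None and np.dot(d, prev) < 0:
                    d = -d
                p = p + step * d / np.linalg.norm(d)
                prev = d
                pts.append((p.copy(), r))
            halves.append(pts)
        # stitch the two half-tracks into one centerline through the seed
        seed_pt = (np.asarray(seed, dtype=float), predict(seed)[1])
        return halves[0][::-1] + [seed_pt] + halves[1]
    ```

    Because each step only needs one local prediction, the tracker is cheap at inference time; the expensive part is training the patch-based orientation classifier.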

    MonoPerfCap: Human Performance Capture from Monocular Video

    We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges with a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network using a batch-based pose estimation strategy. Joint recovery of per-batch motion allows us to resolve the ambiguities of the monocular reconstruction problem based on a low-dimensional trajectory subspace. In addition, we propose refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free-viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness, and the scene complexity that can be handled. Comment: Accepted to ACM TOG 2018, to be presented at SIGGRAPH 201

    The VIMOS Integral Field Unit: data reduction methods and quality assessment

    With new-generation spectrographs, integral field spectroscopy is becoming a widely used observational technique. The Integral Field Unit of the VIsible Multi-Object Spectrograph on the ESO-VLT samples a field as large as 54" x 54", covered by 6400 fibers coupled with micro-lenses. We present here the methods of the data processing software developed to extract the astrophysical signal of faint sources from VIMOS IFU observations. We focus on the treatment of the fiber-to-fiber relative transmission and the sky subtraction, and on the dedicated tasks we have built to address the peculiarities and unprecedented complexity of the dataset. We review the automated process we have developed under the VIPGI data organization and reduction environment (Scodeggio et al. 2005), along with the quality control performed to validate the process. The VIPGI-IFU data processing environment has been available to the scientific community for processing VIMOS-IFU data since November 2003. Comment: 19 pages, 10 figures and 1 table. Accepted for publication in PAS
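    The two reduction steps highlighted above, relative fiber transmission correction and sky subtraction, can be sketched in a few lines. This is a deliberately simplified illustration: in the real VIPGI pipeline the transmission is derived from calibration exposures, whereas here it is estimated directly from the science frame, and the `sky_fibers` selection is an assumption.

    ```python
    import numpy as np

    def correct_and_sky_subtract(spectra, sky_fibers):
        """Fiber-to-fiber transmission correction followed by sky subtraction.

        `spectra` is an (n_fibers, n_pixels) array of extracted fiber spectra;
        `sky_fibers` indexes fibers assumed to see blank sky.
        """
        spectra = np.asarray(spectra, dtype=float)
        # relative transmission: median flux per fiber, normalized so the
        # average fiber has transmission 1
        trans = np.median(spectra, axis=1)
        trans = trans / trans.mean()
        flat = spectra / trans[:, None]
        # sky spectrum: median over the designated sky fibers,
        # subtracted from every fiber
        sky = np.median(flat[sky_fibers], axis=0)
        return flat - sky[None, :]
    ```

    Normalizing transmissions before combining fibers matters because, with 6400 fibers, uncorrected throughput variations would otherwise imprint a fiber-dependent pattern on the reconstructed field.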