
    Fast Back-Projection for Non-Line of Sight Reconstruction

    Recent works have demonstrated non-line-of-sight (NLOS) reconstruction by using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation, which back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction, which is up to a thousand times faster than previous work, with almost no quality loss. We base our approach on the observation that the hidden geometry probability map can be built as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and the visible point receiving the scattered light from such hidden geometry. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implementable on the GPU. We demonstrate the efficiency and quality of our technique compared against previous methods in both captured and synthetic data.
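
    As a point of reference for the speed-up claimed above, the brute-force formulation of back-projection can be written directly: every (laser point, sensor point, time bin) triple defines an ellipsoidal three-bounce manifold, and a voxel accumulates signal wherever such manifolds pass through it. The Python/NumPy sketch below implements only this naive baseline, under assumed array shapes (transients indexed as laser x sensor x time, distances in light-travel units); function and parameter names are illustrative and not taken from the authors' code.

        import numpy as np

        def backproject(transients, laser_pts, sensor_pts, times, grid_min, grid_max, res, c=1.0):
            # Voxel grid of candidate hidden points.
            axes = [np.linspace(grid_min[d], grid_max[d], res) for d in range(3)]
            vox = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1)   # (res, res, res, 3)
            volume = np.zeros((res, res, res))
            dt = times[1] - times[0]
            for i, l in enumerate(laser_pts):
                d_l = np.linalg.norm(vox - l, axis=-1)                   # illuminated wall point -> voxel
                for j, s in enumerate(sensor_pts):
                    d_s = np.linalg.norm(vox - s, axis=-1)               # voxel -> sensed wall point
                    path = d_l + d_s                                     # three-bounce path length
                    bins = np.round(path / (c * dt)).astype(int)         # matching time bin
                    valid = (bins >= 0) & (bins < len(times))
                    # Each voxel gathers the signal recorded on its ellipsoidal manifold.
                    volume[valid] += transients[i, j, bins[valid]]
            return volume

    The nested loops over laser points, sensor points and voxels are exactly the cost that the manifold-voxelization strategy of the paper is designed to avoid.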

    Multi-level 3D CNN for Learning Multi-scale Spatial Features

    3D object recognition accuracy can be improved by learning multi-scale spatial features from 3D spatial geometric representations of objects such as point clouds, 3D models, surfaces, and RGB-D data. Current deep learning approaches learn such features either from structured data representations (voxel grids and octrees) or from unstructured representations (graphs and point clouds). Learning features from structured representations is limited by restrictions on resolution and tree depth, while unstructured representations create a challenge due to non-uniformity among data samples. In this paper, we propose an end-to-end multi-level learning approach on a multi-level voxel grid to overcome these drawbacks. To demonstrate the utility of the proposed multi-level learning, we use a multi-level voxel representation of 3D objects to perform object recognition. The multi-level voxel representation consists of a coarse voxel grid that contains volumetric information of the 3D object. In addition, each voxel in the coarse grid that contains a portion of the object boundary is subdivided into multiple fine-level voxel grids. The performance of our multi-level learning algorithm for object recognition is comparable to dense voxel representations while using significantly less memory. Comment: CVPR 2019 workshop on Deep Learning for Geometric Shape Understanding.
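
    A minimal sketch of the two-level representation described above, assuming a point-cloud input: a coarse occupancy grid is built first, and every occupied coarse voxel with an empty 6-neighbour is treated as a boundary voxel and re-voxelized at a finer resolution. Resolutions, the boundary criterion, and all names are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def multilevel_voxelize(points, coarse_res=16, fine_res=8):
            lo, hi = points.min(axis=0), points.max(axis=0)
            cell = (hi - lo).max() / coarse_res                  # coarse voxel edge length
            idx = np.clip(((points - lo) / cell).astype(int), 0, coarse_res - 1)
            coarse = np.zeros((coarse_res,) * 3, dtype=np.float32)
            np.add.at(coarse, tuple(idx.T), 1.0)                 # per-voxel point counts
            occupied = coarse > 0

            fine_grids = {}
            offsets = np.concatenate([np.eye(3, dtype=int), -np.eye(3, dtype=int)])
            for v in np.argwhere(occupied):
                # A voxel is "boundary" if any 6-neighbour is empty or outside the grid.
                boundary = any(
                    (n < 0).any() or (n >= coarse_res).any() or not occupied[tuple(n)]
                    for n in (v + o for o in offsets))
                if not boundary:
                    continue
                cell_lo = lo + v * cell
                inside = np.all((points >= cell_lo) & (points < cell_lo + cell), axis=1)
                fidx = np.clip(((points[inside] - cell_lo) / (cell / fine_res)).astype(int),
                               0, fine_res - 1)
                fine = np.zeros((fine_res,) * 3, dtype=np.float32)
                np.add.at(fine, tuple(fidx.T), 1.0)
                fine_grids[tuple(v)] = fine                      # fine grid keyed by coarse index
            return coarse, fine_grids

    Storing fine grids only for boundary voxels keeps memory proportional to the surface area of the object rather than to a dense fine-resolution volume, which is the kind of saving the abstract refers to.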

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes. This is because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements apart by more than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
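
    The storage scheme described at the start of this abstract can be sketched as a lazily allocated octree in which each node keeps a coarse summary value and only subdivides where content exists, so a region of interest can be sampled at a chosen level of detail. This is a minimal illustration under those assumptions, not the data structure or deformation pipeline developed in the thesis.

        import numpy as np

        class SparseVolumeNode:
            """One node of a sparse, progressively resolved volume hierarchy."""
            def __init__(self, centre, half_size, depth=0):
                self.centre = np.asarray(centre, dtype=float)
                self.half_size = half_size
                self.depth = depth
                self.value = 0.0          # coarse summary of the subtree (LOD sample)
                self.children = {}        # allocated lazily, so empty space costs nothing

            def insert(self, point, value, max_depth):
                self.value = max(self.value, value)      # keep a coarse mip-like value
                if self.depth == max_depth:
                    return
                octant = tuple((np.asarray(point) >= self.centre).astype(int))
                if octant not in self.children:
                    offset = (np.array(octant) - 0.5) * self.half_size
                    self.children[octant] = SparseVolumeNode(self.centre + offset,
                                                             self.half_size / 2,
                                                             self.depth + 1)
                self.children[octant].insert(point, value, max_depth)

            def sample(self, point, lod):
                """Sample the hierarchy at a requested level of detail."""
                if self.depth == lod or not self.children:
                    return self.value
                child = self.children.get(tuple((np.asarray(point) >= self.centre).astype(int)))
                return child.sample(point, lod) if child else 0.0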

    Multi-scale space-variant FRep cellular structures

    Existing mesh- and voxel-based modeling methods encounter difficulties when dealing with objects that contain cellular structures on several scale levels and vary their parameters in space. We describe an alternative approach based on real functions evaluated procedurally at any given point. This allows for modeling fully parameterized, nested and multi-scale cellular structures with dynamic variations in geometric and cellular properties. The geometry of a base unit cell is defined using Function Representation (FRep) primitives and operations. The unit cell is then replicated in space using periodic space mappings such as sawtooth and triangle waves. While being replicated, the unit cell can vary its geometry and topology due to the use of dynamic parameterization. We illustrate this approach with several examples of microstructure generation within a given volume or along a given surface. We also outline methods for direct rendering and fabrication that do not involve auxiliary mesh or voxel representations.
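
    The procedural evaluation described above is compact enough to sketch directly: an FRep unit cell is defined by real functions (here, a union of three orthogonal rods), replicated by a periodic triangle-wave mapping, and parameterized so that a cell property varies in space. The specific cell shape, wave, and parameter variation below are illustrative assumptions, not the paper's examples.

        import numpy as np

        def triangle_wave(x, period):
            """Periodic mapping that folds global coordinates into cell-local ones."""
            t = np.abs((x / period) % 2.0 - 1.0)      # triangle wave in [0, 1]
            return (t - 0.5) * period                 # recentre on the unit cell

        def unit_cell(x, y, z, rod_radius):
            """FRep of one cell: union (max) of three orthogonal rods.
            f >= 0 means inside the material, f < 0 outside."""
            return np.maximum.reduce([rod_radius**2 - (y*y + z*z),
                                      rod_radius**2 - (x*x + z*z),
                                      rod_radius**2 - (x*x + y*y)])

        def cellular_structure(p, period=1.0):
            """Space-variant lattice: the rod radius grows with height, so cells
            thicken along z while remaining fully procedural (no mesh, no voxels)."""
            p = np.asarray(p, dtype=float)
            x, y, z = p[..., 0], p[..., 1], p[..., 2]
            radius = 0.1 + 0.15 * np.clip(z / 10.0, 0.0, 1.0)   # parameter varies in space
            cx, cy, cz = (triangle_wave(c, period) for c in (x, y, z))
            return unit_cell(cx, cy, cz, radius)

    Because the defining function is evaluated at any query point on demand, no auxiliary mesh or voxel grid is ever built, which is what makes nested, multi-scale and space-variant parameterization cheap in this representation.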

    Automated Digital Machining for Parallel Processors

    When a process engineer creates a tool path, a number of fixed decisions are made that inevitably produce sub-optimal results. This is because it is impossible to evaluate all of the tradeoffs before generating the tool path. This research presents a methodology to support a process engineer's attempt to generate optimal tool paths by performing automated digital machining and analysis. The methodology automatically generates and evaluates tool paths based on parallel processing of digital part models and generalized cutting geometry. Digital part models are created by voxelizing STL files, and the resulting digital part surfaces are obtained by casting rays into the part model. Tool paths are generated from a general path template and updated based on generalized tool geometry and part surface information. The material removed by the generalized cutter as it follows the path is used to obtain path metrics. The paths are evaluated based on the path metrics of material removal rate, machining time, and amount of scallop. This methodology is a parallel-processing-accelerated framework suitable for generating tool paths in parallel, enabling the process engineer to rank and select the best tool path for the job.
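
    The evaluation loop described above can be illustrated with a simplified sketch: the stock is a boolean voxel grid, a spherical stand-in for the generalized cutter is stepped along a candidate path, removed voxels are tallied, and material removal rate and machining time are derived from them (scallop estimation is omitted). Everything here is a serial, illustrative approximation of the parallel GPU framework in the research, with assumed names and inputs.

        import numpy as np

        def simulate_tool_path(stock, voxel_size, path, tool_radius, feed_rate):
            """Sweep a spherical cutter along a candidate path over a boolean
            voxelized stock, removing material and accumulating path metrics."""
            coords = np.stack(np.meshgrid(*[np.arange(n) for n in stock.shape],
                                          indexing='ij'), axis=-1) * voxel_size
            removed = 0.0
            for centre in path:
                inside = np.linalg.norm(coords - np.asarray(centre), axis=-1) <= tool_radius
                cut = inside & stock                        # stock voxels removed at this step
                removed += cut.sum() * voxel_size ** 3
                stock &= ~cut
            step_len = np.linalg.norm(np.diff(np.asarray(path, dtype=float), axis=0), axis=1)
            machining_time = step_len.sum() / feed_rate
            metrics = {'removed_volume': removed,
                       'machining_time': machining_time,
                       'material_removal_rate': removed / machining_time if machining_time else 0.0}
            return stock, metrics

    Running such a simulation for each candidate path yields the metrics on which the paths can then be ranked.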

    A geometric framework for immersogeometric analysis

    The purpose of this dissertation is to develop a geometric framework for immersogeometric analysis that directly uses the boundary representations (B-reps) of a complex computer-aided design (CAD) model and immerses it into a locally refined, non-boundary-fitted discretization of the fluid domain. Using a non-boundary-fitted mesh, which does not need to conform to the shape of the object, alleviates the challenge of mesh generation for complex geometries. It also reduces the labor-intensive and time-consuming work of geometry cleanup that would otherwise be needed to obtain watertight CAD models for boundary-fitted mesh generation. The Dirichlet boundary conditions in the fluid domain are enforced weakly over the immersed object surface in the intersected elements. The surface quadrature points for the immersed object are generated on the parametric and analytic surfaces of the B-rep models. In the case of trimmed surfaces, an adaptive quadrature rule is considered to improve the accuracy of the surface integral. For the non-boundary-fitted mesh, a sub-cell-based adaptive quadrature rule based on the recursive splitting of quadrature elements is used to faithfully capture the geometry in intersected elements. The point membership classification for identifying quadrature points in the fluid domain uses a voxel-based approach implemented on GPUs. A variety of computational fluid dynamics (CFD) simulations are performed using the proposed method to assess its accuracy and efficiency. Finally, a fluid-structure interaction (FSI) simulation of a deforming left ventricle coupled with the heart valves shows the potential advantages of the developed geometric framework for immersogeometric analysis with complex moving domains.
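
    The sub-cell quadrature idea is sketched below: a background element cut by the immersed surface is split recursively, leaves classified as fluid receive quadrature points, and leaves inside the object receive none. A single midpoint-rule point per leaf is used here for brevity, and the point-membership test is a simple implicit sphere standing in for the voxel-based GPU classifier of the dissertation; all names are illustrative.

        import numpy as np
        from itertools import product

        def adaptive_quadrature(cell_min, cell_max, in_fluid, level=0, max_level=3):
            """Recursively split a background element intersected by the immersed
            surface; emit midpoint-rule points/weights for sub-cells lying in the fluid."""
            cell_min = np.asarray(cell_min, dtype=float)
            cell_max = np.asarray(cell_max, dtype=float)
            corners = [np.where(np.array(b) == 0, cell_min, cell_max)
                       for b in product((0, 1), repeat=3)]
            flags = np.array([in_fluid(c) for c in corners])
            homogeneous = flags.all() or not flags.any()
            if homogeneous or level == max_level:
                if not flags.any():
                    return np.empty((0, 3)), np.empty(0)      # fully inside the object
                centre = 0.5 * (cell_min + cell_max)          # crude 1-point rule per leaf
                return centre[None, :], np.array([np.prod(cell_max - cell_min)])
            mid = 0.5 * (cell_min + cell_max)
            pts, wts = [], []
            for b in product((0, 1), repeat=3):               # recurse into the 8 octants
                lo = np.where(np.array(b) == 0, cell_min, mid)
                hi = np.where(np.array(b) == 0, mid, cell_max)
                p, w = adaptive_quadrature(lo, hi, in_fluid, level + 1, max_level)
                pts.append(p)
                wts.append(w)
            return np.concatenate(pts), np.concatenate(wts)

        # Example: background element [-1, 1]^3 cut by a sphere of radius 0.5 (the
        # "immersed object"); in_fluid plays the role of the point membership test.
        points, weights = adaptive_quadrature([-1, -1, -1], [1, 1, 1],
                                              in_fluid=lambda p: float(np.dot(p, p)) > 0.25)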

    Virtual Reality Simulation of Glenoid Reaming Procedure

    Glenoid reaming is a bone machining operation in Total Shoulder Arthroplasty (TSA) in which the glenoid bone is resurfaced to make intimate contact with the implant undersurface. While this step is crucial for the longevity of TSA, many surgeons find it technically challenging. With the recent advances in Virtual Reality (VR) simulations, it has become possible to realistically replicate complicated operations without any need for patients or cadavers and, at the same time, to provide quantitative feedback to improve surgeons' psycho-motor skills. In light of these advantages, the current thesis develops the tools and methods required for the construction of a VR simulator for glenoid reaming, in an attempt to build a reliable tool for preoperative training and planning for surgeons involved with TSA. Towards this end, the thesis presents computational algorithms to appropriately represent the surgical tool and the bone in the VR environment, determine their intersections, and compute realistic haptic feedback based on those intersections. The core of the computations is constituted by sampled geometrical representations of both objects. In particular, a point-cloud model of the tool and a voxelized model of the bone, derived from Computed Tomography (CT) images, are employed. The thesis shows how to efficiently construct these models and adequately represent them in memory. It also elucidates how to effectively use these models to rapidly determine tool-bone collisions and account for bone removal at every instant. Furthermore, the thesis applies cadaveric experimental data to study the mechanics of glenoid reaming and proposes a realistic model for haptic computations. The proposed model integrates well with the developed computational tools, enabling real-time haptic and graphic simulation of glenoid reaming. Throughout the thesis, a particular emphasis is placed upon computational efficiency, especially on the use of parallel computing with Graphics Processing Units (GPUs). Extensive implementation results are also presented to verify the effectiveness of the developments. Not only do the results of this thesis advance the knowledge in the simulation of glenoid reaming, but they also contribute to the broader area of surgery simulation, and can serve as a step towards the wider implementation of VR technology in surgeon training programs.
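
    The collision and removal step described above can be summarized in a simplified form: the point-sampled tool is transformed into the voxel grid of the bone, penetrating samples identify bone voxels to delete, and a penalty-style force is returned for the haptic device. The force model below is a crude proportional stand-in, not the cadaveric-data-driven model proposed in the thesis, and all names and parameters are illustrative.

        import numpy as np

        def reaming_step(bone, voxel_size, origin, tool_points, rotation, translation,
                         force_per_voxel=0.5):
            """One haptic tick: transform the point-sampled tool into the bone volume,
            remove the bone voxels it penetrates, and return a penalty-style force."""
            pts = tool_points @ rotation.T + translation          # tool samples in world space
            idx = np.floor((pts - origin) / voxel_size).astype(int)
            in_grid = np.all((idx >= 0) & (idx < np.array(bone.shape)), axis=1)
            idx = idx[in_grid]
            hit = bone[idx[:, 0], idx[:, 1], idx[:, 2]]           # samples currently inside bone
            contacts = pts[in_grid][hit]
            bone[idx[hit, 0], idx[hit, 1], idx[hit, 2]] = False   # instantaneous bone removal
            if contacts.shape[0] == 0:
                return bone, np.zeros(3)
            # Resist along the mean penetration direction, scaled by the contact count;
            # a calibrated reaming-force model would replace this simple proportionality.
            direction = translation - contacts.mean(axis=0)
            norm = np.linalg.norm(direction)
            force = force_per_voxel * contacts.shape[0] * (direction / norm if norm > 0 else np.zeros(3))
            return bone, force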

    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication, while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this huge state of the art, and elicits the relation between each single algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective. Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044