
    QuickCSG: Fast Arbitrary Boolean Combinations of N Solids

    QuickCSG computes the result of general N-polyhedron boolean expressions without building an intermediate tree of solids. We propose a vertex-centric view of the problem, which simplifies the identification of final geometric contributions and facilitates its spatial decomposition. The problem is then cast as a single KD-tree exploration, geared toward the result by early pruning of any region of space that does not contribute to the final surface. We assume strong regularity properties on the input meshes and that they are in general position. These simplifying assumptions, combined with the vertex-centric approach, improve the speed of the method. Complemented with a task-stealing parallelization, the algorithm achieves breakthrough performance, with one to two orders of magnitude speedups over state-of-the-art CPU algorithms on boolean operations over two to dozens of polyhedra. The algorithm also outperforms GPU implementations with approximate discretizations, while producing an output without redundant facets. Despite the restrictive assumptions on the input, we show the usefulness of QuickCSG for applications with large CSG problems and strong temporal constraints, e.g., modeling for 3D printing, reconstruction from visual hulls, and collision detection.
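
    The abstract describes classifying geometry against an arbitrary N-ary boolean expression rather than a binary CSG tree. The sketch below illustrates only that N-ary evaluation step: deciding whether a point belongs to the result from its per-solid inside/outside flags. The expression `union_minus_last`, the flag layout, and the function names are illustrative assumptions, not QuickCSG's actual code.

```python
# Minimal sketch (not the QuickCSG implementation): evaluating an arbitrary
# N-ary boolean expression over per-solid inside/outside flags.  In the
# vertex-centric view, candidate vertices are tested against such an
# expression; here we only show the expression evaluation itself.

from typing import Callable, Sequence

# A boolean CSG expression over N solids, e.g. (A ∪ B) \ C for N = 3.
BoolExpr = Callable[[Sequence[bool]], bool]

def union_minus_last(flags: Sequence[bool]) -> bool:
    """Example expression: union of all solids except the last, minus the last."""
    return any(flags[:-1]) and not flags[-1]

def point_in_result(inside_flags: Sequence[bool], expr: BoolExpr) -> bool:
    """inside_flags[i] is True if the query point lies inside solid i."""
    return expr(inside_flags)

if __name__ == "__main__":
    # Point inside solids 0 and 1 but outside solid 2 -> in (A ∪ B) \ C.
    print(point_in_result([True, True, False], union_minus_last))   # True
    # Point inside solid 2 -> removed by the subtraction.
    print(point_in_result([True, False, True], union_minus_last))   # False
```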

    QuickCSG: Arbitrary and Faster Boolean Combinations of N Solids

    While studied over several decades, the computation of boolean operations on polyhedra is almost always addressed by focusing on the case of two polyhedra. For multiple input polyhedra and an arbitrary boolean operation to be applied, the operation is decomposed over a binary CSG tree, each node being processed separately in quasilinear time. For large trees, this is both error prone, due to intermediate geometry and error accumulation, and inefficient, because each node incurs its own overhead. We introduce a fundamentally new approach to polyhedral CSG evaluation that addresses the general N-polyhedron case. We propose a new vertex-centric view of the problem, which both simplifies the algorithm computing the resulting geometric contributions and vastly facilitates its spatial decomposition. We then embed the entire problem in a single KD-tree, specifically geared toward the final result by early pruning of any region of space that does not contribute to the final surface. This not only improves the robustness of the approach, it also gives it a fundamental speed advantage, with a complexity that depends on the size of the output mesh rather than, as with usual approaches, on the size of the input. Complemented with a task-stealing parallelization, the algorithm achieves breakthrough performance, with one to two orders of magnitude speedups over state-of-the-art CPU algorithms on boolean operations over two to several dozen polyhedra. The algorithm is also shown to outperform recent GPU implementations and approximate discretizations, while producing an exact output without redundant facets.
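
    The early-pruning idea can be illustrated independently of the paper's data structures: during spatial subdivision, a cell whose boolean value cannot change, no matter what happens with the solids whose boundary still crosses it, contains no piece of the final surface and can be dropped. The brute-force test below is only an illustration of that criterion; the tri-state cell status and all names are assumptions, not the QuickCSG implementation.

```python
# Illustrative sketch (not the QuickCSG code) of early pruning during a
# spatial subdivision: a cell can be discarded as soon as the boolean
# expression has the same value for every possible state of the solids whose
# boundary still crosses the cell.

from itertools import product
from typing import Callable, Optional, Sequence

BoolExpr = Callable[[Sequence[bool]], bool]

def cell_can_be_pruned(status: Sequence[Optional[bool]], expr: BoolExpr) -> bool:
    """status[i] is True/False if the cell is fully inside/outside solid i,
    or None if solid i's boundary crosses the cell (undetermined)."""
    unknown = [i for i, s in enumerate(status) if s is None]
    flags = [bool(s) for s in status]
    values = set()
    # Brute-force over the undetermined solids (fine for a sketch; the real
    # algorithm reasons about the expression far more cheaply).
    for assignment in product([False, True], repeat=len(unknown)):
        for i, v in zip(unknown, assignment):
            flags[i] = v
        values.add(expr(flags))
        if len(values) > 1:
            return False          # expression still varies: keep refining
    return True                   # constant over the cell: prune it

if __name__ == "__main__":
    intersection = lambda f: all(f)
    # Cell fully outside solid 0: the intersection is empty there, prune.
    print(cell_can_be_pruned([False, None, None], intersection))  # True
    # All three boundaries cross the cell: cannot decide yet.
    print(cell_can_be_pruned([None, None, None], intersection))   # False
```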

    3D Shape Cropping

    We introduce shape cropping as the segmentation of a bounding geometry of an object as observed by sensors of different modalities. Segmenting a bounding volume is a preliminary step in many multi-view vision applications that consider or require the recovery of 3D information, in particular in multi-camera environments. Recent vision systems used to acquire such information often combine sensors of different types, usually color and depth sensors. Given depth and color images, we present an efficient geometric algorithm to compute a polyhedral bounding surface that delimits the region of space where the object lies. The resulting cropped geometry eliminates unwanted regions of space and enables the initialization of further processes, including surface refinements. Our approach exploits the fact that such a region can be defined as the intersection of 3D regions identified as non-empty in the color or depth images. To this end, we propose a novel polyhedron combination algorithm that overcomes the computational and robustness issues exhibited by traditional intersection tools in our context. We show the correctness and effectiveness of the approach on various combinations of inputs.
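
    As a rough illustration of the intersection principle (and not of the paper's polyhedron combination algorithm), the sketch below tests whether a 3D point survives the cropping: every silhouette view must see its projection inside the foreground mask, and no depth view may observe free space clearly in front of it. The projection callables, masks, depth maps, and the margin are hypothetical placeholders.

```python
# Toy point-membership sketch for the intersection of per-view regions.
# All inputs (projection functions, masks, depth maps) are assumed given.

import numpy as np

def inside_silhouette(point, project, mask):
    """project maps a 3D point to (row, col) pixel coordinates; mask is a
    boolean foreground image (illustrative assumption)."""
    r, c = project(point)
    if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]:
        return bool(mask[r, c])
    return False

def compatible_with_depth(point, project_with_depth, depth_map, margin=0.01):
    """project_with_depth returns (row, col, z) in the depth camera frame."""
    r, c, z = project_with_depth(point)
    if not (0 <= r < depth_map.shape[0] and 0 <= c < depth_map.shape[1]):
        return False
    # The point must not lie in observed free space in front of the surface.
    return z >= depth_map[r, c] - margin

def in_cropped_region(point, silhouette_tests, depth_tests):
    return all(t(point) for t in silhouette_tests) and \
           all(t(point) for t in depth_tests)

if __name__ == "__main__":
    # Toy setup: one orthographic silhouette view looking down the z-axis.
    mask = np.zeros((10, 10), dtype=bool)
    mask[3:7, 3:7] = True
    project = lambda p: (int(p[1]), int(p[0]))          # (row, col) = (y, x)
    tests = [lambda p: inside_silhouette(p, project, mask)]
    print(in_cropped_region((4.0, 4.0, 0.0), tests, []))   # True
    print(in_cropped_region((0.0, 0.0, 0.0), tests, []))   # False
```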

    High Resolution 3D Shape Texture from Multiple Videos

    We examine the problem of retrieving high-resolution textures of objects observed in multiple videos under small object deformations. In the monocular case, the data redundancy necessary to reconstruct a high-resolution image stems from temporal accumulation; this has been vastly explored and is known as super-resolution. On the other hand, a handful of methods have considered the texture of a static 3D object observed from several cameras, where the data redundancy is obtained through the different viewpoints. We introduce a unified framework that leverages both possibilities for the estimation of a high-resolution texture of an object. The framework uniformly deals with any related geometric variability introduced by the acquisition chain or by the evolution over time. To this end we use 2D warps for all viewpoints and all temporal frames, and a linear projection model from texture to image space. Despite its simplicity, the method successfully handles different views over space and time. Experiments demonstrate the benefit of temporal information, which improves the texture quality. Additionally, we show that our method outperforms state-of-the-art multi-view super-resolution methods for the static case.
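
    A linear projection model of this kind lends itself to a standard least-squares formulation: each observation is modeled as an operator applied to the unknown high-resolution texture. The sketch below shows only that generic formulation, with the warp/projection operators and their adjoints assumed given; the operator names, the plain gradient descent, and the 1D toy example are assumptions, not the paper's estimator.

```python
# Generic least-squares sketch for a linear image-formation model:
# each observation y_k ≈ A_k(x), where x is the high-resolution texture and
# A_k composes a 2D warp with a projection to image space (operators assumed
# given).  This is plain gradient descent, not the paper's method.

import numpy as np

def estimate_texture(observations, forward_ops, adjoint_ops,
                     shape, n_iters=200, step=0.1):
    """Minimize sum_k 0.5 * ||A_k(x) - y_k||^2 by gradient descent."""
    x = np.zeros(shape)
    for _ in range(n_iters):
        grad = np.zeros(shape)
        for y, A, At in zip(observations, forward_ops, adjoint_ops):
            grad += At(A(x) - y)        # gradient of 0.5 * ||A(x) - y||^2
        x -= step * grad
    return x

if __name__ == "__main__":
    # Toy 1D example: the "warp + projection" is 2-to-1 box downsampling.
    truth = np.array([1.0, 2.0, 3.0, 4.0])
    A = lambda x: 0.5 * (x[0::2] + x[1::2])          # average pairs
    At = lambda r: 0.5 * np.repeat(r, 2)             # adjoint of the average
    y = A(truth)
    est = estimate_texture([y], [A], [At], shape=truth.shape, step=0.5)
    print(np.round(est, 2))   # each pair of texels averages to its observation
```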

    Cotemporal Multi-View Video Segmentation

    We address the problem of multi-view video segmentation of dynamic scenes in general and outdoor environments with possibly moving cameras. Multi-view methods for dynamic scenes usually rely on geometric calibration to impose spatial shape constraints between viewpoints. In this paper, we show that the calibration constraint can be relaxed while still obtaining competitive segmentation results using multi-view constraints. We introduce new multi-view cotemporality constraints through motion correlation cues, in addition to the common appearance features used by co-segmentation methods to identify co-instances of objects. We also take advantage of learning-based segmentation strategies by casting the problem as the selection of monocular proposals that satisfy multi-view constraints. This yields a fully automated method that can segment subjects of interest without any particular pre-processing stage. Results on several challenging outdoor datasets demonstrate the feasibility and robustness of our approach.
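
    The proposal-selection idea can be sketched generically: choose one monocular proposal per view so that the chosen set agrees both in appearance and in the temporal correlation of its motion. Everything below (the dictionary fields, the scoring weights, the brute-force search) is an illustrative assumption rather than the paper's formulation.

```python
# Illustrative sketch of cross-view proposal selection using two cues:
# appearance similarity and correlation of motion-magnitude time series.

from itertools import product
import numpy as np

def pairwise_score(a, b, w_motion=1.0, w_appearance=1.0):
    """a, b are dicts with 'appearance' (feature vector) and 'motion'
    (time series of motion magnitude); both fields are illustrative."""
    app = -np.linalg.norm(a["appearance"] - b["appearance"])
    motion = np.corrcoef(a["motion"], b["motion"])[0, 1]
    return w_appearance * app + w_motion * motion

def select_proposals(per_view_proposals):
    """Brute-force the combination with the best total pairwise agreement
    (fine for a handful of proposals per view)."""
    best, best_score = None, -np.inf
    for combo in product(*per_view_proposals):
        score = sum(pairwise_score(a, b)
                    for i, a in enumerate(combo)
                    for b in combo[i + 1:])
        if score > best_score:
            best, best_score = combo, score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shared = rng.normal(size=30)                 # motion of the same subject
    person = {"appearance": np.array([1.0, 0.0]),
              "motion": shared + 0.1 * rng.normal(size=30)}
    clutter = {"appearance": np.array([0.0, 1.0]),
               "motion": rng.normal(size=30)}
    other_view = {"appearance": np.array([0.9, 0.1]),
                  "motion": shared + 0.1 * rng.normal(size=30)}
    chosen = select_proposals([[person, clutter], [other_view]])
    print(chosen[0] is person)                   # True: cues agree across views
```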

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation generation framework to create new types of animations from raw 3D surface or point-cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape and is thus particularly suitable for the output of performance capture platforms. In our system, a virtual representation of the actor is built from the real captures; it allows seamless combination and simulation with virtual external forces and objects, so that the original captured actor can be reshaped, disassembled, or reassembled under user-specified virtual physics. Instead of using the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for the virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
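
    Centroidal Voronoi tessellations are the volumetric building block named above. The sketch below computes one for a point-sampled volume with plain Lloyd relaxation; the volume sampling, the number of cells, and the fixed iteration count are assumptions, and the paper's capture, tracking, and simulation stages are not shown.

```python
# Minimal sketch of a centroidal Voronoi tessellation (CVT) of a sampled
# volume via Lloyd relaxation.  Generic illustration, not the paper's pipeline.

import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(volume_samples, n_cells, n_iters=50, seed=0):
    """volume_samples: (N, 3) points sampled inside the shape's volume.
    Returns (n_cells, 3) site positions and per-sample cell labels."""
    rng = np.random.default_rng(seed)
    sites = volume_samples[rng.choice(len(volume_samples), n_cells,
                                      replace=False)]
    for _ in range(n_iters):
        # Assign every volume sample to its nearest site (its Voronoi cell).
        _, labels = cKDTree(sites).query(volume_samples)
        # Move each site to the centroid of its cell.
        for k in range(n_cells):
            members = volume_samples[labels == k]
            if len(members):
                sites[k] = members.mean(axis=0)
    _, labels = cKDTree(sites).query(volume_samples)
    return sites, labels

if __name__ == "__main__":
    cube = np.random.default_rng(1).uniform(-1, 1, size=(5000, 3))
    sites, labels = lloyd_cvt(cube, n_cells=16)
    print(sites.shape, np.bincount(labels))   # cells cover the cube evenly
```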

    Segmentation multi-vues par coupure de graphes

    In this article, we address the problem of simultaneously segmenting images when several calibrated and synchronized cameras observe the same scene. We propose a new approach for propagating segmentation information consistently across views. To do so, the segmentation problem is formulated as a two-region (foreground/background) labeling of the image pixels, solved with a graph-cut method. Unlike many state-of-the-art approaches, our method does not require a dense 3D reconstruction of the object, but only a sparse sampling of 3D space. A complete evaluation is carried out on standard static data. The results show the interest of the method, which matches state-of-the-art results while using far fewer viewpoints.
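
    As a generic illustration of the two-region graph-cut labeling mentioned above, here is a minimal single-image sketch using networkx; the toy two-mean intensity model for the unary terms and the constant smoothness weight are assumptions, and the paper's multi-view propagation through a sparse 3D sampling is not shown.

```python
# Minimal two-label (foreground/background) graph-cut sketch on one image.

import networkx as nx
import numpy as np

def graph_cut_segment(image, fg_mean, bg_mean, smoothness=0.5):
    h, w = image.shape
    G = nx.DiGraph()
    for y in range(h):
        for x in range(w):
            p = (y, x)
            # Terminal edges: cutting s->p puts p on the sink (background)
            # side and pays the background cost; cutting p->t puts p on the
            # source (foreground) side and pays the foreground cost.
            G.add_edge("s", p, capacity=(image[y, x] - bg_mean) ** 2)
            G.add_edge(p, "t", capacity=(image[y, x] - fg_mean) ** 2)
            # Pairwise smoothness with 4-neighbours (both directions).
            for q in [(y + 1, x), (y, x + 1)]:
                if q[0] < h and q[1] < w:
                    G.add_edge(p, q, capacity=smoothness)
                    G.add_edge(q, p, capacity=smoothness)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    mask = np.zeros((h, w), dtype=bool)
    for node in source_side:
        if node not in ("s", "t"):
            mask[node] = True          # source side = foreground
    return mask

if __name__ == "__main__":
    img = np.zeros((6, 6))
    img[2:5, 2:5] = 1.0
    # The bright 3x3 block is labeled foreground.
    print(graph_cut_segment(img, fg_mean=1.0, bg_mean=0.0).astype(int))
```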

    Multi-View Dynamic Shape Refinement Using Local Temporal Integration

    We consider 4D shape reconstructions in multi-view environments and investigate how to exploit temporal redundancy for precision refinement. In addition to being beneficial to many dynamic multi-view scenarios, this also enables larger scenes, where such increased precision can compensate for the reduced spatial resolution per image frame. With precision and scalability in mind, we propose a symmetric (non-causal), local time-window geometric integration scheme over temporal sequences, where shape reconstructions are refined frame-wise by warping local and reliable geometric regions of neighboring frames to them. This is in contrast to recent comparable approaches targeting a different context with more compact scenes and real-time applications. These usually use a single dense volumetric update space or geometric template, which they causally track and update globally frame by frame, with limitations in scalability for larger scenes and, for template-based strategies, in topology and precision. Our templateless and local approach is a first step towards temporal shape super-resolution. We show that it improves reconstruction accuracy by considering multiple frames. To this end, and in addition to real-data examples, we introduce a multi-camera synthetic dataset that provides ground-truth data for mid-scale dynamic scenes.
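
    To make the time-window idea concrete, here is a toy sketch of symmetric (non-causal) fusion of a single per-frame estimate, with depth maps standing in for the geometry and with the warp and reliability inputs assumed to come from upstream tracking; none of the names reflect the paper's actual pipeline.

```python
# Toy sketch of symmetric local time-window integration: a per-frame depth
# map is refined by warping the estimates of neighbouring frames into the
# current frame and taking a reliability-weighted average.

import numpy as np

def refine_frame(t, depth_maps, reliabilities, warp_to, radius=2):
    """depth_maps[k], reliabilities[k]: per-frame arrays of equal shape.
    warp_to(src, dst, depth): resamples frame src's depth map into frame
    dst's image grid (hypothetical signature, for illustration)."""
    acc = np.zeros_like(depth_maps[t])
    weights = np.zeros_like(depth_maps[t])
    window = range(max(0, t - radius), min(len(depth_maps), t + radius + 1))
    for k in window:                      # symmetric, non-causal window
        warped = depth_maps[k] if k == t else warp_to(k, t, depth_maps[k])
        acc += reliabilities[k] * warped
        weights += reliabilities[k]
    return acc / np.maximum(weights, 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.full((4, 4), 2.0)
    frames = [truth + 0.05 * rng.normal(size=truth.shape) for _ in range(5)]
    ones = [np.ones_like(truth)] * 5
    identity = lambda src, dst, d: d      # static toy scene: identity warp
    refined = refine_frame(2, frames, ones, identity)
    # Temporal fusion typically reduces the per-frame error.
    print(np.abs(frames[2] - truth).mean(), np.abs(refined - truth).mean())
```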