
    A Distributed Approach for Real Time 3D Modeling

    This paper addresses the problem of real-time 3D modeling from images with multiple cameras. Environments with multiple cameras and PCs are becoming common, mainly due to new camera technologies and the high computing power of modern PCs. However, most computer vision applications rely on a single PC, or a few, for computation and do not scale. Our motivation is therefore to propose a distributed framework that computes precise 3D models in real time with a variable number of cameras, through optimal use of the several PCs that are generally present. We focus on silhouette-based modeling approaches and investigate how to efficiently partition the associated tasks over a set of PCs. Our contribution is a distribution scheme that applies to the different types of approaches in this field and allows for real-time applications. The scheme relies on different accessible levels of parallelization, from individual task partitions to concurrent executions, yielding control over both the latency and the frame rate of the modeling system. We report on the application of the framework to visual hull modeling. In particular, we show that precise surface models can be computed in real time with standard components. Results with synthetic data and preliminary results in real contexts are presented.
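
    The partitioning idea lends itself to a compact illustration. Below is a minimal Python sketch, not the paper's implementation: extract_silhouette and merge_partial_hulls are hypothetical stand-ins for the per-camera and fusion stages, and a process pool plays the role of the PC cluster. Task partitioning (one silhouette job per camera) bounds latency, while pipelining successive frames through such stages would sustain the frame rate.

        # Minimal sketch of task-level parallelism for silhouette-based modeling.
        # extract_silhouette() and merge_partial_hulls() are hypothetical
        # placeholders for the per-camera and fusion stages described above.
        from concurrent.futures import ProcessPoolExecutor

        def extract_silhouette(frame):
            # Placeholder: background subtraction on one camera image.
            return frame

        def merge_partial_hulls(silhouettes):
            # Placeholder: intersect the viewing cones into a visual hull.
            return silhouettes

        def model_one_frame(pool, camera_frames):
            # Task partition: one silhouette job per camera, run in parallel
            # across workers; the fusion stage could itself be split over space.
            silhouettes = list(pool.map(extract_silhouette, camera_frames))
            return merge_partial_hulls(silhouettes)

        if __name__ == "__main__":
            frames = list(range(8))  # stand-ins for 8 camera images
            with ProcessPoolExecutor(max_workers=8) as pool:
                hull = model_one_frame(pool, frames)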

    Hybrid client-server and P2P network for web-based collaborative 3D design

    Our research project aims to enable distributed 3D visualization and manipulation with collaborative effort through web-based technologies. The project stems from a broad range of collaborative application fields: Computer Aided Design (CAD), Building Information Modeling (BIM), and Product Lifecycle Management (PLM), where design tasks are often performed in teams and require fluent communication. The system allows distributed remote assembly in 3D scenes with real-time updates for all users. This paper covers this feature using a hybrid networking solution: a client-server architecture (REST) for 3D rendering (WebGL) and data persistence (NoSQL), combined with an automatically built peer-to-peer (P2P) mesh for real-time communication between clients (WebRTC). The approach is demonstrated through a web-platform prototype focused on easy manipulation, fine rendering, and lightweight update messages for all participating users. We provide an architecture and a prototype that let users design in 3D together in real time, with the benefits of web-based online collaboration.
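
    The hybrid split can be summarized as a routing rule: small, frequent deltas fan out over the P2P mesh, while committed state goes through the server for persistence. A minimal Python sketch of that rule, with rest_persist and p2p_broadcast as hypothetical placeholders for the REST/NoSQL and WebRTC paths described above:

        # Sketch of the hybrid routing rule: real-time deltas over the P2P
        # mesh, durable state through the client-server (REST) path.
        from dataclasses import dataclass

        @dataclass
        class SceneUpdate:
            object_id: str
            transform: list   # e.g. a 4x4 matrix flattened to 16 floats
            final: bool       # True when the user commits the manipulation

        def rest_persist(update):
            # Placeholder for a REST call backed by the NoSQL store.
            print(f"POST /scene/{update.object_id} -> persisted")

        def p2p_broadcast(update):
            # Placeholder for a WebRTC data-channel broadcast to all peers.
            print(f"peers <- move {update.object_id}")

        def route(update):
            p2p_broadcast(update)          # every update, in real time
            if update.final:
                rest_persist(update)       # only committed states persist

        route(SceneUpdate("chair-42", [1.0] * 16, final=True))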

    Real-time 3D reconstruction of non-rigid shapes with a single moving camera

    This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations used in 3D linear elasticity, solved by finite elements, to model the time-varying shape per frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed after assembling these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, assumed to be normally distributed. While standard finite-element techniques require imposing boundary conditions to solve the resulting linear system, we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without a post-processing step to stitch the piecewise reconstructions into a global smooth shape. We present experimental results using both synthetic and real videos for scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground truth data, include several experiments assessing robustness against artifacts, and finally provide an experimental validation of real-time performance at frame rate for small maps.
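
    The estimation loop can be summarized in generic extended-Kalman-filter notation. In this sketch f and u are the acting forces and nodal displacements, K the assembled stiffness matrix, and C the rank-constrained pseudoinverse standing in for the compliance; the state x, measurement z, and noise covariances Q, R follow textbook EKF conventions rather than the paper's exact notation:

        % Linear elasticity: displacements and forces related through K,
        % with boundary conditions avoided via a pseudoinverse of fixed rank
        f_t = K u_t, \qquad u_t = C f_t, \qquad C = K^{+}
        % EKF prediction and update over the deformable state x
        \hat{x}_{t|t-1} = A \hat{x}_{t-1}, \qquad
        P_{t|t-1} = A P_{t-1} A^{\top} + Q
        G_t = P_{t|t-1} H_t^{\top} \left( H_t P_{t|t-1} H_t^{\top} + R \right)^{-1}
        \hat{x}_t = \hat{x}_{t|t-1} + G_t \left( z_t - h(\hat{x}_{t|t-1}) \right)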

    VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction

    Existing NeRF-based methods for large scene reconstruction often have limitations in visual quality and rendering speed. While the recent 3D Gaussian Splatting works well on small-scale and object-centric scenes, scaling it up to large scenes poses challenges due to limited video memory, long optimization time, and noticeable appearance variations. To address these challenges, we present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting. We propose a progressive partitioning strategy to divide a large scene into multiple cells, where the training cameras and point cloud are properly distributed with an airspace-aware visibility criterion. These cells are merged into a complete scene after parallel optimization. We also introduce decoupled appearance modeling into the optimization process to reduce appearance variations in the rendered images. Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets, enabling fast optimization and high-fidelity real-time rendering.

    Comment: Accepted to CVPR 2024. Project website: https://vastgaussian.github.io
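
    The divide-and-conquer structure is easy to sketch. The Python below is a simplification, not the paper's method: the airspace-aware visibility criterion is reduced to a hypothetical adjacency test (sees_cell), and the per-cell 3DGS optimization and merging steps are only indicated in comments.

        # Simplified sketch of cell-based partitioning for a large scene.
        from collections import defaultdict

        def cell_of(xy, cell_size):
            return (int(xy[0] // cell_size), int(xy[1] // cell_size))

        def sees_cell(cam_xy, cell, cell_size):
            # Placeholder visibility test: a camera is assigned to a cell
            # if it sits in that cell or an adjacent one. The real criterion
            # is airspace-aware and considerably more selective.
            cx, cy = cell_of(cam_xy, cell_size)
            return abs(cx - cell[0]) <= 1 and abs(cy - cell[1]) <= 1

        def partition(cameras, points, cell_size=50.0):
            cells = defaultdict(lambda: {"cams": [], "pts": []})
            for p in points:                      # points by position
                cells[cell_of(p, cell_size)]["pts"].append(p)
            for cam in cameras:                   # cameras by visibility
                for cell in cells:
                    if sees_cell(cam, cell, cell_size):
                        cells[cell]["cams"].append(cam)
            return cells                          # optimize cells in parallel,
                                                  # then merge into one scene

        cells = partition(cameras=[(10.0, 5.0), (120.0, 80.0)],
                          points=[(12.0, 7.0), (118.0, 79.0)])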

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments remains one of the major challenges in computer graphics. New data acquisition techniques such as 3D modeling and scanning have drastically increased the complexity of available models and the demand for higher display resolutions in recent years. Most existing acceleration techniques that use a single GPU for rendering suffer from the limited GPU memory budget, time-consuming sequential execution, and finite display resolution. Recently, commodity workstations with multiple GPUs and multiple displays have become common. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided by the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a tiled large-display configuration). However, a multi-GPU workstation does not always deliver the desired rendering performance, due to imbalanced rendering workloads among GPUs and the overhead of inter-GPU communication. In this dissertation, I contribute a multi-GPU, multi-display parallel rendering approach for complex 3D environments. The approach supports high-performance, high-quality rendering of static and dynamic 3D environments. A novel parallel load-balancing algorithm is developed based on a screen-partitioning strategy that dynamically balances the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by transferring only a small number of image pixels rather than chunks of 3D primitives, using a novel frame-exchanging algorithm. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU, multi-display system to accelerate the rendering process.
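
    The load-balancing idea admits a small sketch. Assuming a one-dimensional vertical screen split and per-GPU primitive counts measured on the previous frame (both simplifications; the dissertation's algorithm also covers the frame-exchange step), each split boundary can be nudged toward the busier side:

        # Sketch of dynamic screen partitioning: shift each split boundary so
        # that neighboring GPUs converge toward equal primitive counts.
        def rebalance(boundaries, loads, step=0.01):
            # boundaries: n-1 interior split positions in [0, 1] for n GPUs
            # loads: primitives rendered by each of the n GPUs last frame
            new = list(boundaries)
            for i in range(len(new)):
                if loads[i] > loads[i + 1]:
                    new[i] -= step    # shrink the overloaded left region
                elif loads[i] < loads[i + 1]:
                    new[i] += step    # shrink the overloaded right region
            return new

        # GPU 0 rendered far more triangles than GPU 1, so the split moves
        # left, giving GPU 0 a narrower strip on the next frame.
        print(rebalance([0.5], [900_000, 300_000]))  # -> [0.49]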

    Elastic, strain-aware physical modeling interface

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (leaves 56-58).

    Senspectra is a computationally augmented physical modeling toolkit designed for sensing and visualizing structural strain. The system functions as a distributed sensor network consisting of nodes, embedded with computational capabilities and a full-spectrum LED, which communicate with neighboring nodes to determine a network topology through a system of flexible joints. Each joint, while serving as a data and power bus between nodes, also integrates an omnidirectional bend-sensing mechanism that uses a simple optical occlusion technique to sense and communicate mechanical strain between neighboring nodes. Using Senspectra, a user incrementally assembles and refines a physical 3D model of discrete elements with a real-time visualization of structural strain. While the Senspectra infrastructure provides a flexible, modular sensor network platform, its primary application derives from the need to couple the physical modeling techniques used in architecture and industrial design with systems for structural engineering analysis, offering an intuitive approach to physical real-time finite element analysis. Using direct manipulation augmented with visual feedback, the system gives users valuable insight into the global behavior of a constructed system defined as a network of discrete elements.

    by Vincent Leclerc (S.M.)
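
    The feedback loop at each node is straightforward to sketch. Below is a hypothetical strain-to-color mapping in Python, assuming bend readings normalized to [0, 1] and a green-to-red hue sweep; the actual toolkit computes its visualization in-network on the node hardware.

        # Sketch of per-node strain visualization: each node colors its LED
        # by the worst strain reported among its flexible joints.
        import colorsys

        def strain_to_rgb(strain):
            # Hue sweeps from green (0.33) at rest to red (0.0) at full strain.
            hue = 0.33 * (1.0 - min(max(strain, 0.0), 1.0))
            return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

        def node_color(joint_strains):
            return strain_to_rgb(max(joint_strains, default=0.0))

        print(node_color([0.1, 0.8]))  # near-red: one joint highly strained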