Coarse-grained Multiresolution Structures for Mobile Exploration of Gigantic Surface Models
We discuss our experience in creating scalable systems for distributing
and rendering gigantic 3D surfaces on web environments and
common handheld devices. Our methods are based on compressed
streamable coarse-grained multiresolution structures. By combining
CPU and GPU compression technology with our multiresolution
data representation, we are able to incrementally transfer, locally
store and render with unprecedented performance extremely
detailed 3D mesh models on WebGL-enabled browsers, as well as
on hardware-constrained mobile devices.
An open and parallel multiresolution framework using block-based adaptive grids
A numerical approach for solving evolutionary partial differential equations
in two and three space dimensions on block-based adaptive grids is presented.
The numerical discretization is based on high-order, central finite-differences
and explicit time integration. Grid refinement and coarsening are triggered by
multiresolution analysis, i.e. thresholding of wavelet coefficients, which
allows controlling the precision of the adaptive approximation of the solution
with respect to uniform grid computations. The implementation of the scheme is
fully parallel using MPI with a hybrid data structure. Load balancing relies on
space-filling curve techniques. Validation tests for 2D advection equations
allow assessing the precision and performance of the developed code.
Computations of the compressible Navier-Stokes equations for a temporally
developing 2D mixing layer illustrate the properties of the code for nonlinear
multi-scale problems. The code is open source.
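The space-filling-curve load balancing mentioned above can be illustrated with a minimal sketch (not the framework's actual implementation, and all names here are illustrative): blocks are ordered along a 2D Morton (Z-order) curve and split into contiguous chunks, one per MPI rank, so that blocks close in space tend to land on the same rank.

```python
def morton2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # even bit positions from x
        code |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions from y
    return code

def balance(blocks, nranks):
    """Assign blocks to ranks as contiguous chunks along the Z-order curve."""
    ordered = sorted(blocks, key=lambda b: morton2d(*b))
    chunk = -(-len(ordered) // nranks)  # ceiling division
    return [ordered[i * chunk:(i + 1) * chunk] for i in range(nranks)]
```

Because the Z-order curve preserves spatial locality, each contiguous chunk of the sorted list corresponds to a roughly compact region of the grid, which keeps inter-rank communication low.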
A multiresolution space-time adaptive scheme for the bidomain model in electrocardiology
This work deals with the numerical solution of the monodomain and bidomain
models of electrical activity of myocardial tissue. The bidomain model is a
system consisting of a possibly degenerate parabolic PDE coupled with an
elliptic PDE for the transmembrane and extracellular potentials, respectively.
This system of two scalar PDEs is supplemented by a time-dependent ODE modeling
the evolution of the so-called gating variable. In the simpler sub-case of the
monodomain model, the elliptic PDE reduces to an algebraic equation. Two simple
models for the membrane and ionic currents are considered, the
Mitchell-Schaeffer model and the simpler FitzHugh-Nagumo model. Since typical
solutions of the bidomain and monodomain models exhibit wavefronts with steep
gradients, we propose a finite volume scheme enriched by a fully adaptive
multiresolution method, whose basic purpose is to concentrate computational
effort on zones of strong variation of the solution. Time adaptivity is
achieved by two alternative devices, namely locally varying time stepping and a
Runge-Kutta-Fehlberg-type adaptive time integration. A series of numerical
examples demonstrates that these methods are efficient and sufficiently accurate
to simulate the electrical activity in myocardial tissue with affordable
effort. In addition, an optimal threshold for discarding non-significant
information in the multiresolution representation of the solution is derived,
and the numerical efficiency and accuracy of the method are measured in terms of
CPU time speed-up, memory compression, and errors in different norms.
Comment: 25 pages, 41 figures
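Embedded-pair adaptive time integration of the Runge-Kutta-Fehlberg kind can be sketched as follows. For brevity this uses a first/second-order Euler-Heun pair rather than the actual Fehlberg coefficients, and the step-size law with its 0.9 safety factor is an illustrative textbook choice, not the paper's scheme.

```python
def adaptive_step(f, t, y, dt, tol=1e-6):
    """One adaptive step with an embedded Euler/Heun pair.

    Returns (t_new, y_new, dt_next); the step is retried with a smaller
    dt if the embedded error estimate exceeds the tolerance."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    y_low = y + dt * k1                # 1st-order (explicit Euler)
    y_high = y + 0.5 * dt * (k1 + k2)  # 2nd-order (Heun)
    err = abs(y_high - y_low)          # local error estimate
    if err <= tol:
        # accept: grow the next step, capped at 2x
        dt_next = dt * min(2.0, 0.9 * (tol / max(err, 1e-16)) ** 0.5)
        return t + dt, y_high, dt_next
    # reject: shrink the step and retry
    return adaptive_step(f, t, y, 0.9 * dt * (tol / err) ** 0.5, tol)
```

The same accept/reject logic carries over to the higher-order pairs used in practice; only the stage coefficients and the error-exponent change.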
Adaptive multiresolution computations applied to detonations
A space-time adaptive method is presented for the reactive Euler equations
describing chemically reacting gas flow where a two species model is used for
the chemistry. The governing equations are discretized with a finite volume
method and dynamic space adaptivity is introduced using multiresolution
analysis. A time splitting method of Strang is applied to be able to consider
stiff problems while keeping the method explicit. For time adaptivity an
improved Runge--Kutta--Fehlberg scheme is used. Applications deal with
detonation problems in one and two space dimensions. A comparison of the
adaptive scheme with reference computations on a regular grid allows assessing
the accuracy and the computational efficiency in terms of CPU time and memory
requirements.
Comment: Zeitschrift für Physikalische Chemie, accepted
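Strang splitting as used for stiff chemistry can be sketched generically: the reaction operator is advanced over two half steps around one full transport step, which retains second-order accuracy when each sub-integrator is at least second order. Here `advance_transport` and `advance_reaction` are placeholders for the actual sub-integrators.

```python
def strang_step(u, dt, advance_transport, advance_reaction):
    """One Strang-split step: half reaction, full transport, half reaction."""
    u = advance_reaction(u, dt / 2)   # stiff source terms, half step
    u = advance_transport(u, dt)      # convective/diffusive part, full step
    u = advance_reaction(u, dt / 2)   # stiff source terms, half step
    return u
```

This lets the stiff reaction sub-steps use a dedicated (possibly implicit or sub-cycled) integrator while the transport part keeps its explicit discretization.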
Multi-Resolution Texture Coding for Multi-Resolution 3D Meshes
We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and its texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether, as they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
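The refinement-image idea can be sketched as residual coding between texture LODs. This is a simplified stand-in for the paper's actual encoder, assuming a factor-2 LOD pyramid and nearest-neighbour upsampling; the function names are illustrative.

```python
import numpy as np

def encode_refinement(fine, coarse):
    """Refinement image: residual between the fine texture and the
    upsampled coarse one; only this residual needs to be transmitted."""
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)  # NN upsample
    return fine - up

def decode_refinement(coarse, residual):
    """Reconstruct the finer texture LOD from the coarse LOD plus its
    refinement image."""
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return up + residual
```

Because the residual is typically small and sparse, it compresses far better than retransmitting the finer texture outright, which is what makes the progressive scheme pay off.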
Wavelet-based Adaptive Techniques Applied to Turbulent Hypersonic Scramjet Intake Flows
The simulation of hypersonic flows is computationally demanding due to large
gradients of the flow variables caused by strong shock waves and thick boundary
or shear layers. The resolution of those gradients imposes the use of extremely
small cells in the respective regions. Taking turbulence into account
intensifies the variation in scales even more. Furthermore, hypersonic flows
have been shown to be extremely grid sensitive. For the simulation of
three-dimensional configurations of engineering applications, this results in a
huge amount of cells and prohibitive computational time. Therefore, modern
adaptive techniques can provide a gain with respect to computational costs and
accuracy, allowing the generation of locally highly resolved flow regions where
they are needed and retaining an otherwise smooth distribution. An h-adaptive
technique based on wavelets is employed for the solution of hypersonic flows.
The compressible Reynolds averaged Navier-Stokes equations are solved using a
differential Reynolds stress turbulence model, well suited to predict
shock-wave-boundary-layer interactions in high enthalpy flows. Two test cases
are considered: a compression corner and a scramjet intake. The compression
corner is a classical test case in hypersonic flow investigations because it
poses a shock-wave-turbulent-boundary-layer interaction problem. The adaptive
procedure is applied to a two-dimensional configuration as validation. The
scramjet intake is firstly computed in two dimensions. Subsequently a
three-dimensional geometry is considered. Both test cases are validated with
experimental data and compared to non-adaptive computations. The results show
that the use of an adaptive technique for hypersonic turbulent flows at high
enthalpy conditions can strongly improve the performance in terms of memory and
CPU time while at the same time maintaining the required accuracy of the
results.
Comment: 26 pages, 29 figures, submitted to AIAA Journal
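A wavelet-based h-adaptation indicator of the kind described above can be illustrated in one dimension with Haar detail coefficients: coarse cells whose detail exceeds a threshold are flagged for refinement. This is a minimal sketch of the thresholding idea, not the indicator actually coupled to the Reynolds stress solver.

```python
import numpy as np

def haar_details(u):
    """Haar wavelet detail coefficients: half the difference of
    neighbouring pairs of cell values."""
    return 0.5 * (u[0::2] - u[1::2])

def refine_flags(u, eps):
    """Flag coarse cells whose detail coefficient exceeds eps,
    i.e. cells with steep local variation (shocks, shear layers)."""
    return np.abs(haar_details(u)) > eps
```

In smooth regions the details decay rapidly, so thresholding concentrates cells exactly where shocks and boundary layers demand resolution.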
A new numerical strategy with space-time adaptivity and error control for multi-scale streamer discharge simulations
This paper presents a new resolution strategy for multi-scale streamer
discharge simulations based on a second order time adaptive integration and
space adaptive multiresolution. A classical fluid model is used to describe
plasma discharges, considering drift-diffusion equations and the computation of
electric field. The proposed numerical method provides a time-space accuracy
control of the solution, and thus, an effective accurate resolution independent
of the fastest physical time scale. An important improvement of the
computational efficiency is achieved whenever the required time steps go beyond
standard stability constraints associated with mesh size or source time scales
for the resolution of the drift-diffusion equations, whereas the stability
constraint related to the dielectric relaxation time scale is respected but
with a second order precision. Numerical illustrations show that the strategy
can be efficiently applied to simulate the propagation of highly nonlinear
ionizing waves as streamer discharges, as well as highly multi-scale nanosecond
repetitively pulsed discharges, describing consistently a broad spectrum of
space and time scales as well as different physical scenarios for consecutive
discharge/post-discharge phases, out of reach of standard non-adaptive methods.
Comment: Support of Ecole Centrale Paris is gratefully acknowledged for a
several-month stay of Z. Bonaventura at Laboratory EM2C as visiting professor.
The authors express special thanks to Christian Tenaud (LIMSI-CNRS) for
providing the basis of the multiresolution kernel of MR CHORUS, a code
developed for the compressible Navier-Stokes equations (Déclaration d'Invention
DI 03760-01). Accepted for publication; Journal of Computational Physics
(2011) 1-2
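The stability constraints discussed above can be made concrete with a sketch of the classical explicit time-step bounds for drift-diffusion plasma models: the advective CFL limit, the diffusive limit, and the dielectric relaxation time eps0 / (e * mu_e * n_e). These are textbook estimates, not the paper's adaptive strategy, and the parameter values in the test are purely illustrative.

```python
def stable_dt(dx, v_max, D_max, n_max, mu_e, cfl=0.5,
              eps0=8.854e-12, q_e=1.602e-19):
    """Smallest of the classical explicit stability limits for a
    drift-diffusion plasma model on a grid of spacing dx."""
    dt_adv = cfl * dx / v_max                 # advective CFL limit
    dt_diff = cfl * dx ** 2 / (2.0 * D_max)   # explicit diffusion limit
    dt_diel = eps0 / (q_e * mu_e * n_max)     # dielectric relaxation time
    return min(dt_adv, dt_diff, dt_diel)
```

In streamer conditions the dielectric relaxation time is often the binding constraint, which is why decoupling the time step from it (as the paper's strategy does for the other scales) yields such large speed-ups.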
A predictive approach for a real-time remote visualization of large meshes
Remote access to large meshes has been the subject of study for several years. In this paper we contribute to the problem of remote mesh viewing, working on triangular meshes. After a review of existing remote-viewing methods, we propose a visualization approach based on a client-server architecture in which almost all operations are performed on the server. Our approach comprises three main steps: first, partitioning the original mesh into several fragments small enough to fit the assumed Transmission Control Protocol (TCP) window size of the network; second, a pre-simplification of the partitioned mesh, generating simplified models of the fragments at different levels of detail, which aims to accelerate the visualization process when a client (which we also call a remote user) requests a specific area of interest; third, the actual visualization of the area of interest, which the client can view more accurately than the areas out of context. In this step, reconstructing the object requires taking the connectivity of the fragments into account before simplifying a fragment. Pestiv-3D project
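The area-of-interest rendering step can be sketched as a distance-based LOD assignment per fragment. The function `pick_lod` and its distance-to-LOD mapping are hypothetical, illustrating only the idea of serving the focus region at full detail and the surrounding context coarsely.

```python
def pick_lod(fragments, focus, max_lod=3):
    """Assign each fragment a level of detail (0 = finest): full detail
    near the area of interest, progressively coarser with distance."""
    lods = {}
    for name, center in fragments.items():
        d = ((center[0] - focus[0]) ** 2 + (center[1] - focus[1]) ** 2) ** 0.5
        lods[name] = min(max_lod, int(d))  # hypothetical distance-to-LOD map
    return lods
```

The server then streams, for each fragment, only the pre-simplified model at the level this mapping selects, which bounds the data sent per viewpoint change.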