
    Level Set Methods for Stochastic Discontinuity Detection in Nonlinear Problems

    Stochastic physical problems governed by nonlinear conservation laws are challenging due to solution discontinuities in stochastic and physical space. In this paper, we present a level set method to track discontinuities in stochastic space by solving a Hamilton-Jacobi equation. By introducing a speed function that vanishes at discontinuities, the iso-zero of the level set problem coincides with the discontinuities of the conservation law. The level set problem is solved on a sequence of successively finer grids in stochastic space. The method is adaptive in the sense that costly evaluations of the conservation law of interest are performed only in the vicinity of the discontinuities during the refinement stage. In regions of stochastic space where the solution is smooth, a surrogate method replaces expensive evaluations of the conservation law. The proposed method is tested in conjunction with different sets of localized orthogonal basis functions on simplex elements, as well as with frames based on piecewise polynomials conforming to the level set function. The performance of the proposed method is compared to that of existing adaptive multi-element generalized polynomial chaos methods.
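    The core mechanism is easy to illustrate in one stochastic dimension. The sketch below evolves a level-set function under a Hamilton-Jacobi equation, phi_t + F|phi_x| = 0, with a speed F built to vanish where the solution gradient blows up, so the iso-zero parks at the jump. The toy solution u, the grid, the speed definition, and the step counts are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# 1D sketch: evolve phi_t + F(x) * |phi_x| = 0 with a speed F that vanishes
# at a discontinuity of the (toy) solution u, so the iso-zero of phi is
# attracted to, and stops at, the jump location.
n = 201
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

u = np.where(x < 0.6, 1.0, 0.0)        # toy conservation-law solution, jump at 0.6
grad_u = np.abs(np.gradient(u, dx))    # large only near the jump
F = 1.0 / (1.0 + grad_u**2)            # speed ~1 in smooth regions, ~0 at the jump

phi = x - 0.2                          # initial front at x = 0.2
dt = 0.5 * dx                          # CFL-limited step since F <= 1

for _ in range(800):
    # First-order Godunov upwinding of |phi_x| for F >= 0
    dminus = np.empty_like(phi)
    dplus = np.empty_like(phi)
    dminus[1:] = (phi[1:] - phi[:-1]) / dx
    dminus[0] = dminus[1]
    dplus[:-1] = (phi[1:] - phi[:-1]) / dx
    dplus[-1] = dplus[-2]
    grad_phi = np.sqrt(np.maximum(dminus, 0.0)**2 + np.minimum(dplus, 0.0)**2)
    phi -= dt * F * grad_phi

print(f"front location ~ {x[np.argmin(np.abs(phi))]:.3f}")  # close to 0.6
```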

    The persistent cosmic web and its filamentary structure II: Illustrations

    The recently introduced discrete persistent structure extractor (DisPerSE; Sousbie 2010, paper I) is implemented on realistic 3D cosmological simulations and observed redshift catalogues (SDSS); it is found that DisPerSE traces equally well the observed filaments, walls, and voids in both cases. In either setting, filaments are shown to connect onto halos, outskirt walls, which circumvent voids. Indeed, this algorithm operates directly on the particles without assuming anything about the distribution, and yields a natural (topologically motivated) self-consistent criterion for selecting the significance level of the identified structures. It is shown that this extraction is possible even for very sparsely sampled point processes, as a function of the persistence ratio. Hence astrophysicists should be in a position to trace and measure precisely the filaments, walls, and voids from such samples and assess the confidence of the post-processed sets as a function of this threshold, which can be expressed relative to the expected amplitude of shot noise. In a cosmic framework, this criterion is comparable to friends-of-friends for the identification of peaks, while it also identifies the connected filaments and walls, and quantitatively recovers the full set of topological invariants (Betti numbers) directly from the particles as a function of the persistence threshold. This criterion is found to be sufficient even if one particle out of two is noise, when the persistence ratio is set to 3-sigma or more. The algorithm is also implemented on the SDSS catalogue and used to locate interesting configurations of the filamentary structure. In this context, we carried out the identification of an "optically faint" cluster at the intersection of filaments through the recent observation of its X-ray counterpart by SUZAKU. The corresponding filament catalogue will be made available online. Comment: a higher-resolution version is available at http://www.iap.fr/users/sousbie together with complementary material (movie and data). Submitted to MNRAS.
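    The role of the persistence-ratio threshold can be mimicked with a toy filter. The sketch below keeps only persistence pairs whose density ratio exceeds the level that pure shot noise would reach at the n-sigma quantile; the pairs and the exponential noise model are invented stand-ins and do not reproduce the actual DisPerSE machinery.

```python
import math
import numpy as np

# Toy persistence-ratio cut: keep pairs (birth density, death density) whose
# ratio exceeds what shot noise alone would reach at the n-sigma level.
rng = np.random.default_rng(0)

births = rng.uniform(0.5, 2.0, size=100)           # fake persistence pairs
deaths = births * rng.uniform(1.0, 10.0, size=100)

def shot_noise_cut(n_sigma, samples=200_000):
    # Monte Carlo estimate of the persistence ratio of pure noise at the
    # n-sigma quantile (toy model: i.i.d. exponential density values).
    noise = rng.exponential(1.0, size=(samples, 2))
    ratios = noise.max(axis=1) / noise.min(axis=1)
    p = 0.5 * (1.0 + math.erf(n_sigma / math.sqrt(2.0)))  # one-sided quantile
    return float(np.quantile(ratios, p))

cut = shot_noise_cut(3.0)
keep = deaths / births > cut
print(f"3-sigma ratio cut ~ {cut:.1f}; kept {keep.sum()}/{len(births)} pairs")
```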

    Robust Surface Reconstruction from Point Clouds

    The problem of generating a surface triangulation from a set of points with normal information arises in several mesh processing tasks such as surface reconstruction or surface resampling. In this paper we present a surface triangulation approach based on local 2D Delaunay triangulations in tangent space. Our contribution is the extension of this method to surfaces with sharp corners and creases. We demonstrate the robustness of the method on difficult meshing problems that include nearby sheets, self-intersecting non-manifold surfaces, and noisy point samples.
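    The basic building block, a 2D Delaunay triangulation of a point's neighbourhood projected into its tangent plane, can be sketched in a few lines. Below is a minimal sketch assuming SciPy, toy samples on a sphere, and a known normal; the paper's treatment of sharp corners, creases, and non-manifold configurations is not shown.

```python
import numpy as np
from scipy.spatial import Delaunay

# Project the k nearest neighbours of a point into its tangent plane and
# triangulate them with a 2D Delaunay triangulation.
rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # toy samples on the unit sphere

p = pts[0]
normal = p.copy()                                   # exact normal on a sphere

k = 12
idx = np.argsort(np.linalg.norm(pts - p, axis=1))[:k]
nbrs = pts[idx]

# Orthonormal basis (u, v) of the tangent plane at p
u = np.cross(normal, [0.0, 0.0, 1.0])
if np.linalg.norm(u) < 1e-8:                        # normal parallel to z-axis
    u = np.cross(normal, [0.0, 1.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(normal, u)

# 2D coordinates in tangent space, then the local Delaunay triangulation
local = nbrs - p
uv = np.stack([local @ u, local @ v], axis=1)
tri = Delaunay(uv)

triangles3d = idx[tri.simplices]                    # lift back to global indices
print(f"{len(triangles3d)} local triangles around point 0")
```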

    Recent Improvements on Cavity-Based Operators for RANS Mesh Adaptation

    While anisotropic mesh adaptation has been a reliable tool for predicting inviscid flows, its use with viscous flows at high Reynolds numbers remains a tedious task. Indeed, many issues tend to limit the efficiency of standard remeshing algorithms based on local modifications. First, high Reynolds numbers require handling a very high level of anisotropy, O(1:10^6), near the geometry. In this range of anisotropy, the interpolation of metric fields and the projection onto the geometry are typical components that may fail during an adaptive step. The need for high resolution near the geometry imposes an accurate geometry description, optimally linked to a continuous CAD geometry. However, the boundary layer sizing may become smaller than typical CAD tolerances. We present a simple hierarchical geometry approximation where newly created points are projected linearly, then with a cubic approximation, and finally using the CAD data. Finally, the accuracy and speed of convergence of the flow solver depend strongly on the topology of the grids. Quasi-structured grids are preferred in the boundary layer, while such grids are complicated to generate with typical anisotropic meshing algorithms. In this paper we discuss new developments in the metric-orthogonal approach, where an advancing-point technique is used to propose new points; these newly created points are then inserted using the cavity operator.
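    The hierarchical projection can be illustrated on a toy curve. In the sketch below an analytic function stands in for the CAD geometry: a newly inserted boundary point is first placed on the linear segment between its neighbours, then corrected with a cubic Hermite approximation built from end-point tangents, and finally snapped to the exact geometry query. The function f, the nodes, and the parameter t are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def f(x):                                 # stand-in for the exact CAD geometry
    return np.sin(2.0 * x)

def df(x):                                # its tangent (derivative) data
    return 2.0 * np.cos(2.0 * x)

xa, xb = 0.3, 0.7                         # existing boundary nodes
t = 0.5                                   # parametric location of the new point
x = (1 - t) * xa + t * xb

# (1) linear projection between the two nodes
y_lin = (1 - t) * f(xa) + t * f(xb)

# (2) cubic Hermite correction from end-point values and tangents
h = xb - xa
h00 = 2*t**3 - 3*t**2 + 1
h10 = t**3 - 2*t**2 + t
h01 = -2*t**3 + 3*t**2
h11 = t**3 - t**2
y_cub = h00*f(xa) + h10*h*df(xa) + h01*f(xb) + h11*h*df(xb)

# (3) exact projection, i.e. the CAD query, used when it is available
y_cad = f(x)

print(f"linear {y_lin:.4f} -> cubic {y_cub:.4f} -> CAD {y_cad:.4f}")
```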

    Efficient deep data assimilation with sparse observations and time-varying sensors

    Variational Data Assimilation (DA) has been broadly used in engineering problems for field reconstruction and prediction by performing a weighted combination of multiple sources of noisy data. In recent years, the integration of deep learning (DL) techniques in DA has shown promise in improving efficiency and accuracy in high-dimensional dynamical systems. Nevertheless, existing deep DA approaches face difficulties in dealing with unstructured observation data, especially when the placement and number of sensors change over time. We introduce a novel variational DA scheme, named Voronoi-tessellation Inverse operator for VariatIonal Data assimilation (VIVID), that incorporates a DL inverse operator into the assimilation objective function. By leveraging the capabilities of Voronoi tessellation and convolutional neural networks, VIVID is adept at handling sparse, unstructured, and time-varying sensor data. Furthermore, the incorporation of the DL inverse operator establishes a direct link between observation and state space, reducing the number of minimization steps required for DA. Additionally, VIVID can be seamlessly integrated with Proper Orthogonal Decomposition (POD) to develop an end-to-end reduced-order DA scheme, which can further expedite field reconstruction. Numerical experiments in a fluid dynamics system demonstrate that VIVID can significantly outperform existing DA and DL algorithms. The robustness of VIVID is also assessed through the application of various levels of prior error, the utilization of varying numbers of sensors, and the misspecification of error covariance in DA.
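    The sensor-handling idea can be sketched independently of the network: sparse, moving sensors are rasterized onto a fixed grid by a Voronoi tessellation (each grid cell takes the value of its nearest sensor), yielding a structured image a CNN can consume. The grid size, sensor counts, and field below are toy assumptions, not the VIVID configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Rasterize sparse, time-varying sensors onto a fixed grid by Voronoi
# tessellation: every grid cell inherits the value of its nearest sensor.
rng = np.random.default_rng(2)
nx = ny = 64
gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

def voronoi_image(sensor_xy, sensor_val):
    """One channel of a CNN input: nearest-sensor fill of the grid."""
    _, nearest = cKDTree(sensor_xy).query(grid)
    return sensor_val[nearest].reshape(ny, nx)

for t in range(3):                                 # sensors move and change count
    m = int(rng.integers(5, 20))
    xy = rng.uniform(0, 1, size=(m, 2))
    val = np.sin(4 * xy[:, 0]) * np.cos(4 * xy[:, 1])   # toy field samples
    img = voronoi_image(xy, val)
    print(f"t={t}: {m:2d} sensors -> image {img.shape}")
```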

    Construction of boundary element models in bioelectromagnetism

    Multisensor electro- and magnetoencephalographic (EEG and MEG) as well as electro- and magnetocardiographic (ECG and MCG) recordings have proved useful for noninvasively extracting information on bioelectric excitation. The anatomy of the patient needs to be taken into account when excitation sites are localized by solving the inverse problem. In this work, a methodology has been developed to construct patient-specific boundary element models for bioelectromagnetic inverse problems from magnetic resonance (MR) data volumes as well as from two orthogonal X-ray projections. The process consists of three main steps: reconstruction of the 3-D geometry, triangulation of the reconstructed geometry, and registration of the model with a bioelectromagnetic measurement system. The 3-D geometry is reconstructed from MR data by matching a 3-D deformable boundary element template to the images. The deformation is accomplished as an energy minimization process consisting of image- and model-based terms. The robustness of the matching is improved by multi-resolution and global-to-local approaches as well as by using oriented distance maps. A boundary element template is also used when the 3-D geometry is reconstructed from X-ray projections. The deformation is first accomplished in 2-D between the contours of simulated X-ray projections, built from the template, and those of real X-ray projections. The produced 2-D vector field is back-projected and interpolated on the 3-D template surface. A marching cubes triangulation is computed for the reconstructed 3-D geometry. Thereafter, a non-iterative mesh-simplification method is applied, based on the Voronoi-Delaunay duality on a 3-D surface with discrete distance measures. Finally, the triangulated surfaces are registered with a bioelectromagnetic measurement system using markers. More than fifty boundary element models have been successfully constructed from MR images using the methods developed in this work. A simulation demonstrated the feasibility of the X-ray reconstruction; some practical problems of X-ray imaging need to be solved before tests with real data can begin.
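    The final registration step is a standard marker-based rigid alignment, which can be sketched as a least-squares (Kabsch/SVD) fit. The marker coordinates below are synthetic; the thesis' actual marker protocol is not reproduced.

```python
import numpy as np

def rigid_register(src, dst):
    """Rotation R and translation t minimizing sum ||R @ s_i + t - d_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of markers
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(3)
markers_mri = rng.normal(size=(4, 3))             # markers in MR coordinates
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:                     # ensure a proper rotation
    R_true[:, 0] *= -1
markers_meas = markers_mri @ R_true.T + np.array([0.1, -0.2, 0.05])

R, t = rigid_register(markers_mri, markers_meas)
err = np.abs(markers_mri @ R.T + t - markers_meas).max()
print(f"max marker residual: {err:.2e}")          # ~ machine precision
```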

    COMPOSE: Compacted object sample extraction, a framework for semi-supervised learning in nonstationary environments

    An increasing number of real-world applications are associated with streaming data drawn from drifting and nonstationary distributions. These applications demand new algorithms that can learn and adapt to such changes, also known as concept drift. Proper characterization of such data with existing approaches typically requires a substantial amount of labeled instances, which may be difficult, expensive, or even impractical to obtain. In this thesis, compacted object sample extraction (COMPOSE) is introduced: a computational geometry-based framework to learn from nonstationary streaming data in which labels are unavailable (or presented very sporadically) after initialization. The feasibility and performance of the algorithm are evaluated on several synthetic and real-world data sets, which present a variety of scenarios of initially labeled streaming environments. On carefully designed synthetic data sets, we also compare the performance of COMPOSE against the optimal Bayes classifier, as well as the arbitrary subpopulation tracker algorithm, which addresses a similar environment referred to as extreme verification latency. Furthermore, using the real-world National Oceanic and Atmospheric Administration weather data set, we demonstrate that COMPOSE is competitive even with a well-established and fully supervised nonstationary learning algorithm that receives labeled data in every batch.
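    The COMPOSE loop is simple to caricature. In the sketch below, each batch is labeled from the previous cores with 1-NN (standing in for a proper semi-supervised step), each class is then "compacted" by keeping the points nearest its mean (a crude stand-in for the alpha-shape core support extraction COMPOSE uses), and only the cores seed the next step. The data, drift, and keep fraction are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def one_nn_label(X_lab, y_lab, X_new):
    """Label each new point with the class of its nearest labeled point."""
    d = np.linalg.norm(X_new[:, None] - X_lab[None], axis=2)
    return y_lab[d.argmin(axis=1)]

def compact(X, y, keep=0.5):
    """Keep the fraction of each class nearest its mean (toy core support)."""
    cores_X, cores_y = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        d = np.linalg.norm(Xc - Xc.mean(axis=0), axis=1)
        sel = np.argsort(d)[: max(1, int(keep * len(Xc)))]
        cores_X.append(Xc[sel])
        cores_y.append(np.full(len(sel), c))
    return np.vstack(cores_X), np.concatenate(cores_y)

mu = np.array([[-1.0, 0.0], [1.0, 0.0]])           # two drifting class means
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in mu])
y = np.repeat([0, 1], 50)                          # the only labels ever given

for step in range(5):
    mu += np.array([0.15, 0.1])                    # gradual concept drift
    X_new = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in mu])
    y_true = np.repeat([0, 1], 50)                 # held out, for scoring only
    y_hat = one_nn_label(X, y, X_new)              # no labels actually received
    print(f"step {step}: accuracy {(y_hat == y_true).mean():.2f}")
    X, y = compact(X_new, y_hat)                   # cores seed the next step
```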