
    Strict lower bounds with separation of sources of error in non-overlapping domain decomposition methods

    This article deals with the computation of guaranteed lower bounds of the error in the framework of finite element (FE) and domain decomposition (DD) methods. In addition to being computed fully in parallel, the proposed lower bounds separate the algebraic error (due to the use of a DD iterative solver) from the discretization error (due to the FE), which enables the steering of the iterative solver by the discretization error. These lower bounds are also used to improve goal-oriented error estimation in a substructured context. Assessments on 2D static linear mechanics problems illustrate the relevance of the separation of sources of error and the independence of the lower bounds from the substructuring. We also steer the iterative solver by an objective of precision on a quantity of interest. This strategy consists of a sequence of solves and takes advantage of adaptive remeshing and the recycling of search directions.
    Comment: International Journal for Numerical Methods in Engineering, Wiley, 201
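
    As an illustration of the steering idea, here is a minimal sketch (not the article's code; the estimator callables and the 10% ratio are assumptions) of an iterative DD solve stopped once the algebraic error bound falls below a fraction of the discretization error bound:

```python
# Hypothetical sketch of steering a DD iterative solver by the discretization
# error. `algebraic_error_bound` and `discretization_error_bound` stand in for
# the separated estimators described in the abstract.

def solve_with_error_steering(apply_solver_step, algebraic_error_bound,
                              discretization_error_bound, ratio=0.1,
                              max_iterations=1000):
    """Iterate only until the algebraic error is small relative to the
    discretization error, avoiding wasted solver iterations."""
    state = None
    for iteration in range(max_iterations):
        state = apply_solver_step()
        e_alg = algebraic_error_bound(state)        # error from the DD iterations
        e_dis = discretization_error_bound(state)   # error from the FE mesh
        if e_alg <= ratio * e_dis:
            # Iterating further would chase accuracy the mesh cannot deliver.
            break
    return state, iteration
```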

    Resilience in Numerical Methods: A Position on Fault Models and Methodologies

    Future extreme-scale computer systems may expose silent data corruption (SDC) to applications in order to save energy or increase performance. However, resilience research struggles to come up with useful abstract programming models for reasoning about SDC. Existing work randomly flips bits in running applications, but this only shows average-case behavior for a low-level, artificial hardware model. Algorithm developers need to understand worst-case behavior with the higher-level data types they actually use in order to make their algorithms more resilient. Also, we know so little about how SDC may manifest in future hardware that it seems premature to draw conclusions about the average case. We argue instead that numerical algorithms can benefit from a numerical unreliability fault model, where faults manifest as unbounded perturbations to floating-point data. Algorithms can use inexpensive "sanity" checks that bound or exclude error in the results of computations. Given a selective reliability programming model that requires reliability only when and where needed, such checks can make algorithms reliable despite unbounded faults. Sanity checks, and in general a healthy skepticism about the correctness of subroutines, are wise even if hardware is perfectly reliable.
    Comment: Position Paper
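
    In the spirit of that fault model, a sanity check can be as simple as validating a residual in (assumed) reliable mode and retrying on failure. A minimal sketch, with `unreliable_solve` as a hypothetical subroutine that may suffer SDC:

```python
import numpy as np

def checked_solve(A, b, unreliable_solve, tol=1e-8, max_retries=3):
    """Accept a solution only if it passes a cheap residual check; a fault
    that perturbs the result by an unbounded amount triggers a retry."""
    for _ in range(max_retries):
        x = unreliable_solve(A, b)
        residual = np.linalg.norm(b - A @ x)
        # The check also rejects NaN/Inf, which unbounded perturbations can produce.
        if np.isfinite(residual) and residual <= tol * np.linalg.norm(b):
            return x
    raise RuntimeError("result failed the sanity check on every retry")
```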

    Deep learning extends de novo protein modelling coverage of genomes using iteratively predicted structural constraints

    The inapplicability of amino acid covariation methods to small protein families has limited their use for structural annotation of whole genomes. Recently, deep learning has shown promise in allowing accurate residue-residue contact prediction even for shallow sequence alignments. Here we introduce DMPfold, which uses deep learning to predict inter-atomic distance bounds, the main-chain hydrogen bond network, and torsion angles, which it uses to build models in an iterative fashion. DMPfold produces more accurate models than two popular methods for a test set of CASP12 domains, and works just as well for transmembrane proteins. Applied to all Pfam domains without known structures, it produced confident models for 25% of these so-called dark families in under a week on a small 200-core cluster. DMPfold provides models for 16% of human proteome UniProt entries without structures, generates accurate models with fewer than 100 sequences in some cases, and is freely available.
    Comment: JGG and SMK contributed equally to the work
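
    The iterative protocol can be pictured with a short schematic (this is not DMPfold's actual code; all callables and parameters are placeholders): predict constraints, build models, then feed the best current model back into the predictor.

```python
def iterative_fold(sequence, predict_constraints, build_models,
                   estimated_model_error, n_cycles=3, n_models=20):
    """Cycle between constraint prediction (distance bounds, H-bonds,
    torsion angles) and model building, keeping the most confident model."""
    best = None
    constraints = predict_constraints(sequence, prior_model=None)
    for _ in range(n_cycles):
        models = build_models(sequence, constraints, n=n_models)
        best = min(models, key=estimated_model_error)   # most confident model
        # Re-predict constraints informed by the current best model.
        constraints = predict_constraints(sequence, prior_model=best)
    return best
```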

    A Static Analyzer for Large Safety-Critical Software

    We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general-purpose static analyzer and later adaptation to particular programs of the family by the end user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety-critical embedded software. The main novelties are: the design principle of static analyzers by refinement and adaptation through parametrization; the symbolic manipulation of expressions to improve the precision of abstract transfer functions; the octagon, ellipsoid, and decision tree abstract domains, all with sound handling of rounding errors in floating-point computations; widening strategies (with thresholds, delayed); and the automatic determination of the parameters (parametrized packing).
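
    Of the techniques listed, widening with thresholds is easy to show in miniature. A sketch on the plain interval domain (the analyzer itself uses richer domains such as octagons and ellipsoids; the threshold values here are made up):

```python
THRESHOLDS = [0, 1, 255, 65535, float("inf")]

def widen_upper(old_hi, new_hi):
    """If the upper bound grew, jump to the next threshold rather than
    straight to +inf: termination is still forced, precision is kept."""
    if new_hi <= old_hi:
        return old_hi
    return min(t for t in THRESHOLDS if t >= new_hi)

def widen_interval(old, new):
    lo = old[0] if new[0] >= old[0] else -float("inf")  # classic widening below
    hi = widen_upper(old[1], new[1])                    # thresholds above
    return (lo, hi)

# A bound growing from (0, 10) to (0, 12) widens to (0, 255), not (0, +inf),
# so a later narrowing pass can still recover a finite range.
print(widen_interval((0, 10), (0, 12)))  # (0, 255)
```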

    Instance optimal Crouzeix-Raviart adaptive finite element methods for the Poisson and Stokes problems

    We extend the ideas of Diening, Kreuzer, and Stevenson [Instance optimality of the adaptive maximum strategy, Found. Comput. Math. (2015)] from conforming approximations of the Poisson problem to nonconforming Crouzeix-Raviart approximations of the Poisson and Stokes problems in 2D. As a consequence, we obtain instance optimality of an AFEM with a modified maximum marking strategy.
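
    For readers unfamiliar with the marking step: the basic (unmodified) maximum strategy marks every element whose local error indicator is within a fixed fraction of the largest one; the paper's modification refines this rule. A small sketch with hypothetical indicator values:

```python
def maximum_marking(indicators, theta=0.5):
    """Return the elements T with eta_T >= theta * max over T' of eta_T'."""
    eta_max = max(indicators.values())
    return {T for T, eta in indicators.items() if eta >= theta * eta_max}

# Elements 2 and 7 carry the largest indicators and are marked for refinement.
marked = maximum_marking({1: 0.1, 2: 0.9, 7: 0.8, 9: 0.3}, theta=0.6)
print(sorted(marked))  # [2, 7]
```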

    Algorithms and data structures for adaptive multigrid elliptic solvers

    Adaptive refinement and the complicated data structures required to support it are discussed. These data structures must be carefully tuned, especially in three dimensions, where the time and storage requirements of algorithms are crucial. Another major issue is grid generation. The options available seem to be curvilinear fitted grids, constructed on interactive graphics systems, and unfitted Cartesian grids, which can be constructed automatically. On several grounds, including storage requirements, the second option seems preferable for the well-behaved scalar elliptic problems considered here. A variety of techniques for the treatment of boundary conditions on such grids are reviewed. A new approach, which may overcome some of the difficulties encountered with previous approaches, is also presented.
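
    A common data structure behind such unfitted Cartesian grids is a quadtree (octree in 3D), where refining a cell simply splits it into equal children. A hedged sketch in 2D (field names are illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    x: float            # lower-left corner
    y: float
    size: float         # edge length of the square cell
    children: list = field(default_factory=list)

    def refine(self):
        """Split this cell into four equal children (2D quadtree rule)."""
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h)
                         for i in (0, 1) for j in (0, 1)]

    def leaves(self):
        """Yield the active (unrefined) cells, i.e. the current grid."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

root = Cell(0.0, 0.0, 1.0)
root.refine()
root.children[0].refine()             # local refinement in one corner
print(sum(1 for _ in root.leaves()))  # 7 active cells
```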