
    Evolution of anodic stress corrosion cracking in a coated material

    In the present paper, we investigate the influence of corrosion driving forces and interfacial toughness for a coated material subjected to mechanical loading. If the protective coating is cracked, the substrate material may become exposed to a corrosive medium. For a stress-corrosion-sensitive substrate material, this may lead to detrimental crack growth. A crack is assumed to grow by anodic dissolution, inherently leading to a blunt crack tip. The evolution of the crack surface is modelled as a moving boundary problem using an adaptive finite element method. The rate of dissolution along the crack surface in the substrate is assumed to be proportional to the chemical potential, which is a function of the local surface energy density and elastic strain energy density. The surface energy tends to flatten the surface, whereas the strain energy due to stress concentration promotes material dissolution. The influence of the interface energy density parameter for the solid–fluid combination, the interface corrosion resistance and the stiffness ratio between coating and substrate is investigated. Three characteristic crack shapes are obtained: deepening and narrowing single cracks, branched cracks and sharp interface cracks. The crack shapes obtained by our simulations are similar to real sub-coating cracks reported in the literature.
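
    The competition described above can be sketched in a few lines. The following is a minimal illustration, not the paper's adaptive finite element model: a dissolved-depth profile y(x) beneath the coating crack is advanced at a speed proportional to the chemical potential, taken as an assumed strain-energy concentration plus a surface-energy (curvature) term that smooths the front. The rate constant, surface energy density and notch shape are illustrative assumptions.

```python
# Minimal sketch of chemical-potential-driven dissolution (not the paper's FEM):
# speed proportional to mu = U_el + gamma * kappa, with an assumed strain-energy
# concentration U_el and the signed curvature kappa of the solid surface.
import numpy as np

k_diss, gamma = 1.0e-2, 0.5            # rate constant and surface energy density (assumed)
dt, n_steps = 0.005, 2000              # explicit update kept below its stability limit

x = np.linspace(-1.0, 1.0, 201)        # position along the coating/substrate interface
y = 0.05 * np.exp(-(x / 0.1) ** 2)     # initial dissolved depth beneath the coating crack

def curvature(x, y):
    dy = np.gradient(y, x)
    return np.gradient(dy, x) / (1.0 + dy ** 2) ** 1.5

for _ in range(n_steps):
    u_el = 1.0 + 5.0 * y / (0.05 + np.abs(x))   # toy strain-energy concentration at the notch
    kappa = curvature(x, y)                     # negative at the notch bottom (concave solid)
    mu = u_el + gamma * kappa                   # strain energy deepens, curvature term flattens
    y += dt * k_diss * np.maximum(mu, 0.0)      # anodic dissolution only removes material

print("final notch depth:", float(y.max()))
```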

    Data mining and accelerated electronic structure theory as a tool in the search for new functional materials

    Data mining is a recognized predictive tool in a variety of areas ranging from bioinformatics and drug design to crystal structure prediction. In the present study, an electronic structure implementation has been combined with structural data from the Inorganic Crystal Structure Database to generate results for highly accelerated electronic structure calculations of about 22,000 inorganic compounds. It is shown how data mining algorithms employed on the database can identify new functional materials with desired materials properties, resulting in a prediction of 136 novel materials with potential for use as detector materials for ionizing radiation. The methodology behind the automated ab initio approach is presented, the results are tabulated, and a version of the complete database is made available at http://gurka.fysik.uu.se/ESP/ (Ref. 1).
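
    The screening step lends itself to a small sketch. The column names, thresholds and property values below are illustrative assumptions, not entries from the actual database at the URL above; the point is only the style of data-mining filter applied to computed properties when searching for detector materials.

```python
import pandas as pd

# A handful of rows standing in for the ~22,000 computed compounds; the
# property values and column names are illustrative, not database entries.
df = pd.DataFrame({
    "formula":  ["CdTe", "HgI2", "NaCl", "PbO2", "Si"],
    "band_gap": [1.5, 2.1, 5.0, 1.0, 0.6],      # eV (illustrative)
    "density":  [5.85, 6.4, 2.17, 9.4, 2.33],   # g/cm^3 (illustrative)
    "max_Z":    [52, 80, 17, 82, 14],           # heaviest element: proxy for stopping power
})

candidates = df[
    df["band_gap"].between(1.4, 3.5)            # low leakage current at room temperature
    & (df["density"] > 5.0)                     # strong absorption of ionizing radiation
    & (df["max_Z"] >= 48)                       # high photoelectric cross-section
]
print(candidates["formula"].tolist())           # ['CdTe', 'HgI2'] with these toy numbers
```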

    Nanovoid nucleation by vacancy aggregation and vacancy-cluster coarsening in high-purity metallic single crystals

    A numerical model to estimate the critical times required for nanovoid nucleation in high-purity aluminum single crystals subjected to shock loading is presented. We regard a nanovoid as nucleated when it attains a size sufficient for subsequent growth by dislocation-mediated plasticity. Nucleation is assumed to proceed by means of diffusion-mediated vacancy aggregation and subsequent vacancy-cluster coarsening. Nucleation times are computed by a combination of lattice kinetic Monte Carlo simulations and simple estimates of nanovoid cavitation pressures and vacancy concentrations. The domain of validity of the model is established by considering rate-limiting physical processes and theoretical strength limits. The computed nucleation times are compared to experiments, suggesting that vacancy aggregation and cluster coarsening are feasible mechanisms of nanovoid nucleation in a specific subdomain of the pressure–strain-rate–temperature space.
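
    A minimal sketch of the lattice kinetic Monte Carlo ingredient, assuming a 2D square lattice, periodic boundaries and made-up migration and binding energies (the paper treats 3D fcc aluminum): vacancies hop to empty nearest-neighbour sites, hops that break vacancy-vacancy bonds are slower, and the residence-time algorithm advances physical time, so clusters tend to form and persist.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_vac = 32, 40                              # lattice size and number of vacancies (assumed)
kT, E_m, E_b, nu0 = 0.05, 0.60, 0.25, 1.0e13   # eV and 1/s; illustrative, not aluminum data

occ = np.zeros((L, L), dtype=bool)             # True where a vacancy sits
occ[np.unravel_index(rng.choice(L * L, n_vac, replace=False), (L, L))] = True
nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def events():
    """All currently possible vacancy hops as (source, target, rate)."""
    out = []
    for i, j in zip(*np.nonzero(occ)):
        bonds = sum(occ[(i + di) % L, (j + dj) % L] for di, dj in nbrs)
        for di, dj in nbrs:
            ti, tj = (i + di) % L, (j + dj) % L
            if not occ[ti, tj]:
                # breaking vacancy-vacancy bonds raises the barrier, so clustered
                # vacancies hop less often and clusters persist and coarsen
                out.append(((i, j), (ti, tj), nu0 * np.exp(-(E_m + E_b * bonds) / kT)))
    return out

t = 0.0
for _ in range(5000):
    ev = events()
    rates = np.array([r for _, _, r in ev])
    total = rates.sum()
    (i, j), (ti, tj), _ = ev[rng.choice(len(ev), p=rates / total)]   # pick a hop with prob ~ rate
    occ[i, j], occ[ti, tj] = False, True
    t += rng.exponential(1.0 / total)                                # residence-time clock

coord = np.mean([sum(occ[(i + di) % L, (j + dj) % L] for di, dj in nbrs)
                 for i, j in zip(*np.nonzero(occ))])
print(f"simulated time {t:.3e} s, mean vacancy-vacancy coordination {coord:.2f}")
```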

    Symplectic-energy-momentum preserving variational integrators

    The purpose of this paper is to develop variational integrators for conservative mechanical systems that are symplectic as well as energy- and momentum-conserving. To do this, a space–time view of variational integrators is employed, and time-step adaptation is used to impose the constraint of conservation of energy. Criteria for the solvability of the time steps and some numerical examples are given.
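
    A hedged sketch of the basic idea, not the paper's construction or examples: with a midpoint discrete Lagrangian L_d(q0, q1, h), each step solves for both the new position and the step size h so that the discrete Euler-Lagrange equations hold and the discrete energy keeps its initial value. The confining quartic potential and initial data are illustrative assumptions, chosen with nonzero angular momentum so the step-size equation stays solvable.

```python
import numpy as np
from scipy.optimize import fsolve

def V(q):     return 0.5 * q @ q + 0.25 * (q @ q) ** 2   # confining potential (assumed)
def gradV(q): return q * (1.0 + q @ q)

def step(q0, p0, E0, h_guess):
    """One step: solve simultaneously for the new position q1 AND the step size h."""
    def residual(z):
        q1, h = z[:2], z[2]
        vd, qm = (q1 - q0) / h, 0.5 * (q0 + q1)
        r_mom = p0 - vd - 0.5 * h * gradV(qm)        # discrete Euler-Lagrange condition
        r_en = 0.5 * vd @ vd + V(qm) - E0            # exact conservation of the discrete energy
        return np.array([*r_mom, r_en])

    z = fsolve(residual, np.array([*(q0 + h_guess * p0), h_guess]))
    q1, h = z[:2], z[2]
    vd, qm = (q1 - q0) / h, 0.5 * (q0 + q1)
    return q1, vd - 0.5 * h * gradV(qm), h           # new position, discrete momentum, step used

q, p = np.array([1.0, 0.0]), np.array([0.0, 1.2])    # nonzero angular momentum keeps vd away from 0
E0, t, h = 0.5 * p @ p + V(q), 0.0, 0.1
for _ in range(200):
    q, p, h = step(q, p, E0, h)
    t += h
print(f"t = {t:.2f}, last h = {h:.4f}, |H(q, p) - E0| = {abs(0.5 * p @ p + V(q) - E0):.2e}")
```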

    Variational integrators, the Newmark scheme, and dissipative systems

    Variational methods are a class of symplectic-momentum integrators for ODEs. Using these schemes, it is shown that the classical Newmark algorithm is structure preserving in a non-obvious way, thus explaining its observed numerical behavior. Modifications of variational methods to include forcing and dissipation are also proposed, extending the advantages of structure-preserving integrators to non-conservative systems.
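
    One well-known facet of this result admits a compact numerical check: for position-dependent forces, Newmark with beta = 0 and gamma = 1/2 reduces to velocity Verlet, a variational integrator, which is why its energy error stays bounded without drift; feeding a viscous force through the same update gives a simple dissipative variant. The parameters below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

m, k, h = 1.0, 4.0, 0.05      # mass, stiffness, time step (assumed)

def newmark(c, n_steps):
    """Newmark (beta = 0, gamma = 1/2) for m q'' = -k q - c q'."""
    q, v = 1.0, 0.0
    a = (-k * q - c * v) / m
    energy = np.empty(n_steps)
    for i in range(n_steps):
        q = q + h * v + 0.5 * h * h * a                 # beta = 0: explicit position update
        # gamma = 1/2 velocity update; the viscous term at t_{n+1} is linear in
        # v_{n+1}, so the implicit equation is solved in closed form.
        v = (v + 0.5 * h * (a - k * q / m)) / (1.0 + 0.5 * h * c / m)
        a = (-k * q - c * v) / m
        energy[i] = 0.5 * m * v * v + 0.5 * k * q * q
    return energy

E0 = 0.5 * k * 1.0 ** 2
conservative = newmark(c=0.0, n_steps=200_000)   # long run: energy error bounded, no drift
dissipative  = newmark(c=0.1, n_steps=2_000)     # forced/dissipative variant: energy decays
print("max |energy error|, c = 0.0 :", np.abs(conservative - E0).max())
print("final energy,       c = 0.1 :", dissipative[-1])
```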

    Dislocation subgrain structures and modeling the plastic hardening of metallic single crystals

    A single-crystal plasticity theory for insertion into finite element simulations is formulated using sequential laminates to model subgrain dislocation structures. It is known that local models do not adequately account for latent hardening, since latent hardening is not only a material property but also a nonlocal property (depending, e.g., on grain size and shape). The addition of the nonlocal energy from the formation of subgrain dislocation walls and from the boundary-layer misfits provides both latent and self-hardening of crystal slip. Latent hardening occurs because the formation of new dislocation walls limits the motion of new mobile dislocations, thus hardening future slip systems. Self-hardening is accomplished by an evolution of the subgrain-structure length scale, which is computed by minimizing the nonlocal energy. This minimization is a competition between the dislocation wall energy and the boundary-layer energies, as sketched below. The nonlocal terms are also directly minimized within the subgrain model, as they affect the deformation response. The geometrical relationship between the dislocation walls and the slip planes, which affects the dislocation mean free path, is taken into account, giving a first-order approximation to shape effects. A coplanar slip model is developed to meet the requirements of modeling the subgrain structure. This subgrain-structure plasticity model is noteworthy in that all material parameters are experimentally determined rather than fit. The model also has an inherent path dependence due to the formation of the subgrain structures. Validation is accomplished by comparison with single-crystal tension test results.
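
    Only the length-scale competition behind the self-hardening mechanism is sketched here; the full laminate model is far richer. Purely for illustration, the nonlocal energy per unit volume is taken as a wall term scaling like A / l plus a boundary-layer term scaling like B * l, so the optimal subgrain size is l* = sqrt(A / B); the coefficients A and B are assumed values, not calibrated material parameters.

```python
import numpy as np
from scipy.optimize import minimize_scalar

A = 2.0e-3    # dislocation-wall energy coefficient, J/m^2 (assumed)
B = 5.0e9     # boundary-layer misfit energy coefficient, J/m^4 (assumed)

energy = lambda l: A / l + B * l                  # nonlocal energy per unit volume vs. subgrain size l
res = minimize_scalar(energy, bounds=(1.0e-9, 1.0e-3), method="bounded")

print(f"numerical  l* = {res.x:.3e} m")
print(f"analytical l* = {np.sqrt(A / B):.3e} m")  # the competition gives l* = sqrt(A / B)
```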

    Nonsmooth Lagrangian mechanics and variational collision integrators

    Variational techniques are used to analyze the problem of rigid-body dynamics with impacts. The theory of smooth Lagrangian mechanics is extended to a nonsmooth context appropriate for collisions, and it is shown in what sense the system is symplectic and satisfies a Noether-style momentum conservation theorem. Discretizations of this nonsmooth mechanics are developed by using the methodology of variational discrete mechanics. This leads to variational integrators which are symplectic-momentum preserving and are consistent with the jump conditions given in the continuous theory. Specific examples of these methods are tested numerically, and the long-time stable energy behavior typical of variational methods is demonstrated.
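
    A hedged sketch of the flavour of such methods for the simplest possible example, not the paper's general construction: a ball bouncing elastically on a floor at q = 0 under constant gravity. Each Verlet-style step is split at the impact time and the elastic jump condition (reversal of the normal velocity) is applied there; since free flight under constant gravity is integrated exactly, the example isolates the treatment of the impact itself. All parameters are illustrative assumptions.

```python
import numpy as np

g, h, n_steps = 9.81, 1.0e-3, 200_000     # gravity, time step, number of steps (assumed)
q, v = 1.0, 0.0                           # drop height and initial velocity
E0 = g * q + 0.5 * v * v

for _ in range(n_steps):
    dt = h
    while dt > 0.0:
        q_trial = q + dt * v - 0.5 * g * dt * dt
        if q_trial >= 0.0:                # no impact within the remaining substep
            v -= g * dt
            q, dt = q_trial, 0.0
        else:                             # impact: split the step at the contact time
            t_imp = (v + np.sqrt(v * v + 2.0 * g * q)) / g
            q = 0.0
            v = -(v - g * t_imp)          # elastic jump condition: reverse the normal velocity
            dt -= t_imp

print("relative energy error:", abs(g * q + 0.5 * v * v - E0) / E0)
```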

    Frictional Collisions Off Sharp Objects

    This work develops robust contact algorithms capable of dealing with multibody nonsmooth contact geometries for which neither normals nor gap functions can be defined. Such situations arise in the early stages of fragmentation, when a number of angular fragments undergo complex collision sequences before eventually scattering, and they preclude the application of most contact algorithms proposed to date.
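
    The geometric difficulty, though not the paper's remedy for it, can be made concrete in a few lines: at a sharp vertex the closest-point projection used to define a gap function and a contact normal is no longer unique. The wedge geometry and query point below are illustrative assumptions.

```python
import numpy as np

def closest_on_segment(p, a, b):
    """Closest point to p on the segment [a, b]."""
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    return a + t * (b - a)

p = np.array([1.0, 1.0])                              # point approaching the corner
faces = {                                             # two faces meeting at a sharp vertex (origin)
    "face on -x axis, outward normal (0, 1)": (np.array([0.0, 0.0]), np.array([-5.0, 0.0])),
    "face on -y axis, outward normal (1, 0)": (np.array([0.0, 0.0]), np.array([0.0, -5.0])),
}
for name, (a, b) in faces.items():
    cp = closest_on_segment(p, a, b)
    print(f"{name}: closest point {cp}, gap {np.linalg.norm(p - cp):.3f}")
# Both faces return the vertex with the same gap, yet their face normals disagree:
# neither a unique contact normal nor a well-defined gap function exists there.
```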

    The Optimal Uncertainty Algorithm in the Mystic Framework

    We have recently proposed a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront, providing a framework for the communication and comparison of UQ results. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information, there exist bounds on uncertainties, obtained as the values of optimization problems, and that these bounds are optimal. It provides a uniform environment for the optimal solution of the problems of validation, certification, experimental design, reduced-order modeling, prediction and extrapolation, all under aleatoric and epistemic uncertainties. OUQ optimization problems are extremely large, and even though under general conditions they have finite-dimensional reductions, they must often be solved numerically. This general algorithmic framework for OUQ has been implemented in the mystic optimization framework. We describe this implementation and demonstrate its use in the context of the Caltech surrogate model for hypervelocity impact.
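
    A hedged sketch of the optimization at the core of OUQ, using a deliberately tiny admissible set: the largest probability of exceeding a threshold over all probability measures on [0, 1] with a prescribed mean, searched over two-point measures in the spirit of the finite-dimensional reduction. The paper's implementation uses the mystic solvers; scipy's differential evolution is substituted here, and the threshold, mean and penalty weight are illustrative assumptions. The result should reproduce the classical Markov bound m / theta.

```python
import numpy as np
from scipy.optimize import differential_evolution

theta, m = 0.8, 0.2                                   # failure threshold and prescribed mean (assumed)

def neg_objective(z):
    x1, x2, w = z                                     # two support points and one weight
    mean_gap = abs(w * x1 + (1.0 - w) * x2 - m)       # mean constraint handled as a penalty
    p_fail = w * (x1 >= theta) + (1.0 - w) * (x2 >= theta)
    return -(p_fail - 100.0 * mean_gap)               # maximize the failure probability

res = differential_evolution(neg_objective, bounds=[(0, 1), (0, 1), (0, 1)],
                             seed=0, tol=1e-10, maxiter=2000)
print("optimal bound (numerical):", -res.fun)
print("Markov bound m / theta   :", m / theta)
```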

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, contrary to the classical sensitivity analysis paradigm, these results show that uncertainties in input parameters do not necessarily propagate to output uncertainties. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact, suggesting the feasibility of the framework for important complex systems.
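
    The McDiarmid-type ingredient can be illustrated with the standard (non-optimal) inequality: if changing the i-th input of f alters its value by at most c_i, then P[f - E f >= t] <= exp(-2 t^2 / sum_i c_i^2). The function, input distribution and sample sizes below are illustrative assumptions; the paper's optimal concentration inequalities sharpen bounds of exactly this kind.

```python
import numpy as np

n, t, trials = 20, 0.15, 200_000       # inputs per sample, deviation, Monte Carlo size (assumed)
rng = np.random.default_rng(0)

x = rng.random((trials, n))            # independent Uniform(0, 1) inputs
f = x.mean(axis=1)                     # f(X_1, ..., X_n) = sample mean, so E f = 0.5
empirical = np.mean(f - 0.5 >= t)      # Monte Carlo estimate of the tail probability

sum_c2 = n * (1.0 / n) ** 2            # each input moves the mean by at most c_i = 1/n
mcdiarmid = np.exp(-2.0 * t ** 2 / sum_c2)

print(f"Monte Carlo tail : {empirical:.4f}")
print(f"McDiarmid bound  : {mcdiarmid:.4f}")
```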