
    Evaluation of deep learning models using data-driven neural networks for nonlinear materials

    Get PDF
    Nonlinear materials are often difficult to model with classical methods such as the Finite Element Method: they may have a complex, and sometimes inaccurate, physical and mathematical description, or we simply do not know how to describe them in terms of relations between external and internal variables. In many disciplines, neural network methods have arisen as powerful tools for dealing with nonlinear problems. In this work, the recently developed concept of Physically-Guided Neural Networks with Internal Variables (PGNNIV) is applied to nonlinear materials, providing a tool to add physically meaningful constraints to deep neural networks from a model-free perspective. These networks outperform classical simulation methods in terms of computational power for predicting external and especially internal variables, since they are less computationally intensive and easily scalable. Furthermore, in comparison with classical neural networks, they filter numerical noise, converge faster, are less data demanding, and can have improved extrapolation capacity. In addition, as they are not based on conventional parametric models (model-free character), they reduce the time required to develop material models compared with methods such as Finite Elements. It is shown that the same PGNNIV is capable of achieving good predictions regardless of the nature of the elastic material considered (linear, or with hardening or softening behavior), being able to unravel the constitutive law of the material and explain its nature. The results show that PGNNIV is a useful tool for problems in solid mechanics, both for predicting the response to new load situations and for explaining the behavior of materials, placing the method within what is known as Explainable Artificial Intelligence (XAI).
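    As a rough illustration of the idea (a sketch, not the authors' implementation), the code below trains a small predictive network alongside an internal "explanatory" layer whose output is penalized for violating a known physical relation, here one-dimensional equilibrium between stress and applied load. The network sizes, names, and the toy constitutive data are all assumptions.

```python
# Minimal PGNNIV-style sketch (assumed architecture, not the paper's code):
# a predictive net maps load -> displacement, an internal layer predicts
# stress from strain, and a physics penalty ties the two together.
import torch
import torch.nn as nn

pred_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # load -> displacement
expl_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # strain -> stress (internal variable)

L = 1.0  # bar length for the toy 1D problem (assumed)
load = torch.linspace(0.0, 1.0, 64).unsqueeze(1)
disp_true = load + 0.3 * load**3  # synthetic nonlinear response (assumption)

opt = torch.optim.Adam(list(pred_net.parameters()) + list(expl_net.parameters()), lr=1e-3)
for _ in range(2000):
    disp = pred_net(load)
    strain = disp / L                           # kinematics (known physics)
    stress = expl_net(strain)                   # unmeasured internal variable
    loss_data = ((disp - disp_true) ** 2).mean()
    loss_phys = ((stress - load) ** 2).mean()   # equilibrium: stress balances load
    loss = loss_data + loss_phys
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, expl_net approximates the constitutive law stress = f(strain),
# which is how the internal layer can "unravel" the material behavior.
```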

    Deep Reinforcement Learning for the Design of Structural Topologies

    Get PDF
    Advances in machine learning algorithms and increased computational efficiencies have given engineers new capabilities and tools for engineering design. The presented work investigates using deep reinforcement learning (DRL), a subset of deep machine learning that teaches an agent to complete a task through accumulating experiences in an interactive environment, to design 2D structural topologies. Three unique structural topology design problems are investigated to validate DRL as a practical design automation tool to produce high-performing designs in structural topology domains. The first design problem attempts to find a gradient-free alternative to solving the compliance minimization topology optimization problem. In the proposed DRL environment, a DRL agent can sequentially remove elements from a starting solid material domain to form a topology that minimizes compliance. After each action, the agent receives feedback on its performance by evaluating how well the current topology satisfies the design objectives. The agent learned a generalized design strategy that produced topology designs with similar or better compliance minimization performance than traditional gradient-based topology optimization methods given various boundary conditions. The second design problem reformulates mechanical metamaterial unit cell design as a DRL task. The local unit cells of mechanical metamaterials are built by sequentially adding material elements according to a cubic Bézier curve methodology. The unit cells are built such that, when tessellated, they exhibit a targeted nonlinear deformation response under uniaxial compressive or tensile loading. Using a variational autoencoder for domain dimension reduction and a surrogate model for rapid deformation response prediction, the DRL environment was built to allow the agent to rapidly build mechanical metamaterials that exhibit a diverse array of deformation responses with variable degrees of nonlinearity. The third design problem expands on the second to train a DRL agent to design mechanical metamaterials with tailorable deformation and energy manipulation characteristics. The agent's design performance was validated by creating metamaterials with a thermoplastic polyurethane (TPU) constitutive material that increased or decreased hysteresis while exhibiting the compressive deformation response of expanded thermoplastic polyurethane (E-TPU). These optimized designs were additively manufactured and underwent experimental cyclic compressive testing. The results showed the E-TPU and metamaterial with E-TPU target properties were well aligned, underscoring the feasibility of designing mechanical metamaterials with customizable deformation and energy manipulation responses. Finally, the agent's generalized design capabilities were tested by designing multiple metamaterials with diverse desired loading deformation responses and specific hysteresis objectives. The combined success of these three design problems is critical in proving that a DRL agent can serve as a co-designer working with a human designer to achieve high-performing solutions in the domain of 2D structural topologies and is worthy of incorporation into a wide array of engineering design domains.
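    A bare-bones sketch of the kind of environment the first design problem describes (sequential element removal with compliance-based feedback) might look like the following. The grid size, reward shaping, and the stubbed compliance evaluation are assumptions for illustration, not the dissertation's implementation.

```python
# Sketch of a sequential element-removal environment for topology design.
# The compliance evaluation is stubbed out; a real setup would run a
# finite-element analysis of the current topology at each step.
import numpy as np

class TopologyEnv:
    def __init__(self, nx=8, ny=8):
        self.shape = (ny, nx)
        self.reset()

    def reset(self):
        self.material = np.ones(self.shape, dtype=bool)  # start from a solid domain
        self.prev_compliance = self._compliance()
        return self.material.copy()

    def _compliance(self):
        # Placeholder only: a real environment evaluates compliance via FEA.
        return 1.0 / max(self.material.sum(), 1)

    def step(self, action):
        iy, ix = np.unravel_index(action, self.shape)
        self.material[iy, ix] = False                    # agent removes one element
        c = self._compliance()
        reward = self.prev_compliance - c                # feedback on design objective
        self.prev_compliance = c
        done = self.material.sum() <= 0.5 * self.material.size  # stop at 50% volume
        return self.material.copy(), reward, done

env = TopologyEnv()
state = env.reset()
state, r, done = env.step(action=0)  # a trained policy network would pick the action
```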

    Onset of rigidity in glasses: from random to self-organized networks

    Full text link
    We review in this paper the signatures of a new elastic phase that is found in glasses with selected compositions. It is shown that, in contrast with random networks, where rigidity percolates at a single threshold, networks that are able to self-organize to avoid stress will remain in an almost stress-free state over a compositional interval, an intermediate phase, that is bounded by a flexible phase and a stressed rigid phase. We report the experimental signatures and describe the theoretical efforts that have been made to characterize the intermediate phase. We illustrate one of the methods used in more detail with the example of Group III chalcogenides and finally suggest further possible experimental signatures of self-organization.

    Comment: 27 pages, 6 figures; Proceedings of the Conference on Non-Crystalline Materials 10, to appear in Journal of Non-Crystalline Solids
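    The flexible/intermediate/stressed-rigid picture rests on Maxwell constraint counting: in a 3D covalent network, an atom of coordination r contributes r/2 bond-stretching and 2r - 3 bond-bending constraints, so floppy modes vanish near a mean coordination of 2.4. A small sketch of that bookkeeping follows; the Ge-Se composition is illustrative only, not from the paper.

```python
# Maxwell (Phillips-Thorpe) constraint counting for a covalent glass network.
# Floppy modes per atom: f = 3 - (r/2 + 2r - 3) = 6 - 5r/2, which is zero at r = 2.4.

def mean_coordination(composition):
    """composition: {element: (molar fraction, coordination)}; fractions sum to 1."""
    return sum(x * r for x, r in composition.values())

def floppy_modes_per_atom(r_mean):
    return 3.0 - (r_mean / 2.0 + 2.0 * r_mean - 3.0)

# Illustrative Ge_x Se_(1-x) glass with x = 0.15 (assumed values):
comp = {"Ge": (0.15, 4), "Se": (0.85, 2)}
r = mean_coordination(comp)    # 0.15*4 + 0.85*2 = 2.3
f = floppy_modes_per_atom(r)   # > 0 -> still in the flexible phase
print(f"<r> = {r:.2f}, floppy modes per atom = {f:.2f}")
```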

    Numerical Modelling of Red Blood Cell Structural Mechanics

    Get PDF
    Red blood cells (RBCs) are the most abundant cellular elements in blood, and their main function is oxygen delivery. Structurally, RBCs are highly deformable membrane-bounded liquid-core capsules. This deformability is critical to their function and is greatly affected by the RBC structural mechanics. Due to their small size, in vivo/in vitro studies of RBCs are often impossible, and numerical modelling stands out as a robust alternative approach for investigating the RBC. In recent years, spring-particle-based (SP) RBC modelling has become very popular due to its simplicity and extensive modelling capability compared with the conventional continuum-mechanics approach. SP-RBC models use closed spring networks to represent the membrane and an enclosed volume for the liquid core. Despite a number of successful applications, the suitability of the modelling is still questioned. In addition, since the development of the SP-RBC model, the spring network employed has typically been pre-stressed, which results in inaccurate estimation of the membrane mechanical properties. Also, the membrane bending is calculated from the angle between neighbouring triangle elements of the network, which makes the model incapable of representing complex membrane geometries. In light of these observations, an enhanced SP-RBC model is proposed. In this model, stress-free spring elements comprise the network and the bending is calculated from the membrane curvature. Through replication of three experimental tests, i.e. the optical tweezers test, vesicle transformation, and the stomatocyte-discocyte-echinocyte transformation, the accuracy and capability of the enhanced SP-RBC model are demonstrated.
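    As a rough sketch of the stress-free spring idea (an illustration, not the thesis code), the force on each membrane vertex from a linear spring network vanishes when every edge sits at its own rest length taken from the reference mesh, so the initial configuration carries no pre-stress. The stiffness value and the tiny mesh are assumptions.

```python
# Stress-free spring network: each edge's rest length is taken from the
# reference (undeformed) mesh, so the initial configuration exerts no force.
import numpy as np

def spring_forces(x, edges, rest_len, k=1.0):
    """x: (N,3) vertex positions; edges: (M,2) vertex index pairs;
    rest_len: (M,) per-edge rest lengths from the reference mesh."""
    f = np.zeros_like(x)
    for (i, j), L0 in zip(edges, rest_len):
        d = x[j] - x[i]
        L = np.linalg.norm(d)
        fij = k * (L - L0) * d / L    # linear spring, zero force at L == L0
        f[i] += fij
        f[j] -= fij
    return f

# Tiny triangular 'membrane patch' (illustrative):
x_ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
edges = np.array([[0, 1], [1, 2], [2, 0]])
rest = np.linalg.norm(x_ref[edges[:, 1]] - x_ref[edges[:, 0]], axis=1)
print(spring_forces(x_ref, edges, rest))  # ~zero: no pre-stress at reference
```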

    Reproducibility of Free Energy Calculations Across Different Molecular Simulation Software

    Get PDF
    Alchemical free energy calculations are an increasingly important modern simulation technique. Contemporary molecular simulation software such as AMBER, CHARMM, GROMACS and SOMD include support for the method. Implementation details vary among those codes, but users expect reliability and reproducibility, i.e. for a given molecular model and set of forcefield parameters, comparable free energies should be obtained within statistical bounds regardless of the code used. Relative alchemical free energy (RAFE) simulation is increasingly used to support molecule discovery projects, yet the reproducibility of the methodology has been less well tested than its absolute counterpart. Here we present RAFE calculations of hydration free energies for a set of small organic molecules and demonstrate that free energies can be reproduced to within about 0.2 kcal/mol with the aforementioned codes. Achieving this level of reproducibility requires considerable attention to detail and package-specific simulation protocols, and no universally applicable protocol emerges. The benchmarks and protocols reported here should be useful for the community to validate new and future versions of software for free energy calculations.
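    For context on what such codes compute, the simplest alchemical estimator is Zwanzig's exponential averaging, ΔF = -kT ln⟨exp(-ΔU/kT)⟩; production workflows in the packages above use more robust estimators such as BAR/MBAR, but the sketch below (with synthetic ΔU samples, an assumption) shows the basic quantity whose reproducibility is at stake.

```python
# Zwanzig (free energy perturbation) estimator: dF = -kT * ln <exp(-dU/kT)>_0,
# applied to synthetic energy-difference samples. Real workflows post-process
# simulation output and prefer BAR/MBAR for better phase-space overlap handling.
import numpy as np

kT = 0.593  # kcal/mol at ~298 K
rng = np.random.default_rng(0)
dU = rng.normal(loc=1.0, scale=0.5, size=5000)  # U_target - U_reference (synthetic)

# Log-sum-exp form to avoid overflow in the exponential average.
log_avg = np.logaddexp.reduce(-dU / kT) - np.log(len(dU))
delta_F = -kT * log_avg
print(f"Estimated Delta F = {delta_F:.3f} kcal/mol")
```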

    Improving protein docking with binding site prediction

    Get PDF
    Protein-protein and protein-ligand interactions are fundamental as many proteins mediate their biological function through these interactions. Many important applications follow directly from the identification of residues in the interfaces between protein-protein and protein-ligand interactions, such as drug design, protein mimetic engineering, elucidation of molecular pathways, and understanding of disease mechanisms. The identification of interface residues can also guide the docking process to build the structural model of protein-protein complexes. This dissertation focuses on developing computational approaches for protein-ligand and protein-protein binding site prediction and applying these predictions to improve protein-protein docking. First, we develop an automated approach LIGSITEcs to predict protein-ligand binding sites, based on the notion of surface-solvent-surface events and the degree of conservation of the involved surface residues. We compare our algorithm to four other approaches, LIGSITE, CAST, PASS, and SURFNET, and evaluate all on a dataset of 48 unbound/bound structures and 210 bound structures. LIGSITEcs performs slightly better than the other tools and achieves success rates of 71% and 75%, respectively. Second, for protein-protein binding sites, we develop metaPPI, a meta server for interface prediction. MetaPPI combines results from a number of tools, such as PPI_Pred, PPISP, PINUP, Promate, and SPPIDER, which predict enzyme-inhibitor interfaces with success rates of 23% to 55% and other interfaces with 10% to 28% on a benchmark dataset of 62 complexes. After refinement, metaPPI significantly improves prediction success rates to 70% for enzyme-inhibitor and 44% for other interfaces. Third, for protein-protein docking, we develop a FFT-based docking algorithm and system BDOCK, which includes specific scoring functions for specific types of complexes. BDOCK uses family-based residue interface propensities as a scoring function and obtains improvement factors of 4-30 for enzyme-inhibitor and 4-11 for antibody-antigen complexes in two specific SCOP families. Furthermore, the degrees of buriedness of surface residues are integrated into BDOCK, which improves the shape discriminator for enzyme-inhibitor complexes. The predicted interfaces from metaPPI are integrated as well, either during docking or after docking. The evaluation results show that reliable interface predictions improve the discrimination between near-native solutions and false positives. Finally, we propose an implicit method to deal with the flexibility of proteins by softening the surface, to improve docking for non-enzyme-inhibitor complexes.
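    The core trick behind FFT-based docking systems like BDOCK is to score all rigid translations of a ligand grid against a receptor grid at once as a cross-correlation computed in Fourier space. The sketch below shows that single step on toy occupancy grids; it leaves out rotational sampling, the surface/interior weighting of real shape complementarity scores, and BDOCK's propensity-based scoring, and the grid shapes and values are assumptions.

```python
# FFT-based rigid docking core: the correlation of receptor and ligand
# occupancy grids over all translations, computed in one pass via FFTs.
import numpy as np

def fft_translation_scores(receptor, ligand):
    """Both inputs are 3D grids; returns a score for every (periodic) translation."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand, s=receptor.shape)
    return np.real(np.fft.ifftn(np.conj(R) * L))

# Toy grids (assumed shapes/values, not a real discretization scheme):
receptor = np.zeros((16, 16, 16)); receptor[4:8, 4:8, 4:8] = 1.0
ligand = np.zeros((16, 16, 16));   ligand[0:4, 0:4, 0:4] = 1.0

scores = fft_translation_scores(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation (grid units):", best)
```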

    Chance and Necessity in Evolution: Lessons from RNA

    Full text link
    The relationship between sequences and secondary structures or shapes in RNA exhibits robust statistical properties summarized by three notions: (1) the notion of a typical shape (that among all sequences of fixed length certain shapes are realized much more frequently than others), (2) the notion of shape space covering (that all typical shapes are realized in a small neighborhood of any random sequence), and (3) the notion of a neutral network (that sequences folding into the same typical shape form networks that percolate through sequence space). Neutral networks loosen the requirements on the mutation rate for selection to remain effective. The original (genotypic) error threshold has to be reformulated in terms of a phenotypic error threshold. With regard to adaptation, neutrality has two seemingly contradictory effects: it acts as a buffer against mutations, ensuring that a phenotype is preserved. Yet it is deeply enabling, because it permits evolutionary change to occur by allowing the sequence context to vary silently until a single point mutation can become phenotypically consequential. Neutrality also influences the predictability of adaptive trajectories in seemingly contradictory ways. On the one hand it increases the uncertainty of their genotypic trace. At the same time neutrality structures the access from one shape to another, thereby inducing a topology among RNA shapes which permits a distinction between continuous and discontinuous shape transformations. To the extent that adaptive trajectories must undergo such transformations, their phenotypic trace becomes more predictable.

    Comment: 37 pages, 14 figures; 1998 CNLS conference; high quality figures at http://www.santafe.edu/~walte
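    A notion like neutrality can be probed numerically: fold a sequence, then count the one-point mutants that keep the same secondary structure. The sketch below assumes the ViennaRNA Python bindings (RNA.fold) are available; the sequence and the use of minimum-free-energy structures as "shapes" are illustrative choices, not the paper's protocol.

```python
# Estimate the neutrality of a sequence: the fraction of one-point mutants
# whose minimum-free-energy secondary structure equals the original shape.
# Assumes the ViennaRNA Python bindings are installed (import RNA).
import RNA

def neutrality(seq):
    shape, _ = RNA.fold(seq)          # (structure, mfe) for the wild type
    neutral = total = 0
    for i in range(len(seq)):
        for base in "ACGU":
            if base == seq[i]:
                continue
            mutant = seq[:i] + base + seq[i + 1:]
            m_shape, _ = RNA.fold(mutant)
            neutral += (m_shape == shape)
            total += 1
    return neutral / total            # high values signal a dense neutral network

seq = "GGGAAACGCUCCCAAAGGGAGCGUUUCCC"  # illustrative sequence (assumption)
print(f"fraction of neutral one-point mutants: {neutrality(seq):.2f}")
```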