
    Don Arturo E. Salazar Valencia, master researcher and visionary


    Formulation, implementation and validation of a scalar damage model for brittle materials applied to three-dimensional solid elements

    Continuum Damage Mechanics describes the progressive degradation of material properties through a phenomenological model. This work presents the formulation, implementation and validation of a scalar damage model applied to three-dimensional solid elements. It is a highly versatile model defined by a failure surface and a scalar damage variable. Isotropic elastic materials with stiffness degradation and equal tensile and compressive strength limits can be simulated with this model. Four material parameters, obtainable from a classical stress-strain test, define the model. The model is implemented in the ABAQUS software through a user-defined UMAT subroutine. The non-linear equilibrium equations are solved with an implicit algorithm based on the Backward Euler method. The validation under tensile stress shows an adequate correlation between the numerical and experimental results, with a 6% dispersion in the dissipated energy. Finally, an illustrative application example is presented. The results show that the model is a simple yet powerful tool for the numerical analysis of brittle materials.
    Fil: González del Solar, Gerardo Andrés. Universidad Tecnológica Nacional. Facultad Regional Mendoza. Centro Regional de Desarrollos Tecnológicos para la Construcción, Sismología e Ingeniería Sísmica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina
    Fil: Martín, P. Universidad Tecnológica Nacional. Facultad Regional Mendoza. Centro Regional de Desarrollos Tecnológicos para la Construcción, Sismología e Ingeniería Sísmica; Argentina
    Fil: Maldonado, N. Universidad Tecnológica Nacional. Facultad Regional Mendoza. Centro Regional de Desarrollos Tecnológicos para la Construcción, Sismología e Ingeniería Sísmica; Argentina
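
    A minimal sketch of a generic isotropic scalar damage update of the kind described above (one failure surface, one scalar damage variable d, exponential softening), written in Python for illustration only: the parameter set (E, nu, f_t, A), the energy-norm equivalent stress and the softening law are assumptions, not the paper's exact UMAT formulation.

import numpy as np

def elastic_stiffness(E, nu):
    """6x6 isotropic elasticity matrix in Voigt notation (engineering shear strains)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[np.arange(3), np.arange(3)] += 2 * mu
    C[np.arange(3, 6), np.arange(3, 6)] = mu
    return C

def damage_update(strain, r_old, E, nu, f_t, A):
    """Strain-driven update: returns the degraded stress and the updated history variable."""
    C = elastic_stiffness(E, nu)
    sigma_eff = C @ strain                       # effective (undamaged) stress
    tau = np.sqrt(max(strain @ sigma_eff, 0.0))  # energy-norm equivalent stress, symmetric in tension and compression
    r0 = f_t / np.sqrt(E)                        # initial threshold set by the tensile strength
    r = max(r_old, tau)                          # irreversible loading threshold (failure surface)
    d = 0.0 if r <= r0 else 1.0 - (r0 / r) * np.exp(A * (1.0 - r / r0))  # exponential softening
    return (1.0 - d) * sigma_eff, r

    In a finite-element setting an update of this kind is evaluated at every integration point, which is the role a UMAT subroutine plays in ABAQUS.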

    Bridging boolean and quantitative synthesis using smoothed proof search

    We present a new technique for parameter synthesis under boolean and quantitative objectives. The input to the technique is a "sketch" --- a program with missing numerical parameters --- and a probabilistic assumption about the program's inputs. The goal is to automatically synthesize values for the parameters such that the resulting program satisfies: (1) a boolean specification, which states that the program must meet certain assertions, and (2) a quantitative specification, which assigns a real-valued rating to every program and which the synthesizer is expected to optimize. Our method --- called smoothed proof search --- reduces this task to a sequence of unconstrained smooth optimization problems that are then solved numerically. By iteratively solving these problems, we obtain parameter values that get closer and closer to meeting the boolean specification; at the limit, we obtain values that provably meet the specification. The approximations are computed using a new notion of smoothing for program abstractions, where an abstract transformer is approximated by a function that is continuous according to a metric over abstract states. We present a prototype implementation of our synthesis procedure, and experimental results on two benchmarks from the embedded control domain. The experiments demonstrate the benefits of smoothed proof search over an approach that does not meet the boolean and quantitative synthesis goals simultaneously.
    National Science Foundation (U.S.) (NSF Award #1162076)
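
    A toy illustration of the overall idea (not the paper's abstraction-based smoothing): the hard 0/1 assertion penalty of a one-parameter sketch is replaced by a smooth surrogate whose sharpness beta grows across iterations, and each smoothed problem is solved numerically. The sketch program, the beta schedule and all names below are invented for illustration.

import numpy as np
from scipy.optimize import minimize

def run_sketch(gain, disturbance):
    """Hypothetical sketch: a one-step controller with an unknown gain; returns the tracking error."""
    target, state = 20.0, 20.0 + disturbance
    return abs(target - (state - gain * (state - target)))

def smoothed_objective(gain, inputs, beta):
    errors = np.array([run_sketch(gain[0], d) for d in inputs])
    # Boolean spec: error <= 1.0 on every input.  Softplus gives a smooth surrogate
    # that approaches the hard penalty max(0, error - 1) as beta grows.
    penalty = np.sum(np.log1p(np.exp(beta * (errors - 1.0)))) / beta
    rating = np.mean(errors)            # quantitative spec: minimize the average error
    return 100.0 * penalty + rating

inputs = np.random.default_rng(0).normal(0.0, 5.0, size=200)  # probabilistic input assumption
gain = np.array([0.0])
for beta in [1.0, 10.0, 100.0]:         # progressively sharper approximations
    gain = minimize(smoothed_objective, gain, args=(inputs, beta),
                    method="Nelder-Mead").x
print("synthesized gain:", gain[0])

    As beta increases, the minimizers approach parameter values that satisfy the assertion on every sampled input while keeping the quantitative rating low.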

    Abstract Learning Frameworks for Synthesis

    We develop abstract learning frameworks (ALFs) for synthesis that embody the principles of CEGIS (counterexample-guided inductive synthesis) strategies that have become widely applicable in recent years. Our framework defines a general abstract framework of iterative learning, based on a hypothesis space that captures the synthesized objects, a sample space that forms the space on which induction is performed, and a concept space that abstractly defines the semantics of the learning process. We show that a variety of synthesis algorithms in the current literature can be embedded in this general framework. While studying these embeddings, we also generalize some of the synthesis problems these instances address, resulting in new ways of looking at synthesis problems using learning. We also investigate convergence issues for the general framework, and exhibit three recipes for convergence in finite time. The first two recipes generalize current techniques for convergence used by existing synthesis engines. The third is a more involved technique for which we know of no existing instantiation, and we instantiate it to concrete synthesis problems.
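
    For readers unfamiliar with CEGIS, the skeleton that this abstract framework generalizes looks roughly as follows; the learner/verifier pair below is a deliberately trivial stand-in (synthesizing one integer constant), not one of the paper's instantiations.

# Generic CEGIS-style loop: a learner proposes a hypothesis from a growing
# sample set, and a verifier either accepts it or returns a counterexample
# that is added to the samples.
def cegis(learner, verifier, max_rounds=100):
    samples = []
    for _ in range(max_rounds):
        hypothesis = learner(samples)          # induction over the sample space
        counterexample = verifier(hypothesis)  # check against the full specification
        if counterexample is None:
            return hypothesis                  # hypothesis meets the spec
        samples.append(counterexample)         # refine with the new sample
    return None                                # no convergence within the budget

# Toy instantiation: synthesize a constant c with f(x) = x + c such that
# f(x) >= x + 3 for all x in 0..99.
def learner(samples):
    return max((lo for (_, lo) in samples), default=0)   # smallest c consistent with all samples

def verifier(c):
    for x in range(100):
        if x + c < x + 3:
            return (x, 3)        # counterexample: at input x, c must be at least 3
    return None

print(cegis(learner, verifier))  # -> 3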

    Temporal Stream Logic: Synthesis beyond the Bools

    Reactive systems that operate in environments with complex data, such as mobile apps or embedded controllers with many sensors, are difficult to synthesize. Synthesis tools usually fail for such systems because the state space resulting from the discretization of the data is too large. We introduce TSL, a new temporal logic that separates control and data. We provide a CEGAR-based synthesis approach for the construction of implementations that are guaranteed to satisfy a TSL specification for all possible instantiations of the data processing functions. TSL provides an attractive trade-off for synthesis. On the one hand, synthesis from TSL, unlike synthesis from standard temporal logics, is undecidable in general. On the other hand, however, synthesis from TSL is scalable, because it is independent of the complexity of the handled data. Among other benchmarks, we have successfully synthesized a music player Android app and a controller for an autonomous vehicle in the Open Racing Car Simulator (TORCS).
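
    A rough sketch of the control/data separation that TSL enables, with Python stand-ins: the synthesized object is a control strategy over uninterpreted predicate and update terms, and it must behave correctly for every instantiation of those terms. The music-player flavour and all names below are illustrative only, not the paper's benchmark.

def make_controller(is_play_pressed, is_playing, resume, pause, keep):
    """Returns a step function whose control structure is fixed, but which works
    for ANY instantiation of the predicate/update terms passed in."""
    def step(state, event):
        # Control structure (what synthesis would produce): toggle playback on a button press.
        if is_play_pressed(event):
            return pause(state) if is_playing(state) else resume(state)
        return keep(state)
    return step

# One possible instantiation of the data-processing functions:
step = make_controller(
    is_play_pressed=lambda e: e == "play_button",
    is_playing=lambda s: s["playing"],
    resume=lambda s: {**s, "playing": True},
    pause=lambda s: {**s, "playing": False},
    keep=lambda s: s,
)
print(step({"playing": False}, "play_button"))   # -> {'playing': True}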

    Are solar neutrino oscillations robust?

    The robustness of the large mixing angle (LMA) oscillation (OSC) interpretation of the solar neutrino data is considered in a more general framework where non-standard neutrino interactions (NSI) are present. Such interactions may be regarded as a generic feature of models of neutrino mass. The 766.3 ton-yr data sample of the KamLAND collaboration is included in the analysis, paying attention to the background from the reaction ¹³C(α,n)¹⁶O. Similarly, the latest solar neutrino fluxes from the SNO collaboration are included. In addition to the solution which holds in the absence of NSI (LMA-I) there is a 'dark-side' solution (LMA-D) with sin^2 theta_Sol = 0.70, essentially degenerate with the former, and another light-side solution (LMA-0) allowed only at 97% CL. More precise KamLAND reactor measurements will not resolve the ambiguity in the determination of the solar neutrino mixing angle theta_Sol, as they are expected to constrain mainly Delta m^2. We comment on the complementary role of atmospheric, laboratory (e.g. CHARM) and future solar neutrino experiments in lifting the degeneracy between the LMA-I and LMA-D solutions. In particular, we show how the LMA-D solution induced by the simplest NSI between neutrinos and down-type quarks only is in conflict with the combination of current atmospheric data and data from the CHARM experiment. We also note that settling the issue of robustness of the oscillation picture in the most general case will require further experiments, such as those involving low-energy solar neutrinos.
    Comment: 13 pages, 6 figures; final version to appear in JHEP
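
    As background (this is the standard form used in this literature, reproduced here as a reader's aid rather than taken from the abstract), two-flavour analyses of this kind evolve the solar neutrino state with a matter Hamiltonian of the schematic form

    H = \frac{\Delta m^2_{\mathrm{sol}}}{4E}
        \begin{pmatrix} -\cos 2\theta_{\mathrm{sol}} & \sin 2\theta_{\mathrm{sol}} \\ \sin 2\theta_{\mathrm{sol}} & \cos 2\theta_{\mathrm{sol}} \end{pmatrix}
      + \sqrt{2}\, G_F N_e(r) \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
      + \sqrt{2}\, G_F N_f(r) \begin{pmatrix} 0 & \varepsilon \\ \varepsilon & \varepsilon' \end{pmatrix},

    where the last term is the NSI contribution, with epsilon and epsilon' the effective flavour-changing and non-universal couplings to a fermion f (down-type quarks in the case singled out above). Because this term can partly compensate the standard matter term, a dark-octant (LMA-D) solution can become nearly degenerate with the standard LMA-I one.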

    Learning Moore Machines from Input-Output Traces

    The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification-in-the-limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy, introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
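
    A schematic reading of the PTAP step described above, in Python: the traces are folded into a prefix-tree Moore machine, which is then completed with self-loops on missing inputs. The data structures and the toy example are ours, not the paper's, and state merging (MooreMI) is not shown.

def ptap(traces, alphabet):
    """traces: list of (inputs, outputs) with len(outputs) == len(inputs) + 1 (Moore semantics)."""
    # Each state is a prefix of inputs (a tuple); every state carries an output.
    delta, output = {}, {}
    for inputs, outputs in traces:
        prefix = ()
        output.setdefault(prefix, outputs[0])      # output of the initial state
        for i, a in enumerate(inputs):
            nxt = prefix + (a,)
            delta[(prefix, a)] = nxt               # prefix-tree transition
            output.setdefault(nxt, outputs[i + 1])
            prefix = nxt
    for state in output:                           # completion with self-loops
        for a in alphabet:
            delta.setdefault((state, a), state)
    return delta, output

# Tiny example: a toggle that outputs its current bit and flips on input 1.
traces = [((1, 1), (0, 1, 0)), ((0,), (0, 0))]
delta, output = ptap(traces, alphabet={0, 1})
print(output[delta[((), 1)]])   # -> 1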

    Sociobiological Control of Plasmid Copy Number

    Background:
All known mechanisms and genes responsible for the regulation of plasmid replication lie with the plasmid rather than the chromosome. It is therefore possible for copy-up mutants to arise. Copy-up mutants will have a within-host selective advantage, which would eventually result in instability of the bacteria-plasmid association. In spite of this possibility, low copy number plasmids appear to exist stably in host populations. We examined this paradox using a computer simulation model.

Model:
Our multilevel selection model assumes a wild type with tightly regulated replication that ensures a low copy number. A mutant with slightly relaxed replication regulation can act as a “cheater” or “selfish” plasmid and enjoy greater within-host fitness. However, the host of a cheater plasmid has to pay a greater cost. As a result, in host-level competition, a host cell carrying a low copy number plasmid has greater fitness. Furthermore, another mutant that had lost the genes required for conjugation was introduced into the model; this non-conjugal mutant was assumed to undergo conjugal transfer in the presence of another conjugal plasmid in the host cell.

Results:
The simulations showed that if the cost of carrying a plasmid was low, the copy-up mutant could drive the wild type to extinction or to very low frequencies. Consequently, another mutant with a still higher copy number could invade the first invader. This process could result in an ever-increasing copy number. However, above a certain copy number, within-host selection was overcompensated by host-level selection, leading to a rock-paper-scissors (RPS) like situation. The RPS situation allowed the coexistence of high and low copy number plasmids. The non-conjugal “hypercheaters” could further arrest the copy numbers at a substantially lower level.

Conclusions:
These sociobiological interactions might explain the stability of copy numbers better than the molecular mechanisms of replication regulation alone.
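
A deliberately minimal Python sketch of the two opposing selection levels in the model above: within a host the copy-up mutant out-replicates the wild type and can spread by conjugation, while host-level reproduction declines with the total plasmid load. All parameter values and update rules here are invented for illustration and are not the simulation reported in the paper.

import random

def step(hosts, cost=0.02, transfer=0.05, n_hosts=200):
    # Within-host replication: the copy-up mutant relaxes copy-number control.
    hosts = [(wt, mut * 1.3) for wt, mut in hosts]
    # Conjugative spread: a mutant-bearing host may seed one random host per generation.
    for wt, mut in hosts:
        if mut > 0 and random.random() < transfer:
            i = random.randrange(len(hosts))
            w2, m2 = hosts[i]
            hosts[i] = (w2, max(m2, 1.0))
    # Host-level selection: offspring number declines with the total plasmid load.
    offspring = []
    for wt, mut in hosts:
        fitness = max(0.0, 1.0 - cost * (wt + mut))
        offspring.extend([(wt, mut)] * int(round(4 * fitness)))
    return random.sample(offspring, min(n_hosts, len(offspring)))

hosts = [(5.0, 0.0)] * 195 + [(5.0, 1.0)] * 5   # regulated wild type plus a few copy-up mutants
for _ in range(60):
    hosts = step(hosts)
carriers = sum(1 for _, m in hosts if m > 0) / max(1, len(hosts))
print(f"fraction of hosts still carrying the copy-up mutant: {carriers:.2f}")

In this toy version a low cost lets the copy-up mutant spread while a high cost favours the low-copy wild type, mirroring the within-host versus host-level tension described in the Results.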

    Verified lifting of stencil computations

    This paper demonstrates a novel combination of program synthesis and verification to lift stencil computations from low-level Fortran code to a high-level summary expressed using a predicate language. The technique is sound and mostly automated, and leverages counter-example guided inductive synthesis (CEGIS) to find provably correct translations. Lifting existing code to a high-performance description language has a number of benefits, including maintainability and performance portability. For example, our experiments show that the lifted summaries can enable domain specific compilers to do a better job of parallelization as compared to an off-the-shelf compiler working on the original code, and can even support fully automatic migration to hardware accelerators such as GPUs. We have implemented verified lifting in a system called STNG and have evaluated it using microbenchmarks, mini-apps, and real-world applications. We demonstrate the benefits of verified lifting by first automatically summarizing Fortran source code into a high-level predicate language, and subsequently translating the lifted summaries into Halide, with the translated code achieving median performance speedups of 4.1X and up to 24X for non-trivial stencils as compared to the original implementation.
    United States. Department of Energy. Office of Science (Award DE-SC0008923)
    United States. Department of Energy. Office of Science (Award DE-SC0005288)
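
    An illustrative sketch of what "lifting" means here, with Python standing in both for the low-level Fortran loop nest and for the high-level summary (the STNG predicate language and the Halide backend are not shown): the lifted form is a point-wise postcondition that a domain-specific compiler can parallelize freely, and that verification would prove for every input rather than the spot check below.

import numpy as np

def low_level(a):
    """Low-level form: an explicit loop nest, as it might appear in legacy code."""
    n = len(a)
    out = a.copy()
    for i in range(1, n - 1):
        out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out

def lifted_summary(a, out):
    """Lifted postcondition: a point-wise predicate over all interior indices."""
    n = len(a)
    return all(out[i] == (a[i - 1] + a[i] + a[i + 1]) / 3.0 for i in range(1, n - 1)) \
        and out[0] == a[0] and out[n - 1] == a[n - 1]

# Synthesis plus verification would establish this for all inputs; here we only spot-check.
a = np.random.default_rng(1).random(64)
assert lifted_summary(a, low_level(a))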