
    On Optimization Modulo Theories, MaxSMT and Sorting Networks

    Optimization Modulo Theories (OMT) is an extension of SMT which allows for finding models that optimize given objectives. (Partial weighted) MaxSMT --or, equivalently, OMT with pseudo-Boolean objective functions, OMT+PB-- is a very relevant strict subcase of OMT. We classify existing approaches for MaxSMT or OMT+PB into two groups: MaxSAT-based approaches exploit the efficiency of state-of-the-art MaxSAT solvers, but they are special-purpose and not always applicable; OMT-based approaches are general-purpose, but they suffer from intrinsic inefficiencies on MaxSMT/OMT+PB problems. We identify a major source of such inefficiencies, and we address it by enhancing OMT by means of bidirectional sorting networks. We implemented this idea on top of the OptiMathSAT OMT solver. We ran an extensive empirical evaluation on a variety of problems, comparing MaxSAT-based and OMT-based techniques, with and without sorting networks, implemented on top of OptiMathSAT and νZ. The results support the effectiveness of this idea, and provide interesting insights about the different approaches. Comment: 17 pages, submitted at TACAS.
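    As a rough sketch of the idea (illustrative only, not OptiMathSAT's actual encoding): a sorting network over Boolean values, such as Batcher's odd-even mergesort, sorts the input bits with a fixed pattern of comparators, so the k-th output directly expresses "at least k inputs are true". This gives the solver a unary view of a pseudo-Boolean sum. Input length is assumed to be a power of two.

```python
# Boolean comparator: puts the "larger" (True) value first, so a full
# comparator network sorts bit vectors in descending order. After sorting,
# output[k-1] is True iff at least k inputs are True -- the unary encoding
# that makes cardinality objectives cheap to branch on.

def comparator(a, b):
    return a or b, a and b

def oe_merge(xs):
    """Batcher odd-even merge; both halves of xs (power-of-two length)
    must already be sorted descending."""
    n = len(xs)
    if n == 1:
        return xs
    if n == 2:
        hi, lo = comparator(xs[0], xs[1])
        return [hi, lo]
    evens = oe_merge(xs[0::2])
    odds = oe_merge(xs[1::2])
    merged = [evens[0]]
    for i in range(1, n // 2):
        hi, lo = comparator(odds[i - 1], evens[i])
        merged += [hi, lo]
    merged.append(odds[-1])
    return merged

def oe_sort(xs):
    """Sort Booleans descending (True first) with odd-even mergesort."""
    n = len(xs)
    if n <= 1:
        return xs
    half = n // 2
    return oe_merge(oe_sort(xs[:half]) + oe_sort(xs[half:]))

bits = [False, True, False, True, True, False, False, True]
sorted_bits = oe_sort(bits)
# "At least k inputs true" can now be read off positionally:
at_least_3 = sorted_bits[2]
```

The same comparator pattern, expressed over SMT Boolean terms instead of Python values, yields the network of clauses an OMT solver can add.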

    Insights from the classical MD simulations

    Salt bridges and ionic interactions play an important role in protein stability, protein-protein interactions, and protein folding. Here, we provide classical MD simulations of the structure and IR signatures of the arginine (Arg)–glutamate (Glu) salt bridge. The Arg-Glu model is based on the infinite polyalanine antiparallel two-stranded β-sheet structure. The 1 μs NPT simulations show that it preferably exists as a salt bridge (a contact ion pair). Bidentate (the end-on and side-on structures) and monodentate (the backside structure) configurations are identified [Donald et al., Proteins 79, 898–915 (2011)]. These structures are stabilized by short +N–H⋯O− bonds. Their relative stability depends on the force field used in the MD simulations. The side-on structure is the most stable with the OPLS-AA force field; with AMBER ff99SB-ILDN, the backside structure is the most stable. Compared with experimental data, simulations using the OPLS all-atom (OPLS-AA) force field describe the stability of the salt bridge structures quite realistically: it decreases in the order side-on > end-on > backside. The most stable side-on structure persists for several nanoseconds; the less stable backside structure exists for a few tenths of a nanosecond. Several short-lived species (solvent-shared ion pairs, completely solvent-separated ionic groups, etc.) are also identified; their lifetime is a few tens of picoseconds or less. The conformational flexibility of the amino acids forming the salt bridge is investigated. The spectral signature of the Arg-Glu salt bridge is the IR-intensive band around 2200 cm−1, caused by the asymmetric stretching vibrations of the +N–H⋯O− fragment. The results of the present paper suggest that infrared spectroscopy in the 2000–2800 cm−1 region may be a rapid and quantitative method for studying salt bridges in peptides and ionic interactions between proteins. This region is usually not considered in spectroscopic studies of peptides and proteins.

    Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition

    Cylindrical algebraic decomposition (CAD) is a key tool in computational algebraic geometry, particularly for quantifier elimination over real-closed fields. When using CAD, there is often a choice of the ordering placed on the variables. This choice can be important: some problems are infeasible with one variable ordering but easy with another. Machine learning is the process of fitting a computer model to a complex function based on properties learned from measured data. In this paper we use machine learning (specifically a support vector machine) to select between heuristics for choosing a variable ordering, outperforming each of the separate heuristics. Comment: 16 pages.
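    The abstract does not name the competing heuristics, but a degree-based ordering heuristic in the spirit of Brown's heuristic can be sketched as follows (hypothetical illustration; the polynomial representation and tie-breaking rule here are my assumptions, not the paper's):

```python
# Hypothetical sketch of a Brown-style variable-ordering heuristic for CAD.
# Each input polynomial is abstracted as a dict {variable: degree of that
# variable in the polynomial}. The heuristic orders variables by the
# maximum degree in which they occur, breaking ties by how many
# polynomials contain them (lower is "easier", so it comes first).

def brown_order(polys, variables):
    def key(v):
        degs = [p.get(v, 0) for p in polys]
        return (max(degs), sum(1 for d in degs if d > 0))
    return sorted(variables, key=key)

polys = [
    {"x": 2, "y": 1},   # e.g. x^2*y - 1
    {"x": 1, "z": 3},   # e.g. x*z^3 + 2
    {"y": 1, "z": 1},   # e.g. y*z
]
order = brown_order(polys, ["x", "y", "z"])
```

A machine-learned selector, as in the paper, would compute features like these for each problem and let a trained SVM pick which heuristic's ordering to trust.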

    From LTL and Limit-Deterministic Büchi Automata to Deterministic Parity Automata

    Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi automata (LDBA) to DPA, and show that it can be concatenated with a recent efficient translation from LTL to LDBA to yield a double exponential, "Safraless" LTL-to-DPA construction. We also report on an implementation, a comparison with the SPOT library, and performance on several sets of formulas, including instances from the 2016 SyntComp competition.
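    The parity acceptance condition itself is simple to check on ultimately periodic inputs; a minimal sketch (illustrative of the min-even parity condition in general, not tied to the paper's construction):

```python
# Check whether a deterministic parity automaton accepts the lasso word
# stem . cycle^omega under the min-even parity condition: the run is
# accepting iff the minimum priority visited infinitely often is even.
# delta: dict (state, letter) -> state; priority: dict state -> int.

def accepts_lasso(delta, priority, start, stem, cycle):
    state = start
    for a in stem:
        state = delta[(state, a)]
    # Pump the cycle; the state at the cycle entry must repeat after at
    # most |Q| pumpings, since the automaton is deterministic.
    history = []   # entry state before each pump
    prios = []     # priorities seen during each pump
    while state not in history:
        history.append(state)
        seen = set()
        for a in cycle:
            state = delta[(state, a)]
            seen.add(priority[state])
        prios.append(seen)
    # Only pumps from the first repetition onward recur infinitely often.
    i = history.index(state)
    inf_prios = set().union(*prios[i:])
    return min(inf_prios) % 2 == 0
```

For example, a two-state DPA that moves from state 0 to a sink state 1 on letter 'a' accepts (a)^omega exactly when state 1 carries an even priority.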

    On Security and Sparsity of Linear Classifiers for Adversarial Settings

    Machine-learning techniques are widely used in security-related applications, such as spam and malware detection. However, in such settings they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection. In this work, we focus on the vulnerability of linear classifiers to evasion attacks. This is a relevant problem, as linear classifiers are increasingly used in embedded systems and mobile devices for their low processing time and memory requirements. We exploit recent findings in robust optimization to investigate the link between regularization and the security of linear classifiers, depending on the type of attack. We also analyze the relationship between the sparsity of feature weights, which is desirable for reducing processing cost, and the security of linear classifiers. We further propose a novel octagonal regularizer that allows us to achieve a proper trade-off between them. Finally, we empirically show how this regularizer can improve classifier security and sparsity in real-world application examples, including spam and malware detection.
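    A sketch of the intuition, under the assumption (mine, based on the abstract, not a quoted definition) that the octagonal regularizer blends the l1 norm, which promotes sparsity, with the l-infinity norm, which promotes evenly spread weights and hence robustness to sparse evasion attacks:

```python
# Assumed form of an "octagonal" regularizer: a convex combination of the
# l1 and l-infinity norms of the weight vector. rho in [0, 1] trades
# sparsity (rho -> 0, pure l1) against evenly spread, harder-to-evade
# weights (rho -> 1, pure l-infinity).

def octagonal(w, rho=0.5):
    l1 = sum(abs(x) for x in w)
    linf = max(abs(x) for x in w)
    return (1.0 - rho) * l1 + rho * linf

# Two weight vectors with the same l1 mass: one concentrated on a single
# feature (easy to evade by perturbing that feature), one spread out.
w_sparse = [5.0, 0.0, 0.0, 0.0]
w_even = [1.25, 1.25, 1.25, 1.25]
# For any rho > 0 the spread-out vector receives the smaller penalty,
# so the regularizer nudges training toward more evenly spread weights.
penalty_sparse = octagonal(w_sparse, rho=0.5)  # 5.0
penalty_even = octagonal(w_even, rho=0.5)      # 3.125
```

The name "octagonal" fits this picture: in 2D, the unit ball of such a blend of l1 and l-infinity norms is an octagon.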

    Evolving rules for document classification

    We describe a novel method for using Genetic Programming to create compact classification rules based on combinations of N-grams (character strings). Genetic programs acquire fitness by producing rules that are effective classifiers in terms of precision and recall when evaluated against a set of training documents. We describe a set of functions and terminals and provide results from a classification task using the Reuters-21578 dataset. We also suggest that, because the induced rules are meaningful to a human analyst, they may have a number of other uses beyond classification and provide a basis for text mining applications.
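    A hypothetical sketch of the fitness evaluation such a system needs (the rule, the n-gram size, and the documents below are illustrative inventions, not from the paper):

```python
# Evaluate one evolved rule -- a Boolean combination of character n-gram
# presence tests -- against labelled documents, scoring it by F1
# (the harmonic mean of precision and recall).

def ngrams(text, n=3):
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def rule(doc):
    # Example evolved rule: ("oil" AND "pri") OR ("cru" AND "ude").
    g = ngrams(doc.lower())
    return ("oil" in g and "pri" in g) or ("cru" in g and "ude" in g)

def fitness(docs):
    tp = sum(1 for d, label in docs if rule(d) and label)
    fp = sum(1 for d, label in docs if rule(d) and not label)
    fn = sum(1 for d, label in docs if not rule(d) and label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

docs = [
    ("crude oil prices rose", True),
    ("oil price forecast", True),
    ("football results today", False),
    ("crude humour in film reviews", False),
]
score = fitness(docs)
```

Genetic programming would then mutate and recombine the Boolean structure of `rule`, keeping variants whose fitness improves on the training set.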

    An efficient k·p method for calculation of total energy and electronic density of states

    An efficient method for calculating the electronic structure of large systems with a fully converged Brillouin zone (BZ) sampling is presented. The method is based on a k·p-like approximation developed in the framework of density functional perturbation theory. The reliability and efficiency of the method are demonstrated in test calculations on Ar and Si supercells.
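    For context, the textbook k·p result such methods build on (the standard second-order perturbation expansion about a reference point, here k = 0; this is the generic formula, not the paper's DFPT formulation):

```latex
E_n(\mathbf{k}) \approx E_n(0) + \frac{\hbar^2 k^2}{2m}
  + \frac{\hbar}{m}\,\mathbf{k}\cdot\langle u_{n0}|\mathbf{p}|u_{n0}\rangle
  + \frac{\hbar^2}{m^2} \sum_{n'\neq n}
    \frac{\left|\mathbf{k}\cdot\langle u_{n0}|\mathbf{p}|u_{n'0}\rangle\right|^2}
         {E_n(0)-E_{n'}(0)}
```

Once the matrix elements at the reference point are known, each additional k-point costs only this cheap evaluation rather than a full self-consistent calculation, which is what makes dense BZ sampling affordable.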

    Speeding up the constraint-based method in difference logic

    "The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-40970-2_18"
    Over the years the constraint-based method has been successfully applied to a wide range of problems in program analysis, from invariant generation to termination and non-termination proving. Quite often the semantics of the program under study, as well as the properties to be generated, belong to difference logic, i.e., the fragment of linear arithmetic where atoms are inequalities of the form u − v ≤ k. However, so far constraint-based techniques have not exploited this fact: in general, Farkas' Lemma is used to produce the constraints over template unknowns, which leads to non-linear SMT problems. Based on classical results of graph theory, in this paper we propose new encodings for generating these constraints when program semantics and templates belong to difference logic. Thanks to this approach, instead of a heavyweight non-linear arithmetic solver, a much cheaper SMT solver for difference logic or linear integer arithmetic can be employed for solving the resulting constraints. We present encouraging experimental results that show the high impact of the proposed techniques on the performance of the VeryMax verification system.
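    The classical graph-theoretic result behind such encodings can be sketched directly: a system of difference constraints u − v ≤ k is satisfiable iff its constraint graph (an edge v → u with weight k per constraint) has no negative cycle, and Bellman-Ford relaxation both detects the cycle and produces a model. A minimal sketch (the standard textbook algorithm, not the paper's encoding):

```python
# Decide a conjunction of difference constraints u - v <= k via
# Bellman-Ford on the constraint graph. Returns a satisfying assignment
# (shortest distances from a virtual source) or None if a negative cycle
# makes the system unsatisfiable.

def solve_difference_constraints(constraints):
    """constraints: list of (u, v, k) triples meaning u - v <= k."""
    vars_ = {x for (u, v, _) in constraints for x in (u, v)}
    dist = {x: 0 for x in vars_}   # virtual source: every var at distance 0
    # Shortest paths use at most |V| edges (including the virtual source),
    # so |V| + 1 relaxation rounds suffice to reach a fixpoint.
    for _ in range(len(vars_) + 1):
        changed = False
        for u, v, k in constraints:        # edge v -> u with weight k
            if dist[v] + k < dist[u]:
                dist[u] = dist[v] + k
                changed = True
        if not changed:
            return dist                    # dist[u] - dist[v] <= k for all
    return None                            # negative cycle: unsatisfiable
```

For example, {x − y ≤ 3, y − z ≤ −2, z − x ≤ −1} is satisfiable (the cycle weight is 0), while {x − y ≤ −1, y − x ≤ −1} is not.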

    Cosmological entropy and generalized second law of thermodynamics in F(R,G) theory of gravity

    We consider a spatially flat Friedmann-Lemaître-Robertson-Walker spacetime and investigate the second law and the generalized second law of thermodynamics for the apparent horizon in generalized modified Gauss-Bonnet gravity, whose action contains a general function of the Gauss-Bonnet invariant and the Ricci scalar: F(R,G). By assuming that the apparent horizon is in thermal equilibrium with the matter inside it, conditions which must be satisfied by F(R,G) are derived and elucidated through two examples: a quasi-de Sitter spacetime and a universe with power-law expansion. Comment: 10 pages, minor changes, typos corrected, accepted for publication in Europhysics Letters.
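    Schematically (standard definitions only, not the paper's F(R,G)-specific analysis): for a spatially flat FLRW universe the apparent horizon sits at the Hubble radius, and the generalized second law demands that the total entropy of the horizon plus the matter inside it never decreases:

```latex
\tilde r_A = \frac{1}{H}, \qquad
\frac{d}{dt}\left( S_{\mathrm{h}} + S_{\mathrm{in}} \right) \ge 0,
```

where in general relativity $S_{\mathrm{h}} = A/4G$ with $A = 4\pi \tilde r_A^2$. In modified gravities such as F(R,G) the horizon entropy is generalized (e.g. via the Wald entropy), and it is from this generalized entropy that the conditions on F(R,G) arise.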

    Geopolymer Materials for Low-Pressure Injections in Coarse Grained Soil: Multiscale Approach to the Study of the Mechanical Behaviour and Environmental Impact

    The term soil improvement commonly refers to the modification of soil structure in order to obtain a material with better physical and mechanical properties, such as strength, stiffness or permeability. For this purpose, one of the most commonly used techniques, particularly in coarse-grained soils, is the low-pressure injection of cementitious mixtures. In recent years, there has been a growing demand for solutions with limited environmental impact and limited CO2 emissions and, in this regard, the cement present in the injected grout is evidently the weak point of traditional solutions. In this work, an experimental study of geopolymer materials as a substitute for cement mixtures in low-pressure injections for coarse-grained soil improvement is presented. The study started with a focus on the properties of the fresh geopolymer mixture (density, viscosity, etc.) and the evolution over time of the mechanical properties (compressive and tensile strength and stiffness), comparing three different mix designs at three different monitoring temperatures. The same evaluations were repeated on sand samples injected with the different types of mixtures previously analyzed. For a selected mix design, a permeation test was carried out under controlled conditions to test the pumpability and effectiveness of the geopolymer injection. Finally, to investigate the chemical interaction between the injected mixture and interstitial water, an injection test was carried out using a scaled model of a real injection system. The experimental study was aimed both at the analysis of the characteristics of the geopolymer material and at its physical interaction with coarse-grained soil, passing through the measurement of the mechanical characteristics of the geopolymer material and of the solid sand skeleton mixed with geopolymers. The possible chemical interaction of the mixtures with groundwater was also evaluated in order to highlight any environmental issues. The results provide a preliminary but sufficiently broad picture of the behavior of geopolymer mixtures for low-pressure injection for coarse-grained soil improvement, from both physical-mechanical and chemical points of view.