
    ConoServer: updated content, knowledge, and discovery tools in the conopeptide database

    ConoServer (http://www.conoserver.org) is a database specializing in the sequences and structures of conopeptides, which are toxins expressed by marine cone snails. Cone snails are carnivorous gastropods that hunt their prey using a cocktail of toxins which potently subvert nervous system function. The ability of these toxins to specifically target receptors, channels and transporters of the nervous system has attracted considerable interest for their use in physiological research and as drug leads. Since the founding publication on ConoServer in 2008, the number of entries in the database has nearly doubled, the interface has been redesigned and new annotations have been added, including a more detailed description of cone snail species, biological activity measurements and information regarding the identification of each sequence. Automatically updated statistics on classification schemes, three-dimensional structures, conopeptide-bearing species and endoplasmic reticulum signal sequence conservation trends provide a convenient overview of current knowledge on conopeptides. Transcriptomics and proteomics have begun generating massive numbers of new conopeptide sequences, and two dedicated tools have recently been implemented in ConoServer to standardize the analysis of conopeptide precursor sequences and to help in the identification by mass spectrometry of toxins whose sequences were predicted at the nucleic acid level.

    Splitting Proofs for Interpolation

    We study interpolant extraction from local first-order refutations. We present a new theoretical perspective on interpolation based on clearly separating the condition on the logical strength of the formula from the requirement on the common signature. This allows us to highlight the space of all interpolants that can be extracted from a refutation as a space of simple choices on how to split the refutation into two parts. We use this new insight to develop an algorithm for extracting interpolants which are linear in the size of the input refutation and can be further optimized using metrics such as the number of non-logical symbols or quantifiers. We implemented the new algorithm in the first-order theorem prover VAMPIRE and evaluated it on a large number of examples coming from the first-order proving community. Our experiments give practical evidence that our work improves the state of the art in first-order interpolation. Comment: 26th Conference on Automated Deduction, 201
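
    As a toy illustration of the setting above (not an example from the paper), a Craig interpolant for an unsatisfiable pair (A, B) is a formula I over their common signature such that A entails I and I is inconsistent with B:

    \[
    A = \forall x\,(p(x) \rightarrow q(x)) \wedge p(a), \qquad
    B = \neg q(a) \wedge r(b), \qquad
    I = q(a).
    \]

    Here A entails I, I together with B is unsatisfiable, and I uses only the symbols q and a shared by A and B. Splitting a refutation of A and B in different ways yields different such interpolants, which is the space of choices the extraction algorithm ranges over.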

    A Reduction from Unbounded Linear Mixed Arithmetic Problems into Bounded Problems

    We present a combination of the Mixed-Echelon-Hermite transformation and the Double-Bounded Reduction for systems of linear mixed arithmetic that preserves satisfiability and can be computed in polynomial time. Together, the two transformations turn any system of linear mixed constraints into a bounded system, i.e., a system for which termination can be achieved easily. Existing approaches for linear mixed arithmetic, e.g., branch-and-bound and cuts from proofs, explore only a finite search space after application of our two transformations. Instead of generating a priori bounds for the variables, e.g., as suggested by Papadimitriou, unbounded variables are eliminated through the two transformations. The transformations are guided by the structure of the input system instead of computing a priori (over-)approximations from the available constants. Experiments provide further evidence of the efficiency of the transformations in practice. We also present a polynomial-time method for converting certificates of (un)satisfiability from the transformed system back to the original one.
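
    As a toy illustration (not from the paper) of why unbounded variables cause non-termination, consider the following system over the integers:

    \[
    1 \le 3x - 3y \le 2, \qquad x, y \in \mathbb{Z}.
    \]

    The system is unsatisfiable because 3x - 3y is always a multiple of 3, yet its rational relaxation has solutions along the unbounded line x = y + 1/2, so plain branch-and-bound can split on ever larger values of x and y without terminating. Once every variable is confined to a finite interval, only finitely many branches remain to explore, which is the effect the two transformations achieve without guessing bounds up front.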

    Synthesis, pharmacological and structural characterization of novel conopressins from Conus miliaris

    Cone snails produce a fast-acting and often paralyzing venom, largely dominated by disulfide-rich conotoxins targeting ion channels. Although disulfide-poor conopeptides are usually minor components of cone snail venoms, their ability to target key membrane receptors such as GPCRs makes them highly valuable as drug lead compounds. From the venom gland transcriptome of Conus miliaris, we report here on the discovery and characterization of two conopressins, which are nonapeptide ligands of the vasopressin/oxytocin receptor family. These novel sequence variants show unusual features, including a charge inversion at the critical position 8, with an aspartate instead of a highly conserved lysine or arginine residue. Both the amidated and acid C-terminal analogues were synthesized, followed by pharmacological characterization on human and zebrafish receptors and structural investigation by NMR. Whereas conopressin-M1 showed weak and only partial agonist activity at hV1bR (amidated form only) and ZFV1a1R (both amidated and acid forms), both conopressin-M2 analogues acted as full agonists at the ZFV2 receptor with low micromolar affinity. Together with the NMR structures of the amidated conopressins-M1, -M2 and -G, this study provides novel structure-activity relationship information that may help in the design of more selective ligands.

    Periplasmic Expression of 4/7 α-Conotoxin TxIA Analogs in E. coli Favors Ribbon Isomer Formation – Suggestion of a Binding Mode at the α7 nAChR

    Peptides derived from animal venoms provide important research tools for biochemical and pharmacological characterization of receptors, ion channels, and transporters. Some venom peptides have been developed into drugs (such as ziconotide, the synthetic ω-conotoxin MVIIA) and several are currently undergoing clinical trials for various clinical indications. Challenges in the development of peptides include their usually limited supply from natural sources, cost-intensive chemical synthesis, and potentially complicated stereoselective disulfide-bond formation in the case of disulfide-rich peptides. In particular, if extended structure-function analysis is performed or incorporation of stable isotopes for NMR studies is required, the comparatively low yields and high costs of synthesized peptides might constitute a limiting factor. Here we investigated the expression of the 4/7 α-conotoxin TxIA, a potent blocker at α3β2 and α7 nicotinic acetylcholine receptors (nAChRs), and three analogs in the form of maltose-binding protein fusion proteins in Escherichia coli. Upon purification via nickel affinity chromatography and release of the toxins by protease cleavage, HPLC analysis revealed one major peak with the correct mass for all peptides. The final yield was 1-2 mg of recombinant peptide per liter of bacterial culture. Two-electrode voltage clamp analysis on oocyte-expressed nAChR subtypes demonstrated the functionality of these peptides but also revealed a 30- to 100-fold decrease in potency of expressed TxIA compared to chemically synthesized TxIA. NMR spectroscopy analysis of TxIA and two of its analogs confirmed that the decreased activity was due to an alternative disulfide linkage rather than the missing C-terminal amidation, a post-translational modification that is common in α-conotoxins. All peptides preferentially folded into the ribbon conformation rather than the native globular conformation. Interestingly, in the case of the α7 nAChR, but not the α3β2 subtype, the loss of potency could be rescued by an R5D substitution. In conclusion, we demonstrate efficient expression of functional but alternatively folded ribbon TxIA variants in E. coli and provide the first structure-function analysis for a ribbon 4/7-α-conotoxin at α7 and α3β2 nAChRs. Computational analysis based on these data provides evidence for a ribbon α-conotoxin binding mode that might be exploited to design ligands with optimized selectivity.

    An iterative approach to precondition inference using constrained Horn clauses

    We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint corresponding to the complement of that set, under-approximating the set of safe initial states. This idea of complementation is not new, but previous attempts to exploit it have suffered from a loss of precision. Here we develop an iterative specialisation algorithm to give more precise, and in some cases optimal, safety conditions. The algorithm combines existing transformations, namely constraint specialisation, partial evaluation and a trace elimination transformation. The last two of these transformations perform polyvariant specialisation, leading to disjunctive constraints which improve precision. The algorithm is implemented and tested on a benchmark suite of programs from the literature on precondition inference and from software verification competitions. Comment: Paper presented at the 34th International Conference on Logic Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX
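
    As a toy illustration of the encoding (my own example, not one of the paper's benchmarks), take a program that reads an initial value x0 into x, decrements x by 2 while x > 0, and finally asserts x = 0. With inv the loop-invariant predicate, the CHCs are:

    \[
    x = x_0 \rightarrow \mathit{inv}(x_0, x), \qquad
    \mathit{inv}(x_0, x) \wedge x > 0 \rightarrow \mathit{inv}(x_0, x - 2), \qquad
    \mathit{inv}(x_0, x) \wedge x \le 0 \wedge x \neq 0 \rightarrow \mathit{false}.
    \]

    An over-approximation of the unsafe initial states is x0 < 0 or (x0 > 0 and x0 odd); complementing it gives the precondition x0 >= 0 and x0 even, under which the assertion cannot fail. In this small case the result happens to be exact; the iterative specialisation described above is what recovers such precision on less trivial programs.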

    Formalising the Continuous/Discrete Modeling Step

    Formally capturing the transition from a continuous model to a discrete model is investigated using model-based refinement techniques. A very simple model for stopping (e.g. of a train) is developed in both the continuous and discrete domains. The difference between the two is quantified using generic results from ODE theory, and these estimates can be compared with the exact solutions. Such results do not fit well into a conventional model-based refinement framework; however, they can be accommodated in a model-based retrenchment. The retrenchment is described, and the way it can interface with refinement developments on both the continuous and discrete sides is outlined. The approach is compared with what can be achieved using hybrid systems techniques. Comment: In Proceedings Refine 2011, arXiv:1106.348
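
    A minimal sketch of such a continuous/discrete pair (my own illustration, using a speed-proportional braking law rather than the paper's model): the continuous dynamics and their explicit-Euler discretisation with step \Delta t are

    \[
    \dot v = -k v,\; v(0) = v_0 \;\Rightarrow\; v(t) = v_0 e^{-kt}
    \qquad\text{vs.}\qquad
    v_{n+1} = (1 - k\,\Delta t)\, v_n \;\Rightarrow\; v_n = v_0 (1 - k\,\Delta t)^n,
    \]

    and the standard global error bound for the Euler method, |v(n\Delta t) - v_n| \le C\,\Delta t for n\Delta t \le T, is the kind of generic ODE estimate that quantifies the continuous/discrete gap without itself forming a refinement relation, hence the move to retrenchment.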

    A Survey of Satisfiability Modulo Theory

    Satisfiability modulo theory (SMT) is the problem of testing the satisfiability of first-order formulas over linear integer or real arithmetic, or other theories. In this survey, we explain the combination of propositional satisfiability and decision procedures for conjunctions known as DPLL(T), and the alternative "natural domain" approaches. We also cover quantifiers, Craig interpolants, polynomial arithmetic, and how SMT solvers are used in automated software analysis. Comment: Computer Algebra in Scientific Computing, Sep 2016, Bucharest, Romania. 201
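
    A minimal sketch of driving an SMT solver from code (shown with the z3 Python bindings purely for illustration; the survey itself is not tied to a particular solver). The solver handles the Boolean structure with a SAT engine and passes conjunctions of arithmetic atoms to the linear-arithmetic decision procedure, in DPLL(T) style:

        from z3 import Int, Or, Solver, sat

        x, y = Int('x'), Int('y')
        s = Solver()
        # Boolean structure (a disjunction) over linear integer arithmetic atoms
        s.add(Or(x - y > 2, y - x > 2))
        s.add(x >= 0, x <= 10, y >= 0, y <= 10)
        if s.check() == sat:
            print(s.model())  # a satisfying assignment, e.g. x = 3, y = 0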

    Applying SMT Solvers to the Test Template Framework

    The Test Template Framework (TTF) is a model-based testing method for the Z notation. In the TTF, test cases are generated from test specifications, which are predicates written in Z. In turn, the Z notation is based on first-order logic with equality and Zermelo-Fraenkel set theory. In this way, a test case is a witness satisfying a formula in that theory. Satisfiability Modulo Theory (SMT) solvers are software tools that decide the satisfiability of arbitrary formulas in a large number of built-in logical theories and their combinations. In this paper, we present the first results of applying two SMT solvers, Yices and CVC3, as the engines to find test cases from the TTF's test specifications. In doing so, shallow embeddings of a significant portion of the Z notation into the input languages of Yices and CVC3 are provided, given that they do not directly support Zermelo-Fraenkel set theory as defined in Z. Finally, the results of applying these embeddings to a number of test specifications from eight case studies are analysed. Comment: In Proceedings MBT 2012, arXiv:1202.582
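
    The sketch below illustrates the idea that a test case is simply a model (witness) of the test specification. It uses z3's Python API and its built-in set theory instead of Yices or CVC3, and the specification is a made-up Z-style predicate, so it is an analogy to the paper's embeddings rather than a reproduction of them:

        from z3 import (Int, IntSort, SetSort, Const, EmptySet, SetAdd,
                        IsMember, IsSubset, Solver, sat)

        # Hypothetical test specification in the spirit of a Z predicate:
        #   S is a subset of {1, 2, 3}, x is a member of S, and x > 1
        x = Int('x')
        S = Const('S', SetSort(IntSort()))
        universe = SetAdd(SetAdd(SetAdd(EmptySet(IntSort()), 1), 2), 3)

        s = Solver()
        s.add(IsSubset(S, universe), IsMember(x, S), x > 1)

        if s.check() == sat:
            print(s.model())  # the witness doubles as a concrete test case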