
    Guard-Function-Constraint-Based Refinement Method to Generate Dynamic Behaviors of Workflow Net with Table

    In order to model complex workflow systems with databases and to detect data-flow errors such as data inconsistency, we defined the Workflow Net with Table (WFT-net) model in our previous work. We used a Petri net to describe the control and data flows of a workflow system, and labeled transitions with abstract table-operation statements so as to simulate database operations. We also proposed a data refinement method to construct the state reachability graph of a WFT-net and used it to verify some properties. However, this data refinement method has a defect: it does not consider the constraint relations between guard functions, so its state reachability graph may contain pseudo states. To overcome these problems, we propose a new data refinement method that takes these constraint relations into account, which guarantees the correctness of the state reachability graph. Moreover, we develop the related algorithms and a supporting tool, and illustrate the usefulness and effectiveness of our method through several examples.
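
The role of guard constraints can be illustrated with a toy sketch (all names and structures below are invented for illustration; the paper's WFT-net formalism is far richer): a transition fires only when its input places are marked and its guard holds, and evaluating logically exclusive guards against a concrete data environment rules out the pseudo states that enumerating guard values independently would produce.

```python
# Minimal sketch of guard-constrained transition firing in a workflow net.
# All names are illustrative, not the paper's actual model or algorithm.

def enabled(marking, transition, guards, env):
    """A transition is enabled only if its input places are marked and its
    guard function evaluates to True under the current data environment."""
    has_tokens = all(marking.get(p, 0) > 0 for p in transition["inputs"])
    return has_tokens and guards[transition["guard"]](env)

# Two guards that are logically exclusive: g1 checks x > 0, g2 checks x <= 0.
guards = {"g1": lambda env: env["x"] > 0,
          "g2": lambda env: env["x"] <= 0}

t1 = {"inputs": ["p0"], "guard": "g1"}
t2 = {"inputs": ["p0"], "guard": "g2"}
marking = {"p0": 1}

# A refinement that enumerates guard truth values independently would admit
# a state where g1 and g2 both hold: a pseudo state, since no value of x
# satisfies both. Evaluating guards against a concrete environment rules
# that state out.
env = {"x": 5}
print(enabled(marking, t1, guards, env))  # True
print(enabled(marking, t2, guards, env))  # False
```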

    FunTAL: Reasonably Mixing a Functional Language with Assembly

    We present FunTAL, the first multi-language system to formalize safe interoperability between a high-level functional language and low-level assembly code while supporting compositional reasoning about the mix. A central challenge in developing such a multi-language system is bridging the gap between assembly, which is staged into jumps to continuations, and high-level code, where subterms return a result. We present a compositional stack-based typed assembly language that supports components, comprised of one or more basic blocks, that may be embedded in high-level contexts. We also present a logical relation for FunTAL that supports reasoning about equivalence of high-level components and their assembly replacements, about mixed-language programs with callbacks between languages, and about assembly components comprised of different numbers of basic blocks. (15 pages; implementation at https://dbp.io/artifacts/funtal/; published in PLDI '17, Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 18-23, 2017, Barcelona, Spain.)

    Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time

    The notion of program sensitivity (aka Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs the notion is naturally extended to expected sensitivity. A previous approach develops a relational program-logic framework for proving expected sensitivity of probabilistic while loops where the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed, but randomized and dependent on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, it is compositional for sequential composition of while loops under a mild side condition. We demonstrate its effectiveness on several classical examples, including Gambler's Ruin, stochastic hybrid systems, and stochastic gradient descent, and present experimental results showing that our automated approach can handle various probabilistic programs from the literature.
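
As a rough illustration of expected sensitivity for a loop with a randomized, input-dependent iteration count, here is a hedged Monte Carlo sketch in the spirit of the Gambler's Ruin example (the parameters and the simulation approach are ours; the paper proves such bounds with martingales rather than by sampling):

```python
import random

def gamblers_ruin(x, n=10):
    """Fair-coin gambler's ruin: return True if capital reaches n before 0.
    The number of loop iterations is randomized and depends on the input x."""
    while 0 < x < n:
        x += 1 if random.random() < 0.5 else -1
    return x == n

def expected_output(x, trials=20000):
    """Monte Carlo estimate of the expected output for starting capital x."""
    return sum(gamblers_ruin(x) for _ in range(trials)) / trials

random.seed(0)
# Expected sensitivity: nearby inputs yield proportionally close expected
# outputs. For the fair game, E[output | start x] = x/n exactly, so the
# expected difference between inputs 5 and 4 is 1/10.
d = abs(expected_output(5) - expected_output(4))
print(round(d, 2))  # close to 0.1
```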

    Determination of Chloramphenicol by Voltammetric Method

    A new voltammetric method for the determination of chloramphenicol (CAP) is developed. The method is based on the reaction of chloramphenicol and zinc in HCl solution to form a new substance [I], which produces a sensitive oxidation peak at 0.825 V (vs. Ag/AgCl) in pH 7.0 phosphate buffer solutions using differential pulse voltammetry (DPV). There is a linear relationship between the intensity of the peak current and the concentration of CAP. The effects of chemical and instrumental variables, such as pH and heating time, on the determination of CAP have been optimized, and the electrochemical characteristics and the reaction mechanism are discussed. Calibration graphs were linear in the 0.8 to 30.0 μg mL−1 concentration range with a correlation coefficient of 0.9983. The relative standard deviation for 9 repetitive determinations of 10.0 μg mL−1 CAP was 2.7%. The method has been successfully applied to pharmaceutical formulations and spiked milk samples. © 2014 The Electrochemical Society. [DOI: 10.1149/2.058403jes] All rights reserved. Manuscript submitted October 23, 2013; revised manuscript received December 20, 2013. Published January 4, 2014.
Chloramphenicol is a broad-spectrum antibiotic, exhibiting activity against both Gram-positive and Gram-negative bacteria. Owing to its low cost and ready availability, it is extensively used to treat animals, including food-producing animals. However, in certain susceptible individuals, chloramphenicol is associated with serious toxic effects in humans in the form of bone marrow depression, which can be particularly severe in the form of fatal aplastic anemia. 1,2 Hence, its applications in both human and veterinary medicine are restricted, and specific, economical, and sensitive methods are required to effectively monitor the occurrence of CAP residues.
Existing methods for the determination of chloramphenicol include thin-layer chromatography 3 and gas chromatography (GC). This paper reports a voltammetric method for estimating the concentration of chloramphenicol, based on the reaction of chloramphenicol and zinc in HCl solution to form a new substance [I], which produces a sensitive oxidation peak at 0.825 V (vs. Ag/AgCl) in pH 7.0 phosphate buffer solutions using differential pulse voltammetry (DPV). The method allows quantitative detection of chloramphenicol; its principal advantages are that it is rapid, sensitive, and possesses good reproducibility.
Experimental
Reagents.- Chloramphenicol was obtained from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). A 5.0 mg mL−1 stock solution was prepared by dissolving chloramphenicol in 1.0 mL anhydrous ethanol and diluting with water. Working solutions of chloramphenicol were prepared by dilution of the stock solution with water and phosphate buffer solution. Zinc was obtained from Shanghai Sulfuric Acid Plant Shuangliu Industry and Trade Company (Shanghai, China). Five kinds of buffers were used in the experiments: HAc-NH4Ac buffer, Na2HPO4-C6H5O7 buffer, H3BO3-Na3BO3 buffer, Tris-HCl buffer, and phosphate buffer solution (PBS). All other reagents were of analytical grade, and twice-distilled water was used throughout.
Apparatus.- Cyclic voltammetric (CV) and differential pulse voltammetric (DPV) experiments were performed with a CHI 760 electrochemical workstation (CH Instrument Company, Shanghai, China). A conventional three-electrode system was adopted: the working electrode was a platinum electrode (3.0 mm in diameter), and the auxiliary and reference electrodes were a platinum wire and an Ag/AgCl electrode, respectively. Absorption spectra were acquired on a 1102 UV spectrofluorimeter (Shanghai Tian Mei Scientific Instrument Co., Ltd., Shanghai).
Procedures.- A certain portion of the 5.0 mg mL−1 CAP solution was transferred into a 20.0 mL standard flask, 0.30 g of zinc and 5.0 mL of 1.0 mol L−1 HCl solution were added, and the mixture was diluted to approximately 15.0 mL with water. The mixture was shaken, heated for 60 min in a 90 °C water bath, immediately cooled in ice water, diluted to the mark with water, and mixed well. It was then centrifuged for 5 min at 3000 rpm and the supernatant was collected; the [I] prepared in this way was used for the voltammetric determination of CAP. The chloramphenicol-HCl system used for comparison was prepared in the same way, omitting the zinc-addition step. Ten microliters of 0.10 M phosphate buffer solution (pH 7.0) containing a certain amount of [I] was transferred into a cell, and the three-electrode system was immersed in it. For the differential pulse voltammetric measurement, the voltammogram was recorded from 0.5 to 1.1 V (experiment parameters: Init EV: ).
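
For illustration, a linear calibration of the kind reported (peak current vs. concentration over 0.8-30.0 μg mL−1) can be sketched as follows. The current readings below are invented; only the concentration range and the least-squares idea come from the text:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, plus correlation coefficient r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

conc = [0.8, 5.0, 10.0, 20.0, 30.0]    # ug/mL (hypothetical standards)
peak = [0.41, 2.52, 5.05, 9.98, 15.1]  # uA (hypothetical peak currents)

a, b, r = fit_line(conc, peak)

# Unknown sample: read its peak current, then invert the calibration line.
unknown_current = 5.4
estimated_conc = (unknown_current - b) / a
print(round(r, 4), round(estimated_conc, 1))
```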

    Extending the Reach of Eclipse Plug-in for Analysing Embedded SQL Queries

    Alvor is a tool developed for Eclipse that checks the correctness of the SQL statements found in code. The aim of this bachelor's thesis is to make the Alvor tool usable outside the Eclipse development environment. The author's task was to extend Alvor with the ability to generate a results file for the SQL check, and to do so without starting Eclipse. The JUnit result-file structure was used when creating the file, so that it can be consumed in different environments as easily as possible. The main challenge proved to be finding and using the right tools to run the Eclipse build process outside the Eclipse development environment.
Alvor is a tool built for the Eclipse IDE that checks the validity of SQL queries embedded in strings of a host programming language. This thesis focuses on making the benefits of the Alvor tool available outside the Eclipse IDE. The solution was to implement an additional feature for the Alvor tool that creates a result file of the SQL check. To improve the usability of the result file, the JUnit report-file standard was used. The key challenge in making the tool available outside the Eclipse IDE was finding and combining the right tools to start the Eclipse build process, which in turn starts the SQL check, without starting the IDE, and doing so for any type of project, regardless of whether it is an Eclipse project or not.
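
A JUnit-style result file of the kind described can be sketched roughly as follows. Element and attribute names follow the common JUnit XML report convention; the findings are invented examples, not Alvor's actual output:

```python
import xml.etree.ElementTree as ET

# Hypothetical SQL-check findings: one valid query, one with a syntax error.
findings = [
    {"query": "SELECT * FROM users WHERE id = ?", "error": None},
    {"query": "SELEC name FROM users", "error": "syntax error near 'SELEC'"},
]

# Each checked query becomes a <testcase>; invalid ones carry a <failure>,
# so any JUnit-aware tool (CI server, IDE) can display the results.
suite = ET.Element("testsuite", name="alvor-sql-check",
                   tests=str(len(findings)),
                   failures=str(sum(1 for f in findings if f["error"])))
for f in findings:
    case = ET.SubElement(suite, "testcase", name=f["query"])
    if f["error"]:
        ET.SubElement(case, "failure", message=f["error"])

xml = ET.tostring(suite, encoding="unicode")
print(xml)
```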

    Toward a More Productive Java EE Ecosystem (Produktiivsema Java EE ökosüsteemi poole)

    The electronic version of this dissertation does not include the publications. Since the creation of the Java programming language in 1995, one of its most important application areas has been web-application programming. Java's popularity rested not only on language-design properties such as object orientation and a strict type system, but above all on platform independence and an abundance of standardized libraries, which put web-application programming within the reach of ordinary programmers. Ten years later the situation had changed considerably: Java was losing its leading position to new, so-called dynamic languages such as PHP, Ruby, and Python. The reason was not that these languages were themselves markedly better than Java, but that the evolution of the Java ecosystem was very conservative and slow. In this context we began our research in 2005, with the goal of fixing the biggest problems in the Java ecosystem and bringing it at least to the level of the languages mentioned above. This dissertation presents the results of that research; it is based on four publications: three peer-reviewed research articles and one patent. The first effort was the creation of a new web-framework integration framework, "Aranea". At that time Java had more than thirty actively developed web frameworks, so we decided to focus on two key problems: ease of framework reuse and framework interoperability. To that end we developed a novel component model that can describe the hierarchical relations between a system's service and user-interface components, and we implemented adapters to fit different frameworks into the component model. As the next problem we addressed the data-access layer. We started from the premise that SQL is the most widespread data language for relational databases, and that an effective data-access layer must make it easy to express SQL queries in Java while guaranteeing the syntactic correctness of the constructed queries.
Previous solutions were, as a rule, based on constructing SQL queries programmatically as strings, which made checking the correctness of the queries difficult. As a solution we developed a domain-specific language (DSL) for expressing SQL queries that uses the Java type system to validate their correctness at compile time. In the course of this work we identified general software design patterns that simplify the creation of similar type-safe DSLs, and we used them to create two new experimental type-safe DSLs: one for creating and manipulating Java classes at run time, and one for parsing and generating XML. As the third task we addressed one of the most significant shortcomings of the Java platform compared with dynamic languages. Whereas in PHP or Ruby one can change program code directly and see the result immediately, Java application servers require the application to be built and deployed, which for large applications can take several or even tens of minutes. To solve this problem we developed a novel and practical method for reloading code on the Java platform, on the basis of which we developed and released the product "JRebel". It uses load-time modification of Java bytecode together with a special indirection layer between call sites, methods, and method bodies, which makes it possible to manage different versions of code and to redirect calls to the latest version at run time. Today, more than seven years after the start of this research, it must be acknowledged that although our work on web frameworks created a successful platform for studying and testing various experimental ideas, it has not found widespread use in the software industry. The work on type-safe DSLs was more successful: it directly influenced subsequent research on the topic, and its elements found application in the latest JPA standard specification.
The greatest impact on the software industry has come from our dynamic code-reloading solution, which is today widely used in the Java community and is used daily by more than 3000 organizations around the world. Since Tim Berners-Lee's famous proposal of the World Wide Web in 1990 and the introduction of the Common Gateway Interface in 1993, the world of online web applications has been booming. In the nineties the Java language and platform became the first choice for web development in and out of the enterprise. But by the mid-aughts the platform was in crisis: newcomers like PHP, Ruby, and Python had picked up the flag as the most productive platforms, with Java left for conservative enterprises. However, this was not because those languages and platforms were significantly better than Java. Rather, the issue was that innovation in the Java ecosystem was slow, due to the way the platform was managed: large vendors dominated the space, committees designed standards, and the brightest minds were moving to other JVM languages like Scala, Groovy, or JRuby. In this context we started our investigations in 2005. Our goals were to address some of the more gaping holes in the Java ecosystem and bring it on par with the languages touted as more productive. The first effort was to design a better web framework, called "Aranea". At that point in time Java had more than thirty actively developed web frameworks, and many of them were used simultaneously in the same projects. We decided to focus on two key issues: ease of reuse and framework interoperability. To solve the first issue we created a self-contained component model that allowed the construction of both simple and sophisticated systems using a simple object protocol and hierarchical aggregation in the style of the Composite design pattern.
This allowed one to capture every aspect of reuse in a dedicated component, be it a part of framework functionality, a repeating UI component, or a whole UI process backed by complex logic. These could be mixed and matched almost indiscriminately, subject to the rules expressed in the interfaces they implemented. To solve the second issue we proposed adapters between the component model and the various models of other frameworks. We later implemented some of those adapters in both a local and a remote fashion, allowing one to almost effortlessly capture and mix different web application components together, no matter what the underlying implementation may be. The next issue we focused on was the data-access layer. At that point the most popular ways of accessing data in the Java community were either embedded SQL strings or the object-relational mapping tool Hibernate. Both approaches had severe disadvantages. Embedded SQL strings exposed developers to typographical errors, lack of abstraction, very late validation, and the dangers of dynamic string concatenation. Hibernate/ORM introduced a layer of abstraction notorious for the level of misunderstanding and the production performance issues it caused. We believed that SQL is the right way to access data in a relational database, as it expresses exactly the data that is needed without much overhead. Instead of embedding it into strings, we decided to embed it using the constructs of the Java language, thus creating an embedded DSL. As one of the goals was extensive compile-time validation, we made extensive use of Java generics and code generation to provide the maximum possible static safety. We also built some basic SQL extensions into the language that provided a better interface between Java structures and relational queries, while allowing effortless further extension and ease of abstraction.
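
The embedded-DSL idea can be sketched with a toy query builder (all names below are ours). Note that the dissertation's DSL used Java generics to reject malformed queries at compile time, whereas this runtime-checked sketch only illustrates the principle of building queries from language constructs instead of concatenated strings:

```python
class Table:
    """Schema description: a table name plus its known columns."""
    def __init__(self, name, columns):
        self.name, self.columns = name, set(columns)

class Query:
    """Fluent query builder; malformed queries fail at construction time,
    not when the SQL string finally reaches the database."""
    def __init__(self, table):
        self.table, self.cols, self.pred = table, None, None

    def select(self, *cols):
        unknown = set(cols) - self.table.columns
        if unknown:
            raise ValueError(f"unknown columns: {unknown}")
        self.cols = cols
        return self

    def where(self, column, value):
        if column not in self.table.columns:
            raise ValueError(f"unknown column: {column}")
        self.pred = (column, value)
        return self

    def sql(self):
        q = f"SELECT {', '.join(self.cols)} FROM {self.table.name}"
        if self.pred:
            q += f" WHERE {self.pred[0]} = ?"  # value bound as a parameter
        return q

users = Table("users", ["id", "name", "email"])
q = Query(users).select("name", "email").where("id", 42)
print(q.sql())  # SELECT name, email FROM users WHERE id = ?
```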
Our work on the SQL DSL convinced us that building type-safe embedded DSLs could be of great use to the Java community. We embarked on building two more experimental DSLs, one for generating and manipulating Java classes on the fly and the other for parsing and generating XML. These experiments exposed some common patterns, including restricting DSL syntax, collecting type-safe history, and using type-safe metadata. Applying those patterns to different domains helps encode a truly type-safe DSL in the Java language. Our final and largest effort concentrated on a major disadvantage of the Java platform as compared to the dynamically typed language platforms. Namely, while in PHP or Ruby on Rails one could edit any line of code and see the result immediately, the Java application servers would force one to "build" and "deploy", which for larger applications could take minutes or even tens of minutes. Initial investigation revealed that the claims of fast code reloading were not quite solid across the board: dynamically typed languages would typically destroy state and recreate the application, just like the Java application servers. The crucial difference was that they did it quickly, and the productivity of development was a large concern for their language and framework designers. However, as we investigated the issue deeper on the Java side, we came up with a novel and practical way of reloading code on the JVM, which we developed and released as the product "JRebel". We made use of the fact that Java bytecode is a very high-level encoding of the Java language, which is easy to modify at load time. This allowed us to insert a layer of indirection between call sites, methods, and method bodies that was versatile enough to manage multiple versions of code and redirect calls to the latest version at runtime.
Over the years there have been some developments in a similar fashion, but unlike them we engineered JRebel to run on the stock JVM and to have no visible impact on the application's functional or non-functional behaviour. The latter was the hardest, as the layer of indirection both introduces numerous compatibility problems and adds performance overhead. To overcome those limitations we had to integrate deeply at many levels of the JVM and to use compiler techniques to remove the layer of indirection where possible.
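
The indirection layer can be illustrated with a minimal dispatch-table sketch (names ours). JRebel achieves this with load-time bytecode rewriting on the JVM; this toy registry only approximates the redirection concept:

```python
class MethodRegistry:
    """Dispatch table that always points at the latest version of each
    method body, so redefinition takes effect without a restart."""
    def __init__(self):
        self._latest = {}

    def define(self, name, fn):
        """Install (or replace) the current version of a method body."""
        self._latest[name] = fn

    def call(self, name, *args):
        """Every call site dispatches through the registry, so it always
        reaches the most recently installed version."""
        return self._latest[name](*args)

registry = MethodRegistry()
registry.define("greet", lambda who: f"Hello, {who}")
print(registry.call("greet", "world"))  # Hello, world

# "Reload": a new version of the body is installed; existing call sites
# immediately see it, with no rebuild or redeploy step.
registry.define("greet", lambda who: f"Hi, {who}!")
print(registry.call("greet", "world"))  # Hi, world!
```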