
    On Structuring Proof Search for First Order Linear Logic

    Full first order linear logic can be presented as an abstract logic programming language in Miller's system Forum, which yields a sensible operational interpretation in the 'proof search as computation' paradigm. However, Forum still has to deal with syntactic details that would normally be ignored by a reasonable operational semantics. In this respect, Forum improves on Gentzen systems for linear logic by restricting the language and the form of inference rules. We further improve on Forum by restricting the class of formulae allowed, in a system we call G-Forum, which is still equivalent to full first order linear logic. The only formulae allowed in G-Forum have the same shape as Forum sequents: the restriction does not diminish expressiveness and makes G-Forum amenable to proof theoretic analysis. G-Forum consists of two (big) inference rules, for which we show a cut elimination procedure. This does not need to appeal to finer detail in formulae and sequents than is provided by G-Forum, thus successfully testing the internal symmetries of our system.
    Comment: Author website at http://alessio.guglielmi.name/res
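
    As a point of reference for the restriction described above, here is a sketch in LaTeX of the grammar of Forum formulae, reconstructed from Miller's standard presentation rather than taken from this paper; the precise shape of G-Forum formulae is defined in the paper itself.

        % Forum formulae are built from these connectives (Miller's system);
        % \parr comes from the cmll package, \multimap from amssymb.
        \[
          F \;::=\; A \;\mid\; \bot \;\mid\; \top \;\mid\; F \parr F
            \;\mid\; F \mathbin{\&} F \;\mid\; F \multimap F
            \;\mid\; F \Rightarrow F \;\mid\; {?}F \;\mid\; \forall x.\, F
        \]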

    FORUM and its implementation

    Miller presented Forum as a specification logic: Forum extends several existing logic programming languages, for example Prolog, LO and Lolli. The crucial change in Forum is the extension from single-succedent sequents, as in intuitionistic logic, to multiple-succedent sequents, as in classical logic, with a corresponding extension of the notion of uniform proof. Forum uses the connectives of linear logic. Languages based on linear logic offer extra expressivity (in comparison with traditional logic languages), but also present new implementation challenges. One such challenge is context management, because the multiplicative linear connectives, such as ⊗, ⅋ and the linear implication ⊸ (written '-o'), require context splitting. Hodas and Miller presented a solution (the IO model) to this in 1991 for the language Lolli, which is based on minimal linear logic. This thesis presents a technique that adapts the aforementioned approach to the language Forum, following a suggestion of Miller that the '.' constant be treated as primitive in order to avoid the looping problems that arise from its use as a derived symbol. Cervesato, Hodas and Pfenning have presented a technique for managing the '⊤' constant, dividing each input context into a "slack" part and a "strict" part; the main novel contribution of this thesis is to modify this technique by dividing the output context instead. This leads to a proof system with fewer rules (and consequent ease of implementation) as well as enhanced performance, for which we present some experimental evidence.
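
    A minimal executable sketch of the flavour of IO-style context management, with a hypothetical goal language (Atom, Tensor, Top) that is not the thesis's actual system: rather than guessing how to split the context at a multiplicative connective, the prover passes the whole context to the first subgoal and hands its leftovers to the second, with a slack flag recording whether a ⊤ was encountered.

        from dataclasses import dataclass

        @dataclass
        class Atom:
            name: str

        @dataclass
        class Tensor:
            left: object
            right: object

        class Top:
            pass

        def prove(goal, ctx):
            # Returns (leftover_context, slack_flag) on success, None on failure.
            if isinstance(goal, Atom):
                if goal.name in ctx:
                    leftover = list(ctx)
                    leftover.remove(goal.name)   # consume exactly one occurrence
                    return tuple(leftover), False
                return None
            if isinstance(goal, Tensor):
                res = prove(goal.left, ctx)      # whole context to the left...
                if res is None:
                    return None
                mid, slack1 = res
                res = prove(goal.right, mid)     # ...only its leftovers to the right
                if res is None:
                    return None
                out, slack2 = res
                return out, slack1 or slack2
            if isinstance(goal, Top):
                return ctx, True                 # Top may absorb unused resources
            raise TypeError(goal)

        def proves(goal, ctx):
            # Top-level success: every resource consumed, unless a Top allows slack.
            res = prove(goal, tuple(ctx))
            return res is not None and (res[1] or res[0] == ())

        assert proves(Tensor(Atom("a"), Atom("b")), ["a", "b"])
        assert not proves(Atom("a"), ["a", "b"])             # 'b' left unused
        assert proves(Tensor(Atom("a"), Top()), ["a", "b"])  # Top absorbs 'b'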

    Debugging of Web Applications with Web-TLR

    Web-TLR is a Web verification engine that is based on the well-established tandem of Rewriting Logic, Maude, and LTLR for Web system specification and model checking. In Web-TLR, Web applications are expressed as rewrite theories that can be formally verified by using the Maude built-in LTLR model checker. Whenever a property is refuted, a counterexample trace is delivered that reveals an undesired, erroneous navigation sequence. Unfortunately, the analysis (or even the simple inspection) of such counterexamples may be infeasible because of the size and complexity of the traces under examination. In this paper, we endow Web-TLR with a new Web debugging facility that supports the efficient manipulation of counterexample traces. This facility is based on a backward trace-slicing technique for rewriting logic theories that allows the pieces of information we are interested in to be traced back through inverse rewrite sequences. The slicing process drastically simplifies the computation trace by dropping useless data that do not influence the final result. By using this facility, the Web engineer can focus on the relevant fragments of the failing application, which greatly reduces the manual debugging effort and also decreases the number of iterative verifications.
    Comment: In Proceedings WWV 2011, arXiv:1108.208
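
    To illustrate the general idea of backward trace slicing (the step labels, reads and writes below are invented for illustration and are not Web-TLR's representation): walking the trace backwards, a step is kept only if it writes data currently known to be relevant, and the data it reads then become relevant in turn.

        def backward_slice(trace, criterion):
            """trace: list of (label, reads, writes) triples, one per rewrite
            step; reads/writes are sets of state positions. criterion: the
            positions of interest in the final state. Returns the labels of
            the steps that may influence the criterion."""
            relevant = set(criterion)
            kept = []
            for label, reads, writes in reversed(trace):
                if writes & relevant:          # step produced relevant data
                    kept.append(label)
                    relevant = (relevant - writes) | reads  # follow data back
            kept.reverse()
            return kept

        trace = [
            ("load-page",  {"url"},           {"page"}),
            ("set-banner", {"ad-server"},     {"banner"}),   # irrelevant noise
            ("login",      {"page", "user"},  {"session"}),
            ("navigate",   {"session"},       {"page"}),
        ]
        # Only the steps feeding the final "page" survive the slice:
        print(backward_slice(trace, {"page"}))  # ['load-page', 'login', 'navigate']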

    An Abstract Formal Basis for Digital Crowds

    Crowdsourcing, together with its related approaches, has become very popular in recent years. All crowdsourcing processes involve the participation of a digital crowd, a large number of people who access a single Internet platform or shared service. In this paper we explore the possibility of applying formal methods, typically used for the verification of software and hardware systems, to the analysis of the behaviour of a digital crowd. More precisely, we provide a formal description language for specifying digital crowds. We represent digital crowds in which the agents do not communicate directly with each other. We further show how this specification can provide the basis for sophisticated formal methods, in particular formal verification.
    Comment: 32 pages, 4 figures
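
    A toy rendering in Python of the one structural constraint the abstract states, namely that agents never communicate directly and interact only through a single shared platform; all names here are illustrative, not the paper's formalism.

        class Platform:
            def __init__(self):
                self.posts = []
            def publish(self, agent_id, item):
                self.posts.append((agent_id, item))
            def read(self):
                return list(self.posts)

        class Agent:
            def __init__(self, agent_id):
                self.agent_id = agent_id
            def step(self, platform):
                seen = platform.read()            # observe only shared state
                platform.publish(self.agent_id, f"seen {len(seen)} posts")

        platform = Platform()
        crowd = [Agent(i) for i in range(3)]
        for agent in crowd:
            agent.step(platform)   # no agent ever references another agent
        print(platform.read())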

    Multi-loop Control Systems of Compensators for Powerful Sounding Pulses Generators

    Construction principles for multi-loop control systems of compensators for powerful sounding-pulse generators are presented. A method for controlling the compensating system using fuzzy logic and ideas from forecast control is described. The proposed compensating system is able to address several problems, including reactive power compensation and harmonic elimination. The system is based on a combination of a thyristor compensator and an active power filter. Practical results obtained with MATLAB/Simulink are presented to verify the performance of the proposed control.
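
    For readers unfamiliar with reactive power compensation, the standard textbook sizing relation Qc = P * (tan(phi1) - tan(phi2)) shows what a compensator must supply to correct a load's power factor; this is general background, not a formula taken from the paper.

        import math

        def compensator_kvar(p_kw, pf_initial, pf_target):
            # Reactive power the compensator must supply to raise the power
            # factor of a load drawing p_kw from pf_initial to pf_target.
            phi1 = math.acos(pf_initial)
            phi2 = math.acos(pf_target)
            return p_kw * (math.tan(phi1) - math.tan(phi2))

        # A 500 kW load at power factor 0.70, corrected to 0.95:
        print(round(compensator_kvar(500, 0.70, 0.95), 1), "kvar")  # about 345.8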

    Collection analysis for Horn clause programs

    We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom sort(t,s) is proved, the two lists t and s contain the same multiset of items (that is, s is a permutation of t). If sorting removes duplicates, then one would like to infer that the sets of items underlying t and s are the same. Such results would be useful to have if they could be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.
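
    The sort example above can be read executably: approximate a list by the multiset of its items and compare. A small Python sketch, using sorted as a stand-in for a proved atom sort(t,s):

        from collections import Counter

        def collection(xs):
            # Approximate a list by the multiset of items it contains.
            return Counter(xs)

        def sort_preserves_collection(t):
            s = sorted(t)               # stands in for proving sort(t, s)
            return collection(t) == collection(s)

        assert sort_preserves_collection([3, 1, 2, 1])

        # If sorting removed duplicates, only the underlying *sets* would agree:
        t = [3, 1, 2, 1]
        s_dedup = sorted(set(t))
        assert collection(t) != collection(s_dedup)
        assert set(t) == set(s_dedup)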