Proof-theoretic Analysis of Rationality for Strategic Games with Arbitrary Strategy Sets
In the context of strategic games, we provide an axiomatic proof of the
statement Common knowledge of rationality implies that the players will choose
only strategies that survive the iterated elimination of strictly dominated
strategies. Rationality here means playing only strategies one believes to be
best responses. This involves looking at two formal languages. One is
first-order, and is used to formalise optimality conditions, like avoiding
strictly dominated strategies, or playing a best response. The other is a modal
fixpoint language with expressions for optimality, rationality and belief.
Fixpoints are used to form expressions for common belief and for iterated
elimination of non-optimal strategies.
Comment: 16 pages, Proc. 11th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA XI). To appear.
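The iterated elimination procedure at the heart of this result can be made concrete. The following sketch (the game, payoffs, and function names are illustrative, not the paper's formalism) removes strictly dominated pure strategies for two players until a fixpoint is reached:

```python
# Iterated elimination of strictly dominated strategies (IESDS) for a
# two-player strategic game -- a minimal illustrative sketch.

def strictly_dominated(payoff, rows, cols, r):
    """Row r is strictly dominated if some other surviving row gives a
    strictly higher payoff against every surviving column strategy."""
    return any(all(payoff[(r2, c)] > payoff[(r, c)] for c in cols)
               for r2 in rows if r2 != r)

def iesds(p1, p2, rows, cols):
    rows, cols = set(rows), set(cols)
    changed = True
    while changed:                      # iterate to the fixpoint
        changed = False
        for r in list(rows):
            if len(rows) > 1 and strictly_dominated(p1, rows, cols, r):
                rows.discard(r); changed = True
        # For player 2, dominance compares columns against surviving rows.
        for c in list(cols):
            if len(cols) > 1 and any(
                    all(p2[(r, c2)] > p2[(r, c)] for r in rows)
                    for c2 in cols if c2 != c):
                cols.discard(c); changed = True
    return rows, cols

# Prisoner's dilemma: Defect strictly dominates Cooperate for both players.
p1 = {('C','C'): 3, ('C','D'): 0, ('D','C'): 5, ('D','D'): 1}
p2 = {('C','C'): 3, ('C','D'): 5, ('D','C'): 0, ('D','D'): 1}
print(iesds(p1, p2, ['C','D'], ['C','D']))  # ({'D'}, {'D'})
```

The surviving strategy profile is exactly the one the quoted statement predicts rational players will choose.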
Abstract verification and debugging of constraint logic programs
The technique of Abstract Interpretation [13] has allowed the development of sophisticated program analyses which are provably correct and practical. The semantic approximations produced by such analyses have traditionally been applied to optimization during program compilation. However, novel and promising applications of semantic approximations have recently been proposed in the more general context of program verification and debugging [3], [10], [7].
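The "provably correct approximation" idea can be illustrated on the classic sign domain. This is a generic textbook-style sketch, not an analysis from the cited work; the domain values and helper names are illustrative:

```python
# A toy abstract interpretation over the sign domain {NEG, ZERO, POS, TOP}:
# a sound approximation of concrete multiplication.
NEG, ZERO, POS, TOP = 'neg', 'zero', 'pos', 'top'

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_mul(a, b):
    """Abstract multiplication, sound w.r.t. concrete `*`."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Soundness on samples: abstracting the concrete result agrees with
# computing abstractly on the abstracted operands.
for x in (-3, 0, 7):
    for y in (-2, 0, 5):
        assert alpha(x * y) == abs_mul(alpha(x), alpha(y))
```

An optimizer or verifier can then reason about all runs at once: if both operands are abstracted to POS, the product is provably positive without executing the program.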
Using global analysis, partial specifications, and an extensible assertion language for program validation and debugging
We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors is first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy,
and computational cost, and can treat modules separately, performing incremental analysis.
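The fallback described above, where assertions the analyzer cannot discharge statically are translated into run-time tests, can be sketched in Python with a checking wrapper. The decorator and property names are illustrative, not the tool's actual assertion language:

```python
# Assertions not provable at compile time become run-time tests:
# a pre/postcondition wrapper around a call, in the spirit of the framework.
def runtime_check(pre, post):
    def wrap(fn):
        def checked(*args):
            assert pre(*args), f'precondition of {fn.__name__} violated'
            out = fn(*args)
            assert post(out), f'postcondition of {fn.__name__} violated'
            return out
        return checked
    return wrap

@runtime_check(pre=lambda xs: all(isinstance(x, int) for x in xs),
               post=lambda out: isinstance(out, int))
def total(xs):
    """A user predicate whose type/mode assertion is checked dynamically."""
    return sum(xs)

print(total([1, 2, 3]))  # 6; total(['a']) would fail the precondition
```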
A Denotational Semantics for First-Order Logic
In Apt and Bezem [AB99] (see cs.LO/9811017) we provided a computational
interpretation of first-order formulas over arbitrary interpretations. Here we
complement this work by introducing a denotational semantics for first-order
logic. Additionally, by allowing the assignment of a non-ground term to a
variable, we introduce logical variables into this framework.
The semantics combines a number of well-known ideas from the areas of
semantics of imperative programming languages and logic programming. In the
resulting computational view conjunction corresponds to sequential composition,
disjunction to ``don't know'' nondeterminism, existential quantification to
declaration of a local variable, and negation to the ``negation as finite
failure'' rule. The soundness result shows correctness of the semantics with
respect to the notion of truth. The proof resembles in some respects the
soundness proof for SLDNF-resolution.
Comment: 17 pages. Invited talk at the Computational Logic Conference (CL 2000). To appear in Springer-Verlag Lecture Notes in Computer Science.
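The computational reading of the connectives described above can be run directly. The following sketch interprets formulas over a small finite domain, with conjunction as sequential composition of environment transformers and disjunction as "don't know" nondeterminism; all names and the domain are illustrative:

```python
# A miniature executable reading of the semantics: each formula maps an
# environment to a stream of successful environments.
DOMAIN = [0, 1, 2]

def conj(f, g):                    # sequential composition
    return lambda env: (e2 for e1 in f(env) for e2 in g(e1))

def disj(f, g):                    # "don't know" nondeterminism
    return lambda env: (e for branch in (f, g) for e in branch(env))

def exists(x, f):                  # declaration of a local variable
    def run(env):
        for v in DOMAIN:
            yield from f({**env, x: v})
    return run

def atom(pred):                    # succeed (keep env) iff pred holds
    return lambda env: iter([env]) if pred(env) else iter([])

# Evaluate: exists x. (x > 0 and (x = 1 or x = 2))
formula = exists('x', conj(atom(lambda e: e['x'] > 0),
                           disj(atom(lambda e: e['x'] == 1),
                                atom(lambda e: e['x'] == 2))))
print([e['x'] for e in formula({})])  # [1, 2]
```

Negation as finite failure would be the one missing piece: succeed exactly when the stream of sub-successes is exhausted without producing an answer.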
Automatically Discovering Hidden Transformation Chaining Constraints
Model transformations operate on models conforming to precisely defined
metamodels. Consequently, it often seems relatively easy to chain them: the
output of a transformation may be given as input to a second one if metamodels
match. However, this simple rule has some obvious limitations. For instance, a
transformation may only use a subset of a metamodel. Therefore, chaining
transformations appropriately requires more information. We present here an
approach that automatically discovers more detailed information about actual
chaining constraints by statically analyzing transformations. The objective is
to provide developers who decide to chain transformations with more data on
which to base their choices. This approach has been successfully applied to the
case of a library of endogenous transformations. They all have the same source
and target metamodel but have some hidden chaining constraints. In such a case,
the simple metamodel matching rule given above does not provide any useful
information.
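The footprint intuition behind this approach can be sketched as follows: since a transformation may read only a subset of its source metamodel and write only a subset of its target, chainability can be refined by checking that the first transformation produces everything the second actually reads. The data structures and example footprints below are illustrative, not the paper's actual analysis:

```python
# Refining "metamodels match" with read/write footprints.
from dataclasses import dataclass, field

@dataclass
class Transformation:
    name: str
    source_mm: str
    target_mm: str
    reads: set = field(default_factory=set)    # metamodel elements read
    writes: set = field(default_factory=set)   # metamodel elements produced

def chainable(t1, t2):
    """Metamodel match is necessary; footprint coverage sharpens it."""
    if t1.target_mm != t2.source_mm:
        return False
    return t2.reads <= t1.writes

# Two endogenous transformations over the same metamodel "MM":
flatten = Transformation('Flatten', 'MM', 'MM',
                         reads={'Class', 'Inherit'}, writes={'Class'})
rename = Transformation('Rename', 'MM', 'MM',
                        reads={'Class', 'Inherit'},
                        writes={'Class', 'Inherit'})
print(chainable(rename, flatten))  # True:  Rename writes all Flatten reads
print(chainable(flatten, rename))  # False: Flatten drops 'Inherit'
```

For the endogenous library mentioned above, the naive rule would declare every pair chainable; the footprint check is what exposes the hidden constraints.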
Parameterized Model-Checking for Timed-Systems with Conjunctive Guards (Extended Version)
In this work we extend Emerson and Kahlon's cutoff theorems for process
skeletons with conjunctive guards to Parameterized Networks of Timed Automata,
i.e. systems obtained by an \emph{a priori} unknown number of Timed Automata
instantiated from a finite set of Timed Automata templates.
In this way we aim to provide a tool to universally verify software systems
where an unknown number of software components (i.e. processes) interact with
continuous time temporal constraints. It is often the case, indeed, that
distributed algorithms show a heterogeneous nature, combining dynamic aspects
with real-time aspects. In the paper we will also show how to model check a
protocol that uses special variables storing identifiers of the participating
processes (i.e. PIDs) in Timed Automata with conjunctive guards. This is
non-trivial, since solutions to the parameterized verification problem often
rely on the processes being symmetric, i.e. indistinguishable. On the other
hand, many popular distributed algorithms make use of PIDs, so those
solutions do not directly apply to them.
The Impatient May Use Limited Optimism to Minimize Regret
Discounted-sum games provide a formal model for the study of reinforcement
learning, where the agent is enticed to get rewards early since later rewards
are discounted. When the agent interacts with the environment, she may regret
her actions, realizing that a previous choice was suboptimal given the behavior
of the environment. The main contribution of this paper is a PSPACE algorithm
for computing the minimum possible regret of a given game. To this end, several
results of independent interest are shown. (1) We identify a class of
regret-minimizing and admissible strategies that first assume that the
environment is collaborating, then assume it is adversarial---the precise
timing of the switch is key here. (2) Disregarding the computational cost of
numerical analysis, we provide an NP algorithm that checks whether the regret
entailed by a given time-switching strategy exceeds a given value. (3) We show
that determining whether a strategy minimizes regret is decidable in PSPACE.
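The notion of regret used here can be illustrated in its simplest, one-shot form: once the environment's choice is revealed, the agent compares her payoff with the best she could have obtained in hindsight, and a regret-minimizing agent picks the action whose worst-case gap is smallest. A minimal numeric sketch (the game is made up; this is not the paper's PSPACE algorithm):

```python
# One-shot regret minimization in a small matrix game.
payoff = {  # payoff[agent_action][env_action]
    'a': {'x': 4, 'y': 0},
    'b': {'x': 3, 'y': 2},
}

def regret(action):
    """Worst-case regret of committing to `action` before seeing env."""
    return max(max(payoff[alt][env] for alt in payoff) - payoff[action][env]
               for env in ('x', 'y'))

best = min(payoff, key=regret)
print({a: regret(a) for a in payoff}, '-> minimize with', best)
# {'a': 2, 'b': 1} -> minimize with 'b'
```

Note that 'b' is optimal for regret even though 'a' has the highest possible payoff; in the game setting of the paper this trade-off plays out over infinite discounted plays rather than a single round.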
The Effectiveness of Niacinamide as a Lightening Agent
Parwati, Tuntun. 2021. The Effectiveness of Niacinamide as a Lightening Agent. Final project, Undergraduate Pharmacy Study Program, Faculty of Medicine, Universitas Brawijaya. Supervisors: (1) apt. Oktavia Eka Puspita, S.Farm., M.Sc. (2) apt. Tamara Gusti Ebtavanny, S.Farm., M.Farm.
Hyperpigmentation is a condition of excess pigment in the skin, one cause of which is exposure to ultraviolet (UV) light, so cosmetics containing a lightening agent are needed to treat it. This systematic literature review aims to assess the effectiveness of cream preparations containing niacinamide as a lightening agent in reducing facial hyperpigmentation and improving melasma, as judged by a reduction in pigment amount, melanin index, and erythema measures. A comprehensive literature search was performed with the Publish or Perish (PoP) application across several databases (Google Scholar, Scopus, and Crossref), with article selection following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) protocol. Across the seven articles identified, participants given a combination niacinamide cream showed reduced melasma severity, measured using the Individual Typology Angle (ITA) and a chromameter. This systematic literature review concludes that cream preparations containing niacinamide are effective in reducing pigment amount, melanin index, and erythema, depending on the niacinamide concentration and the combination ingredients used.
Keywords: hyperpigmentation, cream, niacinamide, lightening agent, melasma improvement
On completeness of logic programs
Program correctness (in imperative and functional programming) splits in
logic programming into correctness and completeness. Completeness means that a
program produces all the answers required by its specification. Little work has
been devoted to reasoning about completeness. This paper presents a few
sufficient conditions for completeness of definite programs. We also study
preserving completeness under some cases of pruning of SLD-trees (e.g. due to
using the cut).
We treat logic programming as a declarative paradigm, abstracting from any
operational semantics as far as possible. We argue that the proposed methods
are simple enough to be applied, possibly at an informal level, in practical
Prolog programming. We point out the importance of approximate specifications.
Comment: 20 pages.
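The distinction between correctness and completeness, and the way pruning can destroy the latter, can be shown in miniature. Here a Python generator stands in for SLD-resolution over a definite program, and the cut is modelled by committing to the first answer; the example is illustrative only:

```python
# Completeness vs pruning: a program is complete w.r.t. a specification
# if it produces every answer the specification requires.

def member(xs):
    """All answers to member(X, xs) -- the unpruned definite program."""
    for x in xs:
        yield x

def member_with_cut(xs):
    """The same program pruned by a cut: commits to the first answer."""
    for x in xs:
        yield x
        return

spec = {1, 2, 3}                     # answers required for member(X,[1,2,3])
print(set(member([1, 2, 3])) == spec)           # True: complete
print(set(member_with_cut([1, 2, 3])) == spec)  # False: pruning lost answers
```

Each answer produced is still correct after the cut; what is lost is completeness, which is exactly why the two notions must be argued separately in logic programming.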