Existence of Monetary Steady States in a Matching Model: Indivisible Money
Existence of a monetary steady state is established for a random matching model with divisible goods, indivisible money, and take-it-or-leave-it offers by consumers. There is no restriction on individual money holdings. The background environment is that in papers by Shi and by Trejos and Wright. The monetary steady state shown to exist has nice properties: the value function, defined on money holdings, is increasing and strictly concave, and the measure over money holdings has full support.
Propositional Encoding of Constraints over Tree-Shaped Data
We present a functional programming language for specifying constraints over
tree-shaped data. The language allows for Haskell-like algebraic data types and
pattern matching. Our constraint compiler CO4 translates these programs into
satisfiability problems in propositional logic. We present an application from
the area of automated analysis of (non-)termination of rewrite systems.
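The compilation idea can be illustrated with a minimal sketch. This is not CO4's actual encoding; it only shows, under simplified assumptions, how a constraint over a fixed-shape tree of booleans can be compiled to propositional clauses. The function names and the brute-force "solver" are hypothetical stand-ins for a real SAT back end.

```python
from itertools import product

# Hypothetical sketch: fix a finite shape for the unknown tree, introduce one
# propositional variable per boolean leaf, and compile the constraint
# "all leaves carry the same boolean" into CNF clauses.

def compile_all_leaves_equal(leaf_vars):
    """Encode v_i <-> v_{i+1} for consecutive leaves as two clauses each.
    Literals follow DIMACS convention: positive int = variable, negative = negation."""
    clauses = []
    for a, b in zip(leaf_vars, leaf_vars[1:]):
        clauses.append([a, -b])   # b -> a
        clauses.append([-a, b])   # a -> b
    return clauses

def satisfiable(clauses, n_vars):
    """Brute-force stand-in for a SAT solver (fine for tiny encodings)."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

cnf = compile_all_leaves_equal([1, 2, 3, 4])
print(satisfiable(cnf, 4))                  # True: all-False or all-True works
print(satisfiable(cnf + [[1], [-4]], 4))    # False: leaves forced unequal
```

In the real system the shape is derived from the algebraic data type and pattern matches, and the resulting CNF is handed to an off-the-shelf SAT solver.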
Computing FO-Rewritings in EL in Practice: from Atomic to Conjunctive Queries
A prominent approach to implementing ontology-mediated queries (OMQs) is to
rewrite them into first-order queries, which are then executed using a
conventional SQL database system. We consider the case where the ontology is formulated in
the description logic EL and the actual query is a conjunctive query and show
that rewritings of such OMQs can be efficiently computed in practice, in a
sound and complete way. Our approach combines a reduction with a decomposed
backwards chaining algorithm for OMQs that are based on the simpler atomic
queries, also illuminating the relationship between first-order rewritings of
OMQs based on conjunctive and on atomic queries. Experiments with real-world
ontologies show promising results.
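The backwards-chaining idea for the atomic-query case can be sketched as follows. This is a hypothetical simplification, not the paper's algorithm: the ontology is restricted to plain concept inclusions A ⊑ B, under which the first-order rewriting of an atomic query B(x) is simply the union of all concepts from which B is derivable.

```python
# Hypothetical sketch: saturate backwards over concept inclusions (A, B),
# each meaning A is subsumed by B, starting from the queried concept.

def rewrite_atomic(query, inclusions):
    """Return every concept C with C entailing `query` via the inclusions;
    the disjunction of these atoms is the UCQ rewriting of query(x)."""
    result = {query}
    changed = True
    while changed:
        changed = False
        for a, b in inclusions:
            if b in result and a not in result:
                result.add(a)
                changed = True
    return result

tbox = {("GradStudent", "Student"), ("Student", "Person"), ("Prof", "Person")}
print(sorted(rewrite_atomic("Person", tbox)))
# ['GradStudent', 'Person', 'Prof', 'Student']
```

Full EL additionally has existential restrictions and conjunctions on the left-hand sides, which is what makes the decomposition into atomic-query rewritings non-trivial.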
Guided Unfoldings for Finding Loops in Standard Term Rewriting
In this paper, we reconsider the unfolding-based technique that we have
introduced previously for detecting loops in standard term rewriting. We
improve it by guiding the unfolding process, using distinguished positions in
the rewrite rules. This results in a depth-first computation of the unfoldings,
whereas the original technique was breadth-first. We have implemented this new
approach in our tool NTI and compared it to the previous one on a set of
rewrite systems. The results we obtain are promising (better times, more
successful proofs).

Comment: Pre-proceedings paper presented at the 28th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2018), Frankfurt
am Main, Germany, 4-6 September 2018 (arXiv:1808.03326)
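The underlying notion of a loop can be illustrated on string rewriting, a special case of term rewriting. The sketch below is a naive depth-first unfolding, not the paper's guided technique: a loop s →+ u·s·v exists when some unfolding of a left-hand side contains that left-hand side as a substring, which proves non-termination.

```python
# Hypothetical sketch: depth-first unfolding for loop detection in a string
# rewriting system given as (lhs, rhs) pairs.

def rewrites(s, rules):
    """All one-step rewrites of s."""
    out = []
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.append(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def find_loop(rules, max_depth=5):
    """Unfold each lhs depth-first; report (lhs, witness) if a descendant
    of lhs contains lhs again, i.e. the system admits a loop."""
    def dfs(start, s, depth):
        if depth > 0 and start in s:
            return s
        if depth == max_depth:
            return None
        for t in rewrites(s, rules):
            found = dfs(start, t, depth + 1)
            if found:
                return found
        return None
    for lhs, _ in rules:
        witness = dfs(lhs, lhs, 0)
        if witness:
            return (lhs, witness)
    return None

print(find_loop([("a", "ab")]))   # ('a', 'ab'): a rewrites to ab, which contains a
print(find_loop([("ab", "b")]))   # None: this system terminates
```

The guidance discussed in the paper serves to prune exactly this kind of search, which otherwise explodes with depth.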
Insulin Glargine in the Intensive Care Unit: A Model-Based Clinical Trial Design
Online 4 Oct 2012

Introduction: Current successful AGC (Accurate Glycemic Control) protocols require extra clinical effort and are impractical in less acute wards, where patients are still susceptible to stress-induced hyperglycemia. The long-acting insulin Glargine has the potential to be used in a low-effort controller. However, potential variability in its efficacy and length of action prevents direct in-hospital use in an AGC framework for less acute wards.
Method: Clinically validated virtual trials based on data from stable ICU patients from the SPRINT cohort who would be transferred to such an approach are used to develop a 24-hour AGC protocol robust to different Glargine potencies (1.0x, 1.5x and 2.0x regular insulin) and initial dose sizes (dose = total insulin over prior 12, 18 and 24 hours). Glycemic control in this period is provided only by varying nutritional inputs. Performance is assessed as %BG in the 4.0-8.0mmol/L band and safety by %BG<4.0mmol/L.
Results: The final protocol consisted of a Glargine bolus size equal to the insulin given over the previous 18 hours. Compared to SPRINT there was a 6.9% - 9.5% absolute decrease in mild hypoglycemia (%BG<4.0mmol/L) and up to a 6.2% increase in %BG between 4.0 and 8.0mmol/L. When the efficacy is known (1.5x assumed) there were reductions of 27% in BG measurements, 59% in insulin boluses, 67% in nutrition changes, and 6.3% (absolute) in mild hypoglycemia.
Conclusion: A robust 24-48 hour clinical trial has been designed to safely investigate the efficacy and kinetics of Glargine as a first step towards developing a Glargine-based protocol for less acute wards. Ensuring robustness to variability in Glargine efficacy significantly affects the performance and safety that can be obtained.
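The two outcome measures used throughout this abstract can be computed straightforwardly from a blood-glucose time series; the sketch below is an illustrative reconstruction of the metric definitions, not the study's analysis code.

```python
# Hypothetical sketch of the performance (%BG in band) and safety (%BG < lo)
# metrics, for blood-glucose measurements in mmol/L.

def glycemic_metrics(bg, lo=4.0, hi=8.0):
    """Return (%BG within [lo, hi], %BG below lo) over the measurements."""
    n = len(bg)
    in_band = 100.0 * sum(lo <= v <= hi for v in bg) / n
    hypo = 100.0 * sum(v < lo for v in bg) / n
    return in_band, hypo

bg_series = [5.2, 6.8, 3.9, 7.4, 8.6, 4.1, 5.0, 6.2]  # made-up example data
in_band, hypo = glycemic_metrics(bg_series)
print(round(in_band, 1), round(hypo, 1))  # 75.0 12.5
```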
Marion A. Kaplan, Deborah Dash Moore (eds), Gender and Jewish History
Through their contributions to this book, twenty-three scholars pay tribute to their colleague, who was often also their teacher, the Jewish historian Paula Hyman. As Richard I. Cohen recalls in his foreword, P. Hyman was a historian of the Jews of France, and the publications she devoted to Jewish life in contemporary France were a considerable contribution to this field of research. However, the volume in question here celebrates P. Hyman for her ..
Towards Correctness of Program Transformations Through Unification and Critical Pair Computation
Correctness of program transformations in extended lambda calculi with a
contextual semantics is usually based on reasoning about the operational
semantics which is a rewrite semantics. A successful approach to proving
correctness is the combination of a context lemma with the computation of
overlaps between program transformations and the reduction rules, and then of
so-called complete sets of diagrams. The method is similar to the computation
of critical pairs for the completion of term rewriting systems. We explore
cases where the computation of these overlaps can be done in a first order way
by variants of critical pair computation that use unification algorithms. As a
case study we apply the method to a lambda calculus with recursive
let-expressions and describe an effective unification algorithm to determine
all overlaps of a set of transformations with all reduction rules. The
unification algorithm employs many-sorted terms, the equational theory of
left-commutativity modelling multi-sets, context variables of different kinds
and a mechanism for compactly representing binding chains in recursive
let-expressions.

Comment: In Proceedings UNIF 2010, arXiv:1012.455
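The core subroutine behind overlap and critical-pair computation is first-order syntactic unification. The sketch below is a plain Robinson-style unifier, far simpler than the many-sorted, equational algorithm the paper describes; variables are strings and applications are (symbol, argument-list) tuples, a representation chosen here for illustration.

```python
# Hypothetical sketch of syntactic unification with occurs check.

def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1])

def unify(s, t, subst=None):
    """Return a substitution unifying s and t, or None if none exists."""
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s[1]) != len(t[1]):
        return None
    for a, b in zip(s[1], t[1]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# f(x, g(y)) unified with f(g(z), x) yields x -> g(z), y -> z.
print(unify(("f", ["x", ("g", ["y"])]), ("f", [("g", ["z"]), "x"])))
```

An overlap between two rules is then found by unifying the left-hand side of one rule with a non-variable subterm of the other.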
Formal Reasoning about Efficient Data Structures: A Case Study in ACL2
We describe in this paper the formal verification, using the
ACL2 system, of a syntactic unification algorithm where terms are represented
as directed acyclic graphs (dags) and these graphs are stored
in a single-threaded object (stobj). The use of stobjs allows destructive
operations on data (thus improving the performance of the algorithm),
while maintaining the applicative semantics of ACL2. We intend to show
how ACL2 provides an environment where execution of algorithms with
efficient data structures and formal reasoning about them can be carried
out.

Ministerio de Ciencia y Tecnología TIC2000-1368-C03-0
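The dag representation can be sketched as follows. This is an illustrative analogue in Python, not ACL2: every term is a row in one flat node store (loosely playing the role of the single-threaded object), and structurally equal subterms are stored exactly once, so sharing is automatic.

```python
# Hypothetical sketch of a hash-consed dag store for first-order terms.

class DagStore:
    def __init__(self):
        self.nodes = []   # index -> (symbol, tuple of child indices)
        self.index = {}   # (symbol, child indices) -> index

    def mk(self, symbol, *children):
        """Return the index of the node, creating it only if it is new."""
        key = (symbol, children)
        if key not in self.index:
            self.index[key] = len(self.nodes)
            self.nodes.append(key)
        return self.index[key]

store = DagStore()
x  = store.mk("x")
g1 = store.mk("g", x)
g2 = store.mk("g", x)        # structurally equal -> same node index
f  = store.mk("f", g1, g2)   # f(g(x), g(x)) shares the g(x) subterm
print(g1 == g2, len(store.nodes))   # True 3
```

In ACL2 the same effect is achieved destructively inside the stobj, which is what gives the verified algorithm its efficiency while the logic still sees applicative semantics.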
Stream Productivity by Outermost Termination
Streams are infinite sequences over a given data type. A stream specification
is a set of equations intended to define a stream. A core property is
productivity: unfolding the equations produces the intended stream in the
limit. In this paper we show that productivity is equivalent to termination
with respect to the balanced outermost strategy of a TRS obtained by adding an
additional rule. For specifications not involving branching symbols,
balancedness is obtained for free, so that tools for proving outermost
termination can be used to prove productivity fully automatically.
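Productivity itself can be illustrated by bounded unfolding. The sketch below is a hypothetical model, not the paper's transformation: a stream specification is a thunk that either yields a (head, tail-thunk) pair or fails to produce a head, and a productive specification keeps yielding elements under unfolding.

```python
# Hypothetical sketch: take n elements from a stream specification by
# repeatedly unfolding its defining equation.

def take(spec, n):
    """Unfold the spec n times; None signals a non-productive specification."""
    out, thunk = [], spec
    for _ in range(n):
        cell = thunk()       # one unfolding step of the equations
        if cell is None:     # no head produced: the spec is stuck
            return None
        head, thunk = cell
        out.append(head)
    return out

# zeros = 0 : zeros   -- productive: every unfolding emits a head
def zeros():
    return (0, zeros)

# stuck = stuck       -- unfolds forever without emitting a head
def stuck():
    return None

print(take(zeros, 5))   # [0, 0, 0, 0, 0]
print(take(stuck, 1))   # None
```

The paper's contribution is to reduce exactly this semantic property to outermost termination of a transformed TRS, so no such fuel-bounded testing is needed.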
- …