40 research outputs found
A framework for efficient regression tests on database applications
Regression testing is an important software maintenance activity that ensures the integrity of software after modification. However, most methods and tools developed for software testing today do not work well for database applications; these tools only work well if applications are stateless or tests can be designed so that they do not alter the state. To execute tests for database applications efficiently, the challenge is to control the state of the database during testing and to order the test runs so that expensive database reset operations, which bring the database into the right state, are executed as seldom as possible. This work devises a regression testing framework for database applications in which test runs can be executed in parallel. The goal is to achieve linear speed-up and/or exploit the available resources as well as possible. This problem is challenging because parallel testing must consider both load balancing and controlling the state of the database. Experimental results show that test run execution can achieve linear speed-up using the proposed framework.
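The scheduling idea in the abstract above — order test runs so that expensive database resets happen as rarely as possible — can be sketched with a simple greedy heuristic. The types and functions below are illustrative assumptions, not the paper's actual framework:

```haskell
import Data.Function (on)
import Data.List (groupBy, sortOn)

-- Hypothetical test-run record: a name plus the database state it requires.
data TestRun = TestRun { runName :: String, requiredState :: String }
  deriving (Show, Eq)

-- Greedy heuristic (a sketch, not the paper's scheduler): sort test runs by
-- the state they need, so runs sharing a state execute back to back and the
-- expensive reset happens once per group rather than once per run.
scheduleRuns :: [TestRun] -> [[TestRun]]
scheduleRuns = groupBy ((==) `on` requiredState) . sortOn requiredState

-- Number of database resets the schedule incurs: one per group.
resetCount :: [TestRun] -> Int
resetCount = length . scheduleRuns
```

With four runs over two states this schedule needs only two resets instead of up to four; the paper's parallel setting additionally balances such groups across workers.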
Mathematizing C++ concurrency
Shared-memory concurrency in C and C++ is pervasive in systems programming, but has long been poorly defined. This motivated an ongoing shared effort by the standards committees to specify concurrent behaviour in the next versions of both languages. They aim to provide strong guarantees for race-free programs, together with new (but subtle) relaxed-memory atomic primitives for high-performance concurrent code. However, the current draft standards, while the result of careful deliberation, are not yet clear and rigorous definitions, and harbour substantial problems in their details.
In this paper we establish a mathematical (yet readable) semantics for C++ concurrency. We aim to capture the intent of the current ('Final Committee') Draft as closely as possible, but discuss changes that fix many of its problems. We prove that a proposed x86 implementation of the concurrency primitives is correct with respect to the x86-TSO model, and describe our Cppmem tool for exploring the semantics of examples, using code generated from our Isabelle/HOL definitions.
Having already motivated changes to the draft standard, this work will aid discussion of any further changes, provide a correctness condition for compilers, and give a much-needed basis for analysis and verification of concurrent C and C++ programs.
Isabelle/PIDE as Platform for Educational Tools
The Isabelle/PIDE platform addresses the question whether proof assistants of
the LCF family are suitable as a technological basis for educational tools. The
traditionally strong logical foundations of systems like HOL, Coq, or Isabelle
have so far been counter-balanced by somewhat inaccessible interaction via the
TTY (or minor variations like the well-known Proof General / Emacs interface).
Thus the fundamental question of math education tools with fully-formal
background theories has often been answered negatively due to accidental
weaknesses of existing proof engines.
The idea of "PIDE" (which means "Prover IDE") is to integrate existing
provers like Isabelle into a larger environment that facilitates access by
end-users and other tools. We use Scala to expose the proof engine in ML to the
JVM world, where many user-interfaces, editor frameworks, and educational tools
already exist. This shall ultimately lead to combined mathematical assistants,
where the logical engine is in the background, without obstructing the view on
applications of formal methods, formalized mathematics, and math education in
particular.

Comment: In Proceedings THedu'11, arXiv:1202.453
Total Haskell is Reasonable Coq
We would like to use the Coq proof assistant to mechanically verify
properties of Haskell programs. To that end, we present a tool, named
hs-to-coq, that translates total Haskell programs into Coq programs via a
shallow embedding. We apply our tool in three case studies -- a lawful Monad
instance, "Hutton's razor", and an existing data structure library -- and prove
their correctness. These examples show that this approach is viable: both that
hs-to-coq applies to existing Haskell code, and that the output it produces is
amenable to verification.

Comment: 13 pages plus references. Published at CPP'18, in Proceedings of the
7th ACM SIGPLAN International Conference on Certified Programs and Proofs
(CPP'18). ACM, New York, NY, USA, 201
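For concreteness, "Hutton's razor" — the minimal expression language of integer literals and addition named in the abstract — is a typical total Haskell program of the kind hs-to-coq translates. This is an illustrative sketch, not the paper's actual case-study code:

```haskell
-- "Hutton's razor": integer literals plus addition. The evaluator is total
-- (structural recursion, no partiality), the shape of Haskell program that
-- hs-to-coq can translate into Coq via a shallow embedding.
data Expr = Lit Int | Add Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add l r) = eval l + eval r
```

Because `eval` recurses only on structurally smaller subexpressions, its Coq counterpart passes Coq's termination checker without extra annotations, which is what makes the embedding "reasonable".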
Recursive Definitions of Monadic Functions
Using standard domain-theoretic fixed-points, we present an approach for
defining recursive functions that are formulated in monadic style. The method
works both in the simple option monad and the state-exception monad of
Isabelle/HOL's imperative programming extension, which yields a convenient
definition principle for imperative programs that were previously hard to
define.
For such monadic functions, the recursion equation can always be derived
without preconditions, even if the function is partial. The construction is
easy to automate, and convenient induction principles can be derived
automatically.

Comment: In Proceedings PAR 2010, arXiv:1012.455
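The option monad mentioned above corresponds to Haskell's Maybe. The following sketch (Haskell, not Isabelle, and not the paper's construction) shows the style of monadic recursive function such a definition principle targets; note that plain Haskell needs an explicit fuel parameter to bound the recursion, whereas the point of the domain-theoretic construction is that no such precondition is required:

```haskell
-- A recursive function in the option monad (Maybe playing the role of
-- Isabelle/HOL's option monad): Nothing models a failed computation, here
-- running out of fuel. Fuel is an artifact of this Haskell illustration.
collatzSteps :: Int -> Int -> Maybe Int
collatzSteps fuel n
  | fuel <= 0 = Nothing                                   -- out of fuel
  | n == 1    = Just 0                                    -- reached 1
  | even n    = succ <$> collatzSteps (fuel - 1) (n `div` 2)
  | otherwise = succ <$> collatzSteps (fuel - 1) (3 * n + 1)
```

For example, `collatzSteps 100 6` yields `Just 8` (the chain 6, 3, 10, 5, 16, 8, 4, 2, 1), while exhausting the fuel yields `Nothing`.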
Automated Generation of User Guidance by Combining Computation and Deduction
Herewith, a fairly old concept is published for the first time under the name
"Lucas Interpretation". It has been implemented in a prototype, which has
proven useful in educational practice and has gained academic relevance with
an emerging generation of educational mathematics assistants (EMA) based on
Computer Theorem Proving (CTP).
Automated Theorem Proving (ATP), i.e. deduction, is the most reliable
technology used to check user input. However, ATP is inherently weak at
automatically generating solutions for arbitrary problems in applied
mathematics. This weakness is crucial for EMAs: when ATP flags user input as
incorrect and the learner gets stuck, the system should be able to suggest
possible next steps.
The key idea of Lucas Interpretation is to compute the steps of a calculation
following a program written in a novel CTP-based programming language, i.e.
computation provides the next steps. User guidance is generated by combining
deduction and computation: the latter is performed by a specific language
interpreter, which works like a debugger and hands over control to the learner
at breakpoints, i.e. at the tactics generating the steps of calculation. The
interpreter also builds up logical contexts providing ATP with the data
required for checking user input, thus combining computation and deduction.
The paper describes the concepts underlying Lucas Interpretation so that open
questions can adequately be addressed, and prerequisites for further work are
provided.

Comment: In Proceedings THedu'11, arXiv:1202.453
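The interpreter-as-debugger idea above can be sketched as follows. Everything here — the names, the toy calculation, the checker — is a hypothetical illustration of the control flow, not code from the prototype:

```haskell
-- Toy step interpreter in the spirit of Lucas Interpretation: rather than
-- running a calculation to completion, it yields one step at a time, so a
-- front end can stop at each breakpoint, let the learner propose the next
-- step, and use the checker (a stand-in for ATP) to accept or reject it.
data Step = Step { stepLabel :: String, stepValue :: Int }

-- Step sequence for a simple calculation: repeatedly halve an even number.
-- Each list element is a breakpoint where control returns to the learner.
halvingSteps :: Int -> [Step]
halvingSteps n
  | even n && n > 1 = Step "halve" n : halvingSteps (n `div` 2)
  | otherwise       = [Step "done" n]

-- Deduction side (stand-in for ATP): check a learner's proposed next value.
checkNext :: Int -> Int -> Bool
checkNext current proposed = even current && proposed == current `div` 2
```

Lazily producing the step list mirrors handing over control at each breakpoint: the front end forces one step, consults the learner, and only then continues.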
IL-23 stabilizes an effector Treg cell program in the tumor microenvironment
Interleukin-23 (IL-23) is a proinflammatory cytokine mainly produced by myeloid cells that promotes tumor growth in various preclinical cancer models and correlates with adverse outcomes. However, how IL-23 fuels tumor growth has remained unclear. Here, we found tumor-associated macrophages to be the main source of IL-23 in mouse and human tumor microenvironments. Among IL-23-sensing cells, we identified a subset of tumor-infiltrating regulatory T (T-reg) cells that display a highly suppressive phenotype across mouse and human tumors. The use of three preclinical models of solid cancer in combination with genetic ablation of Il23r in T-reg cells revealed that they are responsible for the tumor-promoting effect of IL-23. Mechanistically, we found that IL-23 sensing represents a crucial signal driving the maintenance and stabilization of effector T-reg cells involving the transcription factor Foxp3. Our data support that targeting the IL-23/IL-23R axis in cancer may represent a means of eliciting antitumor immunity.
Guidelines for the use of flow cytometry and cell sorting in immunological studies (third edition)
The third edition of Flow Cytometry Guidelines provides the key aspects to consider when performing flow cytometry experiments and includes comprehensive sections describing phenotypes and functional assays of all major human and murine immune cell subsets. Notably, the Guidelines contain helpful tables highlighting phenotypes and key differences between human and murine cells. Another useful feature of this edition is the flow cytometry analysis of clinical samples, with examples of flow cytometry applications in the context of autoimmune diseases, cancers, as well as acute and chronic infectious diseases. Furthermore, there are sections detailing tips, tricks and pitfalls to avoid. All sections are written and peer-reviewed by leading flow cytometry experts and immunologists, making this edition an essential and state-of-the-art handbook for basic and clinical researchers.

DFG, 389687267, Compartmentalization, maintenance and reactivation of human memory T lymphocytes from bone marrow and peripheral blood
DFG, 80750187, SFB 841: Liver inflammation: infection, immune regulation and consequences
EC/H2020/800924/EU/International Cancer Research Fellowships - 2/iCARE-2
DFG, 252623821, The role of follicular T helper cells in T helper cell differentiation, function and plasticity
DFG, 390873048, EXC 2151: ImmunoSensation2 - the immune sensory system