93 research outputs found
Equivalence Checking a Floating-point Unit against a High-level C Model
Semiconductor companies have increasingly adopted a methodology that starts with a system-level design specification in C/C++/SystemC. This model is extensively simulated to ensure correct functionality and performance. Later, a Register Transfer Level (RTL) implementation is created in Verilog, either manually by a designer or automatically by a high-level synthesis tool. It is essential to check that the C and Verilog programs are consistent. In this paper, we present a two-step approach, embodied in two equivalence checking tools, VERIFOX and HW-CBMC, to validate designs at the software and RTL levels, respectively. VERIFOX is used for equivalence checking of an untimed software model in C against a high-level reference model in C. HW-CBMC verifies the equivalence of a Verilog RTL implementation against an untimed software model in C. To evaluate our tools, we applied them to a commercial floating-point arithmetic unit (FPU) from ARM and an open-source dual-path floating-point adder
Dynamic modeling and simulation of leukocyte integrin activation through an electronic design automation framework
Model development and analysis of biological systems is recognized as a key requirement for integrating in-vitro and in-vivo experimental data. In-silico simulation of a biochemical model allows one to test different experimental conditions, helping in the discovery of the dynamics that regulate the system. Several characteristics and issues of biological system modeling are shared with electronic system modeling, such as concurrency, reactivity, abstraction levels, and state space explosion during verification. This paper proposes a modeling and simulation framework for discrete event-based execution of biochemical systems based on SystemC. SystemC is the reference language in the electronic design automation (EDA) field for modeling and verifying complex systems at different abstraction levels. SystemC-based verification is the de-facto alternative to model checking when such a formal verification technique cannot deal with the state space complexity of the model. The paper presents how the framework has been applied to model the intracellular signalling network controlling integrin activation mediating leukocyte recruitment from the blood into the tissues, by handling the solution space complexity through different levels of simulation accuracy
On the order of an automorphism of a smooth hypersurface
In this paper we give an effective criterion as to when a positive integer q
is the order of an automorphism of a smooth hypersurface of dimension n and
degree d, for every d>2, n>1, (n,d)\neq (2,4), and \gcd(q,d)=\gcd(q,d-1)=1.
This allows us to give a complete criterion in the case where q=p is a prime
number. In particular, we show the following result: If X is a smooth
hypersurface of dimension n and degree d admitting an automorphism of prime
order p with p>(d-1)^n, then X is isomorphic to the Klein
hypersurface, n=2 or n+2 is prime, and p=\Phi_{n+2}(1-d), where \Phi_{n+2} is
the (n+2)-th cyclotomic polynomial. Finally, we provide some applications to
intermediate jacobians of Klein hypersurfaces
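As a worked instance of the formula (using illustrative values), take n = 3 and d = 3, the case of the Klein cubic threefold: here n + 2 = 5 is prime, and the criterion gives the prime order

```latex
p \;=\; \Phi_{5}(1-d) \;=\; \Phi_{5}(-2)
  \;=\; (-2)^4 + (-2)^3 + (-2)^2 + (-2) + 1 \;=\; 11,
```

which indeed satisfies p > (d-1)^n = 8 and is consistent with the classical fact that the Klein cubic threefold admits an automorphism of order 11.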
Generating test case chains for reactive systems
Testing of reactive systems is challenging because long input sequences are often needed to drive them into a state to test a desired feature. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the amount of time required for resetting is high. This article presents an approach to discovering a test case chain—a single software execution that covers a group of test goals and minimizes overall test execution time. Our technique targets the scenario in which test goals for the requirements are given as safety properties. We give conditions for the existence and minimality of a single test case chain and minimize the number of test case chains if a single test case chain is infeasible. We report experimental results with our ChainCover tool for C code generated from Simulink models and compare it to state-of-the-art test suite generators
Proving the Equivalence of Microstep and Macrostep Semantics
Recently, an embedding of the synchronous programming language Quartz (an Esterel variant) in the theorem prover HOL has been presented. This embedding is based on control flow predicates that refer to macrosteps of the programs. The original semantics of synchronous languages like Esterel is, however, normally given at the more detailed microstep level. This paper describes how a variant of the Esterel microstep semantics has been defined in HOL and how its equivalence to the control flow predicate semantics has been proved. Besides proving the equivalence of the micro- and macrostep semantics, the work presented here is also an important extension of the existing embedding: while reasoning at the microstep level is not necessary for code generation, it is sometimes advantageous for understanding programs, as some effects like schizophrenia or causality problems only become visible at the microstep level.
A theorem proving framework for the formal verification of Web Services Composition
We present a rigorous framework for the composition of Web Services within a
higher order logic theorem prover. Our approach is based on the
proofs-as-processes paradigm that enables inference rules of Classical Linear
Logic (CLL) to be translated into pi-calculus processes. In this setting,
composition is achieved by representing available web services as CLL
sentences, proving the requested composite service as a conjecture, and then
extracting the constructed pi-calculus term from the proof. Our framework,
implemented in HOL Light, not only uses an expressive logic that allows us to
incorporate multiple Web Services properties in the composition process, but
also provides guarantees of soundness and correctness for the composition. (In Proceedings WWV 2011, arXiv:1108.208)
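The composition mechanism rests on the cut rule of CLL: two services whose types share an intermediate resource compose into a single service, and the extracted pi-calculus term is the orchestration. Schematically, with illustrative service types:

```latex
% Two available services as one-sided CLL sequents (types illustrative):
%   S_1 : \vdash A^\perp \parr B   (consumes A, produces B)
%   S_2 : \vdash B^\perp \parr C   (consumes B, produces C)
% The cut rule eliminates the shared resource B:
\frac{\vdash \Gamma,\, B \qquad \vdash B^\perp,\, \Delta}
     {\vdash \Gamma,\, \Delta}\;(\mathrm{cut})
```

Soundness of the composition then follows from cut elimination: the composite proof, and hence the composite process, is correct by construction.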
Ethical implications of the use of whole genome methods in medical research
The use of genome-wide association studies (GWAS) in medical research and the increased ability to share data give a new twist to some of the perennial ethical issues associated with genomic research. GWAS create particular challenges because they produce fine, detailed, genotype information at high resolution, and the results of more focused studies can potentially be used to determine genetic variation for a wide range of conditions and traits. The information from a GWA scan is derived from DNA, which is a powerful personal identifier and can provide information not just on the individual but also on the individual's relatives, related groups, and populations. Furthermore, it creates large amounts of individual-specific digital information that is easy to share across international borders. This paper provides an overview of some of the key ethical issues around GWAS: consent, feedback of results, privacy, and the governance of research. Many of the questions that lie ahead of us in terms of the next generation sequencing methods will have been foreshadowed by GWAS and the debates around ethical and policy issues that these have created