61 research outputs found

    Towards the automated modelling and formal verification of analog designs

    The verification of analog circuits remains a very time-consuming and expensive part of the design process. Complete simulation of the state space is not possible; the designer draws a line when enough sets of inputs and outputs are deemed to have been covered, at which point the circuit is considered "verified". Unfortunately, bugs may still remain, which is unacceptable for safety-critical applications; moreover, a bug in the design can lead to costly recalls and a loss of revenue. Formal methods, which use mathematical logic to prove the correctness of a design, have been developed. However, the available techniques for the formal verification of analog circuits are plagued by inaccuracies and a high level of user effort and interaction. In this thesis we propose a complete methodology for the modelling and formal verification of analog circuits. Bond graphs, which are based on the flow of power, are used to automatically extract the circuit's system of ordinary differential equations. Two formal verification methods, one based on automated theorem proving with MetiTarski and the other on predicate-abstraction-based model checking with HybridSal, are then used to verify functional properties on the extracted models. The proposed methodology is mechanical in nature and can be made completely automated. We apply this modelling and verification methodology to a set of analog designs that exhibit complex non-linear behaviour.
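    To make the extraction step concrete, here is a minimal sketch (not taken from the thesis; the series RC circuit, the symbols and the use of SymPy are illustrative assumptions) of the single ordinary differential equation such a flow produces for a series RC circuit, checked symbolically:

    import sympy as sp

    t = sp.symbols('t')
    R, C, Vin = sp.symbols('R C V_in', positive=True)
    q = sp.Function('q')                     # capacitor charge, the single state variable

    # Effort balance around the series RC loop: the source voltage V_in equals
    # the resistor drop R*dq/dt plus the capacitor voltage q/C.
    ode = sp.Eq(Vin, R * q(t).diff(t) + q(t) / C)

    # Solve for the state derivative: dq/dt = (V_in - q/C) / R
    dqdt = sp.solve(ode, q(t).diff(t))[0]
    print(sp.simplify(dqdt))                 # (C*V_in - q(t))/(C*R)

    # Closed-form charging curve with q(0) = 0, a reference trace for verification
    sol = sp.dsolve(ode, q(t), ics={q(0): 0})
    print(sol.rhs)                           # C*V_in - C*V_in*exp(-t/(C*R)), i.e. exponential charging toward C*V_in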

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Toward Accessible Multilevel Modeling in Systems Biology: A Rule-based Language Concept

    Promoted by advanced experimental techniques for obtaining high-quality data and by the steadily accumulating knowledge about the complexity of life, modeling biological systems at multiple interrelated levels of organization has recently attracted increasing attention. Current approaches for modeling multilevel systems typically lack an accessible formal modeling language or have major limitations with respect to expressiveness. The aim of this thesis is to provide a comprehensive discussion of the associated problems and needs and to propose a concrete solution that addresses them.

    Limits for Stochastic Reaction Networks

    Reaction systems were introduced in the 1970s to model biochemical systems. Nowadays their range of applications has increased and they are fruitfully used in different fields. The concept is simple: some chemical species react, the set of chemical reactions forms a graph, and a rate function is associated with each reaction. Such functions describe the speed, or propensity, of the different reactions. Two modelling regimes are then available: the evolution of the species concentrations can be modelled deterministically through a system of ODEs, while the counts of the different species at a given time are modelled stochastically by means of a continuous-time Markov chain. Our work concerns primarily stochastic reaction systems and their asymptotic properties.
    In Paper I, we consider a reaction system with intermediate species, i.e. species that are produced and quickly degraded along a path of reactions. Let the rates of degradation of the intermediate species be functions of a parameter N that tends to infinity. We consider a reduced system in which the intermediate species have been eliminated, and find conditions on the degradation rates of the intermediates such that the behaviour of the reduced network tends to that of the original one. In particular, we prove uniform punctual convergence in distribution and weak convergence of the integrals of continuous functions along the paths of the two models. Under some extra conditions, we also prove weak convergence of the two processes. The result is stated in the setting of multiscale reaction systems: the amounts of all the species and the rates of all the reactions of the original model can scale as powers of N. A similar result also holds for the deterministic case, as shown in Appendix IA.
    In Paper II, we focus on the stationary distributions of stochastic reaction systems. Specifically, we build a theory for stochastic reaction systems that parallels the deficiency zero theory for deterministic systems, which dates back to the 1970s. A deficiency theory for stochastic reaction systems was missing, and few results connecting deficiency and stochastic reaction systems were known. The theory we build connects a special form of product-form stationary distribution with structural properties of the reaction graph of the system.
    In Paper III, a special class of reaction systems is considered, namely systems exhibiting absolute concentration robust species. In the deterministic modelling regime, such species always assume the same value at any positive steady state. In the stochastic setting, we prove that, if the initial condition is a point in the basin of attraction of a positive steady state of the corresponding deterministic model and tends to infinity, then up to a fixed time T the counts of the species exhibiting absolute concentration robustness are, on average, close to their equilibrium value. The result is not obvious because when the counts of some species tend to infinity, so do some rate functions, and the study of the system may become hard. Moreover, the result establishes a substantial concordance between the paths of the stochastic and the deterministic models.
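    The two modelling regimes above can be illustrated with a minimal sketch (the birth-death network, rate constants and implementation are assumptions made for illustration, not taken from the thesis): for the mass-action system 0 -> A and A -> 0, the deterministic regime is an ODE for the concentration of A, while the stochastic regime is a continuous-time Markov chain on the counts, simulated here with Gillespie's algorithm.

    import numpy as np

    # Birth-death network:  0 --k1--> A,  A --k2--> 0  (mass-action kinetics)
    k1, k2 = 10.0, 1.0
    T = 10.0

    # Deterministic regime: dx/dt = k1 - k2*x, integrated with explicit Euler steps
    def ode_trajectory(x0=0.0, dt=1e-3):
        ts = np.arange(0.0, T, dt)
        xs = np.empty_like(ts)
        x = x0
        for i in range(len(ts)):
            xs[i] = x
            x += dt * (k1 - k2 * x)
        return ts, xs

    # Stochastic regime: continuous-time Markov chain on counts (Gillespie's algorithm)
    def gillespie_trajectory(n0=0, seed=0):
        rng = np.random.default_rng(seed)
        t, n = 0.0, n0
        times, counts = [t], [n]
        while t < T:
            a1, a2 = k1, k2 * n                        # reaction propensities
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)             # waiting time to the next reaction
            n += 1 if rng.random() < a1 / a0 else -1   # which reaction fired
            times.append(t)
            counts.append(n)
        return np.array(times), np.array(counts)

    ts, xs = ode_trajectory()
    st, sn = gillespie_trajectory()
    print("deterministic value at T:", xs[-1])   # approaches the steady state k1/k2 = 10
    print("stochastic count near T: ", sn[-1])   # fluctuates around k1/k2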

    Silent steps in transition systems and Markov chains


    Process Models for Learning Patterns in FLOSS Repositories

    Evidence suggests that Free/Libre Open Source Software (FLOSS) environments provide unlimited learning opportunities. Community members engage in a number of activities, both during their interaction with their peers and while making use of these environments' repositories. To date, numerous studies document the existence of learning processes in FLOSS through surveys or questionnaires filled in by FLOSS project participants. At the same time, there is a surge in developing tools and techniques for extracting and analyzing data from different FLOSS data sources, which has given rise to a new field called Mining Software Repositories (MSR). In spite of these growing tools and techniques for mining FLOSS repositories, there are few, if any, existing approaches that provide empirical evidence of learning processes directly from these repositories. Therefore, in this work we sought to trigger such an initiative by proposing an approach based on Process Mining. With this technique, we aim to trace learning behaviors from FLOSS participants' trails of activities as recorded in FLOSS repositories. We identify the participants as Novices and Experts. A Novice is defined as any FLOSS member who benefits from a learning experience by acquiring new skills, while the Expert is the provider of these skills. The significance of our work is mainly twofold. First, we extend the MSR field by showing the potential of mining FLOSS repositories through Process Mining techniques. Second, our work provides critical evidence that improves the understanding of learning behavior in FLOSS communities by analyzing the relevant repositories. In order to accomplish this, we have proposed and implemented a methodology that follows a seven-step approach: developing an appropriate terminology or ontology for learning processes in FLOSS, contextualizing learning processes through a-priori models, generating Event Logs, generating corresponding process models, interpreting and evaluating the value of process discovery, performing conformance analysis, and verifying a number of formulated hypotheses with regard to tracing learning patterns in FLOSS communities. The implementation of this approach has resulted in the development of the Ontology of Learning in FLOSS environments (OntoLiFLOSS), which defines the terms needed to describe learning processes in FLOSS and provides a visual representation of these processes through Petri net-like Workflow nets. A further novelty pertains to the mining of FLOSS repositories itself: we define and describe the preliminaries required for preprocessing FLOSS data before applying Process Mining techniques for analysis. Through a step-by-step process, we detail how the Event Logs are constructed by generating key phrases and making use of Semantic Search (see the sketch after this abstract). Taking a FLOSS environment called Openstack as our data source, we apply the proposed techniques to identify learning activities based on key-phrase catalogs and classification rules expressed through pseudo code, as well as the appropriate Process Mining tool. We thus produced Event Logs that are based on the semantic content of messages in Openstack's Mailing archives, Internet Relay Chat (IRC) messages, Reviews, Bug reports and Source code in order to retrieve the corresponding activities.
    Considering these repositories in light of the three learning process phases (Initiation, Progression and Maturation), we produced an Event Log for each participant (Novice or Expert) in every phase on the corresponding dataset. Hence, we produced 14 Event Logs that helped build 14 corresponding process maps, which are visual representations of the flow of learning activities in FLOSS for each participant. These process maps provide critical indications of the presence of learning processes in the analyzed repositories. The results show that learning activities do occur at a significant rate during message exchange on both Mailing archives and IRC messages. The slight differences between the two datasets can be highlighted in two ways. First, the involvement of Experts is greater on IRC than on Mailing archives, with 7.22% and 0.36% of Expert involvement on IRC forums and Mailing lists respectively. This can be explained by the differences in the length of messages sent on these two datasets: the average length of a sent message is 3261 characters for an email compared to 60 characters for a chat message. The evidence produced from this mining experiment solidifies the findings in terms of the existence of learning processes in FLOSS as well as the scale at which they occur. While the Initiation phase shows the Novice as the most involved at the start of the learning process, during the Progression phase the involvement of the Expert increases significantly. In order to trace the advanced skills in the Maturation phase, we look at repositories that store data about developing and creating code, examining and reviewing the code, and identifying and fixing possible bugs. We therefore consider three repositories: Source code, Bug reports and Reviews. The results obtained in this phase largely justify the choice of these three datasets to track learning behavior at this stage. Both the Bug reports and the Source code demonstrate the commitment of the Novice to seek answers and interact as much as possible in strengthening the acquired skills. With a participation of 49.22% for the Novice against 46.72% for the Expert, and 46.19% against 42.04%, respectively on Bug reports and Source code, the Novice still engages significantly in learning. On the last dataset, Reviews, we notice an increase in the Expert's role: the Expert performs 40.36% of the total number of activities against 22.17% for the Novice. The last steps of our methodology concern the comparison of the defined a-priori models with final models that describe how learning processes occur according to the actual behavior recorded in the Event Logs. Our attempt at producing process models starts with depicting process maps that track the actual behavior as it occurs in Openstack repositories, and concludes, after conformance analysis, with final Petri net models representative of learning processes in FLOSS. For every dataset in the corresponding learning phase, we produce 3 process maps, depicting respectively the overall learning behavior for all FLOSS community members (Novice and Expert together), the Novice, and the Expert. In total, we produced 21 process maps, empirically describing process models on real data, and 14 process models in the form of Petri nets, one for every participant on each dataset.
    We make use of Artificial Immune System (AIS) algorithms to merge the 14 Event Logs that uniquely capture the behavior of every participant on the different datasets in the three phases. We then reanalyze the resulting logs in order to produce 6 global models that together provide a comprehensive depiction of participants' learning behavior in FLOSS communities. This analysis indicates that the Workflow nets introduced as our a-priori models give a rather simplistic representation of learning processes in FLOSS. Nevertheless, our experiments with Event Logs from Openstack repositories, from process discovery to conformance checking, demonstrate that the real learning behaviors are more complete and, most importantly, largely subsume these simplistic a-priori models. Finally, our methodology has proved effective both in providing a novel alternative for mining FLOSS repositories and in providing empirical evidence of how knowledge is exchanged in FLOSS environments. Moreover, our results enrich the MSR field by providing a reproducible step-by-step problem-solving approach that can be customized to answer subsequent research questions in FLOSS repositories using Process Mining.
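    As referenced above, here is a minimal sketch of key-phrase-based Event Log construction (the key phrases, message fields and classification rule are illustrative assumptions, not the catalogs or rules used in this work): messages are scanned for activity key phrases and written out as case/activity/timestamp rows that a Process Mining tool can consume.

    import csv
    from datetime import datetime

    # Hypothetical key-phrase catalog mapping phrases to learning activities
    KEY_PHRASES = {
        "how do i": "Ask question",
        "you could try": "Provide guidance",
        "thanks, that worked": "Acknowledge learning",
    }

    # Hypothetical messages as they might be extracted from a mailing-list archive
    messages = [
        {"author": "novice1", "time": "2015-03-01T10:02:00",
         "body": "How do I configure the scheduler?"},
        {"author": "expert7", "time": "2015-03-01T11:15:00",
         "body": "You could try editing the scheduler options first."},
        {"author": "novice1", "time": "2015-03-02T09:40:00",
         "body": "Thanks, that worked perfectly."},
    ]

    def classify(body):
        """Return the learning activity matched by the first key phrase found, if any."""
        lowered = body.lower()
        for phrase, activity in KEY_PHRASES.items():
            if phrase in lowered:
                return activity
        return None

    # Build the Event Log: one row per classified message (case id = author here)
    with open("event_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["case_id", "activity", "timestamp"])
        for msg in messages:
            activity = classify(msg["body"])
            if activity:
                timestamp = datetime.fromisoformat(msg["time"]).isoformat()
                writer.writerow([msg["author"], activity, timestamp])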

    Surrogate based Optimization and Verification of Analog and Mixed Signal Circuits

    Nonlinear Analog and Mixed Signal (AMS) circuits are very complex and expensive to design and verify. Deeper technology scaling has made these designs susceptible to noise and process variations, which is a growing concern due to the degradation of circuit performance and the risk of design failures. In fact, due to process parameters, AMS circuits such as phase-locked loops may present chaotic behavior that can be confused with noisy behavior. To design and verify circuits, current industrial flows rely heavily on simulation-based verification and knowledge-based optimization techniques. However, such techniques lack the mathematical rigor necessary to keep up with the growing design constraints, besides being computationally intractable. Given all the aforementioned barriers, new techniques are needed to ensure that circuits are robust and optimized despite process variations and possible chaotic behavior. In this thesis, we develop a methodology for the optimization and verification of AMS circuits, advancing three frontiers in the variability-aware design flow. The first frontier is a robust circuit sizing methodology in which a multi-level circuit optimization approach is proposed. The optimization is conducted in two phases. First, a global sizing phase, powered by a regional sensitivity analysis, quickly scouts the feasible design space and thereby reduces the optimization search. Second, a nominal sizing step based on space mapping between two AMS circuit models at different levels of abstraction is developed in order to break the re-design loop without performance penalties. The second frontier concerns a dynamics verification scheme for the circuit behavior (i.e., distinguishing chaotic from stochastic circuit behavior). It is based on a surrogate generation approach and a statistical proof-by-contradiction technique using a Gaussian kernel measure in the state space domain. The last frontier focuses on quantitative verification approaches to predict parametric yield for both single and multiple circuit performance constraints. The single-performance approach is based on a combination of geometrical intertwined reachability analysis and a non-parametric statistical verification scheme. The multiple-performances approach involves process parameter reduction, state-space-based pattern matching, and multiple hypothesis testing procedures. The performance of the proposed methodology is demonstrated on several benchmark analog and mixed signal circuits. The optimization approach greatly improves computational efficiency while locating a comparable or better design point than other approaches. Moreover, our verification methods achieve great improvements, with many orders of magnitude of speedup compared to existing techniques.
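    As a minimal sketch of the parametric yield prediction discussed above (the quadratic surrogate, the parameter distributions and the specification limit are illustrative assumptions, not the models used in the thesis), a cheap surrogate of a circuit performance can be sampled by Monte Carlo over process variations to estimate the fraction of the process space that meets a specification.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical surrogate of a circuit performance (e.g. amplifier gain in dB)
    # expressed as a quadratic response surface in two normalized process parameters.
    def gain_surrogate(p1, p2):
        return 40.0 - 3.0 * p1 + 2.0 * p2 - 1.5 * p1 * p2 - 0.8 * p2**2

    # Process variations: normalized parameters modelled as standard normal deviates
    N = 100_000
    p1 = rng.normal(0.0, 1.0, N)
    p2 = rng.normal(0.0, 1.0, N)

    # Assumed specification: the gain must stay at or above 35 dB
    spec_ok = gain_surrogate(p1, p2) >= 35.0

    yield_estimate = spec_ok.mean()
    # Standard error of a Monte Carlo proportion estimate
    std_err = np.sqrt(yield_estimate * (1.0 - yield_estimate) / N)
    print(f"estimated parametric yield: {yield_estimate:.3f} +/- {std_err:.3f}")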