A neural-symbolic system for temporal reasoning with application to model verification and learning
The effective integration of knowledge representation, reasoning and learning into a robust computational model is one of the key challenges in Computer Science and Artificial Intelligence. In particular, temporal models have been fundamental in describing the behaviour of Computational and Neural-Symbolic Systems. Furthermore, knowledge acquisition of correct descriptions of the desired system’s behaviour is a complex task in several domains. Several efforts have been directed towards the development of tools that are capable of learning, describing and evolving software models.
This thesis contributes to two major areas of Computer Science, namely Artificial Intelligence (AI) and Software Engineering (SE). From an AI perspective, we present a novel neural-symbolic computational model capable of representing and learning temporal knowledge in recurrent networks. The model works in an integrated fashion: it enables the effective representation of temporal knowledge, the adaptation of temporal models to a set of desirable system properties, and effective learning from examples, which in turn can lead to symbolic temporal knowledge extraction from the corresponding trained neural networks. The model is sound from a theoretical standpoint, and is also evaluated in a number of case studies.
An extension to the framework is shown to tackle aspects of verification and adaptation from the SE perspective. As regards verification, we make use of established model checking techniques, which allow the verification of properties described as temporal models and return counter-examples whenever the properties are not satisfied. Our neural-symbolic framework is then extended to deal with different sources of information. This includes the translation of model descriptions into the neural structure, the evolution of such descriptions by learning from counter-examples, and the learning of new models from simple observation of their behaviour.
In summary, we believe the thesis describes a principled methodology for temporal knowledge representation, learning and extraction, shedding new light on predictive temporal models not only from a theoretical standpoint, but also with respect to a potentially large number of applications in AI, Neural Computation and Software Engineering, where temporal knowledge plays a fundamental role.
BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models
Background: Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification.
Description: BioModels Database (http://www.ebi.ac.uk/biomodels/) is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large-scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database.
Conclusions: BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge (https://sourceforge.net/projects/biomodels/) under the GNU General Public License.
Explicit Representation of Exception Handling in the Development of Dependable Component-Based Systems
Exception handling is a structuring technique that facilitates the design of systems by encapsulating the process of error recovery. In this paper, we present a systematic approach for incorporating exceptional behaviour into the development of component-based software. The premise of our approach is that components alone do not provide the appropriate means to deal with exceptional behaviour in an effective manner. Hence the need to consider the notion of collaborations for capturing the interactive behaviour between components when error recovery involves more than one component. The feasibility of the approach is demonstrated through a case study of a mining control system.
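The premise above can be made concrete with a small sketch. The classes below are invented for illustration (they are not the paper's notation): a collaboration object captures the interactive recovery behaviour, so that when one participating component raises an exception, every participant is rolled back rather than just the failing one.

```python
# Hypothetical sketch of collaboration-based error recovery across
# components; class names and methods are illustrative assumptions.

class ComponentError(Exception):
    pass

class Component:
    def __init__(self, name):
        self.name = name
        self.state = "ok"

    def do_work(self, fail=False):
        if fail:
            self.state = "failed"
            raise ComponentError(self.name)

    def rollback(self):
        self.state = "ok"

class RecoveryCollaboration:
    """Encapsulates coordinated recovery: when any participant raises,
    every participant is rolled back, not only the failing one."""
    def __init__(self, *participants):
        self.participants = participants

    def run(self, actions):
        try:
            for comp, fail in actions:
                comp.do_work(fail)
        except ComponentError:
            for comp in self.participants:
                comp.rollback()

pump, valve = Component("pump"), Component("valve")
RecoveryCollaboration(pump, valve).run([(pump, False), (valve, True)])
print(pump.state, valve.state)  # both recovered
```

The point mirrored here is that the recovery logic lives in the collaboration, not in either component, which is exactly the structuring the approach argues components alone cannot provide.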
A comparative evaluation of dynamic visualisation tools
Despite their potential applications in software comprehension, it appears that dynamic visualisation tools are seldom used outside the research laboratory. This paper presents an empirical evaluation of five dynamic visualisation tools: AVID, Jinsight, jRMTool, Together ControlCenter diagrams and Together ControlCenter debugger. The tools were evaluated on a number of general software comprehension and specific reverse engineering tasks using the HotDraw object-oriented framework. The tasks considered typical comprehension issues, including identification of software structure and behaviour, design pattern extraction, extensibility potential, maintenance issues, functionality location, and runtime load. The results revealed that the level of abstraction employed by a tool affects its success in different tasks, and that tools were more successful in addressing specific reverse engineering tasks than general software comprehension activities. It was found that no one tool performs well in all tasks, and some tasks were beyond the capabilities of all five tools. The paper concludes with suggestions for improving the efficacy of such tools.
Automated generation of computationally hard feature models using evolutionary algorithms
This is the post-print version of the final paper published in Expert Systems with Applications. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2014 Elsevier B.V.
A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools with average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an optimization problem, and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
Funding: European Commission (FEDER), the Spanish Government and the Andalusian Government.
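The search described above can be sketched as a standard evolutionary loop. Everything below is a toy illustration, not the authors' implementation: the fixed-size encoding, the fitness function (a stand-in for measuring the analysis tool's execution time on the candidate model), and the genetic operators are all invented assumptions in the spirit of ETHOM.

```python
import random

MODEL_SIZE = 20   # number of features per candidate model (fixed size)
POP_SIZE = 30
GENERATIONS = 40

def random_individual():
    # Each gene: (parent feature index, relation type).
    # Relation codes "m"/"a"/"o" stand for mandatory/alternative/or.
    return [(random.randrange(max(i, 1)), random.choice("mao"))
            for i in range(MODEL_SIZE)]

def fitness(ind):
    # Stand-in for "execution time of the analysis on this model":
    # a toy cost rewarding deep chains of alternative features.
    depth = [0] * MODEL_SIZE
    cost = 0
    for i, (parent, rel) in enumerate(ind):
        depth[i] = depth[parent] + 1 if i else 0
        cost += depth[i] * (2 if rel == "a" else 1)
    return cost

def crossover(a, b):
    cut = random.randrange(1, MODEL_SIZE)
    return a[:cut] + b[cut:]   # valid: parents always precede children

def mutate(ind):
    i = random.randrange(1, MODEL_SIZE)
    ind = list(ind)
    ind[i] = (random.randrange(i), random.choice("mao"))
    return ind

random.seed(0)
pop = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP_SIZE // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP_SIZE - len(elite))]

best = max(pop, key=fitness)
print(fitness(best))  # cost of the hardest model found under the toy metric
```

In the real setting the fitness evaluation would invoke the analysis tool on the decoded feature model and measure wall-clock time or memory, which is what makes the generated models reveal pessimistic-case behaviour.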
Parameter extraction, modelling and circuit design for electrolyte-gated transistors on paper
Flexible and paper electronics have attracted a great deal of attention in recent years, not only from the scientific community but also from end consumers. This ultimately converges in efforts pushing towards the discovery of new and better materials for TFT technology. With this fast development of new devices, compact models for circuit simulation based on older FETs become obsolete. The availability of fast and accurate models is an essential part of going from single, proof-of-concept devices to fully operational circuits.
In this work, carried out in the Department of Engineering of the University of Cambridge, the electrical characterization and parameter extraction of state-of-the-art electrolyte-gated transistors (EGTs) on paper substrate, fabricated at UNINOVA/I3N, led to the development of a compact model capable of describing the behaviour of the devices.
A detailed overview of the model is provided throughout this work, from the characterization of the device to simple circuit simulations using a dozen devices. This, together with the provided Verilog-A code for CAD software implementation, will allow both new and experienced circuit designers to simulate simple circuits with these EGTs, or with any other TFT device of similar behaviour, after simple tweaks to the model.
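To illustrate what a compact model of this kind computes, the sketch below evaluates a plain square-law TFT drain-current model with channel-length modulation. It is a generic textbook form, not the extracted EGT model from the thesis, and the parameter values are illustrative assumptions.

```python
# Minimal square-law compact model for an n-type TFT (illustrative only;
# parameter values vt, k, lam are assumed, not extracted device data).

def tft_id(vgs, vds, vt=0.6, k=2e-4, lam=0.05):
    """Drain current (A).

    vt  - threshold voltage (V)
    k   - transconductance parameter (A/V^2)
    lam - channel-length modulation (1/V)
    """
    vov = vgs - vt                       # overdrive voltage
    if vov <= 0:
        return 0.0                       # cut-off (subthreshold ignored)
    if vds < vov:                        # linear (triode) region
        return k * (vov * vds - vds**2 / 2)
    return 0.5 * k * vov**2 * (1 + lam * (vds - vov))  # saturation

# Sweep an output characteristic at VGS = 1 V.
for vds in (0.1, 0.2, 0.4, 0.8, 1.5):
    print(f"VDS={vds:4.1f} V  ID={tft_id(1.0, vds) * 1e6:7.2f} uA")
```

A production compact model, like the Verilog-A one the thesis provides, adds effects such as subthreshold conduction and contact resistance, but the region-wise structure (cut-off, linear, saturation) is the same.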
Resource Contention in Real-time Systems
The divide-and-conquer method is extensively used for system design. In real-time systems the separated components execute concurrently on some common computational infrastructure, and this can lead to contention for system resources such as processors, memory, communication channels, and so on. Unless the resource contention is accommodated, a system built from the composition of components may not function as expected and the "proven" behaviour of the components can be invalid. To overcome this uncertainty a divide-conquer-and-system-composition method is required. This thesis takes a different approach to many of the existing notations, which focus on descriptions of behaviour. The Composite Transition System notation and algebra presented here enable the resource usage of the components to be specified and combined to form a composite system of concurrently executing components. By relating the composite system to the realisable behaviour of the system resources provided by the common infrastructure, it becomes possible to determine any violation of the constraints imposed by the system resources. If the composite system model is then constrained by the resource behaviours, it is possible, through an extraction operation, to determine the modified behaviour of the components that will yield a system free of resource contention. Component specification, concurrent composition, the application of system-level constraints and extraction are applied in this thesis to a system encountered in a commercial application. The purpose of this example is to demonstrate contention modelling and the mathematics of the notation, rather than to prove any specific properties of the application. Deployment of the notation to more complex applications will require the development of software tools to compute concurrent composition and extraction, and this is the motivation for the mathematical treatment in this thesis.
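The core idea of composing resource usage can be shown in miniature. The sketch below is a deliberately simplified stand-in for the Composite Transition System algebra (the real notation is far richer): each component is a transition system whose transitions carry a resource demand, synchronous composition sums the demands, and a check flags composite transitions exceeding the shared infrastructure's capacity.

```python
from itertools import product

# Toy components: {state: [(next_state, resource_demand), ...]}
# (states and demands are invented for illustration).
comp_a = {"a0": [("a1", 1)], "a1": [("a0", 0)]}
comp_b = {"b0": [("b1", 2)], "b1": [("b0", 1)]}

def compose(p, q):
    """Synchronous product: both components step together and their
    resource demands add up."""
    system = {}
    for sp, sq in product(p, q):
        system[(sp, sq)] = [((np_, nq), dp + dq)
                            for np_, dp in p[sp]
                            for nq, dq in q[sq]]
    return system

def violations(system, capacity):
    """Composite transitions whose summed demand exceeds the capacity
    of the shared resource."""
    return [(s, t, d) for s, steps in system.items()
            for t, d in steps if d > capacity]

sys_ab = compose(comp_a, comp_b)
print(violations(sys_ab, capacity=2))  # transitions that over-demand
```

In the thesis's terms, constraining the composite system by the resource behaviour and then extracting component behaviours would amount to removing such violating transitions and projecting the remainder back onto each component.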
Provably Correct Control-Flow Graphs from Java Programs with Exceptions
We present an algorithm to extract flow graphs from Java bytecode, focusing on exceptional control flows. We prove its correctness, meaning that the behaviour of the extracted control-flow graph is an over-approximation of the behaviour of the original program. Thus any safety property that holds for the extracted control-flow graph also holds for the original program. This makes control-flow graphs suitable for performing different static analyses. For precision and efficiency, the extraction is performed in two phases. In the first phase the program is transformed into a BIR program, where BIR is a stack-less intermediate representation of Java bytecode; in the second phase the control-flow graph is extracted from the BIR representation. To prove the correctness of the two-phase extraction, we also define a direct extraction algorithm, whose correctness can be proven immediately. Then we show that the behaviour of the control-flow graph extracted via the intermediate representation is an over-approximation of the behaviour of the directly extracted graphs, and thus of the original program.
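The soundness argument above (over-approximation preserves safety) can be illustrated on a toy scale. The three-instruction "program" and the extraction below are invented for illustration and have nothing to do with the paper's BIR pipeline; the point is only that adding an exceptional edge for every potentially-throwing instruction guarantees every concrete execution path is also a path in the graph.

```python
# Toy program: (opcode, handler-block-or-None); indices are "addresses".
program = [
    ("load", None),         # 0: cannot throw
    ("invoke", "handler"),  # 1: may throw -> handler block
    ("return", None),       # 2: normal exit
]
HANDLER = 3                 # index of the exception-handler block

def extract_cfg(prog):
    """Normal fall-through edges plus an exceptional edge for each
    potentially-throwing instruction (the over-approximation)."""
    edges = set()
    for pc, (op, handler) in enumerate(prog):
        if op != "return":
            edges.add((pc, pc + 1))     # normal successor
        if handler is not None:
            edges.add((pc, HANDLER))    # exceptional successor
    return edges

def reachable(edges, start=0):
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(t for s, t in edges if s == n)
    return seen

cfg = extract_cfg(program)
# Soundness direction: if a bad block is unreachable in the CFG, it is
# unreachable in every concrete run; the converse need not hold.
print(sorted(cfg), HANDLER in reachable(cfg))
```

This is why a safety property verified on the extracted graph transfers to the original program: the graph's paths are a superset of the program's paths, so "no bad path in the graph" implies "no bad path in the program".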