Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking
This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems, as well as the issue of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
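The core model-checking idea behind such verification, exploring the synchronized product of communicating automata for reachable states, can be sketched in a few lines of Python. This is an illustrative toy only, not the paper's Uppaal models: the automata, their states, and the channel names (`present`, `ack`) are invented for the example.

```python
from collections import deque

# Two toy communicating automata that synchronize on shared channels,
# a stand-in for the paper's communicating/timed-automata models.
# An action "x!" (send) in one automaton pairs with "x?" (receive) in the other.
trainer = {("idle", "present!"): "waiting", ("waiting", "ack?"): "idle"}
network = {("ready", "present?"): "learning", ("learning", "ack!"): "ready"}

def product_reachable(a, b, start, goal):
    """BFS over the synchronized product: is `goal` reachable from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        sa, sb = queue.popleft()
        if (sa, sb) == goal:
            return True
        for (s, act), t in a.items():
            if s != sa:
                continue
            # Find the complementary action in the other automaton.
            chan = act[:-1]
            partner = chan + ("?" if act.endswith("!") else "!")
            tb = b.get((sb, partner))
            if tb is not None and (t, tb) not in seen:
                seen.add((t, tb))
                queue.append((t, tb))
    return False

print(product_reachable(trainer, network, ("idle", "ready"), ("waiting", "learning")))
```

A model checker like Uppaal does essentially this over a much richer state space (clocks, data variables), with temporal-logic queries instead of a single goal state.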
Supporting user-oriented analysis for multi-view domain-specific visual languages
This is the post-print version of the final paper published in Information and Software Technology. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2008 Elsevier B.V.

The integration of usable and flexible analysis support in modelling environments is a key success factor in Model-Driven Development. In this paradigm, models are the core asset from which code is automatically generated, and thus ensuring model correctness is a fundamental quality control activity. For this purpose, a common approach is to transform the system models into formal semantic domains for verification. However, if the analysis results are not shown to the end-user in a proper way (e.g. in terms of the original language), they may become useless.
In this paper we present a novel DSVL called BaVeL that facilitates the flexible annotation of verification results obtained in semantic domains to different formats, including the context of the original language. BaVeL is used in combination with a consistency framework, providing support for all steps in a verification process: acquisition of additional input data, transformation of the system models into semantic domains, verification, and flexible annotation of analysis results.
The approach has been validated analytically by the cognitive dimensions framework, and empirically by its implementation and application to several DSVLs. Here we present a case study of a notation in the area of Digital Libraries, where the analysis is performed by transformations into Petri nets and a process algebra.

Funding: Spanish Ministry of Education and Science and MODUWEB.
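The Petri-net analyses mentioned above rest on a simple execution rule: a transition fires when its input places hold enough tokens, consuming and producing tokens. A minimal sketch in Python, with an invented document-workflow net (the places `draft`, `review`, `published` and the transitions are illustrative only, not BaVeL's actual transformations):

```python
# Toy Petri net: a marking maps places to token counts; each transition
# has a pre-set (tokens consumed) and a post-set (tokens produced).
def enabled(marking, pre):
    """A transition is enabled if every input place has enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume pre-set tokens, produce post-set tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

net = {  # transition name -> (pre, post); a toy publishing workflow
    "submit":  ({"draft": 1}, {"review": 1}),
    "approve": ({"review": 1}, {"published": 1}),
}

m = {"draft": 1}
for name, (pre, post) in net.items():
    if enabled(m, pre):
        m = fire(m, pre, post)
print(m)  # the draft token has moved through review to published
```

Verification in this semantic domain then amounts to questions over the reachable markings (e.g. "is a marking with a stuck `review` token reachable?"), whose answers BaVeL annotates back onto the original notation.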
Survey of Human Models for Verification of Human-Machine Systems
We survey the landscape of human operator modeling, ranging from the early cognitive models developed in artificial intelligence to more recent formal task models developed for model checking of human-machine interactions. We review human performance modeling and human factors studies in the context of aviation, and models of how the pilot interacts with automation in the cockpit. The purpose of the survey is to assess the applicability of available state-of-the-art models of human operators to the design, verification and validation of future safety-critical aviation systems that exhibit higher levels of autonomy but still require human operators in the loop. These systems include single-pilot aircraft and NextGen air traffic management. We discuss the gaps in existing models and propose future research to address them.
Reasoning about order errors in interaction
Reliability of an interactive system depends on its users as well as on the device implementation. User errors can result in catastrophic system failure. However, work from the field of cognitive science shows that systems can be designed so as to completely eliminate whole classes of user errors. This means that user errors should also fall within the remit of verification methods. In this paper we demonstrate how the HOL theorem prover [7] can be used to detect, and prove the absence of, the family of errors known as order errors. This is done by taking account of the goals and knowledge of users. We provide an explicit generic user model which embodies theory from the cognitive sciences about the way people are known to act. The user model describes action based on user communication goals: goals that a user adopts based on their knowledge of the task they must perform to achieve their goals. We use a simple example of a vending machine to demonstrate the approach. We prove that a user does achieve their goal for a particular design of machine, and in doing so demonstrate that communication-goal-based errors cannot occur.
Modelling dynamics of victims' stress during natural disaster
Natural disasters are inescapable phenomena through which numerous individuals are affected by developing psychological problems. Stress is one of the essential psychological effects of a natural disaster: forces from the outside world act on the individuals exposed to such an event. In computational psychology, computational models are used as tools for understanding human cognitive functions and behavioural patterns. Meanwhile, psychological and cognitive theories as well as empirical studies have provided convergent evidence identifying important factors and psychological attributes that affect the stress level of victims during a natural disaster. This study therefore implements a formal (computational) model to understand the state of victims' stress during a natural disaster. From related theories, 22 basic factors were established and grouped into 7 main categories: predisposed factors, resources, individual attributes, appraisal, resilience, coping, and stress. These factors provide fundamental knowledge of the behaviour of victims after a disaster occurs. A formal model was developed using a set of differential equations. This model was then simulated in Matlab for scenarios based on three different cases: 1) a victim with a low level of stress, 2) a victim with a high level of stress, and 3) a victim with a moderate level of stress. The computational model was then verified using two techniques: 1) logical verification (Temporal Trace Language) and 2) mathematical verification (stability analysis). The experimental results approximately predict why victims develop stress differently when facing natural disasters.
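The simulate-then-verify workflow described above can be illustrated with a deliberately reduced model. The abstract does not give the paper's 22 factors or its equations, so the sketch below invents a toy ODE with just two drivers, `appraisal` and `coping`, plus made-up rate constants, and applies a TTL-style boundedness check to the Euler-integrated traces:

```python
def simulate_stress(appraisal, coping, beta=0.8, gamma=0.5,
                    s0=0.1, dt=0.1, steps=500):
    """Euler integration of a toy stress ODE (not the paper's model):
       ds/dt = beta * appraisal * (1 - s) - gamma * coping * s
    Stress grows with appraised threat and decays with coping capacity."""
    s = s0
    trace = [s]
    for _ in range(steps):
        ds = beta * appraisal * (1 - s) - gamma * coping * s
        s += dt * ds
        trace.append(s)
    return trace

# Three scenarios mirroring the paper's cases (parameter values invented):
low  = simulate_stress(appraisal=0.2, coping=0.9)  # resilient victim
high = simulate_stress(appraisal=0.9, coping=0.1)  # vulnerable victim
mid  = simulate_stress(appraisal=0.5, coping=0.5)  # moderate victim

# A logical-verification-style property over the traces: stress stays in [0, 1].
assert all(0.0 <= s <= 1.0 for tr in (low, high, mid) for s in tr)
print(low[-1] < mid[-1] < high[-1])
```

Stability analysis for this toy model is straightforward: the equilibrium is `beta*appraisal / (beta*appraisal + gamma*coping)`, which each trace approaches monotonically for the chosen step size.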
Refinement type contracts for verification of scientific investigative software
Our scientific knowledge is increasingly built on software output. User code
which defines data analysis pipelines and computational models is essential for
research in the natural and social sciences, but little is known about how to
ensure its correctness. The structure of this code and the development process
used to build it limit the utility of traditional testing methodology. Formal
methods for software verification have seen great success in ensuring code
correctness but generally require more specialized training, development time,
and funding than is available in the natural and social sciences. Here, we
present a Python library which uses lightweight formal methods to provide
correctness guarantees without the need for specialized knowledge or
substantial time investment. Our package provides runtime verification of
function entry and exit condition contracts using refinement types. It allows
checking hyperproperties within contracts and offers automated test case
generation to supplement online checking. We co-developed our tool with a
medium-sized (3000 LOC) software package which simulates
decision-making in cognitive neuroscience. In addition to helping us locate
trivial bugs earlier in the development cycle, our tool was able to locate
four bugs which may have been difficult to find using traditional testing
methods. It was also able to find bugs in user code which did not contain
contracts or refinement type annotations. This demonstrates how formal methods
can be used to verify the correctness of scientific software which is difficult
to test with mainstream approaches.
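The entry/exit-condition contracts described above can be sketched as a Python decorator. The abstract does not name the library or its API, so the `contract` decorator, the predicates, and the `normalize` example below are all invented to illustrate the general refinement-type idea (a type plus a predicate over values, checked at runtime):

```python
import functools

def contract(pre=None, post=None):
    """Runtime entry/exit contract checker: a minimal sketch of
    refinement-type-style verification via predicates over values."""
    def wrap(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ValueError(f"precondition failed for {f.__name__}")
            out = f(*args, **kwargs)
            if post is not None and not post(out):
                raise ValueError(f"postcondition failed for {f.__name__}")
            return out
        return inner
    return wrap

# Refinement on entry: a non-empty list of values in [0, 1].
# Refinement on exit: the result sums to (approximately) one.
@contract(pre=lambda ps: len(ps) > 0 and all(0 <= p <= 1 for p in ps),
          post=lambda out: abs(sum(out) - 1.0) < 1e-9)
def normalize(ps):
    total = sum(ps)
    return [p / total for p in ps]

print(normalize([0.2, 0.2, 0.6]))
```

Automated test-case generation, as the abstract describes, would then draw random inputs satisfying the precondition and check that the postcondition always holds.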
Research Priorities for Robust and Beneficial Artificial Intelligence
Success in the quest for artificial intelligence has the potential to bring
unprecedented benefits to humanity, and it is therefore worthwhile to
investigate how to maximize these benefits while avoiding potential pitfalls.
This article gives numerous examples (which should by no means be construed as
an exhaustive list) of such worthwhile research aimed at ensuring that AI
remains robust and beneficial.

Comment: This article gives examples of the type of research advocated by the open letter for robust & beneficial AI at http://futureoflife.org/ai-open-lette