66,974 research outputs found
Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences
This paper presents work on a collaborative project funded by the National Science Foundation that incorporates machine learning as a unifying theme to teach fundamental concepts typically covered in introductory Artificial Intelligence courses. The project involves the development of an adaptable framework for the presentation of core AI topics. This is accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering.
Foundations of Multi-Paradigm Modelling for Cyber-Physical Systems
This open access book coherently gathers well-founded information on the fundamentals of, and formalisms for, modelling cyber-physical systems (CPS). Highlighting the cross-disciplinary nature of CPS modelling, it also serves as a bridge for anyone entering CPS from related areas of computer science or engineering. Truly complex, engineered systems that integrate physical, software, and network aspects, known as cyber-physical systems, are now on the rise. However, there is neither a unifying theory nor systematic design methods, techniques or tools for these systems. Individual (mechanical, electrical, network or software) engineering disciplines offer only partial solutions. A technique known as Multi-Paradigm Modelling has recently emerged, which suggests modelling every part and aspect of a system explicitly, at the most appropriate level(s) of abstraction, using the most appropriate modelling formalism(s), and then weaving the results together to form a representation of the system. Properly applied, it enables, among other global analyses, performance analysis, exhaustive simulation, and verification. This book is the first systematic attempt to bring together these formalisms for anyone starting in the field of CPS who seeks solid modelling foundations and a comprehensive introduction to the distinct multi-paradigmatic techniques that exist. Though chiefly intended for master's and postgraduate-level students in computer science and engineering, it can also be used as a reference text for practitioners.
On Oracles for Automated Diagnosis and Repair of Software Bugs
This HDR focuses on my work on automatic diagnosis and repair over the past years. Among my past publications, it highlights three contributions on this topic, published respectively in ACM Transactions on Software Engineering and Methodology (TOSEM), IEEE Transactions on Software Engineering (TSE), and Elsevier Information & Software Technology (IST). My goal is to show that those three contributions share something deep: they are founded on a single unifying concept, that of the oracle. The first contribution is about statistical oracles. In the context of object-oriented software, we have defined a notion of context and normality that is specific to one fault class: missing method calls. The inferred regularities act as an oracle, and their violations are considered bugs. The second contribution is about test-case-based oracles for automatic repair. We describe an automatic repair system that fixes failing test cases by generating a patch. It is founded on the idea of refining the knowledge given by the violation of the failing test case's oracle into finer-grained information, which we call a “micro-oracle”. By considering micro-oracles, we obtain both a precise fault-localization diagnosis and a well-formed input-output specification to be used for program synthesis in order to repair a bug. The third contribution discusses a novel generic oracle in the context of exception handling. A generic oracle states properties that hold across many domains. Our technique verifies compliance with this new oracle using test-suite execution and exception injection. This document concludes with a research agenda on the future of engineering ultra-dependable and antifragile software systems.
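The statistical-oracle idea in the first contribution — inferring regular method-call patterns and flagging near-matches as likely missing calls — can be illustrated with a toy sketch. The input format (sets of methods called on same-typed objects) and the voting threshold are illustrative assumptions, not the published algorithm:

```python
from collections import Counter

def missing_call_suspects(usages, threshold=3):
    """Flag call-sets that look like a common pattern minus one call.

    usages: list of frozensets, each holding the methods invoked on one
    object of a given type in one context (an assumed input format).
    A usage is suspicious when many other usages equal it plus exactly
    one extra method; that extra method is the likely missing call.
    """
    counts = Counter(usages)
    suspects = {}
    for u in counts:
        extras = Counter()
        for v in counts:
            # v = u plus exactly one additional method call?
            if len(v - u) == 1 and u < v:
                extras[next(iter(v - u))] += counts[v]
        best = extras.most_common(1)
        if best and best[0][1] >= threshold:
            suspects[u] = best[0][0]  # the probably-missing method
    return suspects
```

Here the inferred "normality" is simply majority usage; violations of it play the role of the oracle described in the abstract.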
Recursive quantum repeater networks
Internet-scale quantum repeater networks will be heterogeneous in physical
technology, repeater functionality, and management. The classical control
necessary to use the network will therefore face similar issues as Internet
data transmission. Many scalability and management problems that arose during
the development of the Internet might have been solved in a more uniform
fashion, improving flexibility and reducing redundant engineering effort.
Quantum repeater network development is currently at the stage where we risk
similar duplication when separate systems are combined. We propose a unifying
framework that can be used with all existing repeater designs. We introduce the
notion of a Quantum Recursive Network Architecture, developed from the emerging
classical concept of 'recursive networks', extending recursive mechanisms from
a focus on data forwarding to a more general distributed computing request
framework. Recursion abstracts independent transit networks as single relay
nodes, unifies software layering, and virtualizes the addresses of resources to
improve information hiding and resource management. Our architecture is useful
for building arbitrary distributed states, including fundamental distributed
states such as Bell pairs and GHZ, W, and cluster states.
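The recursive abstraction described above — a transit network presenting itself to the enclosing network as a single relay node, hiding its internal topology and addresses — can be sketched as follows. The class names and the trivial routing are illustrative assumptions, not the paper's protocol:

```python
class Node:
    """A leaf repeater node."""
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        return f"{self.name} served {request}"

class Network:
    """A transit network that behaves like a single relay node to any
    enclosing network: callers see only this network's name, while
    internal members and addresses stay hidden (information hiding)."""
    def __init__(self, name, members):
        self.name = name
        self.members = members  # each member is a Node or a Network

    def handle(self, request):
        # Recursion: a member may itself be a whole network; either way
        # it exposes the same handle() interface (simplistic routing).
        inner = self.members[0].handle(request)
        return f"{self.name} relayed: ({inner})"
```

Because `Network` and `Node` share one interface, networks nest to any depth, which is the recursive-abstraction property the abstract describes.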
Executive control systems in the engineering design environment
An executive control system (ECS) is a software structure for unifying various application codes into a comprehensive system. It provides a library of applications, a uniform access method through a central user interface, and a data management facility. A survey of twenty-four executive control systems designed to unify various CAD/CAE applications for use in diverse engineering design environments within government and industry was conducted. The goals of this research were to establish system requirements, to survey state-of-the-art architectural design approaches, and to provide an overview of the historical evolution of these systems. Foundations for design are presented and include environmental settings, system requirements, major architectural components, and a system classification scheme based on knowledge of the supported engineering domain(s). An overview of the design approaches used in developing the major architectural components of an ECS is presented, with examples taken from the surveyed systems. Attention is drawn to four major areas of ECS development: interdisciplinary usage; standardization; knowledge utilization; and computer science technology transfer.
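The three architectural components named above — an application library, a uniform access method, and a data management facility — can be sketched as a minimal skeleton. All names here are illustrative assumptions; no surveyed system is being reproduced:

```python
class ExecutiveControlSystem:
    """Minimal ECS sketch: an application library, a uniform way to
    invoke any registered application, and a shared data store."""
    def __init__(self):
        self.library = {}    # application library: name -> callable
        self.datastore = {}  # data management facility (shared state)

    def register(self, name, app):
        self.library[name] = app

    def run(self, name, **params):
        # Uniform access: every application is invoked the same way and
        # exchanges data with others only through the ECS datastore.
        return self.library[name](self.datastore, **params)
```

The uniform `run` entry point is what lets heterogeneous CAD/CAE codes be driven from one central interface, as the abstract describes.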
Model driven software modernisation
Constant innovation of information technology and ever-changing market requirements
relegate more and more existing software to legacy status. Generating software through
reusing legacy systems has been a primary solution and software re-engineering has the
potential to improve software productivity and quality across the entire software life
cycle. Classical re-engineering technology starts at the level of program source code,
which is often the most reliable, or only, information on a legacy system. The program
specification derived from legacy source code will then facilitate the migration of
legacy systems in the subsequent forward engineering steps. A recent research trend in
the re-engineering area carries this idea further and moves to a model-driven perspective,
in which the specification is presented as models.
The thesis focuses on engaging model technology to modernise legacy systems. A
unified approach, REMOST (Re-Engineering through MOdel conStruction and
Transformation), is proposed in the context of Model Driven Architecture (MDA). The
theoretical foundation is the construction of a WSL-based Modelling Language, known
as WML, which is an extension of WSL (Wide Spectrum Language). WML is defined
to provide a spectrum of models for the system re-engineering, including Common
Modelling Language (CML), Architecture Description Language (ADL) and Domain
Specific Modelling Language (DSML). 9rtetaWML is designed for model
transformation, providing query facilities, action primitives and metrics functions. A set
of transformation rules is defined in 9rtetaWML to conduct system abstraction and
refactoring. Model transformation for unifying WML and UML is also provided, which
can bridge the legacy systems to MDA. The architecture and working flow of the
REMOST approach are proposed and a prototype tool environment is developed for
testing the approach. A number of case studies are used for experiments with the
approach and the prototype tool, which show that the proposed approach is feasible and
promising in its domain. Conclusions are drawn based on the analysis, and further research
directions are also discussed.
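The rule-based abstraction and refactoring described above can be illustrated with a minimal bottom-up model transformer. The nested-tuple model representation and the `drop_skip` rule are illustrative assumptions, not WML's actual notation:

```python
def transform(node, rules):
    """Rewrite a model bottom-up: transform children first, then try
    each rule at this node (rules return a rewritten node or None)."""
    if isinstance(node, tuple):
        node = tuple(transform(child, rules) for child in node)
    for rule in rules:
        out = rule(node)
        if out is not None:
            node = out
    return node

def drop_skip(node):
    """Abstraction rule: remove no-op 'skip' statements from a 'seq'."""
    if isinstance(node, tuple) and node and node[0] == "seq":
        kept = tuple(child for child in node[1:] if child != "skip")
        if len(kept) != len(node) - 1:
            return ("seq",) + kept
    return None
```

Applying rules after children are rewritten means an abstraction such as `drop_skip` propagates through nested structures in one traversal, the kind of systematic rewriting a transformation language automates.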
The SP theory of intelligence: benefits and applications
This article describes existing and expected benefits of the "SP theory of
intelligence", and some potential applications. The theory aims to simplify and
integrate ideas across artificial intelligence, mainstream computing, and human
perception and cognition, with information compression as a unifying theme. It
combines conceptual simplicity with descriptive and explanatory power across
several areas of computing and cognition. In the "SP machine" -- an expression
of the SP theory which is currently realized in the form of a computer model --
there is potential for an overall simplification of computing systems,
including software. The SP theory promises deeper insights and better solutions
in several areas of application including, most notably, unsupervised learning,
natural language processing, autonomous robots, computer vision, intelligent
databases, software engineering, information compression, medical diagnosis and
big data. There is also potential in areas such as the semantic web,
bioinformatics, structuring of documents, the detection of computer viruses,
data fusion, new kinds of computer, and the development of scientific theories.
The theory promises seamless integration of structures and functions within and
between different areas of application. The potential value, worldwide, of
these benefits and applications is at least $190 billion each year. Further
development would be facilitated by the creation of a high-parallel,
open-source version of the SP machine, available to researchers everywhere.