    Outline of a theory of truth as correctness for semantic information

    The article develops a correctness theory of truth (CTT) for semantic information. After the introduction, section two shows that semantic information is translatable into propositional semantic information (i). In section three, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarisation is normalised in section four, where [Q + R] is transformed into a Boolean question and its corresponding yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In sections five and six, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates (in a Fregean sense) Q by verifying and validating it (in the computer science sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science sense of “proxy”) and (5) proximal access to m commutes with distal access to s (in the category theory sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science sense of the term) m enables one to read/write (access) s. The last section draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users.
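
    As a purely illustrative sketch (not part of the article), the computer-science notions the argument borrows can be pictured as follows: a hypothetical Model acts as a proxy for a System, and "commutation" holds when reading/writing through the proxy agrees with reading/writing the system directly. All class and function names below are invented for this illustration.

        # Purely illustrative sketch (not from the article): a model m as a proxy for a
        # system s, where "proximal" access to m commutes with "distal" access to s.
        # All names below are invented for this illustration.

        class System:
            """The distal system s that the query Q is about."""
            def __init__(self):
                self._state = {}

            def read(self, key):
                return self._state.get(key)

            def write(self, key, value):
                self._state[key] = value

        class ModelProxy:
            """The proximal model m: reads and writes are forwarded to the system s."""
            def __init__(self, system):
                self._system = system

            def read(self, key):
                return self._system.read(key)

            def write(self, key, value):
                self._system.write(key, value)

        def commutes(model, system, key, value):
            """Writing through m and then reading either m or s directly yields the
            same value: the two access paths agree, so accessing m amounts to
            accessing s."""
            model.write(key, value)
            return model.read(key) == system.read(key) == value

        if __name__ == "__main__":
            s = System()
            m = ModelProxy(s)
            print(commutes(m, s, "door_open", True))  # True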

    Formal methods and digital systems validation for airborne systems

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.
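
    The report surveys formal methods in general rather than any single technique; as a minimal, hypothetical illustration of the kind of verification it discusses, the sketch below exhaustively explores a toy finite-state controller and checks a made-up safety requirement in every reachable state, in the spirit of model checking. The controller and requirement are invented for this example.

        # Minimal, hypothetical illustration (not from the report): exhaustive search
        # over the reachable states of a toy finite-state controller, checking a
        # safety requirement in every state.

        def successors(state):
            """Toy controller: state is (mode, alarm); the controller logic under
            verification switches the alarm on whenever the mode reaches 2."""
            mode, _alarm = state
            return [(m, m >= 2) for m in (mode, min(mode + 1, 3), max(mode - 1, 0))]

        def safe(state):
            """Requirement under check: whenever mode >= 2, the alarm must be on."""
            mode, alarm = state
            return alarm or mode < 2

        def check(initial):
            """Explore every reachable state; return (True, None) or (False, violating state)."""
            seen, frontier = {initial}, [initial]
            while frontier:
                state = frontier.pop()
                if not safe(state):
                    return False, state
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return True, None

        if __name__ == "__main__":
            ok, witness = check((0, False))
            print("requirement holds" if ok else f"violated in state {witness}")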

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and establishing the basis of the area of Automated Analysis of Feature Models (AAFM). From then on, different studies have applied AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where AAFM is being applied and that define the current tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published, and which authors and institutions are contributing to the field. We observed that the field's growing maturity is shown by the increasing number of journal publications over the years, as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future. Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186.
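
    As a minimal sketch of what automated analysis of a feature model involves, one classic AAFM operation is counting the valid products of a model by encoding its tree and cross-tree constraints as propositional constraints and enumerating assignments. The toy mobile-phone feature model below is a hypothetical example, not one taken from this paper.

        # Minimal sketch with a toy feature model (hypothetical, not from the paper):
        # count the valid products by encoding tree and cross-tree constraints as
        # propositional constraints and enumerating all assignments.

        from itertools import product

        FEATURES = ["Phone", "Calls", "Screen", "Basic", "HD", "Camera"]

        def valid(cfg):
            f = dict(zip(FEATURES, cfg))
            root_ok      = f["Phone"]                                   # root always selected
            mandatory_ok = f["Calls"] == f["Phone"] and f["Screen"] == f["Phone"]
            # XOR group under Screen: if Screen is selected, exactly one of Basic/HD; else none.
            xor_ok = (f["Basic"] + f["HD"] == 1) if f["Screen"] else not (f["Basic"] or f["HD"])
            requires_ok  = (not f["Camera"]) or f["HD"]                 # cross-tree: Camera requires HD
            return root_ok and mandatory_ok and xor_ok and requires_ok

        if __name__ == "__main__":
            products = [c for c in product([False, True], repeat=len(FEATURES)) if valid(c)]
            print(len(products))  # 3 valid products for this toy model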

    Mass collaboration problem solving: A new approach to wicked problems

    Resilient Computing Courseware

    This Deliverable describes the courseware supporting the teaching of Resilient Computing in a curriculum for an MSc track following the scheme of the Bologna process. Developing the supporting material for such a curriculum required a rather intensive effort involving not only the ReSIST partners but also a much larger worldwide community, with the aim of identifying available, up-to-date support material that can be used to build a progressive and methodical line of teaching to accompany students and other interested persons in a profitable learning process. All this material is available online on the official ReSIST web site http://www.resistnoe.org/, can be viewed and downloaded for use in class, and constitutes, to our knowledge, the first almost comprehensive attempt to build a database of support material related to Dependable and Resilient Computing. European Commission through NoE IST-4-026764-NOE (ReSIST).

    Integrating Explanatory/Predictive and Prescriptive Science in Information Systems Research

    The scholarly information systems (IS) field has a dual role. As an explanatory and predictive science, the field contributes to explaining the pervasive IS that shape the digital age and sometimes also makes predictions about those phenomena. As a prescriptive science, it participates in creating IS-related innovations by identifying means-ends relationships. The two can beneficially interact, such as when explanatory theory provides the basis for generating prescriptions or when applicable knowledge produces explanatory insights. In this commentary, we contribute to integrating these two roles by proposing a framework to help IS researchers navigate the field’s duality to extend the cumulative scholarly knowledge that it creates in terms of justified explanations and predictions and justified prescriptions. The process we describe builds on ongoing, dynamic, iterative, and interrelated research cycles. We identify a set of integrative research practices that occur at the interface between explanatory and predictive science and prescriptive science—the explanation-prescription nexus. We derive guidelines for IS research.

    Visual language representation for use case evolution and traceability

    The primary goal of this research is to assist non-technical stakeholders involved in requirements engineering with a comprehensible method for managing changing requirements within a specific domain. An important part of managing evolving requirements over time is to maintain a temporal ordering of the changes and to support traceability of the modifications. This research defines a semi-formal syntactic and semantic definition of such a method using a visual language, RE/TRAC (Requirements Evolution with Traceability), and a supporting formal semantic notation, RE/TRAC-SEM. RE/TRAC-SEM is an ontological specification employing a combination of models, including verbal definitions, set theory and a string-language specification, RE/TRAC-CF. The language RE/TRAC-CF enables the separation of the syntactic description of the visual language from the semantic meaning of the model, permitting varying target representations and taking advantage of existing efficient parsing algorithms for context-free grammars. As an application of the RE/TRAC representation, this research depicts the hierarchical step-wise refinement of UML use case diagrams to demonstrate evolving system requirements. In the current arena of software development, where systems are described using platform-independent models (PIMs) which emphasize the front-end design process, requirements and design documents, including the use cases, have become the primary artifacts of the system. Therefore, the management of requirements evolution has become even more critical in the creation and maintenance of systems.
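
    The abstract does not reproduce the RE/TRAC-CF grammar itself, so the sketch below uses a hypothetical stand-in grammar to show the general idea: a context-free string language describing step-wise use-case refinement, handled by an ordinary recursive-descent parser. All rule names and identifiers are invented for this illustration.

        # Sketch under assumptions: the grammar below is a hypothetical stand-in for a
        # context-free string language describing use-case refinement trees, parsed
        # with a plain recursive-descent parser.

        import re

        TOKEN = re.compile(r"\s*([A-Za-z_][A-Za-z0-9_]*|[(),])")

        def tokenize(text):
            pos, tokens = 0, []
            while pos < len(text):
                m = TOKEN.match(text, pos)
                if not m:
                    raise SyntaxError(f"unexpected input at {pos}: {text[pos:]!r}")
                tokens.append(m.group(1))
                pos = m.end()
            return tokens

        def parse(tokens):
            """node := NAME [ '(' node (',' node)* ')' ]  -- returns (name, [children])."""
            name = tokens.pop(0)
            children = []
            if tokens and tokens[0] == "(":
                tokens.pop(0)
                children.append(parse(tokens))
                while tokens and tokens[0] == ",":
                    tokens.pop(0)
                    children.append(parse(tokens))
                assert tokens.pop(0) == ")", "expected ')'"
            return name, children

        if __name__ == "__main__":
            # A use case 'Checkout' refined step-wise into sub-use-cases.
            tree = parse(tokenize("Checkout(SelectItems, Pay(ValidateCard, ChargeCard))"))
            print(tree)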

    Technological roadmap on AI planning and scheduling

    At the beginning of the new century, information technologies have become basic and indispensable constituents of the production and preparation processes for all kinds of goods and services, and are thereby largely influencing both the working and private lives of nearly every citizen. This development will continue and grow further with the continually increasing use of the Internet in production, business, science, education, and everyday societal and private undertakings. Recent years have shown, however, that a dramatic enhancement of software capabilities is required to continuously provide advanced and competitive products and services in all these fast-developing sectors. This includes the development of intelligent systems – systems that are more autonomous, flexible, and robust than today’s conventional software. Intelligent Planning and Scheduling is a key enabling technology for intelligent systems. It has been developed and matured over the last three decades and has successfully been employed for a variety of applications in commerce, industry, education, medicine, public transport, defense, and government. This document reviews the state of the art in key application and technical areas of Intelligent Planning and Scheduling. It identifies the most important research, development, and technology transfer efforts required in the coming 3 to 10 years and shows the way forward to meet these challenges in the short-, medium- and longer-term future. The roadmap has been developed under the regime of PLANET – the European Network of Excellence in AI Planning. This network, established by the European Commission in 1998, is the co-ordinating framework for research, development, and technology transfer in the field of Intelligent Planning and Scheduling in Europe. A large number of people have contributed to this document, including the members of PLANET, non-European international experts, and a number of independent expert peer reviewers. All of them are acknowledged in a separate section of this document. Intelligent Planning and Scheduling is a far-reaching technology. Accepting the challenges and progressing along the directions pointed out in this roadmap will enable a new generation of intelligent application systems in a wide variety of industrial, commercial, public, and private sectors.