
    Applying Ada to Beech Starship avionics

    As Ada solidified in its development, it became evident that it offered advantages for avionics systems because of its support for modern software engineering principles and real-time applications. An Ada programming support environment was developed for two major avionics subsystems in the Beech Starship: the electronic flight instrument displays and the flight management computer system. Both systems use multiple Intel 80186 microprocessors. The flight management computer provides flight planning, navigation displays, primary flight display of checklists, and other pilot advisory information. Together these systems represent nearly 80,000 lines of Ada source code and, to date, approximately 30 man-years of effort. The Beech Starship avionics systems are in flight testing.

    From an automated flight-test management system to a flight-test engineer's workstation

    The capabilities and evolution of a flight-test engineer's workstation (called TEST-PLAN) from an automated flight-test management system are described. The concept and capabilities of the automated flight-test management system are explored and discussed to illustrate the value of advanced system prototyping and evolutionary software development.

    Integrated testing and verification system for research flight software design document

    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced life-cycle costs as foremost goals. Capabilities have been included in the design for static detection of data-flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
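
    To make the data-flow anomaly idea concrete, here is a minimal sketch of the classical detection scheme (use before definition, double definition, definition killed before use) over a straight-line trace of variable events. The real MUST design analyzes HAL/S programs and communicating concurrent processes; the event encoding and names below are hypothetical, chosen only to illustrate the state-machine view of anomaly detection.

    from typing import List, Tuple

    # Classical anomaly patterns: (previous state, next action).
    ANOMALIES = {
        ("undef", "use"): "use of an undefined variable (ur)",
        ("def", "def"):   "redefinition without intervening use (dd)",
        ("def", "kill"):  "definition never used before being killed (du)",
    }

    def find_anomalies(events: List[Tuple[str, str]]) -> List[str]:
        """events is a sequence of (action, variable), action in {def, use, kill}."""
        state = {}    # variable -> last relevant action; absent means "undef"
        reports = []
        for action, var in events:
            prev = state.get(var, "undef")
            if (prev, action) in ANOMALIES:
                reports.append(f"{var}: {ANOMALIES[(prev, action)]}")
            state[var] = "undef" if action == "kill" else action
        return reports

    if __name__ == "__main__":
        trace = [("def", "x"), ("def", "x"),    # dd anomaly
                 ("use", "y"),                  # ur anomaly
                 ("def", "z"), ("kill", "z")]   # du anomaly
        for report in find_anomalies(trace):
            print(report)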

    The Homeostasis Protocol: Avoiding Transaction Coordination Through Program Analysis

    Datastores today rely on distribution and replication to achieve improved performance and fault tolerance. But the correctness of many applications depends on strong consistency properties, which can impose substantial overheads, since they require coordinating the behavior of multiple nodes. This paper describes a new approach to achieving strong consistency in distributed systems while minimizing communication between nodes. The key insight is to allow the state of the system to be inconsistent during execution, as long as this inconsistency is bounded and does not affect transaction correctness. In contrast to previous work, our approach uses program analysis to extract semantic information about permissible levels of inconsistency and is fully automated. We then employ a novel homeostasis protocol to allow sites to operate independently, without communicating, as long as any inconsistency is governed by appropriate treaties between the nodes. We discuss mechanisms for optimizing treaties based on workload characteristics to minimize communication, as well as a prototype implementation and experiments that demonstrate the benefits of our approach on common transactional benchmarks.
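
    The treaty mechanism can be illustrated with a demarcation-style toy: each site holds a local share of a global stock and runs transactions with no messages at all while its local bound (the treaty) guarantees the global invariant stock >= 0; only when a transaction would break the bound do the sites synchronize and renegotiate. This is a hand-written sketch of the general idea, not the paper's protocol or its program analysis; the class and function names are hypothetical.

    class Site:
        def __init__(self, name, allocation):
            self.name = name
            self.budget = allocation   # local share of the global stock

        def try_sell(self, qty):
            """Commit locally, with no communication, if the treaty still holds."""
            if qty <= self.budget:
                self.budget -= qty
                return True
            return False               # treaty exhausted: coordination required

    def renegotiate(sites):
        """Global synchronization point: pool the remaining budgets and re-split."""
        total = sum(s.budget for s in sites)
        share, rest = divmod(total, len(sites))
        for i, s in enumerate(sites):
            s.budget = share + (1 if i < rest else 0)

    if __name__ == "__main__":
        a, b = Site("A", 50), Site("B", 50)   # global stock of 100, split 50/50
        assert a.try_sell(40)                 # no cross-site messages needed
        if not a.try_sell(20):                # local treaty no longer suffices
            renegotiate([a, b])               # the only step that needs messages
            assert a.try_sell(20)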

    Bringing Back-in-Time Debugging Down to the Database

    With back-in-time debuggers, developers can explore what happened before observable failures by following infection chains back to their root causes. While there are several such debuggers for object-oriented programming languages, we do not know of any back-in-time capabilities at the database level. Thus, if failures are caused by SQL scripts or stored procedures, developers have difficulty understanding their unexpected behavior. In this paper, we present an approach for bringing back-in-time debugging down to the SAP HANA in-memory database. Our TARDISP debugger allows developers to step queries backwards and to inspect the database at arbitrary previous points in time. With the help of a SQL extension, we can express queries covering a period of execution time within a debugging session and handle large amounts of data with low overhead on performance and memory. The entire approach has been evaluated within a development project at SAP and shows promising results with respect to the gathered developer feedback. (Comment: 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering.)
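
    The core enabler of such debugging is a versioned store that can be queried "as of" an earlier point in time. The toy below shows that idea with an append-only log and a logical clock; it is only a sketch of the versioning concept, assuming a simple key-value model, and does not reproduce TARDISP's SQL extension or SAP HANA internals.

    class VersionedStore:
        def __init__(self):
            self.log = []       # append-only list of (version, key, value)
            self.version = 0    # logical clock, advanced on every write

        def put(self, key, value):
            self.version += 1
            self.log.append((self.version, key, value))
            return self.version

        def get(self, key, as_of=None):
            """Return the value of key at version `as_of` (default: latest)."""
            horizon = self.version if as_of is None else as_of
            for version, k, v in reversed(self.log):
                if k == key and version <= horizon:
                    return v
            return None

    if __name__ == "__main__":
        db = VersionedStore()
        v1 = db.put("balance", 100)
        db.put("balance", 37)                # the suspicious update
        print(db.get("balance"))             # 37  (current state)
        print(db.get("balance", as_of=v1))   # 100 (stepping back before the bug)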

    Developing a labelled object-relational constraint database architecture for the projection operator

    Current relational databases have been developed to improve the handling of stored data; however, some types of information have to be analysed for which no suitable tools are available. These new types of data can be represented and treated as constraints, allowing a set of data to be represented through equations, inequations, and Boolean combinations of both. To this end, constraint databases were defined and some prototypes were developed. Since there are aspects that can be improved, we propose a new architecture called the labelled object-relational constraint database (LORCDB). This provides more expressiveness, since the database is adapted to support more types of data, instead of the data having to be adapted to the database. In this paper, the projection operator of SQL is extended so that it works with linear and polynomial constraints and with the variables of constraints. To optimize query evaluation efficiency, some strategies and algorithms have been used to obtain an efficient query plan. Most work on constraint databases uses spatiotemporal data as case studies. However, this paper proposes model-based diagnosis, since it is a research area with high potential that permits more complicated queries than spatiotemporal examples. Our architecture permits queries over constraints to be defined over different sets of variables by using symbolic substitution and elimination of variables. (Ministerio de Ciencia y Tecnología DPI2006-15476-C02-0)
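
    Projection over linear constraints is classically done by eliminating variables, for example with Fourier-Motzkin elimination: every pair of an upper and a lower bound on the projected-out variable is combined into a new constraint that no longer mentions it. The sketch below shows that standard technique on constraints of the form sum(c[v] * v) <= b; it is illustrative only and does not reproduce LORCDB's operators, labelling, or query optimization.

    from fractions import Fraction

    def eliminate(constraints, var):
        """Project `var` out of a list of (coeffs, bound) linear constraints."""
        keep, uppers, lowers = [], [], []
        for coeffs, b in constraints:
            a = coeffs.get(var, 0)
            if a == 0:
                keep.append((coeffs, b))
            elif a > 0:
                uppers.append((coeffs, b, a))    # a*var + rest <= b
            else:
                lowers.append((coeffs, b, -a))   # -c*var + rest <= b
        # Combine every upper bound with every lower bound to cancel `var`.
        for cu, bu, a in uppers:
            for cl, bl, c in lowers:
                combined = {v: Fraction(c) * cu.get(v, 0) + Fraction(a) * cl.get(v, 0)
                            for v in (set(cu) | set(cl)) if v != var}
                keep.append((combined, Fraction(c) * bu + Fraction(a) * bl))
        return keep

    if __name__ == "__main__":
        # x - y <= 0 together with -x <= -2 (i.e. x >= 2): eliminating x
        # yields -y <= -2, i.e. y >= 2.
        system = [({"x": 1, "y": -1}, 0), ({"x": -1}, -2)]
        print(eliminate(system, "x"))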

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
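
    One form of the data parallelism in question is parallelizing the match phase of a production system: every rule's condition is tested against working memory concurrently, and the resulting conflict set is then resolved sequentially. The sketch below illustrates that structure, and also hints at why speedups are often small: when each condition test is this cheap, process startup and communication dominate. The rules and working-memory contents are hypothetical.

    from concurrent.futures import ProcessPoolExecutor

    def match(args):
        """Test one rule's condition against the working memory."""
        name, condition, memory = args
        return name if condition(memory) else None

    def overheating(memory):
        return memory["temp"] > 100

    def low_pressure(memory):
        return memory["pressure"] < 10

    RULES = [("shed-load", overheating), ("open-valve", low_pressure)]

    if __name__ == "__main__":
        working_memory = {"temp": 130, "pressure": 42}
        tasks = [(name, cond, working_memory) for name, cond in RULES]
        with ProcessPoolExecutor() as pool:     # parallel match phase
            conflict_set = [r for r in pool.map(match, tasks) if r is not None]
        print("conflict set:", conflict_set)    # ['shed-load']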

    The place of expert systems in a typology of information systems

    This article considers definitions and claims of expert systems (ES) and analyzes them in view of traditional information systems (IS). It is argued that the valid specifications for ES do not differ from those for IS. Consequently, the theoretical study and the practical development of ES should not be a monodiscipline. Integrating ES development into classical mathematics and computer science opens the door to existing knowledge and experience. Aspects of existing ES are reviewed from this interdisciplinary point of view.