
    A knowledge based application of the extended aircraft interrogation and display system

    A family of multiple-processor ground support test equipment was used to test digital flight-control systems on high-performance research aircraft. A unit recently built for the F-18 high alpha research vehicle project is the latest model in a series called the extended aircraft interrogation and display system. The primary capability emphasized here is monitoring of the aircraft MIL-STD-1553B data buses, with real-time engineering-units displays of flight-control parameters. A customized software package was developed to provide real-time data interpretation based on rules embodied in a highly structured knowledge database. The configuration of this extended aircraft interrogation and display system is briefly described, and the evolution of the rule-based package and its application to failure modes and effects testing on the F-18 high alpha research vehicle is discussed.
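
    As a rough illustration of the kind of rule-based interpretation described above (not the actual NASA software), the sketch below converts raw MIL-STD-1553B bus words to engineering units and applies simple threshold rules; all parameter names, scale factors, and limits are invented.

```python
# Hypothetical knowledge base: (parameter, condition, message) rules that fire
# on engineering-units values decoded from the 1553B bus.
RULES = [
    ("aileron_cmd_deg", lambda v: abs(v) > 25.0, "aileron command out of range"),
    ("actuator_temp_c", lambda v: v > 120.0, "actuator over-temperature"),
]

def to_engineering_units(raw_word: int, scale: float, offset: float) -> float:
    """Sign-extend a 16-bit bus word, then scale to engineering units."""
    value = raw_word - 0x10000 if raw_word & 0x8000 else raw_word
    return value * scale + offset

def interpret(frame: dict) -> list[str]:
    """Apply the knowledge-base rules to one frame of decoded parameters."""
    return [msg for name, cond, msg in RULES
            if name in frame and cond(frame[name])]

frame = {
    "aileron_cmd_deg": to_engineering_units(0x7FFF, scale=0.001, offset=0.0),
    "actuator_temp_c": 130.0,
}
print(interpret(frame))  # both rules fire: out-of-range command, over-temperature
```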

    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advances in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing has raised the question of where application programs should reside within the system: in a combined system of multiple engineering workstations attached to a mainframe host, how should a system designer assign applications between the larger mainframe host and the smaller, yet powerful, workstations? The concepts related to real-time data processing are analyzed, and systems are described that use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need a shared resource (the mainframe) to perform their functions.
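
    In caricature, the residency criteria the report aims at might reduce to a small decision rule like the following sketch; the attribute names and thresholds are illustrative assumptions, not taken from the report.

```python
# Hypothetical residency rule: map coarse application attributes to a placement.
def residency(app: dict) -> str:
    if app.get("needs_shared_resource"):       # e.g. a central database on the host
        return "host"
    if app.get("real_time") and app.get("interactive"):
        return "workstation"                   # keep latency-sensitive work local
    if app.get("cpu_hours", 0) > 10:           # large batch jobs go to the big machine
        return "host"
    return "workstation"

print(residency({"real_time": True, "interactive": True}))  # -> workstation
print(residency({"cpu_hours": 40}))                          # -> host
```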

    Feasibility of using a knowledge-based system concept for in-flight primary flight display research

    A study was conducted to determine the feasibility of using knowledge-based system architectures for in-flight research into primary flight display information management issues. Feasibility rested on the ability to integrate knowledge-based systems with existing onboard aircraft systems and, given the hardware and software platforms available, on the ability to use interpreted LISP software within the real-time operation of the primary flight display. In addition to evaluating these feasibility issues, the study determined whether the software engineering advantages of knowledge-based systems found for this application in an earlier workstation study extended to the in-flight research environment. To study these issues, two integrated knowledge-based systems were designed to control the primary flight display according to pre-existing specifications of an ongoing primary flight display information management research effort. These two systems were implemented to assess the feasibility and software engineering issues listed above. Flight test results successfully demonstrated the feasibility of using knowledge-based systems in flight with actual aircraft data.
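
    A minimal sketch of a knowledge-based display manager in this spirit (the original used interpreted LISP; the Python rules below are invented for illustration only):

```python
# Hypothetical rules that examine aircraft state and decide which primary
# flight display (PFD) elements to show for the current flight condition.
def pfd_elements(state: dict) -> list[str]:
    elements = ["attitude", "airspeed", "altitude"]      # always shown
    if state["gear_down"] and state["altitude_ft"] < 1500:
        elements.append("approach_guidance")             # assumed rule
    if state["airspeed_kt"] < state["stall_speed_kt"] * 1.1:
        elements.append("low_speed_cue")                 # assumed rule
    return elements

state = {"gear_down": True, "altitude_ft": 900,
         "airspeed_kt": 130, "stall_speed_kt": 125}
print(pfd_elements(state))  # adds approach guidance and a low-speed cue
```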

    Methodology for object-oriented real-time systems analysis and design: Software engineering

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly and seamlessly from analysis through design to implementation. Different analysis methodologies often lead to different structurings of the system, so the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation but the original specification, and perhaps the high-level design, is non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects whose operations or methods correspond to processes in the data flow diagrams, and then design in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the logical models of object-oriented real-time systems analysis through the physical architectural models to the high-level design stages. The methodology covers the overall specification, including hardware and software modules; in software modules, the systems-analysis objects are transformed into software objects.
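
    A minimal sketch of such a 'real-time systems-analysis object': its time-behavior is just a set of states plus state-transition rules. The valve controller, its states, and its events are invented for illustration.

```python
# Hypothetical analysis object whose behavior is a (state, event) -> state table.
class AnalysisObject:
    def __init__(self, name, transitions, initial):
        self.name = name
        self.state = initial
        self.transitions = transitions   # maps (state, event) to the next state

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        # events with no matching rule are ignored in the current state

valve = AnalysisObject(
    "inlet_valve",
    {("closed", "open_cmd"): "opening",
     ("opening", "limit_switch"): "open",
     ("open", "close_cmd"): "closing",
     ("closing", "limit_switch"): "closed"},
    initial="closed")

for ev in ["open_cmd", "limit_switch", "close_cmd", "limit_switch"]:
    valve.handle(ev)
print(valve.state)  # -> closed: a full open/close cycle
```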

    A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, and safety. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he tends to neglect or underweight the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter-margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on the target. The framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
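
    To make the run-time verification step concrete, here is a hedged sketch of a monitor that checks observed job timings against a timing tolerance contract; the contract fields and violation checks are simplified assumptions, not CPAL's actual semantics.

```python
from dataclasses import dataclass

@dataclass
class TimingContract:
    period_ms: float      # nominal activation period
    deadline_ms: float    # relative deadline per job
    max_jitter_ms: float  # allowed spread of response times

def check(contract: TimingContract, release_ms: list[float],
          finish_ms: list[float]) -> list[str]:
    """Compare observed release/finish times against the contract."""
    violations = []
    responses = [f - r for r, f in zip(release_ms, finish_ms)]
    for i, rt in enumerate(responses):
        if rt > contract.deadline_ms:
            violations.append(f"job {i}: deadline miss ({rt:.1f} ms)")
    if max(responses) - min(responses) > contract.max_jitter_ms:
        violations.append("response-time jitter exceeds contract")
    return violations

c = TimingContract(period_ms=10, deadline_ms=8, max_jitter_ms=2)
# job 1 misses its deadline (9 ms), and jitter is 3 ms > 2 ms allowed
print(check(c, release_ms=[0, 10, 20], finish_ms=[6, 19, 26]))
```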

    A Temporal Expert System for Engineering Design Change Workflow

    Workflow management, which is concerned with the coordination and control of business processes using information technology, has grown from its origins in document routing to include the automation of process logic in business process engineering. Workflow also has a strong temporal aspect: activity sequencing, deadlines, routing conditions, and scheduling all involve the element of time. Temporal expert systems, which use knowledge-based constructs to represent and reason about time, can be used to enhance the capabilities of workflow software. This paper presents a temporal expert system workflow component for tracking engineering design changes. We use Allen's theory of temporal intervals in our model to enhance the decision-making, timing, and routing activities in a workflow application. We test the model using information from a real-world engineering design situation and suggest further research opportunities.
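
    For readers unfamiliar with Allen's interval algebra, a few of its thirteen relations can be sketched as predicates over activity intervals; the workflow activities and their endpoints below are hypothetical.

```python
# Intervals are (start, end) pairs; these predicates are four of Allen's
# thirteen basic relations between two intervals a and b.
def before(a, b):    return a[1] < b[0]
def meets(a, b):     return a[1] == b[0]
def overlaps(a, b):  return a[0] < b[0] < a[1] < b[1]
def during(a, b):    return b[0] < a[0] and a[1] < b[1]

design_review   = (5, 9)
change_approval = (9, 12)
tooling_update  = (7, 15)

# e.g. route the change onward only once the review MEETS (or is BEFORE) approval
print(meets(design_review, change_approval))    # True: review ends as approval starts
print(overlaps(design_review, tooling_update))  # True: review overlaps the tooling work
print(during(design_review, tooling_update))    # False: review starts first
```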

    Replication and fault-tolerance in real-time systems

    PhD Thesis. The increased availability of sophisticated computer hardware and the corresponding decrease in its cost have led to widespread growth in the use of computer systems for real-time plant and process control applications. Such applications typically place very high demands upon computer control systems, and the development of appropriate control software for these application areas can present a number of problems not normally encountered elsewhere. First of all, real-time applications must be correct in the time domain as well as the value domain: they must return results which are not only correct but also delivered on time. Further, since the potential for catastrophic failures can be high in a process or plant control environment, many real-time applications also have to meet high reliability requirements, typically by means of a combination of fault avoidance and fault tolerance techniques. This thesis addresses some of the problems encountered in providing fault tolerance in real-time application programs. Specifically, it considers the use of replication to ensure the availability of services in real-time systems. In a real-time environment, supporting replicated services can introduce a number of problems. In particular, the scope for non-deterministic behaviour in real-time applications can be quite large, and this can lead to difficulties in maintaining consistent internal states across the members of a replica group. To tackle this problem, a model is proposed for fault-tolerant real-time objects which not only allows such objects to perform application-specific recovery operations and real-time processing activities such as event handling, but also allows the objects to be replicated. The architectural support required for such replicated objects is discussed and, to conclude, the run-time overheads associated with the use of such replicated services are considered. This work was supported by the Science and Engineering Research Council.
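
    The consistency problem and its standard remedy can be sketched briefly: if each replica is a deterministic state machine and every replica applies the same totally ordered event log, their internal states cannot diverge. The 'setpoint' object below is invented for illustration and stands in for the thesis's fault-tolerant real-time objects.

```python
class ReplicatedObject:
    """Deterministic object: identical event order implies identical state."""
    def __init__(self):
        self.setpoint = 0.0

    def apply(self, event):
        kind, value = event
        if kind == "set":
            self.setpoint = value
        elif kind == "trim":
            self.setpoint += value

# An ordered event log, as a group-communication layer would deliver it.
log = [("set", 10.0), ("trim", -0.5), ("trim", 0.25)]

replicas = [ReplicatedObject() for _ in range(3)]
for r in replicas:
    for ev in log:              # same events, same order, at every replica
        r.apply(ev)

assert len({r.setpoint for r in replicas}) == 1   # all replicas agree
print(replicas[0].setpoint)                        # -> 9.75
```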

    Towards an SDN network control application for differentiated traffic routing

    In recent years, Software Defined Networking has emerged as a promising paradigm to foster network innovation and address the issues arising from the ossification of the TCP/IP architecture. The clean separation between control and data planes and the definition of northbound and southbound interfaces are key features of the Software Defined Networking paradigm. Moreover, a centralised control plane allows network operators to deploy advanced control and management strategies: effective traffic engineering and resource management policies achieve better utilisation of network resources and improve end-to-end service performance. This paper deals with the architectural design and experimental validation of a control application that enables differentiated routing for traffic flows belonging to different service classes. The new control application makes routing decisions leveraging OpenFlow network statistics, i.e., taking advantage of real-time network status information. Moreover, a Deep Packet Inspection module has been developed and integrated into the control application to detect VoIP traffic with Session Initiation Protocol signalling, thereby enforcing policies for differentiated treatment of VoIP traffic. Finally, a functional validation is performed in an emulated environment. This work was supported by the EPSRC INTERNET Project EP/H040536/1. This is the author accepted manuscript; the final version is available from IEEE via http://dx.doi.org/10.1109/ICC.2015.724925
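
    A toy version of the control application's decision logic, assuming a trivial payload check in place of the real Deep Packet Inspection module; the path names and the SIP heuristic are illustrative only (a real controller would install OpenFlow rules rather than return a path).

```python
# Hypothetical service classes mapped to precomputed switch paths.
PATHS = {
    "low_latency": ["s1", "s3", "s4"],   # reserved for VoIP flows
    "best_effort": ["s1", "s2", "s4"],
}

def is_sip(payload: bytes) -> bool:
    """Crude DPI stand-in: SIP messages start with a method name or 'SIP/2.0'."""
    head = payload[:12].upper()
    return head.startswith(b"SIP/2.0") or head.startswith(b"INVITE")

def route(payload: bytes) -> list[str]:
    """Classify the flow and select the path for its service class."""
    cls = "low_latency" if is_sip(payload) else "best_effort"
    return PATHS[cls]

print(route(b"INVITE sip:bob@example.com SIP/2.0\r\n"))  # low-latency path
print(route(b"GET / HTTP/1.1\r\n"))                      # best-effort path
```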

    Editorial for FGCS Special issue on “Time-critical Applications on Software-defined Infrastructures”

    Performance requirements in many applications can often be modelled as constraints related to time: for example, the span of data processing for disaster early warning [1], latency in live event broadcasting [2], and jitter during audio/video conferences [3]. These time constraints are treated either in an "as fast as possible" manner, as with sensitive latencies in high-performance computing or communication tasks, or in a "timeliness" way, where tasks have to finish within a given window, as in real-time systems, following the classification in [4]. To meet the required time constraints, one has to carefully analyse them, engineer and integrate system components, and optimise the scheduling of computing and communication tasks; the development of a time-critical application is thus time-consuming and costly. During the past decades, the infrastructure technologies of computing, storage, and networking have made tremendous progress. Besides the capacity and performance of physical devices, virtualisation technologies offer effective resource management and isolation at different levels, such as Java Virtual Machines at the application level, Docker containers at the operating-system level, and Virtual Machines at the whole-system level. Moreover, network embedding [5] and software-defined networking [6] provide network-level virtualisation and control, enabling a new paradigm of infrastructure in which resources can be virtualised, isolated, and dynamically customised based on application needs. Software-defined infrastructures, including Cloud, Fog, Edge, software-defined networking, and network function virtualisation, are emerging as new environments for distributed applications with time-critical requirements, but they also face challenges in effectively exploiting advanced infrastructure features in system engineering and dynamic control. This special issue on "time-critical applications and software-defined infrastructures" focuses on practical aspects of the design, development, customisation, and performance-oriented operation of such applications for Clouds and other distributed environments.

    GREEN COMPUTING FOR IOT – SOFTWARE APPROACH

    More efficient usage of the limited energy resources of embedded platforms, found in various IoT applications, is a universal challenge in designing such devices and systems. Although many power management techniques for controlling and optimizing device power consumption have been introduced at the hardware and software levels, only a few of them address device operation at the application level. In this paper, a software engineering approach for managing the operation of IoT edge devices is presented. The approach involves a set of application-level software parameters that affect the power consumption of the IoT device and its real-time behavior. To investigate and illustrate the impact of the introduced parameters on device performance and energy footprint, we utilize a custom-built simulation environment. The simulation results, obtained by analyzing a simplified producer-consumer configuration of the IoT edge tier under a push-based communication model, confirm that careful tuning of the identified set of parameters can lead to more energy-efficient operation of IoT end devices.
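
    A toy version of such a producer-consumer experiment, estimating how one application-level parameter (the push period) changes the average current draw of an end device; all current and timing figures below are invented for illustration.

```python
# Assumed current draws (mA) and durations (s) for the device's activities.
SLEEP_MA, SAMPLE_MA, TX_MA = 0.01, 5.0, 40.0
SAMPLE_S, TX_S = 0.05, 0.02

def avg_current_ma(push_period_s: float, samples_per_push: int) -> float:
    """Average current over one push period under a push-based model."""
    active_s = samples_per_push * SAMPLE_S + TX_S
    sleep_s = max(push_period_s - active_s, 0.0)
    charge = (samples_per_push * SAMPLE_S * SAMPLE_MA
              + TX_S * TX_MA
              + sleep_s * SLEEP_MA)          # charge in mA*s over the period
    return charge / push_period_s

for period in (1, 10, 60):   # pushing less often cuts the average draw
    print(period, round(avg_current_ma(period, samples_per_push=1), 3))
# -> 1 1.059, 10 0.115, 60 0.027 (mA)
```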