    Building models of real-time systems from application software

    We present a methodology for building timed models of real-time systems by adding time constraints to their application software. The applied constraints take into account execution times of atomic statements, the behavior of the system's external environment, and scheduling policies. The timed models of the application obtained in this manner can be analyzed by using time analysis techniques to check relevant real-time properties. We show an instance of the methodology developed in the TAXYS project for the modeling and analysis of real-time systems programmed in the Esterel language. This language has been extended to describe, by using pragmas, time constraints characterizing the execution platform and the external environment. An analyzable timed model of the real-time system is produced by composing instrumented C-code generated by the compiler. The latter has been re-engineered in order to take into account the pragmas. Finally, we report on applications of TAXYS to several nontrivial examples. © 2003 IEEE
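
    To make the idea of composing per-statement time constraints concrete, here is a minimal sketch (in Python, not the Esterel/C tooling the paper describes): statements carry assumed worst-case execution times, and a path-level deadline property is checked against their composition. All names and numbers are invented for illustration.

        # Hypothetical sketch of composing per-statement timing constraints
        # into a path-level deadline check; not the TAXYS implementation.
        from dataclasses import dataclass

        @dataclass
        class TimedStatement:
            name: str
            wcet_us: int  # assumed worst-case execution time on the platform

        def path_wcet(path):
            """Worst-case execution time of a sequential path of statements."""
            return sum(s.wcet_us for s in path)

        def meets_deadline(path, deadline_us):
            """The real-time property: the path completes within its deadline."""
            return path_wcet(path) <= deadline_us

        handler = [TimedStatement("read_sensor", 40),
                   TimedStatement("filter", 120),
                   TimedStatement("emit_output", 30)]
        print(meets_deadline(handler, deadline_us=200))  # True: 190 <= 200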

    Methodology for object-oriented real-time systems analysis and design: Software engineering

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation and the original specification, and perhaps the high-level design, is not object-oriented. Two approaches to real-time systems analysis that can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, then abstracting objects whose operations or methods correspond to processes in the data-flow diagrams, and designing in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced; it becomes the basic building block of a series of seamlessly connected models that progress from the logical models of object-oriented real-time systems analysis, through the physical architectural models, to the high-level design stages. The methodology is appropriate to the overall specification, including hardware and software modules. In software modules, the systems-analysis objects are transformed into software objects.
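
    The 'real-time systems-analysis object' described above can be pictured as an entity with states and state-transition rules. The following is an illustrative Python sketch under that reading; the class and the valve example are invented, not taken from the paper.

        # Illustrative sketch: a concurrent entity whose time-behavior is a
        # set of states plus state-transition rules (names invented).
        class AnalysisObject:
            def __init__(self, name, initial, transitions):
                self.name = name
                self.state = initial
                # transitions: {(current_state, event): next_state}
                self.transitions = transitions

            def on_event(self, event):
                """Apply a transition rule if one matches the current state."""
                self.state = self.transitions.get((self.state, event), self.state)
                return self.state

        valve = AnalysisObject("valve", "closed",
                               {("closed", "open_cmd"): "opening",
                                ("opening", "limit_hit"): "open",
                                ("open", "close_cmd"): "closed"})
        print(valve.on_event("open_cmd"))   # opening
        print(valve.on_event("limit_hit"))  # open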

    SIGL:Securing Software Installations Through Deep Graph Learning

    Many users implicitly assume that software can only be exploited after it is installed. However, recent supply-chain attacks demonstrate that application integrity must be ensured during installation itself. We introduce SIGL, a new tool for detecting malicious behavior during software installation. SIGL collects traces of system call activity, building a data provenance graph that it analyzes using a novel autoencoder architecture with a graph long short-term memory network (graph LSTM) for the encoder and a standard multilayer perceptron for the decoder. SIGL flags suspicious installations as well as the specific installation-time processes that are likely to be malicious. Using a test corpus of 625 malicious installers containing real-world malware, we demonstrate that SIGL has a detection accuracy of 96%, outperforming similar systems from industry and academia by up to 87% in precision and recall and 45% in accuracy. We also demonstrate that SIGL can pinpoint the processes most likely to have triggered malicious behavior, works on different audit platforms and operating systems, and is robust to training data contamination and adversarial attack. It can be used with application-specific models, even in the presence of new software versions, as well as application-agnostic meta-models that encompass a wide range of applications and installers.
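
    SIGL's core detection principle - train a reconstruction model on benign installations only, then flag inputs that reconstruct poorly - can be sketched in a few lines. The sketch below substitutes a linear (PCA-style) autoencoder over plain feature vectors for SIGL's graph-LSTM autoencoder over provenance graphs, purely for illustration; data and threshold are synthetic.

        # Hedged sketch of reconstruction-error anomaly detection; stands in
        # for (but is not) SIGL's graph-LSTM autoencoder.
        import numpy as np

        rng = np.random.default_rng(0)
        benign = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))

        # "Train": learn a low-rank reconstruction from benign data only.
        mean = benign.mean(axis=0)
        _, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
        basis = vt[:3]  # 3-component bottleneck plays the encoder role

        def reconstruction_error(x):
            z = (x - mean) @ basis.T   # encode
            x_hat = z @ basis + mean   # decode
            return float(np.linalg.norm(x - x_hat))

        threshold = np.percentile([reconstruction_error(x) for x in benign], 99)
        sample = rng.normal(5.0, 3.0, size=8)  # synthetic off-distribution input
        print(reconstruction_error(sample) > threshold)  # likely flagged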

    SIMULATION OF A MULTIPROCESSOR COMPUTER SYSTEM

    The introduction of computers and software engineering in telephone switching systems has dictated the need for powerful design aids for such complex systems. Among these design aids, simulators - real-time environment simulators and flat-level simulators - have been found particularly useful in stored-program-controlled switching system design and evaluation. However, both types of simulators suffer from certain disadvantages. An alternative methodology for the simulation of stored-program-controlled switching systems is proposed in this research. The methodology is based on the development of a process-based, multilevel, hierarchically structured software simulator. This methodology eliminates the disadvantages of environment and flat-level simulators. It enables the system to be modelled in a 1-to-1 transformation process that retains the sub-system interfaces, making it easier to see the resemblance between the model and the modelled system and to incorporate design modifications and/or additions in the simulator. This methodology has been applied in building a simulation package for the System X family of exchanges. The Processor Utility Sub-system used to control the exchanges is first simulated, verified and validated. The application sub-system models are then added one level higher, resulting in an open-ended simulator having sub-system models at different levels of detail and capable of simulating any member of the System X family of exchanges. The viability of the methodology is demonstrated by conducting experiments to tune the real-time operating system and by simulating a particular exchange - the Digital Main Network Switching Centre - in order to determine its performance characteristics. The General Electric Company Ltd, GEC Hirst Research Centre, Wembley
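
    The process-based simulation style described above is easy to illustrate with a discrete-event library. The sketch below uses SimPy; the processes, hierarchy and timing values are invented for illustration and are not taken from the System X simulator.

        # Hedged sketch of a process-based, hierarchical simulator in SimPy.
        import simpy

        def call_handler(env, name, service_time):
            """A low-level software process: service calls repeatedly."""
            while True:
                yield env.timeout(service_time)
                print(f"{env.now:6.1f}  {name} completed a call")

        def processor(env):
            """A higher-level sub-system process that spawns lower-level ones,
            mirroring the multilevel model structure."""
            env.process(call_handler(env, "handler-A", 3.0))
            env.process(call_handler(env, "handler-B", 5.0))
            yield env.timeout(0)

        env = simpy.Environment()
        env.process(processor(env))
        env.run(until=12)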

    Real-time data coupling for hybrid testing in a geotechnical centrifuge

    Geotechnical centrifuge models necessarily involve simplifications compared to the full-scale scenario under investigation. In particular, structural systems (e.g. buildings or foundations) generally cannot be replicated in a way that captures complex full-scale characteristics. Hybrid testing offers the ability to combine capabilities from physical and numerical modelling to overcome some of these experimental limitations. In this paper, the development of a coupled centrifuge-numerical model (CCNM) pseudo-dynamic hybrid test for the study of tunnel-building interaction is presented. The methodology takes advantage of the relative merits of centrifuge tests (modelling soil behaviour and soil-pile interactions) and numerical simulations (modelling building deformations and load redistribution), with pile load and displacement data being passed in real time between the two model domains. To model the full-scale scenario appropriately, a challenging force-controlled system was developed (the first of its kind for hybrid testing in a geotechnical centrifuge). The CCNM application can accommodate simple structural frame analyses as well as more rigorous simulations conducted using the finite element analysis software ABAQUS, thereby extending the scope of application to non-linear structural behaviour. A novel data exchange method between ABAQUS and LabVIEW is presented which provides a significant enhancement compared to similar hybrid test developments. Data are provided from preliminary tests which highlight the capabilities of the system to accurately model the global tunnel-building interaction problem.
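
    The pseudo-dynamic loop at the heart of the CCNM - the numerical building model computes pile loads, the physical centrifuge model returns measured displacements - can be caricatured as below. Every function and number is an invented stand-in; the actual ABAQUS/LabVIEW exchange is far richer.

        # Hypothetical sketch of the hybrid-test exchange loop (illustrative only).
        def structural_model(displacements):
            """Stand-in for the numerical domain: redistribute building load."""
            total_load = 1000.0                 # kN, assumed building weight
            stiffness = [80.0, 120.0, 100.0]    # invented relative pile stiffnesses
            raw = [k * (1.0 - d) for k, d in zip(stiffness, displacements)]
            scale = total_load / sum(raw)
            return [scale * r for r in raw]

        def centrifuge_step(loads):
            """Stand-in for the physical domain: loads produce settlements."""
            return [load / 5000.0 for load in loads]  # fictitious compliance

        displacements = [0.0, 0.0, 0.0]
        for step in range(5):  # loads and displacements passed between domains
            loads = structural_model(displacements)
            displacements = centrifuge_step(loads)
            print(step, [round(f, 1) for f in loads])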

    Advanced Data Analytics and Optimal Control of Building Energy Systems

    This research addresses key issues in applying advanced building data analytics to energy-efficient control opportunities. First, the research identifies advancements and potential hurdles around the three primary means for acquiring data: energy management systems, dedicated measurement systems, and advanced computer software that accesses and archives data from energy management systems. These are described using case studies from commercial building control systems and web-based, real-time dedicated measurement technology. Next, the research describes effective rule-based data analytics and control strategies that are traditionally used. Rule-based data analytics utilize specific knowledge about HVAC systems to identify key data points and analytical methods, to identify energy-saving opportunities, and to develop improved control algorithms. The research describes both the theory and the application of these rule-based analytics for the control of systems such as air-side economizers, ventilation fans, and pumping and chilled-water systems. Finally, the research proposes a framework for applying advanced machine learning and data mining techniques to the same problem. Machine-learning control differs from rule-based control in that it requires less specific knowledge about HVAC systems. The proposed framework uses existing data, where available, to pattern-match and build robust models emulating the performance of the system under consideration. To these models, classical optimization algorithms (knapsack, greedy, and shortest distance) and mathematical frameworks (game theory and design of experiments) are adapted and applied to reach the best control strategy. For systems without past performance data, a stochastic framework using decision chains (Markov processes) and adaptive control using the reinforcement learning method is proposed. These techniques are demonstrated on selected systems, e.g. pumping plants and HVAC systems.
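
    As a flavour of the rule-based control strategies mentioned above, the classic air-side economizer rule fits in a few lines. This is standard HVAC practice, not code from the research, and the thresholds are illustrative.

        # Minimal sketch of a rule-based economizer decision (illustrative).
        def economizer_enabled(outdoor_temp_c, return_temp_c, high_limit_c=18.0):
            """Use outside air for 'free cooling' when it is cooler than the
            return air and below a fixed high-limit setpoint."""
            return outdoor_temp_c < return_temp_c and outdoor_temp_c < high_limit_c

        print(economizer_enabled(12.0, 23.0))  # True: cool outside air, use it
        print(economizer_enabled(26.0, 23.0))  # False: outside air too warm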

    Cyber Risk Assessment and Scoring Model for Small Unmanned Aerial Vehicles

    The commercial-off-the-shelf small Unmanned Aerial Vehicle (UAV) market is expanding rapidly in response to interest from hobbyists, commercial businesses, and military operators. The core commercial mission set directly relates to many current military requirements and strategies, with a priority on short-range, low-cost, real-time aerial imaging and limited modular payloads. These small vehicles present small radar cross-sections and low heat signatures, and carry a variety of sensors and payloads. As with many new technologies, security seems secondary to the goal of reaching the market as soon as the innovation is viable. Research indicates a growth in exploits and vulnerabilities applicable to small UAV systems, from individual UAV guidance and autopilot controls to the mobile ground station devices that may be as simple as a cellphone application controlling several aircraft. Even if developers strive to improve the security of small UAVs, consumers are left without meaningful insight into the hardware and software protections installed when buying these systems. To date, there is no marketed or accredited risk index for small UAVs. Building from the similar domains of aircraft operation, information technologies, cyber-physical systems, and cyber insurance, a cyber risk assessment methodology tailored for small UAVs is proposed and presented in this research. Through case studies of popular models and tailored mission-environment scenarios, the assessment is shown to meet the three objectives of ease of use, breadth, and readability. By allowing a cyber risk assessment at or before acquisition, organizations and individuals will be able to accurately compare aircraft and choose the best one for their mission.
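
    One simple form such a risk index could take is a weighted average of per-factor scores; the sketch below is a hypothetical illustration, with factor names, weights, and scores invented rather than drawn from the proposed methodology.

        # Hypothetical weighted risk-scoring index for a small UAV (illustrative).
        def uav_risk_score(factor_scores, weights):
            """Weighted average of per-factor risk scores (0 = low, 10 = high)."""
            total_weight = sum(weights.values())
            return sum(factor_scores[f] * w for f, w in weights.items()) / total_weight

        weights = {"datalink_encryption": 3.0, "gps_spoofing_resistance": 2.0,
                   "ground_station_hardening": 2.0, "firmware_update_integrity": 1.0}
        scores = {"datalink_encryption": 7, "gps_spoofing_resistance": 9,
                  "ground_station_hardening": 4, "firmware_update_integrity": 6}
        print(round(uav_risk_score(scores, weights), 2))  # one comparable number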

    Design-time performance analysis of component-based real-time systems

    In current real-time systems, performance metrics are among the most challenging properties to specify, predict and measure. Performance properties depend on various factors, like environmental context, load profile, middleware, operating system, hardware platform and sharing of internal resources. Performance failures and unsatisfied performance requirements cause delays, cost overruns, and even abandonment of projects. In order to avoid these performance-related project failures, the performance properties should be obtained and analyzed as early as the design phase of a project. In this thesis we employ principles of component-based software engineering (CBSE), which enable building software systems from individual components. The advantage of CBSE is that individual components can be modeled, reused and traded. The main objective of this thesis is to develop a method that enables prediction of the performance properties of a system, based on the performance properties of the involved individual components. The prediction method serves rapid prototyping and performance analysis of the architecture or related alternatives, without performing the usual testing and implementation stages. The involved research questions are as follows. How should the behaviour and performance properties of individual components be specified in order to enable automated composition of these properties into an analyzable model of a complete system? How can the models of individual components be synthesized into a model of a complete system in an automated way, such that the resulting system model can be analyzed against the performance properties? The thesis presents a new framework called DeepCompass, which realizes the concept of predictable assembly throughout all phases of the system design. The cornerstones of the framework are the composable models of individual software components and hardware blocks. The models are specified at component development time and shipped in a component package. At the component composition phase, the models of the constituent components are synthesized into an executable system model. Since the thesis focuses on performance properties, we introduce performance-related types of component models, such as behaviour, performance and resource models. The dynamics of the system execution are captured in scenario models. The essential advantage of the introduced models is that, through the behaviour of individual components and scenario models, the behaviour of the complete system is synthesized in the executable system model. Further simulation-based analysis of the obtained executable system model provides application-specific and system-specific performance property values. To support the performance analysis, we have developed the CARAT software toolkit, which provides and automates the algorithms for model synthesis and simulation. Besides this, the toolkit provides graphical tools for designing alternative architectures and for visualization of the obtained performance properties. We have conducted an empirical case study on the use of scenarios in industry to analyze system performance at the early design phase. It was found that industrial architects make extensive use of scenarios for performance evaluation. Based on the inputs of the architects, we have provided a set of guidelines for the identification and use of performance-critical scenarios.
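
    The composition idea - per-component behaviour and resource models synthesized into a system-level performance figure for a scenario - can be sketched as follows. This is an invented miniature, not the CARAT toolkit; component names and cycle counts are illustrative.

        # Illustrative sketch of composing component performance models.
        from dataclasses import dataclass

        @dataclass
        class ComponentModel:
            name: str
            cpu_cycles: int   # resource model: cycles per invocation
            calls: dict       # behaviour model: callee name -> call count

        def scenario_cycles(component, models):
            """Synthesize total cycles for a scenario by walking the call graph."""
            total = component.cpu_cycles
            for callee, n in component.calls.items():
                total += n * scenario_cycles(models[callee], models)
            return total

        models = {
            "decoder": ComponentModel("decoder", 20_000, {"idct": 6, "output": 1}),
            "idct":    ComponentModel("idct", 4_000, {}),
            "output":  ComponentModel("output", 2_000, {}),
        }
        cycles = scenario_cycles(models["decoder"], models)
        print(cycles, "cycles ->", cycles / 200e6, "s at an assumed 200 MHz")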
At the end of this thesis, we validate the DeepCompass framework by performing three case studies on performance prediction of real-time systems: an MPEG-4 video decoder, a Car Radio Navigation system and a JPEG application. For each case study, we constructed models of the individual components, defined the SW/HW architecture, and used the CARAT toolkit to synthesize and simulate the executable system model. The simulation provided the predicted performance properties, which we later compared with the actual performance properties of the realized systems. With respect to resource-usage properties and average task latencies, the prediction error stayed within 30% of the actual performance. Concerning the peak loads on the processor nodes, the actual values were sometimes three times larger than the predicted values. In conclusion, the framework has proven to be effective for rapid architecture prototyping and performance analysis of a complete system: in the case studies we spent no more than 4-5 days on average for the complete iteration cycle, including the design of several architecture alternatives. The framework can handle different architectural styles, which makes it widely applicable. A conceptual limitation of the framework is the assumption that models of the individual components are already available at the design phase.

    INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS

    Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning, however, is often a difficult and time-consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique for automatically producing health-monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or that require computer models too complex to use for real-time monitoring. IMS processes nominal data sets, collected either directly from the system or from simulations, to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low-cost technique for producing system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight, in which IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
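
    The clustering-and-deviation scheme described above can be sketched compactly: cluster archived nominal data, then score live samples by their scaled distance to the nearest nominal cluster. The toy below uses plain centroids and synthetic data; the real IMS knowledge base and statistical weighting are more elaborate.

        # Hedged sketch of IMS-style monitoring (illustrative, synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        nominal = np.vstack([rng.normal(m, 0.5, size=(200, 3))
                             for m in ([0, 0, 0], [5, 5, 5])])  # two nominal modes

        # "Training": one centroid per nominal operating mode, plus a scale.
        centroids = np.array([nominal[:200].mean(axis=0), nominal[200:].mean(axis=0)])
        spread = nominal[:200].std()  # crude within-mode spread estimate

        def deviation(sample):
            """Scaled distance from a live sample to the closest nominal cluster."""
            return np.linalg.norm(centroids - sample, axis=1).min() / spread

        print(deviation(np.array([0.2, -0.1, 0.3])))  # small: looks nominal
        print(deviation(np.array([2.5, 2.5, 2.5])))   # large: suspected anomaly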