    Modeling and Analyzing Adaptive User-Centric Systems in Real-Time Maude

    Pervasive user-centric applications are systems that sense the presence, mood, and intentions of users in order to optimize user comfort and performance. Building such applications requires not only state-of-the-art techniques from artificial intelligence but also sound software engineering methods for facilitating modular design, runtime adaptation, and verification of critical system requirements. In this paper we focus on high-level design and analysis, and use the algebraic rewriting language Real-Time Maude for specifying applications in a real-time setting. We propose a generic component-based approach for modeling pervasive user-centric systems and show how to analyze and prove crucial properties of the system architecture through model checking and simulation. For proving time-dependent properties we use Metric Temporal Logic (MTL) and present analysis algorithms for model checking two subclasses of MTL formulas: time-bounded response and time-bounded safety MTL formulas. The underlying idea is to extend the Real-Time Maude model with suitable clocks, to transform the MTL formulas into LTL formulas over the extended specification, and then to use the LTL model checker of Maude. These analyses are shown to be sound and complete for maximal time sampling. The approach is illustrated by a simple adaptive advertising scenario in which an advertisement display reacts to the actions of the users in front of it.
    Comment: In Proceedings RTRTS 2010, arXiv:1009.398
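    The clock-extension idea generalizes beyond Maude. The following Python fragment is a minimal sketch of it on a hypothetical finite timed transition system (the states, delays, and predicates are illustrative stand-ins, not the paper's Real-Time Maude transformation): a clock starts when the trigger p is observed, is cleared when the response q arrives, and signals a violation of the time-bounded response property if it ever exceeds the bound.

        from collections import deque

        def check_bounded_response(init, succ, holds_p, holds_q, bound):
            """Explicit-state check of "every p is followed by q within
            `bound` time units" on a finite timed transition system;
            succ(state) yields (next_state, delay) pairs.  Returns a
            violating path prefix, or None if the property holds."""
            # Extended state: (model state, clock); the clock measures time
            # since the oldest unanswered p, or is None if no p is pending.
            start = (init, 0 if holds_p(init) and not holds_q(init) else None)
            seen, frontier = {start}, deque([(start, [init])])
            while frontier:
                (state, clock), path = frontier.popleft()
                for nxt, delay in succ(state):
                    nclock = None if clock is None else clock + delay
                    if nclock is not None and nclock > bound:
                        return path + [nxt]        # q did not arrive in time
                    if nclock is not None and holds_q(nxt):
                        nclock = None              # pending p answered in time
                    if nclock is None and holds_p(nxt) and not holds_q(nxt):
                        nclock = 0                 # a new p starts the clock
                    ext = (nxt, nclock)
                    if ext not in seen:
                        seen.add(ext)
                        frontier.append((ext, path + [nxt]))
            return None

    This mirrors the transformation in the abstract: once the clock is folded into the state, the time-bounded response property becomes an ordinary untimed property that a standard (LTL) model checker can handle.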

    Performance evaluation and design for variable threshold alarm systems through semi-Markov process

    In large industrial systems, alarm management is one of the most important means of improving safety and efficiency in practice. Operators of such systems often have to deal with a large number of simultaneous alarms. Different kinds of thresholding or filtering are applied to decrease alarm nuisance and improve performance indices such as the Averaged Alarm Delay (AAD) and the Missed Alarm and False Alarm Rates (MAR and FAR). Among threshold-based approaches, variable thresholding methods are well known for reducing alarm nuisance and improving the performance of the alarm system. However, the literature lacks an appropriate method to assess the performance parameters of Variable Threshold Alarm Systems (VTASs). This study introduces two types of variable thresholding and proposes a novel approach for performance assessment of VTASs using a Priority-AND gate and a semi-Markov process. The semi-Markov formulation allows the proposed approach to handle industrial measurements with non-Gaussian distributions. In addition, the paper provides a genetic-algorithm-based design process for optimal parameter settings that improve the performance indices. The effectiveness of the proposed approach is illustrated via three numerical examples and through a comparison with previous studies.
    Funding: Noavaran Electronic Adar Sameh company [Grant No. IRAM17S1]
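    To make the three indices concrete, the Python sketch below estimates FAR, MAR, and AAD by Monte-Carlo simulation for a fixed and a simple variable threshold on gamma-distributed (non-Gaussian) measurements. All numbers and the threshold schedule are invented for illustration; the paper itself evaluates these indices analytically through the semi-Markov model rather than by simulation.

        import random

        def simulate(threshold_fn, n_runs=2000, horizon=200, fault_at=100):
            """Estimate (FAR, MAR, AAD): per-run rates of an alarm before the
            fault / of no alarm at all, and mean detection delay in samples."""
            false_alarms = missed = 0
            delays = []
            for _ in range(n_runs):
                raised = None
                for t in range(horizon):
                    faulty = t >= fault_at
                    # gamma-distributed process variable; the fault shifts its mean
                    x = random.gammavariate(9.0 if faulty else 4.0, 1.0)
                    if x > threshold_fn(t):
                        raised = t
                        break
                if raised is None:
                    missed += 1
                elif raised < fault_at:
                    false_alarms += 1
                else:
                    delays.append(raised - fault_at)
            aad = sum(delays) / len(delays) if delays else float("inf")
            return false_alarms / n_runs, missed / n_runs, aad

        fixed = lambda t: 12.0
        # variable thresholding: a lower (more sensitive) threshold once the
        # process is known to enter a riskier operating phase
        variable = lambda t: 12.0 if t < 90 else 10.0
        print("fixed    FAR/MAR/AAD:", simulate(fixed))
        print("variable FAR/MAR/AAD:", simulate(variable))

    Even this toy typically exhibits the trade-off that the optimized design targets: the variable threshold lowers AAD at the cost of a somewhat higher FAR.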

    Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Improvements and advances in computer architecture now provide innovative technology for recasting traditional sequential solutions into high-performance, low-cost parallel systems. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer that performs real-time computation of guidance commands, with updated estimates of target motion and time-to-go, is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements; this allocation is based on critical-path analysis. The final stage is the design and development of hardware structures suitable for the efficient execution of the allocated task graph, with each processing element designed for rapid execution of its allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is also discussed.
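    As an illustration of critical-path-driven allocation, the Python sketch below assigns the tasks of a small, invented guidance-loop task graph to processing elements: each task's priority is the longest chain of execution times from it to a sink, and tasks are list-scheduled in priority order onto the earliest-free element. This is a generic textbook scheme, not the report's specific optimal algorithm.

        def critical_path_lengths(tasks, succs, cost):
            """Longest execution-time path from each task to a sink."""
            memo = {}
            def cp(t):
                if t not in memo:
                    memo[t] = cost[t] + max((cp(s) for s in succs[t]), default=0)
                return memo[t]
            return {t: cp(t) for t in tasks}

        def allocate(tasks, succs, cost, n_pe):
            """List-schedule a task DAG onto n_pe processing elements,
            highest critical-path priority first; yields (task, pe, start)."""
            preds = {t: [u for u in tasks if t in succs[u]] for t in tasks}
            prio = critical_path_lengths(tasks, succs, cost)
            pe_free = [0.0] * n_pe      # time at which each element frees up
            finish, schedule, pending = {}, [], set(tasks)
            while pending:
                ready = [t for t in pending if all(p in finish for p in preds[t])]
                t = max(ready, key=prio.get)          # most critical ready task
                pe = min(range(n_pe), key=pe_free.__getitem__)
                start = max([pe_free[pe]] + [finish[p] for p in preds[t]])
                finish[t] = start + cost[t]
                pe_free[pe] = finish[t]
                schedule.append((t, pe, start))
                pending.remove(t)
            return schedule

        # hypothetical guidance-loop fragment: sensing and navigation feed guidance
        succs = {"sense": ["estimate"], "navigate": ["guidance"],
                 "estimate": ["guidance"], "guidance": ["command"], "command": []}
        cost = {"sense": 2, "navigate": 5, "estimate": 4, "guidance": 3, "command": 1}
        print(allocate(list(cost), succs, cost, n_pe=2))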

    Wireless Sensor Data Transport, Aggregation and Security

    Wireless sensor networks (WSNs) and the communication and security therein have been gaining prominence in the tech industry recently with the emergence of the so-called Internet of Things (IoT). The steps from acquiring data to making a reactive decision based on the acquired sensor measurements are complex and require careful execution. Many of these steps still have technological gaps to fill, because several primitives that are desirable in a sensor-network environment are bolted onto the networks as application-layer functionalities rather than built into them. For several important functionalities at the core of IoT architectures, we have developed solutions that are analyzed and discussed in the following chapters. The chain of steps from the acquisition of sensor samples until these samples reach a control center or the cloud, where the data analytics are performed, starts with acquiring the sensor measurements at the correct time and, importantly, synchronously among all deployed sensors. This synchronization has to be network-wide, including both the wired core network and the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, while keeping scalability and fail-safe operation in mind. Ensuring the integrity of measurements is also a complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but due to the number of sensors it must rely on an automated key-distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers. One can, however, exploit statistical properties to detect and identify nodes that send false information and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to decide which data to accept and to consider for the aforementioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic investigated in this thesis.
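    A minimal sketch of the statistical idea for unmasking inside attackers follows. It is an illustrative robust-outlier test (median plus MAD over redundant readings of the same quantity), not necessarily the dissertation's exact method: a node is flagged once its readings deviate from the network-wide median by more than k median-absolute-deviations in a sufficient fraction of rounds.

        from statistics import median

        def suspicious_nodes(rounds, k=4.0, min_hit_rate=0.5):
            """rounds: list of {node_id: reading} snapshots of one sensed
            quantity; returns nodes whose readings persistently deviate
            from the network-wide median by more than k * MAD."""
            hits = {}
            for readings in rounds:
                vals = list(readings.values())
                med = median(vals)
                mad = median(abs(v - med) for v in vals) or 1e-9
                for node, v in readings.items():
                    hits[node] = hits.get(node, 0) + (abs(v - med) > k * mad)
            return [n for n, h in hits.items() if h / len(rounds) >= min_hit_rate]

        # three honest nodes agree on ~20 degrees; node "d" injects false data
        rounds = [{"a": 20.1, "b": 19.9, "c": 20.0, "d": 35.0} for _ in range(10)]
        print(suspicious_nodes(rounds))   # -> ['d']

    An automated trust metric, as mentioned above, would weight or pre-filter third-party sources before their data enters such a test at all.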

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform.

    In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprised of one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols.

    Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admitting a component onto a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the malign effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues, and our solutions cover the system design from component requirements to run-time allocation.

    Firstly, we present a novel scheduling method that enables us to integrate a component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform and only experiences that the processor speed differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting the deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component, and we compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers.

    Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol used in an HSF; components can therefore be unaware that access to non-preemptive resources requires arbitration.

    Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for the confinement of temporal faults, by means of experiments within an HSF-enabled real-time operating system.
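    To make the virtual-platform view concrete, the sketch below uses the periodic resource model, a common HSF component interface in which a component is guaranteed Theta units of processor time every Pi time units (the thesis' own interface may differ). Inside the component, EDF schedulability then reduces to comparing the demand bound function against the worst-case supply at every deadline; all task parameters here are invented.

        from math import gcd

        def sbf(t, Pi, Theta):
            """Worst-case supply of a periodic resource (Pi, Theta) over any
            window of length t; supply may start only after a blackout of
            2 * (Pi - Theta)."""
            if t <= Pi - Theta:
                return 0
            k = (t - (Pi - Theta)) // Pi
            return k * Theta + max(0, t - 2 * (Pi - Theta) - k * Pi)

        def edf_schedulable(tasks, Pi, Theta):
            """tasks: list of (C, T) pairs with implicit deadlines and integer
            parameters; checks dbf(t) <= sbf(t) up to the hyperperiod."""
            hyper = 1
            for _, T in tasks:
                hyper = hyper * T // gcd(hyper, T)
            for t in range(1, hyper + 1):
                dbf = sum((t // T) * C for C, T in tasks)
                if dbf > sbf(t, Pi, Theta):
                    return False
            return True

        # two tasks (C=1, T=5) and (C=2, T=10) on a half-bandwidth reservation
        print(edf_schedulable([(1, 5), (2, 10)], Pi=4, Theta=2))   # -> True

    The discontinuous supply discussed above shows up directly in sbf: the same bandwidth Theta/Pi with a larger Pi yields longer blackouts, and the same task set fails the check for Pi=8, Theta=4.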