
    Scalability in Real-Time Systems

    The number and complexity of applications that run in real-time environments have placed demanding requirements on the real-time system designer. It has become important to accommodate application complexity at early stages of the design cycle. Further, the stringent demands of guaranteeing task deadlines (particularly in a hard real-time environment, which is the environment assumed in this thesis) have motivated both practitioners and researchers to look at ways to analyze systems prior to run-time. This thesis reports a new perspective on analyzing real-time systems that, in addition to ascertaining the ability of a system to meet task deadlines, also qualifies these guarantees. The guarantees are qualified by a measure (called the scaling factor) of the system's ability to continue to provide them under possible changes to the tasks. This measure is shown to have many applications in the design (task execution time estimation), development (portability and fault tolerance), and maintenance (scalability) of real-time systems. The measure is shown to be relevant to both uniprocessor and distributed (more generally, end-to-end) real-time systems. However, deriving this measure in end-to-end systems requires solving a fundamental, as yet unsolved problem: the end-to-end schedulability problem. The thesis reports a solution to the end-to-end schedulability problem that is based on a solution to another fundamental problem relevant to single-component real-time systems (a uniprocessor system is a special instance of such a system): the schedulability of a set of tasks with arbitrary arrival times that run on a single component. The thesis presents an optimal solution to this problem. One important consequence of this result (besides serving as a basis for the end-to-end schedulability problem) is its applicability to the classical approach to real-time scheduling, viz., static scheduling. The final contribution of the thesis is an application of the results to the area of real-time communication. More specifically, we report a heuristic approach to the problem of admission control in real-time traffic networks. The heuristic is based on the scaling factor measure.
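
    The abstract does not define the scaling factor formally, but its flavour can be illustrated under a standard schedulability test. The Python sketch below assumes periodic tasks with implicit deadlines and the EDF utilization test (sum of C_i/T_i <= 1), under which the largest uniform factor by which all execution times can grow is simply 1/U; the task set and the scaling_factor helper are illustrative assumptions, not the thesis's construction.

    # Illustrative sketch only: a "scaling factor" under the EDF utilization
    # test for periodic tasks with implicit deadlines (not the thesis's measure).
    def utilization(tasks):
        """Total utilization of (execution_time, period) pairs."""
        return sum(c / t for c, t in tasks)

    def scaling_factor(tasks):
        """Largest k such that scaling every execution time by k keeps the
        set EDF-schedulable, i.e. sum(k * C_i / T_i) <= 1."""
        u = utilization(tasks)
        if u == 0:
            raise ValueError("empty or zero-cost task set")
        return 1.0 / u

    tasks = [(1, 4), (2, 10), (1, 20)]   # (C_i, T_i) pairs
    print(utilization(tasks))            # 0.5
    print(scaling_factor(tasks))         # 2.0: execution times could double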

    Gleichstellungs-News: Nr. 14

    Many embedded systems with real-time requirements demand minimal jitter and low end-to-end latency from their communication networks. The time-triggered paradigm, adopted by many real-time protocols, was designed to cope with these demands. A cost-efficient way to implement this paradigm is to synthesize a static schedule that indicates the transmission times of all time-triggered frames such that all requirements are met. Synthesizing this schedule can be seen as a bin-packing problem, known to be NP-complete, with complexity driven by the number of frames. In recent years, requirements on the amount of data being transmitted and on the scalability of the network have increased. A proposed solution adapts real-time switched Ethernet to benefit from its high bandwidth. However, it adds more complexity to computing the schedule, since every frame is distributed over multiple links. Tools such as Satisfiability Modulo Theories solvers have been able to cope with the added complexity and synthesize schedules for networks of industrial size. Despite the success of such tools, applications are appearing that require embedded systems with even more complex networks. In the future, real-time embedded systems, such as large factory automation or smart cities, will need extremely large hybrid networks, combining wired and wireless communication, with schedules that cannot be synthesized by current tools in a reasonable amount of time. With this in mind, the first thesis goal is to identify the performance limits of Satisfiability Modulo Theories solvers in schedule synthesis. Given these limitations, the next step is to define and develop a divide-and-conquer approach that decomposes the entire scheduling problem into smaller, easily solvable subproblems. However, there are constraints that relate frames from different subproblems; these constraints need to be treated differently and taken into account at the start of every subproblem. The third thesis goal is to develop an approach that can synthesize schedules when frame constraints belonging to different subproblems are inter-dependent. The last goal is to define the requirements that the integration of wireless communication into hybrid networks will bring to schedule synthesis, and how to cope with the increased complexity. We demonstrate the viability of our approaches by means of evaluations, showing that our method is capable of synthesizing schedules of hundreds of thousands of frames in less than 5 hours.
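
    To give a concrete feel for the SMT formulation such tools solve, here is a minimal sketch in Python using the Z3 solver: a few periodic frames on a single shared link, integer offset variables, and pairwise non-overlap constraints over the hyperperiod. The frame parameters and encoding details are assumptions for illustration, not the thesis's actual formulation.

    # Minimal SMT schedule-synthesis sketch with Z3 (illustrative encoding).
    from math import lcm
    from z3 import Int, Solver, Or, sat

    frames = [(10, 2), (20, 3), (40, 4)]      # (period, transmission time)
    hyper = lcm(*(t for t, _ in frames))      # schedule repeats each hyperperiod

    offsets = [Int(f"o{i}") for i in range(len(frames))]
    s = Solver()

    # Each frame's offset must fit inside its own period.
    for o, (t, c) in zip(offsets, frames):
        s.add(0 <= o, o + c <= t)

    # No two frame instances may overlap on the shared link.
    for i, (ti, ci) in enumerate(frames):
        for j, (tj, cj) in enumerate(frames):
            if i < j:
                for a in range(hyper // ti):
                    for b in range(hyper // tj):
                        si, sj = offsets[i] + a * ti, offsets[j] + b * tj
                        s.add(Or(si + ci <= sj, sj + cj <= si))

    if s.check() == sat:
        m = s.model()
        print([m[o] for o in offsets])        # one feasible set of offsets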

    1-D Coordinate Based on Local Information for MAC and Routing Issues in WSNs

    More and more critical Wireless Sensor Network (WSN) applications are emerging. These applications need reliability and respect of time constraints, and the underlying mechanisms, such as MAC and routing, must support such requirements. Our approach to the time-constraint problem is to bound both the hop count between a node and the sink and the time it takes to perform a hop, so that the end-to-end delay can be bounded and communications are thus real-time. For reliability purposes we propose to select forwarder nodes depending on how they are connected in the direction of the sink. To do so we need a coordinate (or metric) that gives information on hop count, allows nodes to be strongly differentiated, and gives information on the connectivity of each node, while keeping in mind the intrinsic constraints of WSNs such as energy consumption, autonomy, etc. Due to the efficiency and scalability of greedy routing in WSNs and the financial cost of GPS chips, Virtual Coordinate Systems (VCSs) for WSNs have been proposed. One category of VCSs is based on the hop count from the sink; this scheme leads to many nodes having the same coordinate. The main advantage of this system is that the number of hops a packet takes from a source to the sink is known. Nevertheless, it does not allow nodes with the same hop count to be differentiated. In this report we propose a novel hop-count-based VCS that aims to classify nodes having the same hop count depending on their connectivity, and to differentiate nodes within a 2-hop neighborhood. These properties make the coordinate, which can also be viewed as a local identifier, a very powerful metric for WSN mechanisms.
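
    To illustrate the idea, the Python sketch below computes a hop-count coordinate from the sink via BFS and adds a fractional term built only from 1-hop information: the share of a node's neighbors that are closer to the sink. The exact construction is not given in this abstract, so the fractional term is an assumption; it merely shows how nodes with equal hop count can receive distinct, connectivity-aware coordinates.

    # Sketch of a hop-count-based virtual coordinate with a connectivity
    # tie-breaker (the report's exact formula is an assumption here).
    from collections import deque

    def hop_counts(adj, sink):
        """BFS hop count from every node to the sink."""
        hops = {sink: 0}
        q = deque([sink])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in hops:
                    hops[v] = hops[u] + 1
                    q.append(v)
        return hops

    def coordinates(adj, sink):
        """Hop count plus the share of neighbors closer to the sink."""
        hops = hop_counts(adj, sink)
        return {u: hops[u] + sum(hops[v] < hops[u] for v in nbrs) / (len(nbrs) + 1)
                for u, nbrs in adj.items()}

    adj = {"S": ["A", "B"], "A": ["S", "B", "C"], "B": ["S", "A"], "C": ["A"]}
    print(coordinates(adj, "S"))
    # A and B share hop count 1 but get distinct coordinates (1.25 vs ~1.33)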

    The Fog Development Kit: A Platform for the Development and Management of Fog Systems

    With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing meet scalability demands. Fog computing makes it possible to fulfill the real-time requirements of applications by bringing more processing, storage, and control power geographically closer to end-devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of fog-based applications. In response to these challenges, we propose the Fog Development Kit (FDK). By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing away from developers and enables the rapid development of fog systems. In addition to supporting application development on a physical deployment, the FDK supports the use of emulation tools (e.g., GNS3 and Mininet) to create realistic environments, allowing fog application prototypes to be built at zero additional cost and ported seamlessly to a physical infrastructure. Using a physical testbed and various kinds of applications running on it, we verify the operation and study the performance of the FDK. Specifically, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion. We also present a simulation-based scalability analysis of the FDK with respect to the number of switches, end-devices, and fog-devices.
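
    As a concrete, purely hypothetical illustration of what a high-level resource-allocation interface can look like, the Python sketch below defines a request shape and a greedy first-fit placement over fog-devices. The field names and the request_resources helper are invented for illustration; they are not the FDK's actual API.

    # Hypothetical sketch of a high-level fog resource-allocation request
    # (names and logic are illustrative, not the FDK's actual interface).
    from dataclasses import dataclass

    @dataclass
    class ResourceRequest:
        cpu_cores: int        # compute capacity wanted on a fog-device
        memory_mb: int
        bandwidth_mbps: int   # guaranteed bandwidth along the network path

    def request_resources(fog_devices, req):
        """Reserve resources on the first fog-device that fits (first-fit)."""
        for dev in fog_devices:
            if all(dev[k] >= getattr(req, k)
                   for k in ("cpu_cores", "memory_mb", "bandwidth_mbps")):
                for k in ("cpu_cores", "memory_mb", "bandwidth_mbps"):
                    dev[k] -= getattr(req, k)
                return dev["name"]
        return None  # no device can host the request

    devices = [{"name": "fog-1", "cpu_cores": 4,
                "memory_mb": 4096, "bandwidth_mbps": 100}]
    print(request_resources(devices, ResourceRequest(2, 1024, 50)))  # fog-1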

    Resource efficient on-node spike sorting

    Current implantable brain-machine interfaces record multi-neuron activity by utilising multi-channel, multi-electrode micro-electrodes. With the rapid increase in recording capability have come more stringent constraints on implantable system power consumption and size. This is even more so with the increasing demand for wireless systems that increase the number of channels being monitored while overcoming the communication bottleneck (in transmitting raw data) of transcutaneous bio-telemetries. For systems observing unit activity, real-time spike sorting within an implantable device offers a unique solution to this problem. However, achieving such data compression prior to transmission via an on-node spike sorting system poses several challenges. The inherent complexity of the spike sorting problem, arising from factors such as signal variability, local field potentials, and background and multi-unit activity, has required computationally intensive algorithms (e.g. PCA, wavelet transform, superparamagnetic clustering). Hence spike sorting systems have traditionally been implemented off-line, usually run on workstations. Owing to their complexity and poor scalability, these algorithms cannot simply be transformed into resource-efficient hardware. Conversely, although there have been several attempts at implantable hardware, an implementation matching the accuracy of off-line methods within the power and area requirements of future BMIs has yet to be proposed. Within this context, this research aims to fill the gaps in the design of a resource-efficient implantable real-time spike sorter that achieves performance comparable to off-line methods. The research covered in this thesis targets: 1) Identifying and quantifying the trade-offs of the parameters associated with the analogue front-end on subsequent signal processing performance and hardware resource utilisation. Following the development of a behavioural model of the analogue front-end and an optimisation tool, the sensitivity of the spike sorting accuracy to different front-end parameters is quantified. 2) Identifying and quantifying the trade-offs associated with a two-stage hybrid solution to realising real-time on-node spike sorting. The initial part of the work focuses on template matching only, while the second part considers these parameters from the point of view of the whole system, including detection, sorting, and off-line training (template building). A set of minimum requirements is established to ensure robust, accurate, and resource-efficient operation. 3) Developing new feature extraction and spike sorting algorithms towards highly scalable systems. Based on the waveform dynamics of the observed action potentials, a derivative-based feature extraction and a spike sorting algorithm are proposed. These are compared with the most commonly used spike sorting methods under varying noise levels using realistic datasets to confirm their merits. The latter is implemented and demonstrated in real-time on an MCU-based platform.
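
    The thesis's specific algorithm is not detailed in the abstract, but the flavour of derivative-based feature extraction can be sketched in Python: per spike, take the extrema of the first difference (the steepest rising and falling slopes), which are cheap to compute on-node, then assign spikes to the nearest template centroid. The synthetic waveforms and the nearest-centroid step are assumptions for the demo.

    # Sketch of derivative-based spike feature extraction and sorting
    # (illustrative only; not the thesis's exact algorithm).
    import numpy as np

    def derivative_features(spikes):
        """Per spike: max and min of the first difference, i.e. the
        steepest rising and falling slopes of the waveform."""
        d = np.diff(spikes, axis=1)
        return np.stack([d.max(axis=1), d.min(axis=1)], axis=1)

    def assign(features, centroids):
        """Nearest-centroid sorting: label each spike by closest template."""
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        return dists.argmin(axis=1)

    # Two synthetic units with different slopes, plus mild noise (demo only).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 32)
    spikes = np.vstack([np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal((20, 32)),
                        0.5 * np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal((20, 32))])

    feats = derivative_features(spikes)
    centroids = np.vstack([feats[:20].mean(axis=0), feats[20:].mean(axis=0)])  # "training"
    print(assign(feats, centroids))  # first 20 labelled 0, last 20 labelled 1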