
    Distributed real-time fault tolerance in a virtualized separation kernel

    Computers are increasingly being placed in scenarios where a computer error could result in the loss of human life or significant financial loss. Fault-tolerant techniques must be employed to prevent such errors from causing these losses. Two types of errors that are common in real-time and embedded systems are soft errors, i.e. data bit corruption, and timing errors, such as missed deadlines. Purely software-based techniques for addressing these errors have the advantage of not requiring specialized hardware, making it possible to use more readily available commercial off-the-shelf hardware. Timing errors are addressed using Adaptive Mixed-Criticality, a scheduling technique in which higher-criticality tasks are given precedence over lower-criticality ones when it is impossible to guarantee the schedulability of all tasks. While mixed-criticality scheduling has gained attention in recent years, most approaches assume a periodic task model and a single system-wide criticality level that dictates the budget available to all tasks. In practice these assumptions do not hold: different types of tasks are better served by different scheduling approaches, and only a subset of high-criticality tasks might require additional capacity to meet deadlines. The latter occurs when a process has experienced a fault and needs additional capacity to perform recovery. In this thesis, soft errors are addressed using a novel real-time fault tolerance method based on a virtualized separation kernel. Instead of executing redundant copies of an application on separate machines, the applications are consolidated onto one multi-core processor and partitioned using hardware virtualization extensions. This allows new recovery schemes to be explored. In addition, the maximum recovery time is bounded so that recovery occurs in a timely manner without affecting the normal execution of the application. A virtualized separation kernel combined with Adaptive Mixed-Criticality techniques creates a fault-tolerant system that predictably detects and recovers from both timing and soft errors.
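    The mode-switch behaviour at the heart of Adaptive Mixed-Criticality can be illustrated with a minimal sketch. The task names, budget values, and two-level "LO"/"HI" model below are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of AMC-style mode switching, assuming a simple task
# model with per-mode execution budgets. Names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "HI" or "LO"
    budget_lo: float   # optimistic budget assumed in low-criticality mode
    budget_hi: float   # pessimistic budget honoured after a mode switch

class AMCScheduler:
    def __init__(self, tasks):
        self.tasks = tasks
        self.mode = "LO"           # the system starts in low-criticality mode

    def eligible_tasks(self):
        # In HI mode, low-criticality tasks are dropped so that
        # high-criticality tasks can use the freed capacity to recover.
        if self.mode == "HI":
            return [t for t in self.tasks if t.criticality == "HI"]
        return list(self.tasks)

    def on_budget_overrun(self, task, observed_time):
        # A HI task exceeding its LO budget triggers the mode switch.
        if task.criticality == "HI" and observed_time > task.budget_lo:
            self.mode = "HI"

# Usage: a HI task overruns its optimistic budget, so the LO task is shed.
tasks = [Task("control", "HI", 2.0, 5.0), Task("logging", "LO", 1.0, 1.0)]
sched = AMCScheduler(tasks)
sched.on_budget_overrun(tasks[0], observed_time=3.1)
assert [t.name for t in sched.eligible_tasks()] == ["control"]
```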

    Automatic application object migration in sensor networks

    Object migration in wireless sensor networks has the potential to reduce the energy consumption of a wireless sensor network mesh. Automated migration removes the need for the programmer to perform manual static analysis to find an efficient layout; instead, the system can self-optimise and adjust to changing conditions. This paper describes an automated, transparent object migration system for wireless sensor networks, implemented on a micro Java virtual machine. The migration system moves objects around the sensor mesh at runtime to reduce communication overheads, and the movement of objects is transparent to the application developer. Automated transparent object migration is a core component of Hydra, a distributed operating system for wireless sensor networks that is currently under development. The system's performance is evaluated under a complex test scenario using a real-world dataset of seismic events. The results show that under both simple and complex conditions the migration technique can result in lower data traffic and consequently lower overall energy cost.
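    A traffic-driven migration decision of the kind described here might look like the following sketch. The per-node message counters, cost model, and function name are assumptions for illustration; the paper's actual heuristic is not specified in the abstract.

```python
# A minimal sketch of a traffic-driven migration decision, assuming
# per-node message counters are available at runtime. The counter
# representation and cost parameters are illustrative assumptions.

def choose_host(current_node, msg_counts, migration_cost, msg_cost=1.0):
    """Pick the node an object should live on to minimise radio traffic.

    msg_counts maps node id -> messages exchanged with the object;
    co-locating with the chattiest peer saves the most energy.
    """
    best_node = max(msg_counts, key=msg_counts.get)
    if best_node == current_node:
        return current_node
    # Messages saved by moving next to the busiest peer, minus the
    # messages that would now have to travel back to the old host.
    saved = msg_counts[best_node] - msg_counts.get(current_node, 0)
    if saved * msg_cost > migration_cost:
        return best_node       # migration pays for itself
    return current_node        # stay put: moving costs more than it saves

# Usage: the object talks mostly to node 7, so it migrates there.
counts = {3: 4, 7: 40, 9: 2}
assert choose_host(current_node=3, msg_counts=counts, migration_cost=10) == 7
```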

    Operating System Support for Redundant Multithreading

    Failing hardware is a fact, and trends in microprocessor design indicate that the fraction of hardware suffering from permanent and transient faults will continue to increase in future chip generations. Researchers have proposed various solutions to this issue, each with different downsides: specialized hardware components make hardware more expensive to produce and consume additional energy at runtime; fault-tolerant algorithms and libraries enforce specific programming models on the developer; and compiler-based fault tolerance requires the source code of all applications to be available for recompilation. In this thesis I present ASTEROID, an operating system architecture that integrates applications with different reliability needs. ASTEROID is built on top of the L4/Fiasco.OC microkernel and extends the system with Romain, an operating system service that transparently replicates user applications. Romain supports single- and multi-threaded applications without requiring access to the application's source code. Romain replicates applications and their resources completely and thereby does not rely on hardware extensions, such as ECC-protected memory. In my thesis I describe how to efficiently implement replication as a form of redundant multithreading in software. I develop mechanisms to manage replica resources and to make multi-threaded programs behave deterministically for replication. I furthermore present an approach to handle applications that use shared-memory channels with other programs. My evaluation shows that Romain provides 100% error detection and more than 99.6% error correction for single-bit flips in memory and general-purpose registers. At the same time, Romain's execution time overhead is below 14% for single-threaded applications running in triple-modular redundant mode. The last part of my thesis acknowledges that software-implemented fault tolerance methods often rely on the correct functioning of a certain set of hardware and software components, the Reliable Computing Base (RCB). I introduce the concept of the RCB and discuss what constitutes the RCB of the ASTEROID system and other fault tolerance mechanisms. Thereafter I show three case studies that evaluate approaches to protecting RCB components, aiming at a software stack that is fully protected against hardware errors.
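    The error detection and correction figures rest on majority voting across replicas, the mechanism triple-modular redundant execution relies on. The sketch below abstracts replica state as plain values; Romain itself compares replica system-call arguments and register state, which is not shown here.

```python
# A minimal sketch of majority voting over replica outputs, assuming
# three replicas whose externally visible state can be compared directly.

from collections import Counter

def vote(replica_outputs):
    """Return the majority output and the indices of mismatching replicas."""
    tally = Counter(replica_outputs)
    winner, votes = tally.most_common(1)[0]
    if votes == len(replica_outputs):
        return winner, []                       # all replicas agree
    if votes * 2 > len(replica_outputs):
        faulty = [i for i, out in enumerate(replica_outputs) if out != winner]
        return winner, faulty                   # correctable: restore minority
    raise RuntimeError("no majority: error detected but not correctable")

# Usage: replica 1 suffered a bit flip; the majority masks it and the
# runtime can recopy state from a healthy replica.
result, faulty = vote([0x2a, 0x3a, 0x2a])
assert result == 0x2a and faulty == [1]
```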

    Performance Evaluation of Scheduling Algorithms for Real Time Cloud Computing Systems

    Cloud computing shares data and offers services transparently among its users. As the number of cloud users grows, so does the number of tasks to be scheduled, and the performance of the cloud depends on the task scheduling algorithms used in its scheduling or brokering components. Scheduling tasks on cloud computing systems is a research problem in which both the matching of tasks to machines and the completion times of the tasks are considered. The matching problem can be stated as follows: assume there are Y active hosts, each running Z virtual machines (VMs). A single task can then be scheduled on any of Y*Z VMs, so scheduling X tasks admits (Y*Z)^X possible assignments. Task scheduling is therefore NP-hard: no polynomial-time algorithm is known for solving it, although a candidate solution can be verified efficiently. Fault tolerance is an important key to establishing dependability in a cloud computing system; in task scheduling, a task that does not complete by its deadline constitutes one type of fault, and this thesis aims to overcome it. We present a non-preemptive scheduling algorithm that inserts idle time to postpone a task while ensuring that the other tasks complete their execution within their deadlines. In simulation, the proposed algorithm improves profit by 25% and throughput by 25%, and reduces penalty by 20%, compared with EDF.
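    The idle-time-insertion idea can be sketched as a dispatch check for non-preemptive EDF: before committing to a long task, verify that no soon-arriving task would miss its deadline while blocked behind it. The job fields, parameters, and numbers below are illustrative assumptions, not the thesis's algorithm.

```python
# A minimal sketch of idle-time insertion for non-preemptive EDF,
# assuming full knowledge of upcoming releases. Fields are illustrative.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    release: float    # arrival time
    exec_time: float  # non-preemptive execution time
    deadline: float   # absolute deadline

def next_dispatch(now, ready, future):
    """Return the job to run now, or None to insert idle time instead."""
    if not ready:
        return None
    # Non-preemptive EDF picks the ready job with the earliest deadline.
    job = min(ready, key=lambda j: j.deadline)
    finish = now + job.exec_time
    for other in future:
        # Would dispatching `job` now make a later arrival miss its deadline?
        if other.release < finish and finish + other.exec_time > other.deadline:
            return None   # postpone `job`: idle until `other` arrives
    return job

# Usage: running the long job at t=0 would block the urgent one past
# its deadline, so the scheduler idles instead of dispatching it.
long_job = Job("batch", release=0, exec_time=10, deadline=30)
urgent = Job("alarm", release=2, exec_time=3, deadline=6)
assert next_dispatch(0, [long_job], [urgent]) is None
```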

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be an annual meeting to expose and discuss gathered expertise as well as state-of-the-art research around SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    Embedded Virtual Machines for Robust Wireless Control and Actuation

    Embedded wireless networks have largely focused on open-loop sensing and monitoring. To address actuation in closed-loop wireless control systems, there is a strong need to rethink the communication architectures and protocols for reliability, coordination and control. Because the links, nodes and topology of wireless systems are inherently unreliable, such time-critical and safety-critical applications require programming abstractions and runtime systems in which tasks are assigned to the sensors, actuators and controllers as a single component, rather than statically mapping a set of tasks to a specific physical node at design time. To this end, we introduce the Embedded Virtual Machine (EVM), a powerful and flexible programming abstraction in which virtual components and their properties are maintained across node boundaries. In the context of process and discrete control, an EVM is the distributed runtime system that dynamically selects primary-backup sets of controllers to guarantee QoS given the spatial and temporal constraints of the underlying wireless network. The EVM architecture defines explicit mechanisms for control, data and fault communication within the virtual component. EVM-based algorithms introduce new capabilities such as predictable outcomes, provably minimal graceful degradation during sensor/actuator failure, adaptation to mode changes, and runtime optimization of resource consumption. Through case studies in process control we demonstrate the preliminary capabilities of EVM-based wireless networks.
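    One way to picture the runtime selection of primary-backup controller sets is the sketch below. The link-quality scores, ranking policy, and function names are illustrative assumptions; the EVM's actual selection algorithm is not described in the abstract.

```python
# A minimal sketch of primary-backup controller selection and failover,
# assuming a link-quality score per candidate node. All names and the
# k-backup ranking policy are illustrative assumptions.

def select_controllers(candidates, link_quality, k_backups=1):
    """Rank candidate nodes by link quality to the plant; the best node
    becomes the primary controller and the next k become hot backups."""
    ranked = sorted(candidates, key=lambda n: link_quality[n], reverse=True)
    return ranked[0], ranked[1:1 + k_backups]

def on_primary_failure(backups):
    """Graceful degradation: promote the best backup when the primary's
    node or link fails, keeping the control loop closed."""
    if not backups:
        raise RuntimeError("virtual component has no surviving controller")
    return backups[0], backups[1:]

# Usage: node B has the most reliable link, so it runs the controller;
# when B fails, node A takes over without redeploying the application.
quality = {"A": 0.82, "B": 0.97, "C": 0.40}
primary, backups = select_controllers(["A", "B", "C"], quality, k_backups=2)
assert primary == "B" and backups == ["A", "C"]
primary, backups = on_primary_failure(backups)
assert primary == "A"
```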