A watchdog processor to detect data and control flow errors
A watchdog processor for the MOTOROLA M68040 microprocessor is presented. Its main task is to protect the transfer of data between the processor and the system memory from transient faults caused by SEUs, and to ensure a correct instruction flow, by monitoring only the external bus and without modifying the internal architecture of the M68040. A description of the principal procedures is given, together with the method used for monitoring the instruction flow.
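As a hedged illustration of the bus-monitoring idea (the actual M68040 procedures are not detailed in the abstract), the sketch below checks data transfers against stored parity and control transfers against a precomputed table of legal successors; all names, addresses, and the signature scheme are hypothetical.

```python
# Minimal sketch of an external-bus watchdog, assuming a precomputed table of
# valid control-flow successors and per-word parity for data transfers.
# Names and structure are illustrative, not the actual M68040 watchdog design.

VALID_SUCCESSORS = {          # hypothetical control-flow graph: block -> legal next blocks
    0x1000: {0x1010, 0x1040},
    0x1010: {0x1020},
    0x1020: {0x1000, 0x1030},
}

def parity(word: int) -> int:
    """Even-parity bit of a 32-bit word."""
    return bin(word & 0xFFFFFFFF).count("1") & 1

class Watchdog:
    def __init__(self):
        self.current_block = 0x1000
        self.stored_parity = {}   # address -> parity recorded on write

    def on_write(self, addr: int, data: int) -> None:
        # Record parity when the processor writes to memory.
        self.stored_parity[addr] = parity(data)

    def on_read(self, addr: int, data: int) -> bool:
        # Flag a transient data error if the parity no longer matches.
        expected = self.stored_parity.get(addr)
        return expected is None or expected == parity(data)

    def on_branch(self, target_block: int) -> bool:
        # Flag a control-flow error if the branch target is not a legal successor.
        ok = target_block in VALID_SUCCESSORS.get(self.current_block, set())
        if ok:
            self.current_block = target_block
        return ok

wd = Watchdog()
wd.on_write(0x2000, 0xDEADBEEF)
assert wd.on_read(0x2000, 0xDEADBEEF)      # unchanged data passes
assert not wd.on_read(0x2000, 0xDEADBEEE)  # single-bit flip detected
assert wd.on_branch(0x1010)                # legal control transfer
assert not wd.on_branch(0x1040)            # illegal transfer from 0x1010
```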
New data structures, models, and algorithms for real-time resource management
Real-time resource management is the core and critical task in real-time systems. This dissertation explores new data structures, models, and algorithms for real-time resource management.
First, novel data structures, i.e., a class of Testing Interval Trees (TITs), are proposed to help build efficient scheduling modules in real-time systems. With a general data structure, i.e., the TIT* tree, the average cost of the schedulability tests in a wide variety of real-time systems can be reduced. With the Testing Interval Tree for Vacancy analysis (TIT-V), the complexity of the schedulability tests in a class of parallel/distributed real-time systems can be effectively reduced from O(m²n log n) to O(m log n + m log m), where m is the number of processors and n is the number of tasks. Similarly, with the Testing Interval Tree for Release time and Laxity analysis (TIT-RL), the complexity of the online admission control in a uni-processor based real-time system can be reduced from O(n²) to O(n log n), where n is the number of tasks. The TIT-RL tree can also be applied to a class of parallel/distributed real-time systems. Therefore, the TIT trees are effective approaches to efficient real-time scheduling modules.
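The TIT-RL structure itself is not described in the abstract, so the following sketch only illustrates, with a standard textbook check, how sorting by deadline turns a naive O(n²) admission test into an O(n log n) one for single-shot tasks released at time zero; it is not the dissertation's algorithm.

```python
# Illustrative O(n log n) admission test for single-shot tasks released at
# time 0 under EDF: feasible iff, for every deadline d, the total execution
# demand of tasks with deadline <= d does not exceed d. This is a generic
# textbook check, not the TIT-RL algorithm from the dissertation.

def edf_feasible(tasks):
    """tasks: list of (execution_time, deadline) pairs."""
    demand = 0.0
    for exec_time, deadline in sorted(tasks, key=lambda t: t[1]):  # O(n log n) sort
        demand += exec_time
        if demand > deadline:
            return False
    return True

print(edf_feasible([(2, 4), (1, 5), (3, 10)]))  # True:  2 <= 4, 3 <= 5, 6 <= 10
print(edf_feasible([(3, 4), (3, 5), (3, 10)]))  # False: 6 > 5 at the second deadline
```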
Secondly, a new utility accrual model, i.e., UAM+, is established for resource management in real-time distributed systems. UAM+ is constructed based on the timeliness of computation and communication. Most importantly, the interplay between computation and communication is captured and characterized in the model. Under UAM+, resource managers are guided towards maximizing system-wide utility by exploiting the interplay between computation and communication. This is in sharp contrast to traditional approaches, which attempt to meet the timing constraints on computation and communication separately. To validate the effectiveness of UAM+, a resource allocation algorithm called IAUASA is developed. Simulation results reveal that IAUASA is far superior to two other resource allocation algorithms developed according to the traditional utility accrual model and traditional approaches. Furthermore, an online algorithm called IDRSA is also developed under UAM+, and a Dynamic Deadline Adjustment (DDA) technique is incorporated into the IDRSA algorithm to exploit the interplay between computation and communication. The simulation results show that the performance of IDRSA is very promising, especially when the interplay between computation and communication is tight. Therefore, the new utility accrual model provides a more effective approach to resource allocation in distributed real-time systems.
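The exact form of UAM+ is not given in the abstract; the sketch below merely illustrates the contrast it draws, scoring utility on the end-to-end finish time (computation plus communication) instead of judging the two stages separately. The step-shaped time/utility function and all parameters are assumptions.

```python
# Hypothetical sketch of a joint utility accrual function in the spirit of
# UAM+: utility is accrued on the end-to-end finish time (computation plus
# communication) rather than on the two stages separately.

def step_utility(finish_time, deadline, value):
    """Classic step time/utility function: full value by the deadline, else nothing."""
    return value if finish_time <= deadline else 0.0

def separate_utility(comp_finish, comm_finish, comp_deadline, comm_deadline, value):
    # Traditional style: computation and communication judged independently.
    return value if (comp_finish <= comp_deadline and comm_finish <= comm_deadline) else 0.0

def joint_utility(comp_time, comm_time, start, end_to_end_deadline, value):
    # Interplay captured: a faster link can compensate for a slower processor
    # (and vice versa), since only the sum of the two stages matters.
    return step_utility(start + comp_time + comm_time, end_to_end_deadline, value)

# A task that misses a per-stage deadline but still meets its end-to-end deadline.
print(separate_utility(comp_finish=6, comm_finish=9,
                       comp_deadline=5, comm_deadline=10, value=10))          # 0.0
print(joint_utility(comp_time=6, comm_time=3, start=0,
                    end_to_end_deadline=10, value=10))                        # 10.0
```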
Thirdly, a general task model, which adapts the concept of a calculus curve from the network calculus domain, is established for embedded real-time systems with random event/task arrivals. Under this model, a prediction technique based on a history window and calculus curves is established, and it provides the foundation for dynamic voltage-frequency scaling in such systems. Based on this prediction technique, novel energy-efficient algorithms that dynamically adjust the operating voltage-frequency according to the predicted workload are developed. These algorithms aim to reduce energy consumption while meeting hard deadlines. They can accommodate and adapt well to the variation between the predicted and the actual arrivals of tasks, as well as the variation between the predicted and the actual execution times of tasks. Simulation results validate the effectiveness of these algorithms in energy saving.
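The calculus-curve-based predictor is not specified in the abstract, so the sketch below substitutes a simple moving average over a history window and picks the lowest frequency whose capacity covers the predicted workload; the frequency levels, period, and fallback policy are all assumptions.

```python
# Illustrative sketch of history-window workload prediction driving DVFS.
# A moving average stands in for the dissertation's calculus-curve predictor,
# and the frequency is set just high enough to finish the predicted work
# before the next scheduling point.

from collections import deque

FREQ_LEVELS = [0.4, 0.6, 0.8, 1.0]   # hypothetical normalized frequency levels

class DvfsGovernor:
    def __init__(self, window=8, period=10.0):
        self.history = deque(maxlen=window)   # recent workloads (normalized cycles)
        self.period = period                  # time until the next scheduling point

    def predict(self):
        # Predicted workload = average over the history window (0 if empty).
        return sum(self.history) / len(self.history) if self.history else 0.0

    def choose_frequency(self):
        # Lowest frequency whose capacity over one period covers the prediction;
        # fall back to the maximum frequency to keep hard deadlines safe.
        needed = self.predict()
        for f in FREQ_LEVELS:
            if f * self.period >= needed:
                return f
        return FREQ_LEVELS[-1]

    def record(self, actual_workload):
        self.history.append(actual_workload)

gov = DvfsGovernor()
for w in [3.0, 4.0, 3.5, 4.5]:
    gov.record(w)
print(gov.choose_frequency())   # 0.4, since 0.4 * 10 = 4.0 >= 3.75 predicted
```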
The effects of architectural design, replacement algorithm, and size parameters of cache memory in uniprocessor computer systems
To investigate the effects of cache coherency on multiprocessors, it is helpful to first explore coherency issues within uniprocessors, working with a small part of a big problem instead of attacking the big problem from the start. This thesis investigates the design and implementation of three different cache designs, varying the mapping strategy, replacement algorithm, and size parameters to determine the effect each has on the cache miss ratio, coherency, and average memory access time. VHDL is used to create software models of each cache design investigated, so that parameter values can be changed easily and no money or time is wasted by first prototyping the cache designs in actual hardware. These VHDL implementations are presented, along with several test-bench programs that were used not only to validate the performance of the VHDL implementations but also to explore program-dependent performance factors and coherency. Several snoopy cache coherence protocols are presented at the end of the thesis, in order to suggest future research into the VHDL implementation of shared-memory multiprocessors.
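As a rough software analogue of the thesis's VHDL models (written here in Python rather than VHDL, with arbitrary parameters), the sketch below models a set-associative cache with LRU replacement and reports the miss ratio over an address trace, showing how the mapping strategy and size parameters feed into the miss ratio.

```python
# Minimal software model of a set-associative cache with LRU replacement,
# reporting the miss ratio over an address trace. Parameters and the trace
# are illustrative only.

from collections import OrderedDict

class Cache:
    def __init__(self, num_sets=64, ways=2, block_size=16):
        self.num_sets, self.ways, self.block_size = num_sets, ways, block_size
        self.sets = [OrderedDict() for _ in range(num_sets)]  # tag -> None, in LRU order
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        block = addr // self.block_size
        index, tag = block % self.num_sets, block // self.num_sets
        s = self.sets[index]
        if tag in s:
            self.hits += 1
            s.move_to_end(tag)            # refresh LRU position
        else:
            if len(s) >= self.ways:
                s.popitem(last=False)     # evict the least recently used line
            s[tag] = None

    def miss_ratio(self):
        return 1.0 - self.hits / self.accesses if self.accesses else 0.0

# Direct-mapped (ways=1) vs 4-way set-associative on the same conflicting trace.
trace = [0, 64, 128, 0, 64, 128] * 100   # hypothetical addresses mapping to one set
for ways in (1, 4):
    c = Cache(num_sets=4, ways=ways, block_size=16)
    for a in trace:
        c.access(a)
    print(ways, round(c.miss_ratio(), 3))  # 1 -> 1.0, 4 -> 0.005
```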
Redundant disk arrays: Reliable, parallel secondary storage
During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays can provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
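The coding idea can be made concrete with a small sketch of parity-based recovery: the parity block is the bytewise XOR of the data blocks, so one self-identifying failed disk can be rebuilt from the survivors. The striping layout and block contents below are illustrative only.

```python
# Sketch of parity-based recovery in a redundant disk array: the parity block
# is the bytewise XOR of the data blocks in a stripe, so a single,
# self-identifying failed disk can be reconstructed by XOR-ing the survivors.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0data", b"disk1data", b"disk2data"]   # one stripe of data blocks
parity = xor_blocks(data)                           # parity block on a fourth disk

failed = 1                                          # disk 1 fails (self-identifying)
survivors = [blk for i, blk in enumerate(data) if i != failed] + [parity]
recovered = xor_blocks(survivors)

assert recovered == data[failed]
print(recovered)   # b'disk1data'
```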
Split array and scalar data cache: A comprehensive study of data cache organization
Existing cache organizations suffer from the inability to distinguish different types of locality, and they cache all data non-selectively rather than taking special advantage of the locality type. This causes unnecessary movement of data among the levels of the memory hierarchy and increases the miss ratio. In this dissertation I propose a split data cache architecture that groups memory accesses as scalar or array references according to their inherent locality and subsequently maps each group to a dedicated cache partition. In this system, because scalar and array references no longer negatively affect each other, cache interference is diminished, delivering better performance. Further improvement is achieved by the introduction of a victim cache, prefetching, data flattening, and reconfigurability to tune the array and scalar caches for specific applications. The most significant contribution of my work is the introduction of a novel cache architecture for embedded microprocessor platforms. My proposed cache architecture uses reconfigurability coupled with split data caches to reduce the area and power consumed by cache memories while retaining performance gains. My results show excellent reductions in both memory size and memory access times, translating into reduced power consumption. Since there was a huge reduction in miss rates at the L-1 caches, further power reduction is achieved by partially or completely shutting down the L-2 data or L-2 instruction caches. The savings in cache size resulting from these designs can be used for other processor activities, including instruction and data prefetching and branch-prediction buffers. The potential benefits of such techniques for embedded applications have been evaluated in my work. I also explore how my cache organization performs for non-numeric data structures. I propose a novel idea called "data flattening", a profile-based memory allocation technique that compresses sparsely scattered pointer data into regular contiguous memory locations, and explore the potential of my proposed split cache organization for data treated with the data flattening method.
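As a hedged illustration of the split organization (the dissertation's profile-based classification is replaced here by a simple address-range heuristic, and all sizes are arbitrary), the sketch below routes scalar and array references to separate partitions so a streaming array walk no longer evicts a hot scalar.

```python
# Sketch of a split scalar/array data cache: references are classified by a
# hypothetical address-range heuristic (not the dissertation's profile-based
# method) and routed to separate direct-mapped partitions so scalar and array
# accesses no longer evict each other.

class DirectMappedCache:
    def __init__(self, lines=16, block=16):
        self.lines, self.block = lines, block
        self.tags = [None] * lines
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        blk = addr // self.block
        idx, tag = blk % self.lines, blk // self.lines
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.tags[idx] = tag          # fill the line on a miss

ARRAY_REGION_START = 0x10000   # assumed boundary between scalar and array data

scalar_cache = DirectMappedCache(lines=8)
array_cache = DirectMappedCache(lines=32)

def access(addr):
    (array_cache if addr >= ARRAY_REGION_START else scalar_cache).access(addr)

# Interleave scalar reuse with a streaming array walk: the hot scalar keeps
# hitting in its own partition instead of being evicted by the stream.
for i in range(256):
    access(0x100)                        # hot scalar reference
    access(ARRAY_REGION_START + i * 16)  # streaming array reference
print(scalar_cache.hits, scalar_cache.accesses)  # 255 hits out of 256
print(array_cache.hits, array_cache.accesses)    # 0 hits out of 256 (pure streaming)
```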