
    A watchdog processor to detect data and control flow errors

    A watchdog processor for the Motorola M68040 microprocessor is presented. Its main task is to protect the transfer of data between the processor and the system memory against transient faults caused by single-event upsets (SEUs), and to ensure a correct instruction flow, by monitoring only the external bus and without modifying the internal architecture of the M68040. The principal procedures are described, together with the method used for monitoring the instruction flow.
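
    The checking scheme itself is not spelled out in the abstract, so the following is only a minimal sketch of one common control-flow watchdog technique, assuming a precomputed control-flow graph and invented basic-block identifiers; it is not the method actually implemented for the M68040.

        # Hypothetical sketch: flag any basic-block transition that is not
        # allowed by a precomputed control-flow graph. The graph and block
        # IDs are invented for illustration only.
        CONTROL_FLOW_GRAPH = {
            "B0": {"B1"},
            "B1": {"B2", "B3"},
            "B2": {"B1", "B4"},
            "B3": {"B4"},
            "B4": set(),          # program exit: no legal successors
        }

        def watchdog(observed_blocks):
            """Raise an error as soon as the observed block sequence leaves
            the legal control-flow graph (e.g. an SEU corrupted the PC)."""
            previous = None
            for block in observed_blocks:
                if previous is not None and block not in CONTROL_FLOW_GRAPH[previous]:
                    raise RuntimeError(f"illegal transition {previous} -> {block}")
                previous = block

        watchdog(["B0", "B1", "B2", "B1", "B3", "B4"])   # legal run: no error
        try:
            watchdog(["B0", "B1", "B4"])                 # B4 is not a successor of B1
        except RuntimeError as error:
            print("watchdog detected:", error)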

    Parallel algorithms for Hough transform

    New data structures, models, and algorithms for real-time resource management

    Real-time resource management is the core and critical task in real-time systems. This dissertation explores new data structures, models, and algorithms for real-time resource management.

    First, novel data structures, a class of Testing Interval Trees (TITs), are proposed to help build efficient scheduling modules in real-time systems. With a general data structure, the TIT* tree, the average cost of the schedulability tests in a wide variety of real-time systems can be reduced. With the Testing Interval Tree for Vacancy analysis (TIT-V), the complexity of the schedulability tests in a class of parallel/distributed real-time systems can be reduced from O(m²n log n) to O(m log n + m log m), where m is the number of processors and n is the number of tasks. Similarly, with the Testing Interval Tree for Release time and Laxity analysis (TIT-RL), the complexity of online admission control in a uniprocessor real-time system can be reduced from O(n²) to O(n log n), where n is the number of tasks. The TIT-RL tree can also be applied to a class of parallel/distributed real-time systems. The TIT trees are therefore effective building blocks for efficient real-time scheduling modules.

    Secondly, a new utility accrual model, UAM+, is established for resource management in distributed real-time systems. UAM+ is built on the timeliness of both computation and communication; most importantly, the interplay between computation and communication is captured and characterized in the model. Under UAM+, resource managers are guided towards maximizing system-wide utility by exploiting this interplay, in sharp contrast to traditional approaches that attempt to meet the timing constraints on computation and communication separately. To validate the effectiveness of UAM+, a resource allocation algorithm called IAUASA is developed. Simulation results reveal that IAUASA is far superior to two other resource allocation algorithms developed according to the traditional utility accrual model and traditional ideas. Furthermore, an online algorithm called IDRSA is developed under UAM+, with a Dynamic Deadline Adjustment (DDA) technique incorporated into it to exploit the interplay between computation and communication. Simulation results show that the performance of IDRSA is very promising, especially when the interplay between computation and communication is tight. The new utility accrual model therefore provides a more effective approach to resource allocation in distributed real-time systems.

    Thirdly, a general task model, which adapts the concept of a calculus curve from the network-calculus domain, is established for embedded real-time systems with random event/task arrivals. Under this model, a prediction technique based on a history window and calculus curves is established, providing the foundation for dynamic voltage-frequency scaling in such systems. Based on this prediction technique, novel energy-efficient algorithms are developed that dynamically adjust the operating voltage and frequency according to the predicted workload. These algorithms aim to reduce energy consumption while meeting hard deadlines, and they accommodate and adapt well to the variation between the predicted and actual task arrivals as well as between the predicted and actual execution times. Simulation results validate the effectiveness of these algorithms in energy saving.
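
    As a concrete illustration of the prediction-driven voltage-frequency scaling described in the last paragraph, the sketch below predicts the next interval's workload with a simple moving average over a history window and then picks the lowest frequency expected to meet the deadline. The window size, frequency levels, and moving-average predictor are assumptions made for illustration; the dissertation's calculus-curve-based technique is more elaborate.

        # Hypothetical sketch of history-window workload prediction feeding a
        # DVFS decision; the predictor and frequency table are assumptions.
        from collections import deque

        FREQ_LEVELS = [0.4, 0.6, 0.8, 1.0]               # normalized frequencies (assumed)
        CYCLES_PER_SECOND_AT_FULL_SPEED = 1_000_000_000  # assumed processor speed

        class HistoryWindowPredictor:
            def __init__(self, window_size=8):
                self.history = deque(maxlen=window_size)

            def observe(self, cycles_demanded):
                """Record the workload, in cycles, of a completed interval."""
                self.history.append(cycles_demanded)

            def predict(self):
                """Predict the next interval's workload as a moving average."""
                if not self.history:
                    return CYCLES_PER_SECOND_AT_FULL_SPEED   # no history: be conservative
                return sum(self.history) / len(self.history)

        def choose_frequency(predicted_cycles, seconds_to_deadline):
            """Pick the lowest frequency expected to finish the predicted work
            before the deadline; fall back to full speed if none suffices."""
            for f in FREQ_LEVELS:
                if f * CYCLES_PER_SECOND_AT_FULL_SPEED * seconds_to_deadline >= predicted_cycles:
                    return f
            return max(FREQ_LEVELS)

        predictor = HistoryWindowPredictor()
        for cycles in [3.0e8, 4.0e8, 3.5e8]:              # workloads of past intervals
            predictor.observe(cycles)
        print(choose_frequency(predictor.predict(), seconds_to_deadline=1.0))   # -> 0.4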

    The Effects of the architectural design, replacement algorithm, and size parameters of cache memory in uniprocessor computer systems

    To investigate the effects of cache coherency on multiprocessors, it is helpful to first explore coherency issues within uniprocessors, working on a small part of a big problem instead of attacking the big problem from the start. This thesis investigates the design and implementation of three different cache designs, varying the mapping strategy, replacement algorithm, and size parameters to determine the effect each has on the cache miss ratio, coherency, and average memory access time. VHDL is used to create software models of each cache design investigated, so that parameter values can be changed easily and no money or time is wasted prototyping the designs in actual hardware. These VHDL implementations are presented, along with several test-bench programs that were used not only to validate the performance of the implementations, but also to explore program-dependent performance factors and coherency. Several snoopy cache coherence protocols are presented at the end of the thesis, to suggest future research into the VHDL implementation of shared-memory multiprocessors.
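
    As a rough illustration of the parameter space explored in the thesis, the sketch below is a small behavioral cache model (a hypothetical Python analogue, not the thesis's VHDL) that varies associativity with LRU replacement and reports the miss ratio of an address trace; the line size, number of lines, and trace are arbitrary assumptions.

        # Tiny set-associative cache simulator with LRU replacement.
        from collections import OrderedDict

        class Cache:
            def __init__(self, num_sets, associativity, line_size):
                self.num_sets = num_sets
                self.assoc = associativity
                self.line_size = line_size
                # Each set is an OrderedDict used as an LRU list: tag -> None
                self.sets = [OrderedDict() for _ in range(num_sets)]
                self.hits = 0
                self.misses = 0

            def access(self, address):
                line = address // self.line_size
                index = line % self.num_sets
                tag = line // self.num_sets
                s = self.sets[index]
                if tag in s:
                    s.move_to_end(tag)          # refresh LRU position
                    self.hits += 1
                else:
                    self.misses += 1
                    if len(s) >= self.assoc:
                        s.popitem(last=False)   # evict least-recently-used line
                    s[tag] = None

            def miss_ratio(self):
                total = self.hits + self.misses
                return self.misses / total if total else 0.0

        # Addresses 0 and 512 conflict in the direct-mapped configuration
        # but coexist in the 4-way set-associative one (16 lines total).
        trace = [0, 512, 0, 512, 0, 512]
        for assoc in (1, 4):
            cache = Cache(num_sets=16 // assoc, associativity=assoc, line_size=32)
            for address in trace:
                cache.access(address)
            print(f"{assoc}-way miss ratio:", round(cache.miss_ratio(), 2))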

    Redundant disk arrays: Reliable, parallel secondary storage

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data in a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
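
    The single-failure correction described above can be illustrated in a few lines. The sketch below is a hypothetical toy, not the dissertation's implementation: the parity block of a stripe is the XOR of its data blocks, and because a failed disk identifies itself (an erasure), the missing block is rebuilt by XOR-ing the survivors.

        # Toy illustration of parity-based single-erasure recovery in a stripe.
        from functools import reduce

        def parity_block(blocks):
            """XOR the given blocks byte by byte to produce the parity block."""
            return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

        def reconstruct(surviving_blocks):
            """Rebuild the missing block (data or parity) from the survivors."""
            return parity_block(surviving_blocks)

        # Stripe of three data blocks plus one parity block.
        d0, d1, d2 = b"\x01\x02", b"\x0f\x00", b"\xaa\x55"
        p = parity_block([d0, d1, d2])

        # The disk holding d1 fails and identifies itself as failed; recover d1.
        assert reconstruct([d0, d2, p]) == d1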