
    General Purpose Computer (GPC) to GPC systems interface description

    The General Purpose Computer (GPC) 'subsystem' of the Orbiter Data Processing System was described. Two interface areas are discussed. The first is GPC intraconnections and intracommunications, involving the hardware/software interface between the Central Processing Unit (CPU) and the Input/Output Processor (IOP). The second is GPC interconnections and intercommunications, involving the hardware/software interface among the five Orbiter GPCs. Based on the detailed GPC interface description given, both the basic CPU-to-IOP interface and the GPC-to-GPC interface appear to have the potential for trouble-free operation. However, due to the complexity of the interface and the criticality of GPC synchronization to overall avionics performance, the GPC-to-GPC interface should be carefully evaluated when attempting to resolve test anomalies that may involve GPC timing and synchronization errors.
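
    The criticality of keeping a redundant set of computers in step can be illustrated with a toy model. Below is a minimal, purely illustrative Python sketch (not the actual Orbiter mechanism) of a sync-point exchange: each "GPC" posts a sync code, waits with a timeout for its peers, and flags any computer whose code disagrees or never arrives.

```python
# Toy model of a redundant-set sync point: each "GPC" posts a sync code,
# waits (with a timeout) for its peers, and flags any computer whose
# code disagrees or never arrives. Illustrative only -- not the Orbiter design.
import threading

N_GPCS = 5
SYNC_TIMEOUT = 0.1          # seconds; a real system would use microseconds

codes = [None] * N_GPCS     # sync code posted by each GPC
posted = threading.Barrier(N_GPCS, timeout=SYNC_TIMEOUT)

def sync_point(gpc_id: int, code: int) -> set:
    """Post this GPC's sync code, wait for peers, and return the set of
    GPC ids that disagree (candidates for removal from the redundant set)."""
    codes[gpc_id] = code
    try:
        posted.wait()                       # rendezvous with the other GPCs
    except threading.BrokenBarrierError:
        pass                                # a peer timed out or failed
    return {i for i, c in enumerate(codes) if c != code}

# One healthy sync point: all five GPCs post the same code, none is flagged.
threads = [threading.Thread(target=lambda i=i: print(i, sync_point(i, 0x42)))
           for i in range(N_GPCS)]
for t in threads: t.start()
for t in threads: t.join()
```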

    Energy Saving Techniques for Phase Change Memory (PCM)

    In recent years, the energy consumption of computing systems has increased, and a large fraction of this energy is consumed in main memory. To address this, researchers have proposed the use of non-volatile memory, such as phase change memory (PCM), which has low read latency and read power, and nearly zero leakage power. However, the write latency and write power of PCM are very high, and this, along with PCM's limited write endurance, presents significant challenges to its wide-spread adoption. Several architecture-level techniques have been proposed in response. In this report, we review several techniques for managing the power consumption of PCM, and we classify these techniques by their characteristics to provide insight into them. The aim of this work is to encourage researchers to propose even better techniques for improving the energy efficiency of PCM-based main memory.
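
    As one concrete example of the kind of architecture-level technique surveyed here, data-comparison write and its Flip-N-Write refinement reduce PCM write energy by programming only the bit cells that actually change, and by inverting the new word (recording a flip flag) whenever that flips fewer cells. A minimal Python sketch, with the word width and the one-cell flag cost as simplifying assumptions:

```python
# Sketch of Flip-N-Write for a single PCM word: program only the bits that
# differ from what is stored, and invert the new data (recording a flip
# flag) whenever that means programming fewer cells.
WORD_BITS = 64

def bits_changed(old: int, new: int) -> int:
    """Number of PCM cells that must be programmed to go from old to new."""
    return bin(old ^ new).count("1")

def flip_n_write(stored: int, new: int) -> tuple[int, bool, int]:
    """Return (word actually stored, flip flag, cells programmed)."""
    inverted = new ^ ((1 << WORD_BITS) - 1)
    plain_cost = bits_changed(stored, new)
    flip_cost = bits_changed(stored, inverted) + 1   # +1 for the flag cell
    if flip_cost < plain_cost:
        return inverted, True, flip_cost
    return new, False, plain_cost

stored_word = 0xFFFF_FFFF_FFFF_FFFF
new_word = 0x0000_0000_0000_000F
word, flipped, cost = flip_n_write(stored_word, new_word)
print(f"flipped={flipped}, cells programmed={cost} of {WORD_BITS}")
# Writing new_word directly would program 60 cells; inverting programs 5.
```

    On a read, the flip flag tells the memory controller whether to invert the word back before returning it; real designs apply this per smaller bit groups rather than whole words.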

    Characterizing and Subsetting Big Data Workloads

    Big data benchmark suites must include a diversity of data and workloads to be useful in fairly evaluating big data systems and architectures. However, using truly comprehensive benchmarks poses great challenges for the architecture community. First, we need to thoroughly understand the behaviors of a variety of workloads. Second, our usual simulation-based research methods become prohibitively expensive for big data. As big data is an emerging field, more and more software stacks are being proposed to facilitate the development of big data applications, which aggravates these challenges. In this paper, we first use Principal Component Analysis (PCA) to identify the most important characteristics from among 45 metrics for characterizing big data workloads from BigDataBench, a comprehensive big data benchmark suite. Second, we apply a clustering technique to the principal components obtained from the PCA to investigate the similarity among big data workloads, and we verify the importance of including different software stacks in big data benchmarking. Third, we select seven representative big data workloads by removing redundant ones, and we release the BigDataBench simulation version, which is publicly available from http://prof.ict.ac.cn/BigDataBench/simulatorversion/.
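
    The subsetting pipeline described here (normalize per-workload metrics, project with PCA, cluster in PC space, keep the workload nearest each cluster centroid) can be sketched in a few lines of Python; the random metric matrix and workload count below are placeholders, not the paper's actual data:

```python
# Sketch of the characterize-and-subset pipeline: standardize per-workload
# metrics, reduce them with PCA, cluster in PC space, and keep one
# representative workload per cluster (the one nearest the centroid).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
metrics = rng.random((32, 45))      # placeholder: 32 workloads x 45 metrics

X = StandardScaler().fit_transform(metrics)
pcs = PCA(n_components=0.9).fit_transform(X)   # keep 90% of the variance

k = 7                               # the paper keeps seven representatives
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pcs)

representatives = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(pcs[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[dists.argmin()]))
print("representative workload indices:", sorted(representatives))
```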

    C-MOS array design techniques: SUMC multiprocessor system study

    The current capabilities of LSI techniques for speed and reliability, plus the possibility of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Three previous system studies for a space ultrareliable modular computer (SUMC) multiprocessing system are evaluated, and a new multiprocessing system is proposed that can be flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through the I/O processors, and multiport access to multiple and shared memory units.
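
    Purely as an illustration of the configuration envelope the proposal describes (up to four CPUs, four I/O processors, and 16 multiport memory units), here is a hypothetical Python sketch; the class name and port-count formula are assumptions, not taken from the study:

```python
# Hypothetical sketch of the proposed multiprocessor's configuration
# envelope: up to 4 CPUs, 4 I/O processors, and 16 multiport main memory
# units, with every memory unit reachable from every processor.
from dataclasses import dataclass

@dataclass(frozen=True)
class SumcConfig:
    cpus: int
    io_processors: int
    memory_units: int

    def __post_init__(self):
        if not 1 <= self.cpus <= 4:
            raise ValueError("1-4 central processors")
        if not 1 <= self.io_processors <= 4:
            raise ValueError("1-4 I/O processors")
        if not 1 <= self.memory_units <= 16:
            raise ValueError("1-16 main memory units")

    @property
    def memory_ports(self) -> int:
        # Multiport access: every CPU and IOP gets a port on every unit.
        return (self.cpus + self.io_processors) * self.memory_units

full = SumcConfig(cpus=4, io_processors=4, memory_units=16)
print(full.memory_ports)   # 128 processor-to-memory ports at full build-out
```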

    Redesigning Transaction Processing Systems for Non-Volatile Memory

    Transaction processing systems are widely used because they enable users to manage their data efficiently. However, they suffer from a performance bottleneck due to the redundant I/O performed to guarantee data consistency, and slow storage devices degrade performance further. Leveraging non-volatile memory is one promising solution to this bottleneck. However, since the I/O granularities of legacy storage devices and of non-volatile memory differ, traditional transaction processing systems cannot fully exploit the performance of persistent memory. The goal of this dissertation is to fully exploit non-volatile memory to improve the performance of transaction processing systems. Write amplification within the transaction processing system is identified as a performance bottleneck. As a first approach, we redesign transaction processing systems to minimize this redundant I/O. We present LS-MVBT, which integrates recovery information into the main database file to remove temporary recovery files; LS-MVBT also employs five optimizations to reduce the write traffic of single fsync() calls. We further exploit persistent memory to relieve the bottleneck caused by slow storage devices. Since traditional recovery methods target slow block devices, we develop byte-addressable differential logging, a user-level heap manager, and transaction-aware persistence to fully exploit persistent memory. To minimize the redundant I/O required to guarantee data consistency, we present failure-atomic slotted paging with a persistent buffer cache. Redesigning the indexing structure is the second approach to fully exploiting non-volatile memory. Since the B+-tree was originally designed for block granularity, it generates excessive I/O traffic in persistent memory. To mitigate this traffic, we develop a cache-line-friendly B+-tree that aligns its node size to the cache line size, minimizing write traffic. Moreover, with hardware transactional memory, it can update a single node atomically without any additional redundant I/O for guaranteeing data consistency, and it adopts Failure-Atomic Shift and Failure-Atomic In-place Rebalancing to eliminate unnecessary I/O. Furthermore, we improve the persistent memory manager to use a traditional memory heap structure with a free list, instead of segregated lists, for small memory allocations, minimizing memory allocation overhead. Our performance evaluation shows that the improved designs, which account for the I/O granularity of non-volatile memory, efficiently reduce redundant I/O traffic and improve performance by a large margin.
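
    One of the ideas named above, byte-addressable differential logging, is easy to sketch: because persistent memory is byte-addressable, the log can record only the byte runs of a page that actually changed rather than the whole block. A minimal, hypothetical Python sketch of the logic (a real persistent-memory implementation would also issue cache-line flushes and fences to order the log ahead of the in-place update):

```python
# Sketch of byte-addressable differential logging: instead of logging the
# whole page (block-granularity redo/undo), record only the byte ranges
# that changed. Real persistent-memory code would also flush cache lines
# and fence to make the log durable before the in-place update.
def diff_log(old: bytes, new: bytes) -> list[tuple[int, bytes]]:
    """Return (offset, changed bytes) runs describing new relative to old."""
    assert len(old) == len(new)
    runs, i = [], 0
    while i < len(old):
        if old[i] != new[i]:
            j = i
            while j < len(new) and old[j] != new[j]:
                j += 1
            runs.append((i, new[i:j]))
            i = j
        else:
            i += 1
    return runs

def apply_log(page: bytearray, runs: list[tuple[int, bytes]]) -> None:
    """Redo: replay the logged byte runs onto a page image."""
    for offset, data in runs:
        page[offset:offset + len(data)] = data

page = bytearray(4096)
updated = bytearray(page); updated[100:108] = b"newvalue"
log = diff_log(bytes(page), bytes(updated))
print(log)                      # [(100, b'newvalue')] -- 8 bytes, not 4 KiB
apply_log(page, log); assert page == updated
```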

    The Utility of Verbal Display Redundancy in Managing Pilot's Cognitive Load During Controller-Pilot Voice Communications

    Miscommunication between controllers and pilots, potentially resulting from high pilot cognitive load, has been a causal or contributing factor in a large number of aviation accidents. In this context, failure to communicate can be attributed, among other factors, to inadequate human-system interface design, the related high cognitive load imposed on the pilot, and poor performance reflected in a higher error rate. To date, voice radio remains in service without any means for managing pilot cognitive load by design (as opposed to training or procedures). This oversight is what prompted this dissertation. The goals of this study were (a) to investigate the utility of a voice-to-text transcription (V-T-T) of ATC clearances in managing pilot cognitive load during controller-pilot communications within the context of a modern flight deck environment, and (b) to validate whether, and to what extent, a model of variable relationships generated in the domain of learning and instruction would transfer to an operational domain. First, within the theoretical framework built for this dissertation, all of the pertinent factors were analyzed. Second, through synthesis, and based on guidelines generated from that theoretical framework, a redundant verbal display of ATC clearances (i.e., a V-T-T) was constructed. Third, the synthesized device was empirically examined. Thirty-four pilots participated in the study: seventeen with 100-250 total flight hours and seventeen with more than 500 total flight hours. All participants had flown within the sixty days prior to the study. The experiment was conducted one pilot at a time in 2.5-hour blocks. A 2 Verbal Display Redundancy (no-redundancy, redundancy) × 2 Verbal Input Complexity (low, high) × 2 Level of Expertise (novices, experts) mixed-model design was used, with 5 IFR clearances in each Redundancy × Complexity condition. The results showed that the reduction in cognitive load and the improvement in performance when verbal display redundancy was provided were both in the range of about 20%. These results indicated that V-T-T is a device with tremendous potential to serve as (a) a pilot memory aid, (b) a way to verify that a clearance has been captured correctly without having to make a "Say again" call, and (c) a means to ultimately improve the margin of safety by reducing the propensity for human error for the majority of pilot populations, including those with English as a second language. Fourth, the results of the theoretical-model-transfer validation showed that although cognitive load remained a significant predictor of performance, both complexity and redundancy also had unique significant effects on performance. These results indicated that the relationship between these variables is not as clear-cut in the operational domain investigated here as the models from the domain of learning and instruction suggested.

    Until further research (a) investigates how changes in the operational task setting via additional coding (e.g., a permanent record of clearances, which can serve as both a memory aid and a way to verify that a clearance was captured correctly) affect performance through mechanisms other than cognitive load, and (b) the theoretical models are modified to reflect how changes in the input variables impact the outcome in a variety of ways, a degree of prudence should be exercised when applying the results of the model-transfer validation to operational environments similar to the one investigated in this dissertation research.