
    Dependable Computing on Inexact Hardware through Anomaly Detection.

    Reliability of transistors is on the decline as transistors continue to shrink in size. Aggressive voltage scaling is making the problem even worse. Scaled-down transistors are more susceptible to transient faults as well as permanent in-field hardware failures. In order to continue to reap the benefits of technology scaling, it has become imperative to tackle the challenges arising from the decreasing reliability of devices for the mainstream commodity market. Along with the worsening reliability, achieving energy efficiency and performance improvement by scaling is providing increasingly diminishing marginal returns. More than at any other time in its history, the semiconductor industry faces the crossroads of unreliability and the need to improve energy efficiency. These challenges of technology scaling can be tackled by dividing target applications into two categories: traditional applications, which have relatively strict correctness requirements on their outputs, and an emerging class of soft applications, from domains such as multimedia, machine learning, and computer vision, that are inherently tolerant of a certain degree of inaccuracy. Traditional applications can be protected against hardware failures by low-cost detection and protection methods, while soft applications can trade off output quality to achieve better performance or energy efficiency. For traditional applications, I propose an efficient, software-only application analysis and transformation solution to detect data and control flow transient faults. The intelligence of the data flow solution lies in its use of dynamic application information such as control flow, memory, and value profiling. The control flow protection technique achieves its efficiency by simplifying signature calculations in each basic block and by performing checking at a coarse-grain level. For soft applications, I develop a quality control technique that employs continuous, lightweight checkers to ensure that the approximation is controlled and the application output is acceptable. Overall, I show that the use of low-cost checkers to produce dependable results on commodity systems constructed from inexact hardware components is efficient and practical.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113341/1/dskhudia_1.pd
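
    The control flow protection above relies on per-basic-block signatures checked at coarse grain. As a minimal illustrative sketch (the block names and the XOR update rule here are assumptions, not the dissertation's exact scheme), signature-based control-flow checking can look like this in Python:

```python
# Minimal sketch of signature-based control-flow checking.
# Each basic block has a compile-time signature; a runtime register
# is updated on every transition and compared against the expected
# value only at coarse-grained check points (e.g., region exits).

BLOCK_SIG = {"entry": 0x1, "loop": 0x2, "body": 0x4, "exit": 0x8}  # illustrative

class CFChecker:
    def __init__(self, start):
        self.sig = BLOCK_SIG[start]

    def transition(self, frm, to):
        # XOR update: sig_new = sig_old ^ (sig_frm ^ sig_to)
        self.sig ^= BLOCK_SIG[frm] ^ BLOCK_SIG[to]

    def check(self, expected_block):
        # Coarse-grained check: raise if control flow has diverged.
        if self.sig != BLOCK_SIG[expected_block]:
            raise RuntimeError("control-flow fault detected")

chk = CFChecker("entry")
for frm, to in [("entry", "loop"), ("loop", "body"), ("body", "exit")]:
    chk.transition(frm, to)
chk.check("exit")  # passes on the legal path; a skipped edge would fail
```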

    Exploring auditory-motor interactions in normal and disordered speech

    Auditory feedback plays an important role in speech motor learning and in the online correction of speech movements. Speakers can detect and correct auditory feedback errors at the segmental and suprasegmental levels during ongoing speech. The frontal brain regions that contribute to these corrective movements have also been shown to be more active during speech in persons who stutter (PWS) compared to fluent speakers. Further, various types of altered auditory feedback can temporarily improve the fluency of PWS, suggesting that atypical auditory-motor interactions during speech may contribute to stuttering disfluencies. To investigate this possibility, we have developed and improved Audapter, software that enables configurable, dynamic perturbation of the spatial and temporal content of the speech auditory signal in real time. Using Audapter, we have measured the compensatory responses of PWS to static and dynamic perturbations of the formant content of auditory feedback and compared these responses with those from matched fluent controls. Our findings indicate deficient utilization of auditory feedback by PWS for short-latency online control of the spatial and temporal parameters of articulation during vowel production and during running speech. These findings provide further evidence that stuttering is associated with aberrant auditory-motor integration during speech.
    Published version
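
    Audapter itself is real-time speech-processing software; purely as a toy illustration of the difference between a static and a dynamic (time-varying) formant perturbation, the following numpy sketch shifts a synthetic F1 trajectory. None of these names or values come from Audapter's API:

```python
import numpy as np

# Toy illustration (not Audapter's API): perturb an F1 formant track.
t = np.linspace(0.0, 0.5, 100)               # 500 ms utterance, 100 frames
f1 = 600.0 + 100.0 * np.sin(2 * np.pi * t)   # synthetic F1 trajectory in Hz

static_shift = f1 + 150.0                                # constant +150 Hz offset
dynamic_shift = f1 + 150.0 * np.sin(2 * np.pi * 2 * t)   # time-varying offset

# A real-time system would apply such shifts frame by frame with low
# latency so the speaker hears the perturbed formants as feedback.
print(static_shift[:3], dynamic_shift[:3])
```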

    The mind's eye in blindfold chess

    Visual imagery plays an important role in problem solving, and research into blindfold chess has provided a wealth of empirical data on this question. We show how a recent theory of expert memory (the template theory, Gobet & Simon, 1996, 2000) accounts for most of these data. However, how the mind’s eye filters out relevant from irrelevant information is still underspecified in the theory. We describe two experiments addressing this question, in which chess games are presented visually, move by move, on a board that contains irrelevant information (static positions, semi-static positions, and positions changing every move). The results show that irrelevant information affects chess masters only when it changes during the presentation of the target game. This suggests that novelty information is used by the mind’s eye to select incoming visual information and separate “figure” and “ground.” Mechanisms already present in the template theory can be used to account for this novelty effect.

    Energy Efficient Load Latency Tolerance: Single-Thread Performance for the Multi-Core Era

    Around 2003, newly activated power constraints caused single-thread performance growth to slow dramatically. The multi-core era was born with an emphasis on explicitly parallel software. Continuing to grow single-thread performance is still important in the multi-core context, but it must be done in an energy-efficient way. One significant impediment to performance growth in both out-of-order and in-order processors is the long latency of last-level cache misses. Prior work introduced the idea of load latency tolerance: the ability to dynamically remove miss-dependent instructions from critical execution structures, continue execution under the miss, and re-execute miss-dependent instructions after the miss returns. However, previously proposed designs were unable to improve performance in an energy-efficient way: they introduced too many new large, complex structures and re-executed too many instructions. This dissertation describes a new load latency tolerant design that is both energy-efficient and applicable to both in-order and out-of-order cores. Key novel features include the formulation of slice re-execution as an alternative use of multi-threading support, efficient schemes for register and memory state management, and new pruning mechanisms for drastically reducing load latency tolerance's dynamic execution overheads. Area analysis shows that energy-efficient load latency tolerance increases the footprint of an out-of-order core by a few percent, while cycle-level simulation shows that it significantly improves the performance of memory-bound programs. Energy-efficient load latency tolerance is more energy-efficient than, and synergistic with, existing performance techniques like dynamic voltage and frequency scaling (DVFS).
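
    As a toy model of the deferral-and-re-execution idea (the instruction encoding, "poison" propagation, and slice buffer below are illustrative assumptions, not the dissertation's design): instructions dependent on a missing load are parked while independent work continues, then re-executed once the miss returns:

```python
# Toy model of load latency tolerance: defer miss-dependent work,
# keep executing independent instructions, re-execute the slice later.

instrs = [
    ("load", "r1", "MISS"),     # long-latency cache miss producing r1
    ("add",  "r2", ["r1"]),     # depends on the miss -> deferred
    ("mul",  "r3", ["r4"]),     # independent -> executes under the miss
    ("sub",  "r5", ["r2"]),     # transitively dependent -> deferred
]

poisoned, slice_buf, executed = {"r1"}, [], []

for op, dst, src in instrs:
    if src == "MISS":
        slice_buf.append((op, dst, src))     # the miss itself heads the slice
    elif any(r in poisoned for r in src):
        poisoned.add(dst)                    # propagate dependence ("poison")
        slice_buf.append((op, dst, src))
    else:
        executed.append((op, dst, src))      # independent work proceeds

# ... miss data returns; the deferred slice is re-executed in order:
executed.extend(slice_buf)
print([i[0] for i in executed])  # ['mul', 'load', 'add', 'sub']
```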

    A survey on pseudonym changing strategies for Vehicular Ad-Hoc Networks

    The initial phase of the deployment of Vehicular Ad-Hoc Networks (VANETs) has begun, and many research challenges still need to be addressed. Location privacy remains among the foremost of these challenges. Indeed, both academia and industry have agreed on the pseudonym changing approach as a solution for protecting the location privacy of VANET users. However, due to pseudonym linking attacks, simply changing pseudonyms has been shown to be insufficient to provide the required protection. For this reason, many pseudonym changing strategies have been suggested to make pseudonym changes effective. Unfortunately, the development of an effective pseudonym changing strategy for VANETs is still an open issue. In this paper, we present a comprehensive survey and classification of pseudonym changing strategies. We then discuss and compare them with respect to relevant criteria. Finally, we highlight current research and open issues, and give some future directions.
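
    As a hedged sketch of one family of strategies such a survey covers, cooperative pseudonym changing can be illustrated as follows; the readiness flag and the threshold of three neighbors are illustrative assumptions, not a specific scheme from the paper:

```python
import secrets

K_NEIGHBORS = 3  # illustrative anonymity threshold

class Vehicle:
    def __init__(self, vid):
        self.vid = vid
        self.pseudonym = secrets.token_hex(8)
        self.ready = False  # willing to change at the next opportunity

def cooperative_change(vehicles):
    """Change pseudonyms only when enough vehicles change together,
    so an eavesdropper cannot trivially link old and new identifiers."""
    ready = [v for v in vehicles if v.ready]
    if len(ready) >= K_NEIGHBORS:
        for v in ready:
            v.pseudonym = secrets.token_hex(8)  # fresh unlinkable pseudonym
            v.ready = False
        return True
    return False  # too few participants: changing now would be linkable

fleet = [Vehicle(i) for i in range(5)]
for v in fleet[:3]:
    v.ready = True
print(cooperative_change(fleet))  # True: three vehicles changed together
```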

    Dynamic resource management in SDN-based virtualized networks

    Network virtualization allows for an abstraction between user and physical resources by letting a given physical infrastructure be shared by multiple service providers. However, network virtualization presents challenges such as efficient resource management, fast provisioning, and scalability. By separating a network's control logic from the underlying routers and switches, software defined networking (SDN) promises an unprecedented simplification in network programmability, management, and innovation by service providers, and hence its control model presents itself as a candidate solution to the challenges in network virtualization. In this paper, we use the SDN control plane to efficiently manage resources in virtualized networks by dynamically adjusting the virtual network (VN) to substrate network (SN) mappings based on network status. We extend an SDN controller to monitor the resource utilisation of VNs, as well as the average loading of SN links and switches, and use this information to proactively add or remove flow rules from the switches. Simulations show that, compared with three state-of-the-art approaches, our proposal improves the VN acceptance ratio by about 40% and reduces VN resource costs by over 10%.
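
    A minimal sketch of the monitor-and-remap idea, assuming a toy topology and a hypothetical utilisation threshold (not the paper's actual controller extension):

```python
# Toy sketch of dynamic VN-to-SN remapping driven by link utilisation.
UTIL_HIGH = 0.8  # illustrative threshold for migrating a virtual link

sn_links = {("A", "B"): 0.9, ("A", "C"): 0.3, ("C", "B"): 0.4}  # utilisation
vn_mapping = {"vlink1": [("A", "B")]}  # virtual link -> substrate path

def remap_if_overloaded(vlink):
    path = vn_mapping[vlink]
    if max(sn_links[l] for l in path) > UTIL_HIGH:
        # Pick an alternative substrate path with spare capacity.
        # (A real controller would run a constrained path search and
        # push updated flow rules to the affected switches.)
        alt = [("A", "C"), ("C", "B")]
        if max(sn_links[l] for l in alt) < UTIL_HIGH:
            vn_mapping[vlink] = alt
            return True
    return False

print(remap_if_overloaded("vlink1"), vn_mapping["vlink1"])
```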

    Replicode: A Constructivist Programming Paradigm and Language

    Replicode is a language designed to encode short parallel programs and executable models, and is centered on the notions of extensive pattern-matching and dynamic code production. The language is domain-independent and has been designed to build systems that are model-based and model-driven, as production systems that can modify their own code. Moreover, Replicode supports the distribution of knowledge and computation across clusters of computing nodes. This document describes Replicode and its executive, i.e. the system that executes Replicode constructions. The Replicode executive is meant to run on 64-bit Linux and 32/64-bit Windows 7 platforms and to interoperate with custom C++ code. The motivations for the Replicode language, the constructivist paradigm it rests on, and the higher-level AI goals targeted by its construction are described by Thórisson (2012), Nivel and Thórisson (2009), and Thórisson and Nivel (2009a, 2009b). An overview presents the main concepts of the language. Section 3 describes the general structure of Replicode objects and describes pattern matching. Section 4 describes the execution model of Replicode, and section 5 describes how computation and knowledge are structured and controlled. Section 6 describes the high-level reasoning facilities offered by the system. Finally, section 7 describes how computation is distributed over a cluster of computing nodes. Consult Annex 1 for a formal definition of Replicode, Annex 2 for a specification of the executive, Annex 3 for the specification of the executable code format (r-code) and its C++ API, and Annex 4 for the definition of the Replicode Extension C++ API.
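
    Replicode's actual syntax is defined in the annexes; purely as a generic illustration of the pattern-matching and dynamic production idea the language rests on, here is a toy forward-chaining production loop in Python (this is not Replicode code):

```python
# Generic forward-chaining production system: patterns match facts and
# productions assert new facts -- a toy analogue of model-driven
# execution, NOT Replicode syntax.

facts = {("temp", "high")}
productions = [
    # (pattern, consequence): if pattern is in the fact base, assert consequence
    (("temp", "high"), ("fan", "on")),
    (("fan", "on"), ("power", "draw")),
]

changed = True
while changed:                      # run to quiescence
    changed = False
    for pattern, consequence in productions:
        if pattern in facts and consequence not in facts:
            facts.add(consequence)  # dynamic production of new knowledge
            changed = True

print(sorted(facts))
```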

    Event-based simulation of neutron experiments: interference, entanglement and uncertainty relations

    We discuss a discrete-event simulation approach which has been shown to give a unified cause-and-effect description of many quantum optics and single-neutron interferometry experiments. The event-based simulation algorithm does not require knowledge of the solution of a wave equation of the whole system, yet reproduces the corresponding statistical distributions by generating detection events one by one. It is shown that single-particle interference and entanglement, two important quantum phenomena, emerge via information exchange between individual particles and devices such as beam splitters, polarizers, and detectors. We demonstrate this by reproducing the results of several single-neutron interferometry experiments, including one that demonstrates interference and one that demonstrates the violation of a Bell-type inequality. We also present event-based simulation results of a single-neutron experiment designed to test the validity of Ozawa's universally valid error-disturbance relation, an uncertainty relation derived using the theory of general quantum measurements.
    Comment: Invited paper presented at the EmQM13 Workshop on Emergent Quantum Mechanics, Austrian Academy of Sciences (October 3-6, 2013, Vienna).
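
    The event-by-event idea can be illustrated with a deterministic learning machine (DLM) of the kind used in this simulation literature; the following minimal Python sketch reproduces Malus-law cos^2 statistics one detection event at a time (the parameters are illustrative, and the cited neutron experiments use more elaborate machines):

```python
import math

# Event-by-event simulation with a deterministic learning machine:
# no wave equation is solved, yet the transmitted fraction of events
# converges to cos^2(theta).

def simulate(theta, n_events=100_000, gamma=0.999):
    x = 0.5                      # internal state of the device
    a = math.cos(theta) ** 2     # per-event input derived from polarization
    transmitted = 0
    for _ in range(n_events):
        if x < a:                # deterministic routing decision
            transmitted += 1
            x = gamma * x + (1.0 - gamma)   # nudge state up
        else:
            x = gamma * x                   # nudge state down
    return transmitted / n_events

theta = math.pi / 6
print(simulate(theta), math.cos(theta) ** 2)  # both close to 0.75
```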

    Fault-tolerant computer study

    A set of building block circuits is described which can be used with commercially available microprocessors and memories to implement fault-tolerant distributed computer systems. Each building block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self-checking computer module with self-contained input/output and interfaces to redundant communication buses. Fault tolerance is achieved by connecting self-checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology which led to the definition of the building block circuits are discussed.
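
    As a toy illustration of the self-checking computer module idea (assumed details, not the report's circuit design), each module below computes in duplicate and disables itself on disagreement, while a redundant network of buses and modules routes around the failure:

```python
# Toy sketch of self-checking computer modules on redundant buses.
# Duplicate computation plus comparison detects faults; a backup
# module and backup bus circumvent the failure.

class SelfCheckingModule:
    def __init__(self, name, fault=False):
        self.name, self.fault, self.healthy = name, fault, True

    def compute(self, x):
        a = x * x                             # primary copy of the computation
        b = x * x + (1 if self.fault else 0)  # duplicated copy (fault injected)
        if a != b:
            self.healthy = False              # disagreement: disable module
            return None
        return a

def redundant_network(modules, buses, x):
    for bus in buses:                         # try buses in priority order
        for m in modules:
            if m.healthy:
                result = m.compute(x)
                if result is not None:
                    return result, m.name, bus
    raise RuntimeError("all modules failed")

mods = [SelfCheckingModule("M0", fault=True), SelfCheckingModule("M1")]
print(redundant_network(mods, ["bus-A", "bus-B"], 7))  # (49, 'M1', 'bus-A')
```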