
    Quantum teleportation on a photonic chip

    Quantum teleportation is a fundamental concept in quantum physics which now finds important applications at the heart of quantum technology, including quantum relays, quantum repeaters and linear optics quantum computing (LOQC). Photonic implementations have largely focussed on achieving long-distance teleportation due to its suitability for decoherence-free communication. Teleportation also plays a vital role in the scalability of photonic quantum computing, for which large linear optical networks will likely require an integrated architecture. Here we report the first demonstration of quantum teleportation in which all key parts - entanglement preparation, Bell-state analysis and quantum state tomography - are performed on a reconfigurable integrated photonic chip. We also show that a novel element-wise characterisation method is critical to mitigate component errors, a key technique which will become increasingly important as integrated circuits reach the higher complexities necessary for quantum-enhanced operation. Comment: Originally submitted version - refer to online journal for accepted manuscript; Nature Photonics (2014).
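
    For reference, the textbook identity underlying the protocol (standard quantum-information material, not specific to this chip): rewriting the input qubit together with one half of the entangled pair in the Bell basis,

        |\psi\rangle_1 |\Phi^+\rangle_{23}
            = \tfrac{1}{2} \big[ |\Phi^+\rangle_{12}\,|\psi\rangle_3
            + |\Phi^-\rangle_{12}\,Z|\psi\rangle_3
            + |\Psi^+\rangle_{12}\,X|\psi\rangle_3
            + |\Psi^-\rangle_{12}\,XZ|\psi\rangle_3 \big],

    so after Bell-state analysis of qubits 1 and 2, output qubit 3 is one known Pauli correction (I, Z, X or XZ) away from the input state.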

    Applying Prolog to Develop Distributed Systems

    Development of distributed systems is a difficult task. Declarative programming techniques hold promising potential for effectively supporting programmers in this challenge. While Datalog-based languages have been actively explored for programming distributed systems, Prolog has so far received relatively little attention in this application area. In this paper we present a Prolog-based programming system, called DAHL, for the declarative development of distributed systems. DAHL extends Prolog with an event-driven control mechanism and built-in networking procedures. Our experimental evaluation using a distributed hash-table data structure, a protocol for achieving Byzantine fault tolerance, and a distributed software model checker - all implemented in DAHL - indicates the viability of the approach.
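
    The paper's system is Prolog-based; purely as an illustration of the event-driven control structure the abstract describes (all names below are invented, not DAHL's API), the same pattern looks like this in Python:

        # Hypothetical mirror of an event-driven, message-handling node;
        # DAHL itself expresses such handlers as Prolog clauses.
        import asyncio

        HANDLERS = {}

        def on(event):                      # register a handler for an event type
            def register(fn):
                HANDLERS[event] = fn
                return fn
            return register

        @on("lookup")                       # e.g. one hop of a DHT lookup
        async def handle_lookup(node, key):
            print(f"{node}: routing lookup for key {key}")

        async def dispatch(node, queue):
            while not queue.empty():        # react to each incoming message
                event, payload = await queue.get()
                await HANDLERS[event](node, payload)

        async def main():
            inbox = asyncio.Queue()
            await inbox.put(("lookup", 42))
            await dispatch("node-1", inbox)

        asyncio.run(main())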

    Self-stabilizing wormhole routing in hypercubes

    Wormhole routing is an efficient technique for communicating message packets between processors that are not completely connected. To the best of our knowledge, this is the first attempt at designing a self-stabilizing wormhole routing algorithm for hypercubes. Our first algorithm handles all types of faults except node/link failures, and achieves optimality in terms of routing path length by following only the preferred dimensions. In an n-dimensional hypercube, the dimensions in which the source and destination address bits differ are called preferred dimensions. Our second algorithm handles topological changes: we propose an efficient scheme for rerouting flits in case of node/link failures. Like the first algorithm, it also tries to follow preferred dimensions if they are non-faulty at the time the flits are transmitted; however, topological faults may force it onto non-preferred dimensions, making path selection suboptimal. Formal proofs of correctness are given for both solutions. (Abstract shortened by UMI.)
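
    A minimal sketch of the preferred-dimension idea (illustrative code, not from the dissertation): since hypercube neighbours differ in exactly one address bit, the preferred dimensions are just the set bits of source XOR destination, and a fault-free route clears them one hop at a time:

        # Nodes of an n-cube are integers 0..2**n - 1; neighbours differ in one bit.

        def preferred_dimensions(src: int, dst: int) -> list[int]:
            """Bit positions in which source and destination addresses differ."""
            diff = src ^ dst
            return [d for d in range(diff.bit_length()) if diff & (1 << d)]

        def greedy_route(src: int, dst: int) -> list[int]:
            """Optimal-length path: flip one preferred dimension per hop."""
            path, node = [src], src
            for d in preferred_dimensions(src, dst):
                node ^= 1 << d              # traverse dimension d
                path.append(node)
            return path

        # In a 4-cube, 0b0000 -> 0b1011 takes exactly 3 hops, one per
        # preferred dimension {0, 1, 3}:
        assert greedy_route(0b0000, 0b1011) == [0b0000, 0b0001, 0b0011, 0b1011]

    The self-stabilizing algorithms in the dissertation add fault recovery and rerouting on top of this skeleton.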

    Towards Efficient and Trustworthy AI Through Hardware-Algorithm-Communication Co-Design

    Artificial intelligence (AI) algorithms based on neural networks have been designed for decades with the goal of maximising some measure of accuracy. This has led to two undesired effects. First, model complexity has risen exponentially when measured in terms of computation and memory requirements. Second, state-of-the-art AI models are largely incapable of providing trustworthy measures of their uncertainty, possibly 'hallucinating' their answers and discouraging their adoption for decision-making in sensitive applications. With the goal of realising efficient and trustworthy AI, in this paper we highlight research directions at the intersection of hardware and software design that integrate physical insights into computational substrates, neuroscientific principles concerning efficient information processing, information-theoretic results on optimal uncertainty quantification, and communication-theoretic guidelines for distributed processing. Overall, the paper advocates for novel design methodologies that target not only accuracy but also uncertainty quantification, while leveraging emerging computing hardware architectures that move beyond the traditional von Neumann digital computing paradigm to embrace in-memory, neuromorphic, and quantum computing technologies. An important overarching principle of the proposed approach is to view the stochasticity inherent in the computational substrate and in the communication channels between processors as a resource to be leveraged for representing and processing classical and quantum uncertainty.
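
    As one concrete instance of the uncertainty-quantification theme (an illustrative sketch, not a method from the paper): the entropy of an ensemble-averaged predictive distribution gives a scalar that a downstream decision-maker can threshold on before trusting the model:

        # Illustrative only: predictive entropy of an ensemble as an
        # uncertainty measure for deciding when to act vs. abstain.
        import numpy as np

        def predictive_entropy(probs: np.ndarray) -> float:
            """Entropy (nats) of the ensemble-averaged class distribution.

            probs: shape (n_models, n_classes), each row a softmax output.
            """
            p = probs.mean(axis=0)              # average over ensemble members
            return float(-(p * np.log(p + 1e-12)).sum())

        confident  = np.array([[0.98, 0.01, 0.01]] * 5)
        conflicted = np.array([[0.90, 0.05, 0.05],   # members disagree
                               [0.05, 0.90, 0.05],
                               [0.05, 0.05, 0.90]])
        print(predictive_entropy(confident))    # near 0: safe to act
        print(predictive_entropy(conflicted))   # near log(3): defer/abstain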

    Deep Learning-Based, Passive Fault Tolerant Control Facilitated by a Taxonomy of Cyber-Attack Effects

    In the interest of improving the resilience of cyber-physical control systems so that they operate better in the presence of various cyber-attacks and/or faults, this dissertation presents a novel controller design based on deep-learning networks. The design does not rely on fault or cyber-attack detection. Being passive, the controller’s routine operation is to take in data from the various components of the physical system, holistically assess the state of the physical system using deep-learning networks, and decide the subsequent round of controller commands. This use of deep-learning methods in passive fault tolerant control (FTC) is unique in the research literature. The proposed controller is applied to both linear and nonlinear systems, and is tested with both actuators and sensors affected by attacks and/or faults.
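
    A schematic of the passive loop only (the dissertation's actual networks and plant are not specified in this abstract; every name below is invented): the controller never tries to classify a fault, it simply maps all raw measurements through a learned estimator each cycle:

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 8))         # stand-in for a trained deep estimator

        def estimate_state(measurements):
            """Holistic state estimate from all sensors, corrupted or not."""
            return np.tanh(W @ measurements)        # placeholder for the network

        def control_law(state, setpoint):
            return -0.5 * (state[:2].sum() - setpoint)  # toy feedback law

        for step in range(3):
            sensors = rng.normal(size=8)
            sensors[3] = 50.0               # one sensor under attack/fault; the
                                            # controller makes no attempt to detect it
            u = control_law(estimate_state(sensors), setpoint=1.0)
            print(f"step {step}: command {u:+.3f}")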

    Towards Quantum Repeaters with Solid-State Qubits: Spin-Photon Entanglement Generation using Self-Assembled Quantum Dots

    In this chapter we review the use of spins in optically-active InAs quantum dots as the key physical building block for constructing a quantum repeater, with a particular focus on recent results demonstrating entanglement between a quantum memory (electron spin qubit) and a flying qubit (polarization- or frequency-encoded photonic qubit). This is a first step towards demonstrating entanglement between distant quantum memories (realized with quantum dots), which in turn is a milestone in the roadmap for building a functional quantum repeater. We also place this experimental work in context by providing an overview of quantum repeaters, their potential uses, and the challenges in implementing them. Comment: 51 pages. Expanded version of a chapter to appear in "Engineering the Atom-Photon Interaction" (Springer-Verlag, 2015; eds. A. Predojevic and M. W. Mitchell).

    ENHANCEMENT OF MARKOV RANDOM FIELD MECHANISM TO ACHIEVE FAULT-TOLERANCE IN NANOSCALE CIRCUIT DESIGN

    As MOSFET dimensions scale down to the nanoscale, the reliability of circuits based on these devices decreases, making it challenging to design reliable systems from these nano-devices. A mechanism is therefore needed that lets nanoscale systems perform reliably despite unreliable circuit components: fault-tolerant circuit design. Markov Random Field (MRF) is an effective approach for achieving fault tolerance in integrated circuit design, but previous research on this technique suffers from limitations at the design, simulation and implementation levels. As improvements, the MRF fault-tolerance rules have been validated on a practical circuit example. The simulation framework is extended from thermal noise alone to a combination of thermal and random telegraph signal (RTS) noise sources, providing a more rigorous noise environment for simulating circuits built on nanoscale technologies. Moreover, an architecture-level improvement is proposed in the design of previous MRF gates; the redesigned MRF is termed Improved-MRF. The CMOS, MRF and Improved-MRF designs were simulated under highly noisy inputs. Simulations of several test circuits show that Improved-MRF circuits are 400 times, whereas MRF circuits are only 10 times, more noise-tolerant than their CMOS alternatives. The transistor count, on the other hand, grows by a factor of 9 for MRF and 15 for Improved-MRF relative to CMOS. Therefore, to capture the trade-off between reliability and the area overhead required for a fault-tolerant circuit, this work introduces a novel parameter called the 'Reliable Area Index' (RAI). RAI exceeds that of the CMOS design by roughly 1.3 times for MRF and 40 times for Improved-MRF, making Improved-MRF still around 30 times more efficient than MRF in maintaining a suitable trade-off between reliability and area consumption.
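
    The abstract quotes RAI values without defining the index; as a loose consistency check (our assumed reading, not the thesis's definition), treating RAI as noise-tolerance gain divided by transistor-count overhead, both relative to CMOS, lands in the same range:

        # Assumed (not stated in the abstract): RAI ~ tolerance gain / area overhead.
        designs = {
            "MRF":          {"tolerance": 10,  "area": 9},   # vs. CMOS
            "Improved-MRF": {"tolerance": 400, "area": 15},
        }
        for name, d in designs.items():
            print(name, round(d["tolerance"] / d["area"], 1))
        # -> MRF 1.1, Improved-MRF 26.7: the same ballpark as the abstract's
        #    reported RAI gains (1.3x and 40x) and ~30x relative advantage.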

    The Contemporary Affirmation of Taxonomy and Recent Literature on Workflow Scheduling and Management in Cloud Computing

    Cloud computing systems are preferred over traditional forms of computing such as grid computing, utility computing and autonomic computing for their ease of access to computing, QoS preferences, SLA conformity, and the security and performance offered with minimal supervision. An efficiently designed cloud workflow schedule achieves optimal resource usage, workload balance, deadline-specific execution, cost control within budget specifications, efficient energy consumption, etc., meeting the performance demands of today's vast scientific and business requirements. Business requirements under recent technologies such as pervasive computing are motivating further advancements in cloud computing. In this paper we discuss some of the important literature published on cloud workflow scheduling.
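
    Purely as an illustration of the objectives the survey enumerates (the weighting and names below are ours, not from any surveyed scheduler), a candidate schedule can be scored against QoS/SLA constraints like so:

        # Illustrative multi-objective score for comparing workflow schedules;
        # deadline and budget act as hard SLA constraints.
        def schedule_score(makespan, cost, energy, deadline, budget,
                           weights=(0.5, 0.3, 0.2)):
            """Lower is better; infeasible schedules score infinity."""
            if makespan > deadline or cost > budget:
                return float("inf")
            w_t, w_c, w_e = weights
            return w_t * makespan / deadline + w_c * cost / budget + w_e * energy

        candidates = [
            {"makespan": 90,  "cost": 40, "energy": 1.0},   # fast but pricey
            {"makespan": 140, "cost": 20, "energy": 0.8},   # cheap but slow
        ]
        best = min(candidates, key=lambda s: schedule_score(
            s["makespan"], s["cost"], s["energy"], deadline=150, budget=50))
        print(best)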

    On Fault Tolerance Methods for Networks-on-Chip

    Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel; at this abstraction level, error control coding is an efficient fault tolerance method, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions are shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms; both approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
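
    As a concrete (textbook, not thesis-specific) instance of data link layer error control coding, a Hamming(7,4) code adds three parity bits to every four data bits and corrects any single transient bit-flip on the link:

        # Textbook Hamming(7,4) single-error-correcting code -- the kind of
        # error control coding deployed against transient link faults.

        def encode(d):                      # d: 4 data bits
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4               # covers codeword positions 1,3,5,7
            p2 = d1 ^ d3 ^ d4               # covers positions 2,3,6,7
            p4 = d2 ^ d3 ^ d4               # covers positions 4,5,6,7
            return [p1, p2, d1, p4, d2, d3, d4]

        def decode(c):                      # c: 7 received bits
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s4 # 1-indexed error position, 0 if none
            if syndrome:
                c = c.copy()
                c[syndrome - 1] ^= 1        # correct the flipped bit
            return [c[2], c[4], c[5], c[6]] # recover the data bits

        word = [1, 0, 1, 1]
        sent = encode(word)
        sent[4] ^= 1                        # a transient fault flips a bit in flight
        assert decode(sent) == word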