A Photonic Implementation for the Topological Cluster State Quantum Computer
A new implementation of the topological cluster state quantum computer is
suggested, in which the basic elements are linear optics, measurements, and a
two-dimensional array of quantum dots. This overcomes the need for non-linear
devices to create a lattice of entangled photons. We give estimates of the
minimum efficiencies needed for the detectors, fusion gates and quantum dots,
from a numerical simulation.
Overview of Hydra: a concurrent language for synchronous digital circuit design
Hydra is a computer hardware description language that integrates several kinds of software tools (simulation, netlist generation and timing analysis) within a single circuit specification. The design language is inherently concurrent, and it offers black box abstraction and general design patterns that simplify the design of circuits with regular structure. Hydra specifications are concise, allowing the complete design of a computer system as a digital circuit within a few pages. This paper discusses the motivations behind Hydra, and illustrates the system with a significant portion of the design of a basic RISC processor.
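The style this abstract describes, in which a circuit is an ordinary function over signals and simulation is just function application, can be sketched in plain Haskell. This is a hypothetical illustration of the functional-HDL idiom, not Hydra's actual API:

```haskell
-- Hypothetical sketch of the functional hardware-description style
-- (not Hydra's real API): circuits are functions, composition wires
-- components together, and simulation is function application.
type Bit = Bool

-- Half adder: returns (sum, carry).
halfAdd :: Bit -> Bit -> (Bit, Bit)
halfAdd a b = (a /= b, a && b)

-- Full adder built from two half adders, a regular structure of the
-- kind the abstract says Hydra's design patterns capture concisely.
fullAdd :: Bit -> Bit -> Bit -> (Bit, Bit)
fullAdd a b cin =
  let (s1, c1) = halfAdd a b
      (s2, c2) = halfAdd s1 cin
  in (s2, c1 || c2)
```

Because the description is a pure function, the same specification can be interpreted in several ways (simulated, walked for a netlist, or analysed for timing), which is the multi-tool integration the abstract highlights.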
Focusing of Intense Subpicosecond Laser Pulses in Wedge Targets
Two-dimensional particle-in-cell simulations characterizing the interaction
of ultraintense short pulse lasers in the range 10^{18} \leq I \leq 10^{20}
W/cm^{2} with converging target geometries are presented. Seeking to examine
intensity amplification in high-power laser systems, where focal spots are
typically non-diffraction limited, we describe key dynamical features as the
injected laser intensity and convergence angle of the target are systematically
varied. We find that laser pulses are focused down to a wavelength with the
peak intensity amplified by an order of magnitude beyond its vacuum value, and
develop a simple model for how the peak location moves back towards the
injection plane over time. This performance is sustained over hundreds of
femtoseconds and scales to laser intensities beyond 10^{20} W/cm^{2} at 1 \mu m
wavelength.
Comment: 5 pages, 6 figures, accepted for publication in Physics of Plasmas
Spheromak Experiment Using Separate Guns For Formation And Sustainment
An experiment is described that incorporates the use of separate magnetized plasma guns for formation and sustainment of a spheromak. It is shown that energy coupling efficiency approaches unity if the gun and spheromak are of comparable size. A large gun should be able to operate at lower current and therefore lower voltage. In addition, it is expected that a gun matched to the size of the spheromak will cause less perturbation to the equilibrium. It is proposed to use a smaller gun for spheromak formation and a large, efficient gun for sustainment. The theoretical basis for the experiment is developed, and the details of the experiment are described. A prediction of the equilibrium magnetic flux surfaces using the EFIT code is presented.
Explicit representation and parametrised impacts of under ice shelf seas in the z∗ coordinate ocean model NEMO 3.6
Ice-shelf-ocean interactions are a major source of freshwater on the Antarctic continental shelf and have a strong impact on ocean properties, ocean circulation and sea ice. However, climate models based on the ocean-sea ice model NEMO (Nucleus for European Modelling of the Ocean) currently do not include these interactions in any detail. The capability of explicitly simulating the circulation beneath ice shelves is introduced in the non-linear free surface model NEMO. Its implementation into the NEMO framework and its assessment in an idealised and realistic circum-Antarctic configuration is described in this study. Compared with the current prescription of ice shelf melting (i.e. at the surface), inclusion of open sub-ice-shelf cavities leads to a decrease in sea ice thickness along the coast, a weakening of the ocean stratification on the shelf, a decrease in salinity of high-salinity shelf water on the Ross and Weddell sea shelves and an increase in the strength of the gyres that circulate within the over-deepened basins on the West Antarctic continental shelf. Mimicking the overturning circulation under the ice shelves by introducing a prescribed meltwater flux over the depth range of the ice shelf base, rather than at the surface, is also assessed. It yields similar improvements in the simulated ocean properties and circulation over the Antarctic continental shelf to those from the explicit ice shelf cavity representation. With the ice shelf cavities opened, the widely used "three equation" ice shelf melting formulation, which enables an interactive computation of melting, is tested. Comparison with observational estimates of ice shelf melting indicates realistic results for most ice shelves. However, melting rates for the Amery, Getz and George VI ice shelves are considerably overestimated.
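The "three equation" melting formulation mentioned in the abstract can be written schematically as follows. This is the standard textbook form (neglecting heat conduction into the ice), with generic transfer coefficients; the symbols are not taken from this study:

```latex
% Liquidus: freezing point at the ice--ocean interface depends on
% interface salinity S_b and ice-base depth z_b
T_b = \lambda_1 S_b + \lambda_2 + \lambda_3 z_b
% Heat balance: turbulent ocean heat flux melts ice at rate m
\rho_w c_w \gamma_T \left( T_w - T_b \right) = \rho_i L m
% Salt balance: meltwater freshens the boundary layer
\rho_w \gamma_S \left( S_w - S_b \right) = \rho_i m S_b
```

Solving the three equations simultaneously gives the interface temperature $T_b$, salinity $S_b$ and melt rate $m$ interactively from the modelled ocean state $(T_w, S_w)$, which is what makes the formulation attractive once the cavities are explicitly resolved.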
Lazy Evaluation: From natural semantics to a machine-checked compiler transformation
In order to solve a long-standing problem with list fusion, a new compiler transformation, 'Call Arity', is developed and implemented in the Haskell compiler GHC. It is formally proven to not degrade program performance; the proof is machine-checked using the interactive theorem prover Isabelle. To that end, a formalization of Launchbury's Natural Semantics for Lazy Evaluation is modelled in Isabelle, including a correctness and adequacy proof.
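The list-fusion problem in question concerns left folds: to take part in GHC's foldr/build fusion, a left fold must be re-expressed as a right fold that builds up a function, and Call Arity is what lets GHC do this without losing performance. A minimal sketch of that encoding (the definition here is illustrative, not GHC's library source):

```haskell
-- foldl expressed via foldr: each element contributes a function
-- update, and the composed function is finally applied to the seed.
-- This is the shape that lets left folds like sum fuse with foldr/build.
foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlViaFoldr f z xs = foldr (\x k acc -> k (f acc x)) id xs z

-- A left fold written in the fusible style.
sumFused :: [Int] -> Int
sumFused = foldlViaFoldr (+) 0
```

Naively, this encoding allocates a chain of closures; Call Arity's analysis lets GHC eta-expand the accumulator function away, which is why the thesis needs a machine-checked proof that the transformation never degrades performance.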
Hardware Acceleration Using Functional Languages
The aim of this thesis is to research how the functional paradigm can be used for hardware acceleration, with an emphasis on data-parallel tasks. The level of abstraction of traditional hardware description languages, such as VHDL or Verilog, is becoming too low. High-level languages from the domains of software development and modelling, such as C/C++, SystemC or MATLAB, are experiencing a boom for hardware description on the algorithmic or behavioural level. Functional languages are not so commonly used, but they outperform imperative languages in verifiability, in the ability to capture inherent parallelism and in the compactness of code. Data-parallel tasks are often accelerated on FPGAs, GPUs and multicore processors. In this thesis, we take Accelerate, a library for general-purpose GPU programming, and extend it to produce VHDL. Accelerate is a domain-specific language embedded in Haskell with a backend for the NVIDIA CUDA platform. We reuse the language and its frontend, and create a new backend for high-level synthesis of circuits in VHDL.
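Accelerate programs are built from collective array operations such as `zipWith` and `fold`, which a backend can lower to CUDA kernels or, as in this thesis, to VHDL. The sketch below uses plain Haskell lists rather than Accelerate's embedded `Acc` array type, purely to illustrate the shape of such a collective computation:

```haskell
-- A dot product in the collective-operation style that Accelerate
-- expresses (written over plain lists here, not Accelerate's Acc
-- arrays): an elementwise multiply followed by a reduction. Each
-- combinator maps naturally to a data-parallel kernel or datapath.
dotProduct :: Num a => [a] -> [a] -> a
dotProduct xs ys = foldr (+) 0 (zipWith (*) xs ys)
```

Because the program is a composition of a fixed set of collective operations rather than arbitrary recursion, a backend only has to know how to translate each combinator, which is what makes retargeting the same frontend from CUDA to VHDL feasible.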