
    NLSEmagic: Nonlinear Schrödinger Equation Multidimensional Matlab-based GPU-accelerated Integrators using Compact High-order Schemes

    We present a simple-to-use yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and are therefore written to interface with MATLAB through custom GPU-enabled C codes using the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Comment: 37 pages, 13 figures.
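    The NLSEmagic integrators themselves are MATLAB front ends over GPU-enabled C/MEX kernels using high-order compact schemes. As a language-neutral illustration of the basic approach only, the hedged Python sketch below integrates the 1D focusing NLSE i*psi_t + 0.5*psi_xx + |psi|^2*psi = 0 with a second-order central difference in space and classical RK4 in time; the normalization, grid, boundary handling, and time step are illustrative assumptions, not the package's actual scheme.

        import numpy as np

        def rhs(psi, dx):
            # second-order central difference for psi_xx in the interior;
            # boundary points are left untouched (the soliton stays far from the edges here)
            psi_xx = np.zeros_like(psi)
            psi_xx[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
            return 1j * (0.5 * psi_xx + np.abs(psi)**2 * psi)

        def step_rk4(psi, dt, dx):
            # one classical fourth-order Runge-Kutta step
            k1 = rhs(psi, dx)
            k2 = rhs(psi + 0.5 * dt * k1, dx)
            k3 = rhs(psi + 0.5 * dt * k2, dx)
            k4 = rhs(psi + dt * k3, dx)
            return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        x = np.linspace(-20.0, 20.0, 1024)
        dx = x[1] - x[0]
        psi = 1.0 / np.cosh(x) * np.exp(0.5j * x)   # moving bright-soliton initial condition
        dt = 0.4 * dx**2                            # conservative explicit time step
        for _ in range(2000):
            psi = step_rk4(psi, dt, dx)
        print("mass =", np.sum(np.abs(psi)**2) * dx)  # conserved quantity as a sanity check

    NLSEmagic performs the analogous stencil and nonlinear updates in its custom GPU-enabled C codes called from MATLAB via MEX, which is what the speedup claims in the abstract refer to.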

    The Metric-FF Planning System: Translating "Ignoring Delete Lists" to Numeric State Variables

    Planning with numeric state variables has been a challenge for many years, and was a part of the 3rd International Planning Competition (IPC-3). Currently one of the most popular and successful algorithmic techniques in STRIPS planning is to guide search by a heuristic function, where the heuristic is based on relaxing the planning task by ignoring the delete lists of the available actions. We present a natural extension of "ignoring delete lists" to numeric state variables, preserving the relevant theoretical properties of the STRIPS relaxation under the condition that the numeric task at hand is "monotonic". We then identify a subset of the numeric IPC-3 competition language, "linear tasks", where monotonicity can be achieved by pre-processing. Based on that, we extend the algorithms used in the heuristic planning system FF to linear tasks. The resulting system Metric-FF is, according to the IPC-3 results which we discuss, one of the two currently most efficient numeric planners.
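    To make the relaxation concrete, here is a hedged, propositional-only sketch of the classical "ignore delete lists" idea that Metric-FF extends to numeric state variables: each action keeps its preconditions and add effects but its delete list is dropped, and a heuristic value can be read off the relaxed reachability layers. The facts, actions, and layer-counting heuristic below are made up for illustration and are not Metric-FF's implementation.

        def relaxed_layers(init, goal, actions):
            """Number of relaxed layers until `goal` is reachable, or None if never."""
            facts = set(init)
            layers = 0
            while not goal <= facts:
                new_facts = set(facts)
                for pre, add, _delete in actions:   # the delete list is ignored on purpose
                    if pre <= facts:
                        new_facts |= add
                if new_facts == facts:              # fixpoint: goal unreachable even when relaxed
                    return None
                facts = new_facts
                layers += 1
            return layers

        # Toy example with hypothetical logistics-style facts:
        actions = [
            ({"at_home"}, {"at_shop"}, {"at_home"}),
            ({"at_shop", "has_money"}, {"has_bread"}, {"has_money"}),
        ]
        print(relaxed_layers({"at_home", "has_money"}, {"has_bread"}, actions))  # -> 2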

    Programmable neural logic

    Circuits of threshold elements (Boolean input, Boolean output neurons) have been shown to be surprisingly powerful. Useful functions such as XOR, ADD and MULTIPLY can be implemented by such circuits more efficiently than by traditional AND/OR circuits. In view of that, we have designed and built a programmable threshold element. The weights are stored on polysilicon floating gates, providing long-term retention without refresh. The weight value is increased using tunneling and decreased via hot electron injection. A weight is stored on a single transistor, allowing the development of dense arrays of threshold elements. A 16-input programmable neuron was fabricated in the standard 2 μm double-poly analog process available from MOSIS. We also designed and fabricated the multiple-threshold element introduced in [5]. It offers the advantage of reducing the layout area from O(n^2) to O(n) (n being the number of variables) for a broad class of Boolean functions, in particular symmetric Boolean functions such as PARITY. A long-term goal of this research is to incorporate programmable single/multiple threshold elements as building blocks in field-programmable gate arrays.
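    As a purely software illustration of why threshold elements are powerful (the paper's contribution is the floating-gate hardware, not this code), the hedged Python snippet below builds XOR, which no single linear threshold unit can compute, out of three Boolean threshold elements:

        def threshold(inputs, weights, theta):
            # a threshold element fires when the weighted input sum meets the threshold
            return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

        def xor(a, b):
            h1 = threshold([a, b], [1, 1], 1)       # fires if a OR b
            h2 = threshold([a, b], [1, 1], 2)       # fires if a AND b
            return threshold([h1, h2], [1, -1], 1)  # OR and not AND, i.e. XOR

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, xor(a, b))

    Each call to threshold() plays the role of one programmable neuron; in the fabricated chip the weights live on polysilicon floating gates rather than in a Python list.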

    Readiness of Quantum Optimization Machines for Industrial Applications

    There have been multiple attempts to demonstrate that quantum annealing and, in particular, quantum annealing on quantum annealing machines has the potential to outperform current classical optimization algorithms implemented on CMOS technologies. The benchmarking of these devices has been controversial. Initially, random spin-glass problems were used, but these were quickly shown to be ill suited for detecting any quantum speedup. Benchmarking subsequently shifted to carefully crafted synthetic problems designed to highlight the quantum nature of the hardware while (often) ensuring that classical optimization techniques do not perform well on them. Even so, a true sign of improved scaling with the number of problem variables, when compared to classical optimization techniques, remains elusive to date. Here, we analyze the readiness of quantum annealing machines for real-world application problems. These are typically not random and have an underlying structure that is hard to capture in synthetic benchmarks, thus posing unexpected challenges for optimization techniques, both classical and quantum alike. We present a comprehensive computational scaling analysis of fault diagnosis in digital circuits, considering architectures beyond D-Wave quantum annealers. We find that the instances generated from real data in multiplier circuits are harder than other representative random spin-glass benchmarks with a comparable number of variables. Although our results show that transverse-field quantum annealing is outperformed by state-of-the-art classical optimization algorithms, these benchmark instances are hard and small in the size of the input, and therefore represent the first industrial application ideally suited for testing near-term quantum annealers and other quantum algorithmic strategies for optimization problems. Comment: 22 pages, 12 figures. Content updated according to Phys. Rev. Applied version.
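    For readers unfamiliar with the benchmark setting, the hedged sketch below shows the kind of classical baseline alluded to above: simulated annealing minimizing a random Ising spin-glass energy E(s) = -sum_{i<j} J_ij s_i s_j with s_i in {-1, +1}. The instance size, couplings, and cooling schedule are illustrative assumptions; the paper's fault-diagnosis instances are structured rather than random, and its state-of-the-art classical solvers are more sophisticated than this.

        import random, math

        def make_random_spin_glass(n, seed=0):
            # couplings J_ij in {-1, +1} on a complete graph (illustrative instance only)
            rng = random.Random(seed)
            return {(i, j): rng.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

        def energy(s, J):
            return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

        def simulated_annealing(n, J, sweeps=500, t_hot=3.0, t_cold=0.05, seed=1):
            rng = random.Random(seed)
            nbrs = {i: [] for i in range(n)}        # neighbor lists for cheap local updates
            for (i, j), Jij in J.items():
                nbrs[i].append((j, Jij))
                nbrs[j].append((i, Jij))
            s = [rng.choice([-1, 1]) for _ in range(n)]
            best = energy(s, J)
            for k in range(sweeps):
                T = t_hot * (t_cold / t_hot) ** (k / (sweeps - 1))  # geometric cooling schedule
                for i in range(n):
                    h = sum(Jij * s[j] for j, Jij in nbrs[i])
                    dE = 2 * s[i] * h               # energy change if spin i is flipped
                    if dE <= 0 or rng.random() < math.exp(-dE / T):
                        s[i] = -s[i]
                best = min(best, energy(s, J))
            return best

        J = make_random_spin_glass(32)
        print("lowest energy found:", simulated_annealing(32, J))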

    From discretization to regularization of composite discontinuous functions

    Discontinuities between distinct regions, described by different equation sets, cause difficulties for PDE/ODE solvers. We present a new algorithm that eliminates integrator discontinuities by regularizing them. First, the algorithm determines the optimum switch point between two functions spanning adjacent or overlapping domains. The optimum switch point is found by searching for a "jump point" that minimizes the discontinuity between the adjacent/overlapping functions. The discontinuity is then resolved using an interpolating polynomial that joins the two discontinuous functions. This approach eliminates the need for conventional integrators to either discretize and then link discontinuities by generating interpolating polynomials based on state variables, or to reinitialize state variables when discontinuities are detected in an ODE/DAE system. In contrast to conventional approaches that handle discontinuities at the state-variable level only, the new approach tackles discontinuities at both the state-variable and the constitutive-equation levels. It thus eliminates errors associated with interpolating polynomials generated at the state-variable level for discontinuities occurring in the constitutive equations. Computer memory requirements for this approach increase exponentially with the dimension of the discontinuous function, so there will be limitations for functions of relatively high dimension. Since memory availability continues to increase while its price decreases, this is not expected to be a major limitation.
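    A hedged sketch of the idea described above (the paper's actual switch-point search and polynomial construction may differ): locate the "jump point" in the overlap of two branch functions where their mismatch is smallest, then bridge a small window around it with a cubic Hermite polynomial so that the composite function handed to the integrator is continuous with a continuous first derivative. The branch functions, overlap, and window width below are illustrative assumptions.

        import numpy as np

        f1 = lambda x: np.sin(x)            # branch assumed valid for, say, x <= 2.0
        f2 = lambda x: 0.5 * x - 0.2        # branch assumed valid for, say, x >= 1.0

        def jump_point(f_left, f_right, overlap, samples=1000):
            # "jump point": location in the overlap where the mismatch is smallest
            xs = np.linspace(*overlap, samples)
            return xs[np.argmin(np.abs(f_left(xs) - f_right(xs)))]

        def regularize(f_left, f_right, x_star, w=0.2, h=1e-6):
            # cubic Hermite bridge on [x_star - w, x_star + w] matching value and slope
            xl, xr = x_star - w, x_star + w
            yl, yr = f_left(xl), f_right(xr)
            dl = (f_left(xl + h) - f_left(xl - h)) / (2 * h)
            dr = (f_right(xr + h) - f_right(xr - h)) / (2 * h)
            def composite(x):
                t = np.clip((x - xl) / (xr - xl), 0.0, 1.0)
                bridge = ((2 * t**3 - 3 * t**2 + 1) * yl
                          + (t**3 - 2 * t**2 + t) * (xr - xl) * dl
                          + (-2 * t**3 + 3 * t**2) * yr
                          + (t**3 - t**2) * (xr - xl) * dr)
                return np.where(x < xl, f_left(x), np.where(x > xr, f_right(x), bridge))
            return composite

        x_star = jump_point(f1, f2, overlap=(1.0, 2.0))
        f = regularize(f1, f2, x_star)
        print(x_star, f(x_star))            # smooth value at the former switch point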