
    Overfrustrated and Underfrustrated Spin-Glasses in d=3 and 2: Evolution of Phase Diagrams and Chaos Including Spin-Glass Order in d=2

    In spin-glass systems, frustration can be adjusted continuously and considerably, without changing the antiferromagnetic bond probability p, by using locally correlated quenched randomness, as we demonstrate here on hypercubic lattices and hierarchical lattices. Such overfrustrated and underfrustrated Ising systems on hierarchical lattices in d=3 and 2 are studied. With the removal of just 51% of frustration, a spin-glass phase occurs in d=2. With the addition of just 33% frustration, the spin-glass phase disappears in d=3. Sequences of 18 different phase diagrams for different levels of frustration are calculated in both dimensions. In general, frustration lowers the spin-glass ordering temperature. At low temperatures, increased frustration favors the spin-glass phase (before it disappears) over the ferromagnetic phase and, symmetrically, the antiferromagnetic phase. When any amount of frustration, however infinitesimal, is introduced, chaotic rescaling of the local interactions occurs in the spin-glass phase. Chaos increases with increasing frustration, as seen from the increased positive value of the calculated Lyapunov exponent $\lambda$, starting from $\lambda = 0$ when frustration is absent. The calculated runaway exponent $y_R$ of the renormalization-group flows decreases with increasing frustration, reaching $y_R = 0$ when the spin-glass phase disappears. From our calculations of entropy and specific heat curves in d=3, it is seen that frustration lowers the onset temperature of both long- and short-range order in spin-glass phases, but is more effective on the former. From calculations of the entropy as a function of antiferromagnetic bond concentration p, it is seen that the ground-state and low-temperature entropy already mostly sets in within the ferromagnetic and antiferromagnetic phases, before the spin-glass phase is reached.
    Comment: Published version, 18 phase diagrams, 12 figures, 10 pages.
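
    For context on the chaos measure used above, the Lyapunov exponent of a renormalization-group trajectory is conventionally obtained from the derivatives of the mapped coupling at successive RG steps (a sketch; here $x_k$ stands for whichever interaction ratio is followed under the flow, as defined in the paper):

        \lambda = \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \ln \left| \frac{d x_{k+1}}{d x_k} \right|

    A strictly positive $\lambda$ signals chaotic rescaling of the local interactions, matching the statement that $\lambda$ grows from zero as soon as frustration is introduced.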

    A high performance and low cost hardware architecture for H.264 transform and quantization algorithms

    In this paper, we present a high performance and low cost hardware architecture for real-time implementation of the forward transform and quantization and inverse transform and quantization algorithms used in the H.264 / MPEG4 Part 10 video coding standard. The hardware architecture is based on a reconfigurable datapath with only one multiplier. This hardware is designed to be used as part of a complete low power H.264 video coding system for portable applications. The proposed architecture is implemented in Verilog HDL. The Verilog RTL code is verified to work at 81 MHz in a Xilinx Virtex II FPGA and at 210 MHz in a 0.18 μm ASIC implementation. The FPGA and ASIC implementations can code 27 and 70 VGA frames (640x480) per second, respectively.
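
    To make the transform stage concrete, below is a minimal software sketch of the 4x4 forward integer core transform used by H.264 (Y = C X C^T with the standard integer matrix), with the quantization step shown only schematically; the multiplication factor and shift are placeholders, since the exact tables are defined by the standard and the paper's single-multiplier datapath is not reproduced here.

        import numpy as np

        # H.264 4x4 forward integer core transform matrix (defined by the standard)
        CF = np.array([[1,  1,  1,  1],
                       [2,  1, -1, -2],
                       [1, -1, -1,  1],
                       [1, -2,  2, -1]])

        def forward_core_transform(x):
            """4x4 integer core transform: Y = CF @ X @ CF.T."""
            return CF @ x @ CF.T

        def quantize(w, mf, qbits):
            """Schematic quantization: |Z| = (|W| * mf + f) >> qbits, sign restored.
            mf and qbits depend on QP and coefficient position (placeholders here)."""
            f = 1 << (qbits - 1)                      # illustrative rounding offset
            return np.sign(w) * ((np.abs(w) * mf + f) >> qbits)

        # Example on an illustrative 4x4 residual block
        residual = np.arange(16).reshape(4, 4) - 8
        coeffs = forward_core_transform(residual)
        levels = quantize(coeffs, mf=13107, qbits=15)  # illustrative QP-dependent values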

    An efficient hardware architecture for H.264 intra prediction algorithm

    In this paper, we present an efficient hardware architecture for real-time implementation of the intra prediction algorithm used in the H.264 / MPEG4 Part 10 video coding standard. The hardware design is based on a novel organization of the intra prediction equations. This hardware is designed to be used as part of a complete H.264 video coding system for portable applications. The proposed architecture is implemented in Verilog HDL. The Verilog RTL code is verified to work at 90 MHz in a Xilinx Virtex II FPGA. The FPGA implementation can process 27 VGA frames (640x480) per second.
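
    As a concrete reference for the prediction equations mentioned above, here is a minimal software sketch of two of the standard 4x4 intra prediction modes (vertical and DC); the paper's contribution is a hardware-oriented reorganization of such equations, which is not reproduced here.

        def predict_vertical_4x4(top):
            """Mode 0 (vertical): each row repeats the reconstructed pixels above the block."""
            return [list(top) for _ in range(4)]

        def predict_dc_4x4(top, left):
            """Mode 2 (DC): every sample takes the rounded mean of the top and left neighbours."""
            dc = (sum(top) + sum(left) + 4) >> 3      # (sum of 8 neighbours + 4) / 8
            return [[dc] * 4 for _ in range(4)]

        # Example with illustrative reconstructed neighbour pixels
        top  = [100, 102, 104, 106]
        left = [ 98,  99, 101, 103]
        print(predict_vertical_4x4(top))
        print(predict_dc_4x4(top, left))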

    A reconfigurable frame interpolation hardware architecture for high definition video

    Since Frame Rate Up-Conversion (FRC) has started to be used in recent consumer electronics products such as High Definition TVs, real-time and low cost implementation of FRC algorithms has become very important. Therefore, in this paper, we propose a low cost hardware architecture for real-time implementation of frame interpolation algorithms. The proposed hardware architecture is reconfigurable and allows adaptive selection of frame interpolation algorithms for each macroblock. The proposed hardware architecture is implemented in VHDL and mapped to a low cost Xilinx XC3SD1800A-4 FPGA device. The implementation results show that the proposed hardware can run at 101 MHz on this FPGA and consumes 32 BRAMs and 15,384 slices.
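
    To illustrate the kind of per-macroblock adaptivity described above, the sketch below selects between motion-compensated interpolation and plain temporal averaging using an illustrative SAD-based criterion; the paper's actual interpolation algorithms and selection logic are not reproduced, and block-boundary handling is omitted.

        import numpy as np

        def average_interpolate(prev_mb, next_mb):
            """Fallback mode: temporal averaging of the co-located 16x16 macroblocks."""
            return (prev_mb.astype(np.int32) + next_mb.astype(np.int32) + 1) // 2

        def mc_interpolate(prev_frame, mv, x, y, size=16):
            """Motion-compensated mode: fetch the block displaced by half the motion vector
            (assumes the displaced block stays inside the frame)."""
            dx, dy = mv[0] // 2, mv[1] // 2
            return prev_frame[y + dy : y + dy + size, x + dx : x + dx + size]

        def interpolate_macroblock(prev_frame, next_frame, x, y, mv, sad, sad_threshold=2048):
            """Choose an interpolation mode per macroblock (illustrative criterion only):
            motion compensation when the motion estimate looks reliable, averaging otherwise."""
            if sad < sad_threshold:
                return mc_interpolate(prev_frame, mv, x, y)
            return average_interpolate(prev_frame[y:y+16, x:x+16], next_frame[y:y+16, x:x+16])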

    A Note on Derived Geometric Interpretation of Classical Field Theories

    In this note, we would like to provide a conceptual introduction to the interaction between derived geometry and physics, based on the formalism that has been heavily studied by Kevin Costello. The main motivations of our current attempt are as follows: (i) to provide a brief introduction to derived algebraic geometry, which can, roughly speaking, be thought of as a higher categorical refinement of ordinary algebraic geometry; (ii) to understand how certain derived objects naturally appear in a theory describing a particular physical phenomenon and give rise to a formal mathematical treatment, such as redefining a perturbative classical field theory (or its quantum counterpart) in the language of derived algebraic geometry; and (iii) to explain how the notion of a factorization algebra, together with certain higher categorical structures, comes into play to encode the structure of so-called observables in those theories by employing certain cohomological/homotopical methods. Adopting such a heavy and relatively enriched language also allows us to formalize the notion of quantization and observables in quantum field theory.
    Comment: 14 pages. This note serves as an introductory survey of certain mathematical structures encoding the essence of Costello's approach to the derived-geometric formulation of field theories and the structure of observables, in an expository manner.

    Implementation of a fixing strategy and parallelization in a recent global optimization method

    The Electromagnetism-like Mechanism (EM) heuristic is a population-based stochastic global optimization method inspired by the attraction-repulsion mechanism of electromagnetism theory. EM was originally proposed for solving continuous global optimization problems with bound constraints, and the algorithm has been shown to perform quite well compared to some other global optimization methods. In this work, we propose two extensions to improve the performance of the original algorithm. First, we introduce a fixing strategy that provides a mechanism for avoiding entrapment in local minima and thus improves the effectiveness of the search. Second, we use the proposed fixing strategy to parallelize the algorithm and carry out a cooperative parallel search of the solution space. We then evaluate the performance of our approach under three criteria: the quality of the solutions, the number of function evaluations, and the number of local minima obtained. Test problems are generated by an algorithm suggested in the literature that builds test problems with varying degrees of difficulty. Finally, we benchmark our results against those of the Knitro solver with the multistart option enabled.
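
    For readers unfamiliar with the underlying EM heuristic, the sketch below follows the commonly cited attraction-repulsion step (charges derived from relative objective values, pairwise forces between points); the fixing strategy and the cooperative parallel search proposed in the paper are not reproduced here.

        import numpy as np

        def em_charges_and_forces(points, values):
            """One attraction-repulsion step of the basic EM heuristic (sketch).
            points: (m, n) array of candidate solutions; values: their objective values."""
            m, n = points.shape
            best = values.min()
            charges = np.exp(-n * (values - best) / (np.sum(values - best) + 1e-12))
            forces = np.zeros_like(points)
            for i in range(m):
                for j in range(m):
                    if i == j:
                        continue
                    diff = points[j] - points[i]
                    coef = charges[i] * charges[j] / (np.dot(diff, diff) + 1e-12)
                    if values[j] < values[i]:
                        forces[i] += diff * coef      # attraction toward a better point
                    else:
                        forces[i] -= diff * coef      # repulsion from a worse point
            return charges, forces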

    A symmetric rank-one quasi-Newton line-search method using negative curvature directions

    We propose a quasi-Newton line-search method that uses negative curvature directions for solving unconstrained optimization problems. In this method, the symmetric rank-one (SR1) rule is used to update the Hessian approximation. The SR1 update rule is known to have good numerical performance; however, it does not guarantee positive definiteness of the updated matrix. We first discuss the details of the proposed algorithm and then concentrate on its numerical efficiency. Our extensive computational study shows the potential of the proposed method from different angles, such as its second-order convergence behavior, its superior performance compared to two other existing packages, and its computation profile illustrating possible bottlenecks in the execution time. We then conclude the paper with the convergence analysis of the proposed method.
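
    For reference, the SR1 update discussed above has the following standard form, shown here as a small sketch with the usual skip safeguard; the paper's line search and its use of negative curvature directions are not reproduced.

        import numpy as np

        def sr1_update(B, s, y, r=1e-8):
            """Symmetric rank-one update of the Hessian approximation B.
            s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k).
            The update is skipped when the denominator is too small (standard safeguard);
            the result need not be positive definite, which is exactly why a method built
            on SR1 can extract and exploit negative curvature directions."""
            v = y - B @ s
            denom = v @ s
            if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
                return B                              # skip to avoid numerical instability
            return B + np.outer(v, v) / denom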