286 research outputs found

    Monte Carlo methods in PageRank computation: When one iteration is sufficient

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the page's popularity. Google computes the PageRank using the power iteration method, which requires about one week of intensive computation. In the present work we propose and analyze Monte Carlo type methods for the PageRank computation. The probabilistic Monte Carlo methods have several advantages over the deterministic power iteration method: they provide a good estimate of the PageRank of relatively important pages after only one iteration; they have a natural parallel implementation; and they allow the PageRank to be updated continuously as the structure of the Web changes.
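    As a concrete illustration of the Monte Carlo idea, here is a minimal sketch (in Python) of an end-point variant: each walk follows a random out-link with probability c and terminates with probability 1 − c, and a page's PageRank is estimated by the fraction of walks ending on it. The toy graph, function name, and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: Monte Carlo PageRank via random-walk end points.
# The graph, damping factor, and walk counts below are illustrative.
import random
from collections import Counter

def mc_pagerank(graph, c=0.85, walks_per_node=1000, seed=0):
    """Estimate PageRank as the end-point distribution of short random walks.

    graph: dict mapping node -> list of out-neighbours.
    Each walk continues with probability c (following a random out-link)
    and stops with probability 1 - c; dangling nodes jump to a random node.
    """
    rng = random.Random(seed)
    nodes = list(graph)
    ends = Counter()
    for start in nodes:                      # one batch of walks per page
        for _ in range(walks_per_node):
            node = start
            while rng.random() < c:
                out = graph[node]
                node = rng.choice(out) if out else rng.choice(nodes)
            ends[node] += 1
    total = len(nodes) * walks_per_node
    return {v: ends[v] / total for v in nodes}

# Toy four-page web graph (hypothetical data).
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(mc_pagerank(web))
```

    Even a single pass of this kind already singles out the highly ranked pages, which is the effect the abstract refers to; accuracy for low-ranked pages improves as more walks are run.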

    Noise signal as input data in self-organized neural networks

    Self-organizing neural networks are used to analyze uncorrelated white-noise signals of different distribution types (normal, triangular, and uniform). The artificially generated noise is analyzed by clustering samples of the measured time-signal sequence without any preprocessing. Using this approach, we analyze, for the first time, the current noise produced by a sliding "Wigner-crystal"-like structure in the insulating phase of a 2D electron system in silicon. The possibilities of using the method to analyze and compare experimental data, obtained by observing various effects in solid-state physics, with numerical data simulated using theoretical models are discussed. Published under an exclusive license by AIP Publishing.
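    As a rough sketch of the clustering step, the following assumes artificially generated noise of the three distributions, cut into fixed-length windows and fed without preprocessing to a small one-dimensional self-organizing map written in plain NumPy; the window length, map size, and training schedule are illustrative assumptions, not the settings used in the paper.

```python
# Hedged sketch: cluster raw windows of synthetic white noise with a
# tiny 1-D self-organizing map (plain NumPy, illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
window = 32                                  # samples per input vector
signals = {
    "normal":     rng.normal(size=50_000),
    "uniform":    rng.uniform(-1, 1, size=50_000),
    "triangular": rng.triangular(-1, 0, 1, size=50_000),
}

def windows(x, w):
    """Cut a 1-D signal into consecutive, non-overlapping windows."""
    n = len(x) // w
    return x[: n * w].reshape(n, w)

def train_som(data, n_units=16, epochs=5, lr=0.5, sigma=3.0):
    """Train a 1-D SOM on the rows of `data`; return the unit weights."""
    units = rng.normal(scale=0.1, size=(n_units, data.shape[1]))
    idx = np.arange(n_units)
    for _ in range(epochs):
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(units - x, axis=1))
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            units += lr * h[:, None] * (x - units)
        lr *= 0.5                            # shrink learning rate
        sigma = max(sigma * 0.5, 0.5)        # shrink neighbourhood
    return units

# Cluster each noise type and see how its windows spread over the units.
for name, sig in signals.items():
    data = windows(sig, window)
    units = train_som(data)
    bmus = np.argmin(
        np.linalg.norm(data[:, None, :] - units[None, :, :], axis=2), axis=1
    )
    print(name, np.bincount(bmus, minlength=len(units)))
```

    The histogram of best-matching units gives a crude fingerprint of each noise distribution; the paper applies the same clustering idea to measured current noise.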

    Quantum Čerenkov Radiation: Spectral Cutoffs and the Role of Spin and Orbital Angular Momentum

    We show that the well-known Čerenkov effect contains new phenomena arising from the quantum nature of charged particles. The Čerenkov transition amplitudes allow coupling between the charged particle and the emitted photon through their orbital angular momentum (OAM) and spin, by scattering into preferred angles and polarizations. Importantly, the spectral response reveals a discontinuity immediately below a frequency cutoff that can occur in the optical region. Specifically, with proper shaping of electron beams (ebeams), we predict that the traditional Čerenkov radiation angle splits into two distinctive cones of photonic shockwaves. One of the shockwaves can move along a backward cone, otherwise considered impossible for Čerenkov radiation in ordinary matter. Our findings are observable for ebeams with realistic parameters, offering new applications including novel quantum optics sources, and open a new realm for Čerenkov detectors involving the spin and orbital angular momentum of charged particles. Comment: 27 pages, 3 figures.

    Quantum diffusion on a cyclic one dimensional lattice

    The quantum diffusion of a particle in an initially localized state on a cyclic lattice with N sites is studied. The diffusion and the reconstruction time are calculated. Strong differences are found between even and odd numbers of sites, and the limit N → ∞ is studied. The predictions of the model could be tested with micro- and nanotechnology devices. Comment: 17 pages, 5 figures.
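    A minimal sketch of such a model, assuming a nearest-neighbour tight-binding Hamiltonian on a ring of N sites and a particle initially localized on one site; the printed return probability is one simple probe of the reconstruction time mentioned above. The value of N and the time grid are illustrative.

```python
# Hedged sketch: quantum evolution of a localized state on a cyclic
# 1-D lattice (nearest-neighbour hopping, periodic boundary conditions).
import numpy as np

N = 11                                   # number of sites (illustrative, odd)
H = np.zeros((N, N))
for j in range(N):                       # hopping around the ring
    H[j, (j + 1) % N] = H[(j + 1) % N, j] = -1.0

evals, evecs = np.linalg.eigh(H)         # diagonalize once
psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                            # initially localized state

def evolve(t):
    """Return |psi(t)> = exp(-i H t) |psi(0)>, with hbar = 1."""
    phases = np.exp(-1j * evals * t)
    return evecs @ (phases * (evecs.conj().T @ psi0))

for t in np.linspace(0.0, 10.0, 6):
    p_return = abs(np.vdot(psi0, evolve(t))) ** 2   # revival probe
    print(f"t = {t:5.2f}   return probability = {p_return:.3f}")
```

    Revivals of the return probability on such finite rings set the reconstruction time, and their pattern depends on whether N is even or odd, which is the distinction the abstract highlights.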

    The Effect of Neutral Atoms on Capillary Discharge Z-pinch

    We study the effect of neutral atoms on the dynamics of a capillary discharge Z-pinch, in a regime for which a large soft-x-ray amplification has been demonstrated. We extended the commonly used one-fluid magneto-hydrodynamics (MHD) model by separating out the neutral atoms as a second fluid. Numerical calculations using this extended model yield new predictions for the dynamics of the pinch collapse, and better agreement with known measured data. Comment: 4 pages, 4 PostScript figures, to be published in Phys. Rev. Lett.

    On interconnecting and orchestrating components in disaggregated data centers: The dReDBox project vision

    Computing servers, whether low- or high-end, have traditionally been designed and built using a main-board and its hardware components as a 'hard' monolithic building block; this forms the base unit upon which the system hardware and software stack is built. This hard deployment and management boundary around compute, memory, network, and storage resources is either fixed or quite limited in expandability at design time, and in practice it remains so throughout the machine's lifetime, since subsystem upgrades are seldom carried out. This rigidity has well-known ramifications: lower system resource utilization, costly upgrade cycles, and degraded energy proportionality. In the dReDBox project we take on the challenge of breaking the server boundaries by materializing the concept of disaggregation. The basic idea of the dReDBox architecture is to use a core of high-speed, low-latency opto-electronic fabric that brings physically distant components closer together in terms of latency and bandwidth. We envision a powerful software-defined control plane that matches the flexibility of the system to the resource needs of the applications (or VMs) running on it. Together, the hardware, interconnect, and software architectures will enable the creation of a modular, vertically integrated system that forms a datacenter-in-a-box.

    A new picture of the Lifshitz critical behavior

    New field-theoretic renormalization group methods are developed to describe in a unified fashion the critical exponents of an m-fold Lifshitz point at the two-loop order in the anisotropic (m not equal to d) and isotropic (m = d close to 8) situations. The general theory is illustrated for the N-vector phi^4 model describing a d-dimensional system. A new regularization and renormalization procedure is presented for both types of Lifshitz behavior. The anisotropic cases are formulated with two independent renormalization group transformations, while the description of the isotropic behavior requires only one type of renormalization group transformation. We point out the conceptual advantages implicit in this picture and show how this framework is related to previous renormalization group treatments of the Lifshitz problem. The Feynman integrals at arbitrary loop order can be evaluated analytically, provided they are treated as homogeneous functions of the external momentum scales. The anisotropic universality class (N, d, m) reduces easily to the Ising-like (N, d) one when m = 0. We show that the isotropic universality class (N, m) with m close to 8 cannot be obtained from the anisotropic one in the limit d → m near 8. The exponents for the uniaxial case d = 3, N = m = 1 are in good agreement with recent Monte Carlo simulations of the ANNNI model. Comment: 48 pages, no figures, two typos fixed.

    New Lower Bounds on the Self-Avoiding-Walk Connective Constant

    We give an elementary new method for obtaining rigorous lower bounds on the connective constant for self-avoiding walks on the hypercubic lattice Z^d. The method is based on loop erasure and restoration, and does not require exact enumeration data. Our bounds are best for high d, and in fact agree with the first four terms of the 1/d expansion for the connective constant. The bounds are the best to date for dimensions d ≥ 3, but do not produce good results in two dimensions. For d = 3, 4, 5, 6, respectively, our lower bound is within 2.4%, 0.43%, 0.12%, 0.044% of the value estimated by series extrapolation. Comment: 35 pages, 388480 bytes PostScript, NYU-TH-93/02/0
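    The loop-erasure-and-restoration argument itself is not reproduced here; as a simpler illustration of the quantity being bounded, the sketch below enumerates self-avoiding walks on Z^2 by brute force and prints the ratio c_{n+1}/c_n, which approaches the connective constant as n grows. The small walk lengths used are purely illustrative.

```python
# Hedged sketch: brute-force enumeration of self-avoiding walks on Z^2
# (not the paper's loop-erasure/restoration bound, just the quantity it bounds).

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def count_saws(n):
    """Count n-step self-avoiding walks on Z^2 starting at the origin."""
    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        x, y = pos
        total = 0
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt not in visited:           # self-avoidance constraint
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total
    return extend((0, 0), {(0, 0)}, n)

counts = {n: count_saws(n) for n in range(1, 10)}
for n in range(1, 9):
    ratio = counts[n + 1] / counts[n]
    print(f"n = {n}:  c_n = {counts[n]:6d}   c_(n+1)/c_n = {ratio:.4f}")
```

    On Z^2 the ratios decrease from 3 toward roughly 2.64; the paper's interest is in higher dimensions, where rigorous lower bounds rather than enumerations are the goal.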