39 research outputs found

    Application of Minimal Subtraction Renormalization to Crossover Behavior near the ^3He Liquid-Vapor Critical Point

    Full text link
    Parametric expressions are used to calculate the isothermal susceptibility, specific heat, order parameter, and correlation length along the critical isochore and coexistence curve from the asymptotic region to the crossover region. These expressions are based on the minimal-subtraction renormalization scheme within the \phi^4 model. Using two adjustable parameters in these expressions, we fit the theory globally to recently obtained experimental measurements of the isothermal susceptibility and specific heat along the critical isochore and coexistence curve, and to earlier measurements of the coexistence curve and of the light-scattering intensity along the critical isochore of ^3He near its liquid-vapor critical point. The theory provides good agreement with these experimental measurements within the reduced temperature range |t| \le 2\times 10^{-2}.
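
    For orientation, close to the critical point such parametric expressions must reduce to the standard asymptotic power laws of the 3D Ising universality class, sketched below in LaTeX; the amplitudes and the full crossover functions follow from the paper's minimal-subtraction scheme and are not reproduced here.

        \chi_T \simeq \Gamma_0^{\pm}\,|t|^{-\gamma},\qquad
        C_V \simeq \frac{A^{\pm}}{\alpha}\,|t|^{-\alpha},\qquad
        \Delta\rho \simeq B\,|t|^{\beta},\qquad
        \xi \simeq \xi_0^{\pm}\,|t|^{-\nu},\qquad
        t = \frac{T - T_c}{T_c}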

    Crossover phenomena in spin models with medium-range interactions and self-avoiding walks with medium-range jumps

    Full text link
    We study crossover phenomena in a model of self-avoiding walks with medium-range jumps, which corresponds to the limit N\to 0 of an N-vector spin system with medium-range interactions. In particular, we consider the critical crossover limit that interpolates between the Gaussian and the Wilson-Fisher fixed points. The corresponding crossover functions are computed using field-theoretical methods and an appropriate mean-field expansion. The critical crossover limit is studied accurately by numerical Monte Carlo simulations, which are much more efficient for walk models than for spin systems. The Monte Carlo data are compared with the field-theoretical predictions for the critical crossover functions, and good agreement is found. We also verify the predictions for the scaling behavior of the leading nonuniversal corrections. We determine phenomenological parametrizations that are exact in the critical crossover limit, have the correct scaling behavior for the leading correction, and describe the nonuniversal crossover behavior of our data for any finite range.
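
    As a concrete (if naive) picture of the walk model, the following Python sketch generates a single self-avoiding walk whose steps may jump to any lattice site within range R by simple rejection sampling. It is not the authors' simulation algorithm, and the parameter values are arbitrary illustration choices.

        # Illustrative sketch only: simple-sampling Monte Carlo for a 2D self-avoiding
        # walk whose steps may jump to any lattice site within (Chebyshev) range R.
        # Not the algorithm of the paper; R and n_steps are arbitrary demo values.
        import random

        def saw_medium_range(n_steps, R, max_tries=10000):
            """Return one self-avoiding walk of n_steps jumps, or None if sampling fails."""
            # All allowed jump vectors within range R, excluding staying in place.
            jumps = [(dx, dy) for dx in range(-R, R + 1) for dy in range(-R, R + 1)
                     if (dx, dy) != (0, 0)]
            for _ in range(max_tries):
                path = [(0, 0)]
                visited = {(0, 0)}
                ok = True
                for _ in range(n_steps):
                    dx, dy = random.choice(jumps)
                    pos = (path[-1][0] + dx, path[-1][1] + dy)
                    if pos in visited:      # self-intersection: reject the whole walk
                        ok = False
                        break
                    visited.add(pos)
                    path.append(pos)
                if ok:
                    return path
            return None

        if __name__ == "__main__":
            walk = saw_medium_range(n_steps=50, R=3)
            if walk is not None:
                x, y = walk[-1]
                print("squared end-to-end distance:", x * x + y * y)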

    Product of four Hadamard matrices

    No full text

    Capped colloids as light-mills in optical traps

    No full text

    Optimization of isopolar microtubule arrays.

    No full text
    Isopolar arrays of aligned cytoskeletal filaments are components in a number of designs of hybrid nanodevices incorporating biomolecular motors. For example, a combination of filament arrays and motor arrays can form an actuator or a molecular engine resembling an artificial muscle. Here, isopolar arrays of microtubules are fabricated by flow alignment, and their quality is characterized by their degree of alignment. We find, in agreement with our analytical models, that the degree of alignment is ultimately limited by thermal forces, while the kinetics of the alignment process are influenced by the flow strength, the microtubule stiffness, the gliding velocity, and the tip length. Strong flows remove microtubules from the surface and reduce the filament density, suggesting that there is an optimal flow strength for the fabrication of ordered arrays.
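
    As a hypothetical illustration of how a degree of alignment could be quantified from measured filament orientation angles (the paper's own metric may differ), one standard choice is the polar order parameter computed below.

        # Hypothetical illustration: quantify the degree of isopolar alignment of
        # filaments from their in-plane orientation angles (radians). This is one
        # standard order parameter, not necessarily the metric used in the paper.
        import math
        import random

        def polar_order_parameter(angles_rad):
            """|<exp(i*theta)>|: 1 = perfectly aligned and isopolar, 0 = isotropic."""
            n = len(angles_rad)
            cx = sum(math.cos(a) for a in angles_rad) / n
            cy = sum(math.sin(a) for a in angles_rad) / n
            return math.hypot(cx, cy)

        if __name__ == "__main__":
            # Example: angles scattered slightly around the flow direction (0 rad).
            angles = [random.gauss(0.0, 0.3) for _ in range(500)]
            print("polar order parameter:", round(polar_order_parameter(angles), 3))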

    FLARS Algorithms and the Detection of Time Series Anomalies

    No full text
    As a rule, algorithms for recognizing anomalies in time series are based on time-frequency or statistical analysis. This article gives a detailed formal description of a new fuzzy-set-based algorithm, FLARS (Fuzzy Logic Algorithm for Recognition of Signals). It recognizes time series anomalies by means of "smooth" modelling (in the sense of fuzzy mathematics) of the logic of an interpreter who searches for anomalies in the record.
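
    The following Python toy is not FLARS itself (the article gives its formal construction); it only illustrates the general idea of replacing a hard threshold with a smooth fuzzy membership of a local activity measure. All function names and parameter values are invented for the demonstration.

        # Toy sketch of fuzzy-logic anomaly flagging on a time series. NOT the FLARS
        # algorithm; it only shows a local "activity" measure mapped through a fuzzy
        # membership function instead of a hard threshold.
        def local_activity(x, i, half_window=5):
            """Mean absolute increment of the record in a window around sample i."""
            lo, hi = max(0, i - half_window), min(len(x) - 1, i + half_window)
            diffs = [abs(x[k + 1] - x[k]) for k in range(lo, hi)]
            return sum(diffs) / max(1, len(diffs))

        def membership(a, a_typical):
            """Fuzzy degree in [0, 1) that activity a is 'anomalously large'."""
            r = a / (a_typical + 1e-12)
            return r / (1.0 + r)            # smooth, monotone, saturating

        def flag_anomalies(x, level=0.7):
            acts = [local_activity(x, i) for i in range(len(x))]
            a_typical = sorted(acts)[len(acts) // 2]    # median activity of the record
            return [i for i, a in enumerate(acts) if membership(a, a_typical) >= level]

        if __name__ == "__main__":
            record = [0.0] * 100
            record[50:55] = [5.0, -4.0, 6.0, -5.0, 4.0]  # injected burst
            print("flagged samples:", flag_anomalies(record))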

    Development of the Algorithmic Basis of the FCAZ Method for Earthquake-Prone Area Recognition

    No full text
    The present paper continues the series of publications by the authors devoted to the problem of recognizing regions of potentially high seismicity. It is aimed at developing the mathematical apparatus and the algorithmic basis of the FCAZ method, designed for effective recognition of earthquake-prone areas. A detailed description is given both of the mathematical algorithms included in FCAZ in its original form and of those developed in this paper. Using California as an example, it is shown that the significantly developed algorithmic basis of FCAZ makes it possible to increase the reliability and accuracy of FCAZ recognition. In particular, a number of small zones located at fairly small distances from each other but having a close "internal" connection are merged into single large, high-seismicity areas.
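
    The FCAZ algorithms themselves are specific constructions of discrete mathematical analysis described in the paper; as a loose, hypothetical analogue of the final step mentioned above (connecting nearby small zones into larger high-seismicity areas), a generic single-linkage merge by union-find could look like this, with invented coordinates and merge radius:

        # Hypothetical analogue only: merge nearby small zones into larger areas by
        # single linkage (union-find). The actual FCAZ algorithms are described in
        # the paper; the zone coordinates and merge radius below are demo values.
        import math

        def merge_zones(centers, merge_radius):
            """Group zone centers (planar coordinates) whose pairwise distance is
            below merge_radius; returns a list of index groups."""
            parent = list(range(len(centers)))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            def union(i, j):
                parent[find(i)] = find(j)

            for i in range(len(centers)):
                for j in range(i + 1, len(centers)):
                    if math.dist(centers[i], centers[j]) < merge_radius:
                        union(i, j)

            groups = {}
            for i in range(len(centers)):
                groups.setdefault(find(i), []).append(i)
            return list(groups.values())

        if __name__ == "__main__":
            zones = [(0.0, 0.0), (0.4, 0.1), (0.8, 0.0), (5.0, 5.0)]
            print(merge_zones(zones, merge_radius=1.0))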

    DPS Clustering: New Results

    No full text
    The results presented in this paper were obtained as part of the continued development and study of clustering algorithms based on discrete mathematical analysis. The article briefly describes the theory of Discrete Perfect Sets (DPS-sets), which is the basis for the construction of DPS-clustering algorithms. The main task of the previously constructed DPS-algorithms is to search for clusters in multidimensional arrays with noise. DPS-algorithms have two stages: the first stage is the recognition of the maximal perfect set of a given density level in the initial array; the second stage is the partitioning of the result of the first stage into connected components, which are taken to be the clusters. A study of the properties of the DPS-algorithms showed that, in a number of situations, the result of the first stage does not include all clusters of practical interest, while in the second stage the partitioning into connected components can lead to unnecessarily small clusters. Simple variation of the parameters of the DPS-algorithms does not eliminate these drawbacks. The present paper is devoted to constructing, on the basis of the DPS-algorithms, new versions that are largely free of these drawbacks.
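
    A simplified sketch of the two-stage scheme described above (not the exact DPS construction, and with invented parameters) could look as follows in Python: stage one iteratively prunes points whose local density falls below a level, stage two splits the survivors into connected components of the neighbourhood graph.

        # Simplified sketch of the two-stage scheme, NOT the exact DPS construction:
        # (1) iteratively keep only points whose local density (neighbours within
        # radius r) stays >= level; (2) split the survivors into connected
        # components of the r-neighbourhood graph, which play the role of clusters.
        import math

        def dps_like_clusters(points, r, level):
            alive = set(range(len(points)))

            def neighbours(i, pool):
                return [j for j in pool
                        if j != i and math.dist(points[i], points[j]) <= r]

            # Stage 1: prune until every remaining point has enough neighbours.
            changed = True
            while changed:
                changed = False
                for i in list(alive):
                    if len(neighbours(i, alive)) < level:
                        alive.remove(i)
                        changed = True

            # Stage 2: connected components of the neighbourhood graph = clusters.
            clusters, seen = [], set()
            for i in alive:
                if i in seen:
                    continue
                stack, comp = [i], []
                while stack:
                    k = stack.pop()
                    if k in seen:
                        continue
                    seen.add(k)
                    comp.append(k)
                    stack.extend(j for j in neighbours(k, alive) if j not in seen)
                clusters.append(sorted(comp))
            return clusters

        if __name__ == "__main__":
            pts = [(0, 0), (0.1, 0), (0, 0.1), (3, 3), (3.1, 3), (3, 3.1), (10, 10)]
            print(dps_like_clusters(pts, r=0.5, level=2))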