
    Generalized Calibrations

    We present a generalization of calibrations in which the calibration form is not closed. We use this to examine a class of supersymmetric p-brane worldvolume solitons. As an example we consider M5-brane worldvolume solitons in an AdS background. Comment: 4 pages, LaTeX, uses cargese.cls (included). To appear in the gong show section of the Proceedings of the Cargese '99 ASI "Progress in String Theory and M-Theory".
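
    For orientation, the standard notion being generalized can be stated in two lines; the sketch below gives the textbook definition of a calibration, whose closedness condition is precisely what this work relaxes (the generalized form and the brane applications are in the paper itself):

        % Standard calibration on a Riemannian manifold (M, g):
        % a p-form \phi is a calibration iff
        \[
          d\phi = 0
          \qquad\text{and}\qquad
          \phi\big|_{\xi} \le \mathrm{vol}_{\xi}
          \quad\text{for every oriented tangent $p$-plane } \xi ,
        \]
        % and a p-submanifold \Sigma is calibrated iff \phi|_{T\Sigma} = vol_\Sigma.
        % The generalization keeps the volume bound but drops d\phi = 0.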

    Plane Graphs are Facially-non-repetitively $10^{4 \cdot 10^7}$-Choosable

    A sequence $(x_1, x_2, \ldots, x_{2n})$ of even length is a repetition if $(x_1, \ldots, x_n) = (x_{n+1}, \ldots, x_{2n})$. We prove the existence of a constant $C < 10^{4 \cdot 10^7}$ such that, given any planar drawing of a graph $G$ and a list $L(v)$ of $C$ permissible colors for each vertex $v$ in $G$, there is a choice of a permissible color for each vertex such that the sequence of colors of the vertices on any facial simple path in $G$ is not a repetition.
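
    The repetition condition is easy to state operationally; the following minimal Python check (our illustration, not from the paper) decides whether a color sequence read along a path is a repetition:

        def is_repetition(seq):
            """True iff seq = (x_1, ..., x_2n) with (x_1, ..., x_n) == (x_{n+1}, ..., x_2n)."""
            if len(seq) == 0 or len(seq) % 2 != 0:   # must have even, positive length
                return False
            n = len(seq) // 2
            return seq[:n] == seq[n:]

        print(is_repetition(list("abab")))   # True:  ('a','b') == ('a','b')
        print(is_repetition(list("abba")))   # False: ('a','b') != ('b','a')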

    Lévy flights as an underlying mechanism for global optimization algorithms

    In this paper we propose and advocate the use of so-called Lévy flights as a driving mechanism for a class of stochastic optimization computations. This proposal, for some reason overlooked until now, is, in the author's opinion, well suited to the need for an algorithm capable of generating trial steps of very different lengths in the search space. The required balance between short and long steps can be easily and fully controlled. A simple example of an approximated Lévy distribution, implemented in FORTRAN 77, is given. We also discuss the physical grounds of the presented method. Comment: 8 pages, 3 figures, LaTeX 2.09, requires kaeog.sty style file (included). Presented at the V Domestic Conference "Evolutionary Algorithms and Global Optimization", May 30th - June 1st, 2001, Jastrzębia Góra (Poland).
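
    The paper's FORTRAN 77 routine is not reproduced in the abstract; the Python sketch below shows one common way to approximate the heavy-tailed, Lévy-like step lengths the text refers to, by inverse-transform sampling of a power-law (Pareto) tail. The exponent alpha and the cutoff s_min are illustrative assumptions, not values taken from the paper:

        import math
        import random

        def levy_step(alpha=1.5, s_min=1.0):
            """Heavy-tailed step length with P(S > s) = (s / s_min)**(-alpha).
            For 0 < alpha < 2 the variance is infinite, which produces the
            desired mixture of many short steps and occasional very long ones."""
            u = 1.0 - random.random()        # uniform in (0, 1], avoids u == 0
            return s_min * u ** (-1.0 / alpha)

        def levy_step_2d(alpha=1.5):
            """A trial move in 2-D: Levy-distributed length, uniform direction."""
            theta = random.uniform(0.0, 2.0 * math.pi)
            s = levy_step(alpha)
            return s * math.cos(theta), s * math.sin(theta)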

    Interval straight line fitting

    I consider the task of experimental data fitting. Unlike the traditional approach, I do not try to minimize any functional based on the available experimental information; instead, the minimization problem is replaced with a constraint satisfaction procedure, which produces the interval hull of solutions of the desired type. The method, called the 'box slicing algorithm', is described in detail. The results obtained this way need not be labeled with a confidence level of any kind; they are simply certain (guaranteed). The method easily handles the case with uncertainties in one or both variables. There is no need for the always more or less arbitrary weighting of the experimental data. The approach is directly applicable to other experimental data processing problems, like outlier detection or finding the straight line which is tangent to the experimental curve. Comment: 21 pages, LaTeX2e, 4 figures, submitted to Reliable Computing.
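
    The abstract does not spell out the 'box slicing algorithm' itself; as a hedged illustration, the core test behind interval straight-line fitting can be phrased as a generic interval constraint check (not necessarily the paper's exact procedure): a box [a] x [b] of slope/intercept values stays admissible for a data point ([x], [y]) only if the interval evaluation [a][x] + [b] intersects [y].

        def imul(p, q):
            """Interval product [p] * [q], intervals given as (lo, hi) pairs."""
            (pl, ph), (ql, qh) = p, q
            v = (pl * ql, pl * qh, ph * ql, ph * qh)
            return (min(v), max(v))

        def admissible(a, b, x, y):
            """Can some line Y = a*X + b, with a in [a] and b in [b], meet the
            uncertainty box [x] x [y]?  Necessary condition: [a][x] + [b] hits [y]."""
            lo, hi = imul(a, x)
            lo, hi = lo + b[0], hi + b[1]
            return hi >= y[0] and lo <= y[1]

        # Parameter boxes failing the test for any data point are discarded;
        # the survivors are bisected ("sliced") and the terminal boxes enclose
        # the interval hull of all consistent (slope, intercept) pairs.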

    Reliable uncertainties in indirect measurements

    In this article we present a very intuitive, easy-to-follow, yet mathematically rigorous approach to the so-called data fitting process. Rather than minimizing the distance between measured and simulated data points, we prefer to find an area in the searched parameter space that generates a simulated curve crossing as many acquired experimental points as possible, but at least half of them. Such a task is fairly easy to attack with interval calculations. The problem, however, is that interval calculations operate on guaranteed intervals, that is, on pairs of numbers determining the minimal and maximal values of a measured quantity, while in the vast majority of cases our measured quantities are expressed as a pair of two other numbers: the average value and its standard deviation. Here we propose a combination of interval calculus with basic notions from probability and statistics. This approach makes it possible to obtain the results in familiar form: reliable values of the searched parameters, their standard deviations, and their correlations as well. There are no assumptions concerning the probability density distributions of the experimental values besides the obvious one that their variances are finite. Neither the symmetry of the uncertainties of the experimental distributions is assumed, nor do those uncertainties have to be 'small.' As a side effect, outliers are quietly and safely ignored, even if numerous. Comment: 9 pages, 4 figures, PACS numbers: 07.05.Kf; 02.60.Ed; 02.70.R
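
    As a minimal sketch of the acceptance rule just described (our own rendering, not the authors' code), a candidate parameter set survives if its simulated curve crosses at least half of the experimental uncertainty boxes:

        def crossings(model, params, boxes):
            """Count uncertainty boxes ((xlo, xhi), (ylo, yhi)) met by the curve.
            The curve is sampled only at the x-extremes of each box, which is
            exact for monotone models; a full implementation would bound
            model(x, params) rigorously over the whole x-interval."""
            hits = 0
            for (xlo, xhi), (ylo, yhi) in boxes:
                y1, y2 = model(xlo, params), model(xhi, params)
                if max(y1, y2) >= ylo and min(y1, y2) <= yhi:
                    hits += 1
            return hits

        def acceptable(model, params, boxes):
            """The 'at least half of the points' criterion from the abstract."""
            return crossings(model, params, boxes) * 2 >= len(boxes)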

    Power and beauty of interval methods

    Interval calculus is a relatively new branch of mathematics. Initially understood as a set of tools to assess the quality of numerical calculations (rigorous control of rounding errors), it has since become a discipline in its own right. Interval methods are useful whenever we have to deal with uncertainties that can be rigorously bounded. Fuzzy sets, rough sets, and probability calculus can perform similar tasks, yet only interval methods are able to (dis)prove, with mathematical rigor, the (non)existence of desired solution(s). Several problems are known, not presented here, which cannot be effectively solved by any other means. This paper presents the basic notions and main ideas of interval calculus and two examples of useful algorithms. Comment: Short, yet highly informative introduction to interval methods with immediate application to experimental data analysis. To be presented May 26-29, 2003, at the VI Domestic Conference on Evolutionary Algorithms and Global Optimization, Poland (invited talk). 8 pages, no figures, LaTeX2e. Improved layout, simplified notation, keyword list extended.
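
    To make the 'basic notions' concrete, here is a minimal sketch of interval arithmetic in Python (illustrative only: outward directed rounding, required for true rigor, is omitted, and production work would use a dedicated library):

        class Interval:
            """Closed interval [lo, hi] with naive (not outward-rounded) arithmetic."""
            def __init__(self, lo, hi=None):
                self.lo = lo
                self.hi = lo if hi is None else hi
            def __add__(self, o):
                return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o):
                return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                v = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
                return Interval(min(v), max(v))
            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        # The "dependency effect": x - x encloses 0 but is not {0}.
        x = Interval(1.0, 2.0)
        print(x - x)   # prints [-1.0, 1.0]: rigorous, though not tight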

    Aging, double helix and small world property in genetic algorithms

    Over a quarter of a century after the invention of genetic algorithms, and myriads of their modifications as well as successful implementations, we still lack many essential details of a thorough analysis of their inner workings. One such fundamental question is: how many generations do we need to solve an optimization problem? This paper tries to answer this question, albeit in a fuzzy way, making use of the double helix concept. As a byproduct, we gain a better understanding of the ways in which a genetic algorithm may be fine-tuned. Comment: Submitted to the workshop on evolutionary algorithms, Krakow (Cracow), Poland, Sept. 30, 2002, 6 pages, no figures, LaTeX 2.09, requires kaeog.sty (included).

    Breakthrough in Interval Data Fitting I. The Role of Hausdorff Distance

    This is the first of two papers describing the process of fitting experimental data under interval uncertainty. Here I present the methodology, designed from the very beginning as an interval-oriented tool, meant to replace, to a large extent, the famous Least Squares (LSQ) method and other slightly less popular methods. Contrary to its classical counterparts, the presented method does not require any poorly justified prior assumptions, such as the smallness of experimental uncertainties or their normal (Gaussian) distribution. Using the interval approach, we are able to fit rigorously and reliably not only simple functional dependencies, with no extra effort when both variables are uncertain, but also cases in which the constitutive equation exists in implicit rather than explicit functional form. The magic word, and the key to the success of the interval approach, turns out to be the Hausdorff distance. Comment: No figures, submitted to the XII Conference on Evolutionary Algorithms and Global Optimization (XII KAEiOG), to be held June 1-3 in Zawoja (Poland).
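
    The Hausdorff distance itself is standard; a minimal Python version for finite point sets (an illustration of the metric only, not of the paper's fitting procedure):

        import math

        def hausdorff(A, B):
            """d_H(A, B) = max( max_{a in A} min_{b in B} |a - b|,
                                max_{b in B} min_{a in A} |a - b| )
            for finite point sets A and B (Euclidean metric, Python >= 3.8)."""
            h = lambda P, Q: max(min(math.dist(p, q) for q in Q) for p in P)
            return max(h(A, B), h(B, A))

        print(hausdorff([(0, 0), (1, 0)], [(0, 1)]))   # 1.4142..., from (1,0) vs (0,1)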

    Where is magnetic anisotropy field pointing to?

    The desired result of magnetic anisotropy investigations is the determination of the value(s) of various anisotropy constant(s). This is sometimes difficult, especially when precise knowledge of the saturation magnetization is required, as happens in ferromagnetic resonance (FMR) studies. In such cases we usually resort to a 'trick' and fit our experimental data to the quantity called the \emph{anisotropy field}, which is strictly proportional to the ratio of the sought anisotropy constant and the saturation magnetization. Yet this quantity is a scalar, simply a number, and is therefore of little value for modeling or simulations of magnetostatic or micromagnetic structures. Here we show how to 'translate' the values of magnetic anisotropy constants into the complete vector of the magnetic anisotropy field. Our derivation is rigorous and covers the most often encountered cases, from uniaxial to cubic anisotropy. Comment: 3 pages, no figures.
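
    The uniaxial case illustrates the kind of 'translation' meant here (a textbook sketch in SI units, not the paper's full derivation): with reduced magnetization m = M/M_s and easy-axis unit vector u, the vector anisotropy field follows from the energy density as

        \[
          E_a = K_u \left[ 1 - (\mathbf{m}\cdot\mathbf{u})^2 \right],
          \qquad
          \mathbf{H}_a = -\frac{1}{\mu_0 M_s}\,\frac{\partial E_a}{\partial \mathbf{m}}
                       = \frac{2 K_u}{\mu_0 M_s}\,(\mathbf{m}\cdot\mathbf{u})\,\mathbf{u},
        \]
        % so the familiar scalar "anisotropy field" 2 K_u / (mu_0 M_s) is the
        % magnitude of H_a at saturation along the easy axis (m = u).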

    Classifying extrema using intervals

    We present a straightforward and verified method of deciding whether an $n$-dimensional point $x$ ($n \ge 1$), such that $\nabla f(x) = 0$, is a local minimizer, a maximizer, or just a saddle point of a real-valued function $f$. The method scales linearly with the dimensionality of the problem and never produces false results. Comment: LaTeX, 7 pages, no figures.
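
    The abstract gives no algorithmic detail; one standard interval route to such a verified classification (a sketch under our own assumptions, not necessarily the paper's linear-scaling method) checks an interval enclosure of the Hessian with Gershgorin discs:

        def classify(H):
            """Classify a critical point from an interval Hessian enclosure.
            H[i][j] = (lo, hi) bounds d2f/(dxi dxj) over a box around x.
            By Gershgorin's disc (and disc-counting) theorem: if every disc is
            certainly positive the point is a minimizer, if certainly negative
            a maximizer, and a clean split into positive and negative discs
            proves a saddle; anything else stays undecided."""
            n = len(H)
            pos = neg = 0
            for i in range(n):
                r = sum(max(abs(H[i][j][0]), abs(H[i][j][1]))
                        for j in range(n) if j != i)
                if H[i][i][0] - r > 0:
                    pos += 1
                elif H[i][i][1] + r < 0:
                    neg += 1
            if pos == n:
                return "local minimizer"
            if neg == n:
                return "local maximizer"
            if pos and neg and pos + neg == n:
                return "saddle point"
            return "undecided"

        # f(x, y) = x**2 - y**2 at the origin: one certainly positive and one
        # certainly negative eigenvalue, hence a saddle.
        print(classify([[(2, 2), (0, 0)], [(0, 0), (-2, -2)]]))   # saddle point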