
    A polynomial training algorithm for calculating perceptrons of optimal stability

    Recomi (REpeated COrrelation Matrix Inversion) is a polynomially fast algorithm for searching for optimally stable solutions of the perceptron learning problem. For random unbiased and biased patterns it is shown that the algorithm is able to find optimal solutions, if any exist, in at worst O(N^4) floating point operations. Even beyond the critical storage capacity alpha_c the algorithm is able to find locally stable solutions (with negative stability) at the same speed. There are no divergent time scales in the learning process. A full proof of convergence cannot yet be given; only major constituents of a proof are shown. Comment: 11 pages, LaTeX, 4 EPS figures
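    A minimal sketch of the optimal-stability objective referred to above, not of the Recomi update itself (whose details are in the paper): the stability of a coupling vector w on patterns ξ^μ with targets σ^μ is κ(w) = min_μ σ^μ (w·ξ^μ)/|w|, and an optimally stable perceptron maximizes this margin. The pattern sizes and the Hebb starting vector below are illustrative assumptions.

```python
import numpy as np

def stability(w, xi, sigma):
    """Minimal margin kappa(w) = min_mu sigma_mu (w . xi_mu) / |w|.

    xi: (P, N) array of patterns, sigma: (P,) array of +/-1 targets.
    An optimally stable perceptron maximizes this quantity; kappa < 0
    corresponds to the locally stable solutions with negative stability
    mentioned in the abstract.
    """
    fields = sigma * (xi @ w)
    return fields.min() / np.linalg.norm(w)

# Illustrative random unbiased patterns (an assumption, not the paper's setup)
rng = np.random.default_rng(0)
N, P = 100, 150                                # alpha = P/N = 1.5
xi = rng.choice([-1.0, 1.0], size=(P, N))
sigma = rng.choice([-1.0, 1.0], size=P)
w_hebb = (sigma[:, None] * xi).sum(axis=0)     # Hebb vector as a crude starting point
print("kappa(Hebb) =", stability(w_hebb, xi, sigma))
```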

    On-Line AdaTron Learning of Unlearnable Rules

    We study the on-line AdaTron learning of linearly non-separable rules by a simple perceptron. Training examples are provided by a perceptron with a non-monotonic transfer function which reduces to the usual monotonic relation in a certain limit. We find that, although the on-line AdaTron learning is a powerful algorithm for the learnable rule, it does not give the best possible generalization error for unlearnable problems. Optimization of the learning rate is shown to greatly improve the performance of the AdaTron algorithm, leading to the best possible generalization error for a wide range of the parameter which controls the shape of the transfer function. Comment: RevTeX, 17 pages, 8 figures, to appear in Phys. Rev.
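    A sketch of one common form of the on-line AdaTron update for a simple perceptron student; the normalization and learning-rate conventions are assumptions, and the paper's non-monotonic teacher is not reproduced here.

```python
import numpy as np

def adatron_online_step(w, x, t, eta=1.0):
    """One on-line AdaTron update (a common textbook form, assumed here).

    w: student weights, x: input pattern, t: teacher label (+1 or -1),
    eta: learning rate (whose optimization the abstract shows to matter
    for unlearnable rules). Updates only on errors, with a step size
    proportional to the magnitude of the misaligned local field.
    """
    u = w @ x / np.sqrt(len(x))          # student local field
    if u * t < 0:                        # misclassified example
        w = w - eta * u * x / np.sqrt(len(x))
    return w
```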

    Generalizing with perceptrons in case of structured phase- and pattern-spaces

    We investigate the influence of different kinds of structure on the learning behaviour of a perceptron performing a classification task defined by a teacher rule. The underlying pattern distribution is permitted to have spatial correlations, and the prior distribution for the teacher coupling vectors is itself assumed to be nonuniform. Thus classification tasks of quite different difficulty are included. As learning algorithms we discuss Hebbian learning, Gibbs learning, and Bayesian learning with different priors, using methods from statistics and the replica formalism. We find that the Hebb rule is quite sensitive to the structure of the actual learning problem, failing asymptotically in most cases. In contrast, the behaviour of the more sophisticated Gibbs and Bayes learning methods is influenced by the spatial correlations only in an intermediate regime of α, where α specifies the size of the training set. Concerning the Bayesian case, we show how enhanced prior knowledge improves the performance. Comment: LaTeX, 32 pages with EPS figures, accepted by J. Phys.
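    As an illustration of the simplest of the three algorithms compared above, a minimal sketch of the Hebb rule in a teacher-student perceptron setting; the unstructured random patterns and Gaussian teacher used here are assumptions and do not reproduce the correlated distributions and nonuniform priors studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 400                              # alpha = P/N = 2
teacher = rng.standard_normal(N)             # teacher coupling vector
xi = rng.standard_normal((P, N))             # training patterns (unstructured here)
labels = np.sign(xi @ teacher)               # teacher classifications

# Hebb rule: student couplings are the label-weighted sum of the patterns
student = (labels[:, None] * xi).sum(axis=0)

# Generalization error = probability teacher and student disagree on a new
# pattern, estimated on a fresh test set
test = rng.standard_normal((10000, N))
gen_err = np.mean(np.sign(test @ teacher) != np.sign(test @ student))
print(f"Hebb generalization error at alpha=2: {gen_err:.3f}")
```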

    Diffusion with random distribution of static traps

    The random walk problem is studied in two and three dimensions in the presence of a random distribution of static traps. An efficient Monte Carlo method, based on a mapping onto a polymer model, is used to measure the survival probability P(c,t) as a function of the trap concentration c and the time t. Theoretical arguments, based on earlier work of Donsker and Varadhan and of Rosenstock, are presented for why in two dimensions one expects a data collapse if -ln[P(c,t)]/ln(t) is plotted as a function of (lambda t)^{1/2}/ln(t) (with lambda = -ln(1-c)), whereas in three dimensions one expects a data collapse if -t^{-1/3} ln[P(c,t)] is plotted as a function of t^{2/3} lambda. These arguments are supported by the Monte Carlo results. Both data collapses show a clear crossover from the early-time Rosenstock behavior to Donsker-Varadhan behavior at long times. Comment: 4 pages, 6 figures
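    A brute-force Monte Carlo sketch of the measured quantity, the survival probability P(c,t) on a two-dimensional lattice with randomly placed perfect traps; the paper's polymer-mapping method is far more efficient, and the lattice size and parameters below are illustrative assumptions.

```python
import numpy as np

def survival_probability(c, t_max, n_walkers=2000, L=200, seed=0):
    """Estimate the survival probability P(c, t) of a random walker on an
    L x L periodic lattice in which a fraction c of sites are perfect traps.
    Direct simulation; the paper uses a much more efficient Monte Carlo
    method based on a mapping onto a polymer model."""
    rng = np.random.default_rng(seed)
    traps = rng.random((L, L)) < c
    free_sites = np.argwhere(~traps)
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    alive_at_t = np.zeros(t_max + 1)
    for _ in range(n_walkers):
        x, y = free_sites[rng.integers(len(free_sites))]   # start on a trap-free site
        alive_at_t[0] += 1
        for t in range(1, t_max + 1):
            dx, dy = steps[rng.integers(4)]
            x, y = (x + dx) % L, (y + dy) % L
            if traps[x, y]:
                break                                      # absorbed by a trap
            alive_at_t[t] += 1
    return alive_at_t / n_walkers

P = survival_probability(c=0.05, t_max=500)
print("P(c=0.05, t=500) ~", P[-1])
```

    From such estimates the two-dimensional collapse quoted in the abstract can be formed by plotting -ln[P(c,t)]/ln(t) against (lambda t)^{1/2}/ln(t), with lambda = -ln(1-c).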

    Aircraft study of the impact of lake-breeze circulations on trace gases and particles during BAQS-Met 2007

    Highly time-resolved aircraft data, concurrent surface measurements and air quality model simulations were explored to diagnose the processes influencing aerosol chemistry under the influence of lake-breeze circulations in a polluted region of southwestern Ontario, Canada. The analysis was based upon horizontal aircraft transects conducted at multiple altitudes across an entire lake-breeze circulation. Air mass boundaries due to lake-breeze fronts were identified in the aircraft meteorological and chemical data, and were consistent with the frontal locations determined from surface analyses. Observations and modelling support the interpretation of a lake-breeze circulation in which pollutants were lofted at a lake-breeze front, transported in the synoptic flow, caught in a downdraft over the lake, and then confined by onshore flow. The detailed analysis led to the development of conceptual models that summarize the complex 3-D circulation patterns and their interaction with the synoptic flow. The identified air mass boundaries, the interpretation of the lake-breeze circulation, and the air parcel circulation time in the lake-breeze circulation (3.0 to 5.0 h) enabled formation rates of organic aerosol (OA/ΔCO) and SO₄²⁻ to be determined. The formation rate for OA (relative to excess CO in ppmv) was found to be 11.6–19.4 μg m⁻³ ppmv⁻¹ h⁻¹ and the SO₄²⁻ formation rate was 5.0–8.8% h⁻¹. These formation rates are enhanced relative to regional background rates, implying that lake-breeze circulations are an important dynamic in the formation of SO₄²⁻ and secondary organic aerosol. The presence of cumulus clouds associated with the lake-breeze fronts suggests that these enhancements could be due to cloud processes. Additionally, the effective confinement of pollutants along the shoreline may have limited pollutant dilution, leading to elevated oxidant concentrations.
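    As a rough, purely illustrative check of the orders of magnitude, a hypothetical back-of-the-envelope calculation: if the OA/ΔCO ratio rises by an assumed amount across a circulation lasting 3.0 to 5.0 h, the implied formation rate is that rise divided by the circulation time. The numbers and the simple ratio below are assumptions, not the paper's actual derivation.

```python
# Hypothetical illustration: an assumed increase in OA/deltaCO across one
# lake-breeze circulation, divided by the 3-5 h circulation time, gives a
# formation rate of the same order as the range quoted in the abstract.
delta_oa_per_co = 50.0        # assumed rise in OA/deltaCO, ug m^-3 ppmv^-1
for circulation_time_h in (3.0, 5.0):
    rate = delta_oa_per_co / circulation_time_h
    print(f"{circulation_time_h:.1f} h circulation -> "
          f"{rate:.1f} ug m^-3 ppmv^-1 h^-1")
# 50/5 = 10 and 50/3 ~ 16.7, comparable to the quoted 11.6-19.4 range.
```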

    Storage capacity of a constructive learning algorithm

    Upper and lower bounds for the typical storage capacity of a constructive algorithm, the Tilinglike Learning Algorithm for the Parity Machine [M. Biehl and M. Opper, Phys. Rev. A {\bf 44} 6888 (1991)], are determined in the asymptotic limit of large training set sizes. The properties of a perceptron with threshold, learning a training set of patterns with a biased distribution of targets, needed as an intermediate step in the capacity calculation, are determined analytically. The lower bound for the capacity, determined with a cavity method, is proportional to the number of hidden units. The upper bound, obtained under the hypothesis of replica symmetry, is close to the one predicted by Mitchison and Durbin [Biol. Cybern. {\bf 60} 345 (1989)]. Comment: 13 pages, 1 figure
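    For reference, a minimal sketch of the parity machine architecture whose storage capacity is being bounded: a tree parity machine outputs the product of the signs of its K hidden perceptrons, each acting on its own slice of the input. The sizes below and the tree partition of the input are assumptions.

```python
import numpy as np

def parity_machine_output(weights, xi):
    """Output of a tree parity machine: the product of the signs of the
    K hidden perceptrons, each seeing its own slice of the input.

    weights: (K, N_per_unit) couplings, xi: (K, N_per_unit) input slices."""
    hidden = np.sign(np.sum(weights * xi, axis=1))
    return np.prod(hidden)

# Illustrative sizes (assumptions)
rng = np.random.default_rng(2)
K, n = 3, 50                                  # 3 hidden units, 50 inputs each
w = rng.standard_normal((K, n))
pattern = rng.choice([-1.0, 1.0], size=(K, n))
print("parity machine output:", parity_machine_output(w, pattern))
```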

    Long-term changes in tropospheric ozone

    Tropospheric ozone changes are investigated using a selected network of surface and ozonesonde sites to give a broad geographic picture of long-term variations. The picture of long-term tropospheric ozone changes is a varied one, in terms of both the sign and magnitude of the trends and the possible causes for the changes. At mid-latitudes of the Southern Hemisphere (S.H.), three time series of ∼20 years in length agree in showing increases that are strongest in the austral spring (August–October). Profile measurements show this increase extending through the mid-troposphere but not into the highest levels of the troposphere. In the Northern Hemisphere (N.H.) Arctic, a period of declining tropospheric ozone through the 1980s into the mid-1990s has reversed, and the overall change is small. The decadal-scale variations in the troposphere in this region are related in part to changes in the lowermost stratosphere. At mid-latitudes in the N.H., continental Europe and Japan showed significant increases in the 1970s and 1980s. Over North America, rises in the 1970s were smaller than those seen in Europe and Japan, suggesting significant regional differences. In all three of these mid-latitude continental regions, tropospheric ozone amounts appear to have leveled off or, in some cases, declined in the more recent decades. Over the North Atlantic, three widely separated sites show significant increases since the late 1990s that may have peaked in recent years. In the N.H. tropics, both the surface record and the ozonesondes in Hawaii show a significant increase in the autumn months in the most recent decade compared to earlier periods, which drives the overall increase seen in the 30-year record. This appears to be related to a shift in the transport pattern during this season, with more frequent flow from higher latitudes in the latest decade.

    Order statistics of the trapping problem

    When a large number N of independent diffusing particles are placed upon a site of a d-dimensional Euclidean lattice randomly occupied by a concentration c of traps, what is the m-th moment of the time t_{j,N} elapsed until the first j are trapped? An exact answer is given in terms of the probability Phi_M(t) that no particle of an initial set of M = N, N-1, ..., N-j particles is trapped by time t. The Rosenstock approximation is used to evaluate Phi_M(t), and it is found that for a large range of trap concentrations the m-th moment of t_{j,N} goes as x^{-m} and its variance as x^{-2}, x being ln^{2/d}(1-c) ln N. A rigorous asymptotic expression (dominant and two corrective terms) is given for the one-dimensional lattice. Comment: 11 pages, 7 figures, to be published in Phys. Rev.
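    A direct Monte Carlo sketch of the quantity defined above, the time t_{j,N} until the first j of N independent walkers are trapped, here on a one-dimensional ring; the parameters are illustrative assumptions, and the paper's results come from the Rosenstock approximation rather than from a simulation of this kind.

```python
import numpy as np

def first_j_trapping_time(N_walkers, j, c, L=10000, t_max=100000, seed=0):
    """Time until the first j of N independent walkers, all started on the
    same trap-free site of a 1D ring of L sites with trap concentration c,
    have been absorbed. Returns np.inf if fewer than j are trapped by t_max."""
    rng = np.random.default_rng(seed)
    traps = rng.random(L) < c
    start = int(np.flatnonzero(~traps)[0])          # a trap-free starting site
    pos = np.full(N_walkers, start)
    alive = np.ones(N_walkers, dtype=bool)
    for t in range(1, t_max + 1):
        steps = rng.choice([-1, 1], size=alive.sum())
        pos[alive] = (pos[alive] + steps) % L       # move the surviving walkers
        alive[alive] = ~traps[pos[alive]]           # absorb those landing on traps
        if N_walkers - alive.sum() >= j:
            return t
    return np.inf

# Averaging over trap configurations estimates the first moment of t_{1,N}
times = [first_j_trapping_time(N_walkers=100, j=1, c=0.01, seed=s) for s in range(20)]
print("mean t_{1,100} estimate:", np.mean(times))
```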