
    On-Line AdaTron Learning of Unlearnable Rules

    We study the on-line AdaTron learning of linearly non-separable rules by a simple perceptron. Training examples are provided by a perceptron with a non-monotonic transfer function which reduces to the usual monotonic relation in a certain limit. We find that, although the on-line AdaTron learning is a powerful algorithm for learnable rules, it does not give the best possible generalization error for unlearnable problems. Optimization of the learning rate is shown to greatly improve the performance of the AdaTron algorithm, leading to the best possible generalization error for a wide range of the parameter that controls the shape of the transfer function. Comment: RevTeX, 17 pages, 8 figures, to appear in Phys. Rev.
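
    A minimal sketch of the on-line AdaTron update described above, assuming a reversed-wedge form sgn(v(a - |v|)) for the non-monotonic teacher; the parameter values, Gaussian example distribution, and sampled error estimate are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta, a, steps = 1000, 1.0, 1.0, 50000   # dimension, learning rate, wedge width

B = rng.normal(size=N)
B /= np.linalg.norm(B)                     # teacher couplings, unit norm
J = rng.normal(size=N)                     # student couplings

def teacher(v):
    # Reversed-wedge output: sgn(v) for |v| < a, -sgn(v) for |v| > a;
    # reduces to the usual monotonic sgn(v) as a -> infinity.
    return np.sign(v * (a - np.abs(v)))

for _ in range(steps):
    x = rng.normal(size=N)                 # random example, |x|^2 ~ N
    u = J @ x                              # student local field
    y = teacher(B @ x)                     # teacher label
    if u * y <= 0:                         # AdaTron: update on errors only,
        J -= (eta / N) * u * x             # with strength proportional to |u|

X = rng.normal(size=(5000, N))
eg = np.mean(np.sign(X @ J) != teacher(X @ B))
print(f"estimated generalization error: {eg:.3f}")
```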

    $b \to s \gamma$ Decay and Right-handed Top-bottom Charged Current

    We introduce an anomalous top quark coupling (a right-handed current) into the Standard Model Lagrangian. On this basis, a more complete calculation of $b \to s\gamma$ decay is given, including leading-log QCD corrections from $m_{top}$ down to $M_W$ in addition to the corrections from $M_W$ down to $m_b$. The inclusive decay rate is found to be suppressed compared with the case without QCD running from $m_t$ to $M_W$, except for small values of $|f_R^{tb}|$; e.g., for $f_R^{tb}=-0.08$ it is only $1/10$ of the value given before. As $|f_R^{tb}|$ becomes smaller, this contribution turns into an enhancement, as in the Standard Model case. From the recent CLEO Collaboration measurement, strict restrictions on the parameters of this top-bottom quark coupling are obtained. Comment: 20 pages, 2 figures (ps file uuencoded)
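
    The leading-log QCD running invoked above can be illustrated schematically. The sketch below runs the one-loop strong coupling and applies only the dominant mixing of the magnetic operators $O_7$ and $O_8$ between $M_W$ and $m_b$; the complete leading-log result adds a further additive term from mixing with the four-quark operators, and the input Wilson coefficients and masses are illustrative, not the paper's values.

```python
import numpy as np

def alpha_s(mu, alpha_s_mz=0.118, mz=91.19, nf=5):
    """One-loop running of the strong coupling from M_Z down to mu (GeV)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_s_mz / (1.0 + b0 * alpha_s_mz / (2.0 * np.pi) * np.log(mu / mz))

mw, mb = 80.4, 4.8                       # GeV, illustrative inputs
eta = alpha_s(mw) / alpha_s(mb)          # ratio governing the leading-log running

def c7_eff_partial(c7_mw, c8_mw):
    # Dominant LL mixing of the magnetic operators between M_W and m_b. The
    # complete LL expression adds a term sum_i h_i eta^{a_i} from mixing with
    # the four-quark operators, omitted in this sketch.
    return eta**(16 / 23) * c7_mw + (8 / 3) * (eta**(14 / 23) - eta**(16 / 23)) * c8_mw

print(f"eta = {eta:.3f}, partial C7_eff(m_b) = {c7_eff_partial(-0.19, -0.10):.3f}")
```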

    Generalizing with perceptrons in case of structured phase- and pattern-spaces

    We investigate the influence of different kinds of structure on the learning behaviour of a perceptron performing a classification task defined by a teacher rule. The underlying pattern distribution is permitted to have spatial correlations, and the prior distribution for the teacher coupling vectors itself is assumed to be nonuniform. Thus classification tasks of quite different difficulty are included. As learning algorithms we discuss Hebbian learning, Gibbs learning, and Bayesian learning with different priors, using methods from statistics and the replica formalism. We find that the Hebb rule is quite sensitive to the structure of the actual learning problem, failing asymptotically in most cases. By contrast, the behaviour of the more sophisticated Gibbs and Bayes learning methods is influenced by the spatial correlations only in an intermediate regime of $\alpha$, where $\alpha$ specifies the size of the training set. For the Bayesian case we show how enhanced prior knowledge improves the performance. Comment: LaTeX, 32 pages with eps figures, accepted by J Phys
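
    As a concrete illustration of the Hebb rule's sensitivity to pattern structure, here is a minimal sketch assuming spatially correlated Gaussian patterns with an AR(1)-type covariance $C_{ij} = \rho^{|i-j|}$; the covariance choice and all sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 2000                          # dimension and training-set size (alpha = P/N)

# Spatially correlated patterns via an AR(1)-type covariance C_ij = rho^{|i-j|}
rho = 0.5
idx = np.arange(N)
L = np.linalg.cholesky(rho ** np.abs(np.subtract.outer(idx, idx)))
X = rng.normal(size=(P, N)) @ L.T

B = rng.normal(size=N)                    # teacher couplings
y = np.sign(X @ B)                        # teacher labels

J = (y[:, None] * X).sum(axis=0)          # Hebb rule: J proportional to sum of y_mu x_mu

X_test = rng.normal(size=(4000, N)) @ L.T
eg = np.mean(np.sign(X_test @ J) != np.sign(X_test @ B))
print(f"Hebb generalization error at alpha = {P / N:.1f}: {eg:.3f}")
```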

    Aircraft study of the impact of lake-breeze circulations on trace gases and particles during BAQS-Met 2007

    Highly time-resolved aircraft data, concurrent surface measurements, and air quality model simulations were explored to diagnose the processes influencing aerosol chemistry under the influence of lake-breeze circulations in a polluted region of southwestern Ontario, Canada. The analysis was based upon horizontal aircraft transects conducted at multiple altitudes across an entire lake-breeze circulation. Air mass boundaries due to lake-breeze fronts were identified in the aircraft meteorological and chemical data and were consistent with the frontal locations determined from surface analyses. Observations and modelling support the interpretation of a lake-breeze circulation in which pollutants were lofted at a lake-breeze front, transported in the synoptic flow, caught in a downdraft over the lake, and then confined by onshore flow. The detailed analysis led to the development of conceptual models that summarize the complex 3-D circulation patterns and their interaction with the synoptic flow. The identified air mass boundaries, the interpretation of the lake-breeze circulation, and the air-parcel circulation time within it (3.0 to 5.0 h) enabled formation rates of organic aerosol (OA/ΔCO) and SO₄²⁻ to be determined. The formation rate for OA (relative to excess CO in ppmv) was found to be 11.6–19.4 μg m⁻³ ppmv⁻¹ h⁻¹, and the SO₄²⁻ formation rate was 5.0–8.8% h⁻¹. These rates are enhanced relative to regional background rates, implying that lake-breeze circulations play an important dynamical role in the formation of SO₄²⁻ and secondary organic aerosol. The presence of cumulus clouds associated with the lake-breeze fronts suggests that these enhancements could be due to cloud processes. Additionally, the effective confinement of pollutants along the shoreline may have limited pollutant dilution, leading to elevated oxidant concentrations.
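
    The formation rates quoted above follow from simple arithmetic: the change in a quantity across the circulation divided by the air-parcel circulation time. A hedged sketch, where the OA/ΔCO increment is chosen only to be consistent with the quoted range and the SO₄²⁻ growth fraction is hypothetical:

```python
# Arithmetic behind a formation rate: change across the circulation divided by
# the circulation time. The delta values below are hypothetical; the OA/dCO
# increment is merely chosen to be consistent with the quoted 11.6-19.4 range.
circulation_time_h = (3.0, 5.0)      # air-parcel circulation time from the study

delta_oa_per_co = 58.1               # hypothetical increase in OA/dCO, ug m^-3 ppmv^-1
oa_rates = [delta_oa_per_co / t for t in circulation_time_h]
print(f"OA formation rate: {min(oa_rates):.1f}-{max(oa_rates):.1f} ug m^-3 ppmv^-1 h^-1")

so4_growth_frac = 0.25               # hypothetical fractional SO4 increase over the loop
so4_rates = [100 * so4_growth_frac / t for t in circulation_time_h]
print(f"SO4 formation rate: {min(so4_rates):.1f}-{max(so4_rates):.1f} % h^-1")
```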

    Order statistics of the trapping problem

    When a large number N of independent diffusing particles are placed upon a site of a d-dimensional Euclidean lattice randomly occupied by a concentration c of traps, what is the m-th moment of the time t_{j,N} elapsed until the first j are trapped? An exact answer is given in terms of the probability Phi_M(t) that no particle of an initial set of M = N, N-1, ..., N-j particles is trapped by time t. The Rosenstock approximation is used to evaluate Phi_M(t), and it is found that for a large range of trap concentrations the m-th moment of t_{j,N} goes as x^{-m} and its variance as x^{-2}, x being ln^{2/d}(1-c) ln N. A rigorous asymptotic expression (dominant plus two corrective terms) is given for the one-dimensional lattice. Comment: 11 pages, 7 figures, to be published in Phys. Rev.
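
    A direct Monte Carlo estimate of the order statistics t_{j,N} is straightforward to sketch. The one-dimensional simulation below places traps with concentration c, releases N walkers from a common trap-free origin, and records the sorted trapping times; the lattice size, cutoff time, and boundary handling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def trapping_times(N=100, c=0.01, L=10001, t_max=100000):
    """Release N independent 1D walkers from a common trap-free origin on a
    lattice with trap concentration c; return sorted trapping times, i.e. the
    order statistics t_{1,N} <= t_{2,N} <= ... (inf for survivors)."""
    traps = rng.random(L) < c
    origin = L // 2
    traps[origin] = False
    pos = np.full(N, origin)
    alive = np.ones(N, dtype=bool)
    times = np.full(N, np.inf)
    for t in range(1, t_max + 1):
        steps = rng.choice((-1, 1), size=alive.sum())
        pos[alive] = np.clip(pos[alive] + steps, 0, L - 1)
        hit = alive & traps[pos]           # walkers landing on a trap this step
        times[hit] = t
        alive &= ~hit
        if not alive.any():
            break
    return np.sort(times)

t_sorted = trapping_times()
print("t_{1,N}, t_{2,N}, t_{3,N}:", t_sorted[:3])
```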

    Trans-Pacific Transport of Saharan Dust to Western North America: A Case Study

    The first documented case of long-range transport of Saharan dust over a pathway spanning Asia and the Pacific to western North America is described. Crustal material generated by North African dust storms during the period 28 February - 3 March 2005 reached western Canada on 13-14 March 2005 and was observed by lidar and sunphotometer in the Vancouver region and by high-altitude aerosol instrumentation at Whistler Peak. Global chemical models (GEOS-CHEM and NRL NAAPS) confirm the transport pathway and suggest that source attribution was simplified in this case by the distinct, and somewhat unusual, lack of dust activity over Eurasia (Gobi and Takla Makan deserts) at the time. Over western North America the dust layer, although subsiding close to the boundary layer, did not appear to contribute to boundary-layer particulate matter concentrations. Furthermore, sunphotometer observations (and associated inversion products) suggest that the dust layer had only a subtle optical impact (aerosol optical thickness τ_a500 and Ångström exponent α_440-870 were 0.1 and 1.2, respectively) and was dominated by fine particulate matter (modes in aerodynamic diameter at 0.3 and 2.5 μm). High-altitude observations at Whistler, BC, confirm the crustal origin of the layer (rich in Ca²⁺ ions) and the bimodal size distribution. Although a weak event compared to the trans-Pacific Asian dust events of 1998 and 2001, this novel case highlights the possibility that Saharan sources may contribute episodically to the aerosol burden in western North America.
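
    For reference, the Ångström exponent quoted above is defined through the power-law dependence of optical thickness on wavelength, τ(λ) ∝ λ^(−α). A minimal sketch, with the two optical thicknesses hypothetical and chosen only to be consistent with the abstract's τ ≈ 0.1 and α ≈ 1.2:

```python
import numpy as np

def angstrom_exponent(tau_1, tau_2, lambda_1, lambda_2):
    """Angstrom exponent from tau(lambda) ~ lambda^(-alpha):
    alpha = -ln(tau_1/tau_2) / ln(lambda_1/lambda_2)."""
    return -np.log(tau_1 / tau_2) / np.log(lambda_1 / lambda_2)

# Hypothetical pair of optical thicknesses at 440 nm and 870 nm
print(angstrom_exponent(0.12, 0.054, 440.0, 870.0))   # ~1.17
```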

    Diffusion with random distribution of static traps

    The random walk problem is studied in two and three dimensions in the presence of a random distribution of static traps. An efficient Monte Carlo method, based on a mapping onto a polymer model, is used to measure the survival probability P(c,t) as a function of the trap concentration c and the time t. Theoretical arguments, based on earlier work of Donsker and Varadhan and of Rosenstock, are presented for why in two dimensions one expects a data collapse if -ln[P(c,t)]/ln(t) is plotted as a function of (lambda t)^{1/2}/ln(t) (with lambda = -ln(1-c)), whereas in three dimensions one expects a data collapse if -t^{-1/3} ln[P(c,t)] is plotted as a function of t^{2/3} lambda. These arguments are supported by the Monte Carlo results. Both data collapses show a clear crossover from the early-time Rosenstock behavior to Donsker-Varadhan behavior at long times. Comment: 4 pages, 6 figures
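
    The two data collapses described above amount to plotting transformed variables. A small helper sketch computing the scaling variables in each dimension, with placeholder survival data standing in for the Monte Carlo output:

```python
import numpy as np

def collapse_vars_2d(c, t, P):
    """2D collapse: y = -ln P / ln t versus x = (lambda t)^{1/2} / ln t,
    with lambda = -ln(1 - c)."""
    lam = -np.log(1 - c)
    return np.sqrt(lam * t) / np.log(t), -np.log(P) / np.log(t)

def collapse_vars_3d(c, t, P):
    """3D collapse: y = -t^{-1/3} ln P versus x = t^{2/3} lambda."""
    lam = -np.log(1 - c)
    return t**(2 / 3) * lam, -np.log(P) / t**(1 / 3)

# Placeholder survival data; curves for different c should fall on one master
# curve when plotted as y versus x.
t = np.array([1e2, 1e3, 1e4])
P = np.array([0.8, 0.3, 0.01])
x, y = collapse_vars_2d(0.05, t, P)
print(x, y)
```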

    Storage capacity of a constructive learning algorithm

    Upper and lower bounds for the typical storage capacity of a constructive algorithm, the Tilinglike Learning Algorithm for the Parity Machine [M. Biehl and M. Opper, Phys. Rev. A {\bf 44} 6888 (1991)], are determined in the asymptotic limit of large training set sizes. The properties of a perceptron with threshold, learning a training set of patterns having a biased distribution of targets, needed as an intermediate step in the capacity calculation, are determined analytically. The lower bound for the capacity, determined with a cavity method, is proportional to the number of hidden units. The upper bound, obtained with the hypothesis of replica symmetry, is close to the one predicted by Mitchison and Durbin [Biol. Cyber. {\bf 60} 345 (1989)]. Comment: 13 pages, 1 figure
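
    The constructive idea behind a tiling-like parity-machine algorithm can be sketched compactly: each new hidden unit is trained on targets y_mu * s_mu, where s_mu is the current parity output, so that learning them exactly would flip precisely the current errors. The plain perceptron used as a sub-learner below is a stand-in, not the algorithm analysed in the paper, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def perceptron(X, y, epochs=200, lr=0.1):
    """Plain perceptron sub-learner, a stand-in for the algorithm's
    more refined training of each hidden unit."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_mu, y_mu in zip(X, y):
            if y_mu * (w @ x_mu) <= 0:
                w += lr * y_mu * x_mu
    return w

def tiling_like_parity(X, y, max_units=10):
    """Grow a parity machine unit by unit. Each new unit gets targets
    y_mu * s_mu, where s_mu is the current parity output, so learning
    them exactly would flip precisely the current errors."""
    weights, s = [], np.ones(len(y))
    for _ in range(max_units):
        if np.all(s == y):
            break
        w = perceptron(X, y * s)
        weights.append(w)
        s = s * np.sign(X @ w)
    return weights

N, P = 50, 100
X = rng.choice((-1.0, 1.0), size=(P, N))
y = rng.choice((-1.0, 1.0), size=P)
ws = tiling_like_parity(X, y)
acc = np.mean(np.prod(np.sign(X @ np.array(ws).T), axis=1) == y)
print(f"{len(ws)} hidden units, training accuracy {acc:.2f}")
```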

    Role of beam polarization in the determination of $WW\gamma$ and $WWZ$ couplings from $e^+e^- \to W^+W^-$

    We evaluate the constraints on anomalous trilinear gauge-boson couplings that can be obtained from the study of electron-positron annihilation into WW pairs at a facility with either the electron beam longitudinally polarized or both electron and positron beams transversely polarized. The energy ranges considered in the analysis are the ones relevant to the Next Linear Collider and to LEP 200. We discuss the possibilities of a model-independent analysis of the general $CP$-conserving anomalous effective Lagrangian, as well as its restriction to specific models with a reduced number of independent couplings. The combination of observables with initial- and final-state polarizations allows the different couplings to be constrained separately and the corresponding numerical bounds to be improved. Comment: 24 pages, LaTeX, 9 figures (available on request from the authors)
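
    The gain from combining observables can be illustrated with a toy linear model: if two couplings enter two observables through different linear combinations (e.g. measured with different beam polarizations), a standard least-squares inversion yields separate bounds on each. The sensitivity matrix and measurement errors below are hypothetical.

```python
import numpy as np

# Toy model: two couplings g enter two observables linearly, O = A g. One
# observable alone bounds only a combination of couplings; adding a second,
# independently measured one (different beam polarization) inverts to
# separate bounds. A and sigma are hypothetical.
A = np.array([[1.0, 0.8],        # sensitivities, unpolarized observable
              [1.0, -0.5]])      # sensitivities, polarized observable
sigma = np.array([0.02, 0.03])   # measurement errors on the observables

W = np.diag(1.0 / sigma**2)              # least-squares weights
cov = np.linalg.inv(A.T @ W @ A)         # covariance of the fitted couplings
print("1-sigma bounds on the two couplings:", np.sqrt(np.diag(cov)))
```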

    $b\to s\gamma$ Constraints on the Minimal Supergravity Model with Large $\tan\beta$

    In the minimal supergravity model (mSUGRA), as the parameter $\tan\beta$ increases, the charged Higgs boson and light bottom squark masses decrease, which can potentially increase the contributions from $tH^\pm$, $\tilde g\tilde b_j$ and $\tilde Z_i\tilde b_j$ loops in the decay $b\to s\gamma$. We update a previous QCD-improved $b\to s\gamma$ decay calculation to include in addition the effects of gluino and neutralino loops. We find that in the mSUGRA model, loops involving charginos also increase, and dominate over the $tW$, $tH^\pm$, $\tilde g\tilde q$ and $\tilde Z_i\tilde q$ contributions for $\tan\beta \gtrsim 5$-$10$. For large values of $\tan\beta \sim 35$ we find that most of the parameter space of the mSUGRA model for $\mu < 0$ is ruled out by too large a value of the branching ratio $B(b\to s\gamma)$. For $\mu > 0$ and large $\tan\beta$, most of the parameter space is allowed, although the regions with the least fine-tuning (low $m_0$ and $m_{1/2}$) are ruled out by too low a value of $B(b\to s\gamma)$. We compare the constraints from $b\to s\gamma$ with constraints from the neutralino relic density and with expectations for sparticle discovery at LEP2 and the Fermilab Tevatron $p\bar p$ collider. Finally, we show that non-universal GUT-scale soft-breaking squark mass terms can enhance gluino loop contributions to the $b\to s\gamma$ decay rate even if these terms are diagonal. Comment: 14-page REVTEX file plus 6 PS figures
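
    The kind of exclusion quoted above comes from scanning the $(m_0, m_{1/2})$ plane and comparing a computed branching ratio against an experimental window. The skeleton below uses a hypothetical placeholder for the branching-ratio calculation and an illustrative allowed window; only the scan-and-compare logic is the point.

```python
import numpy as np

# Scan-and-compare skeleton; b_to_s_gamma is a hypothetical stand-in for the
# full QCD-improved SUSY loop calculation, and the window is illustrative.
B_LO, B_HI = 1.0e-4, 4.2e-4

def b_to_s_gamma(m0, m_half, tan_beta=35.0, sign_mu=-1):
    """Toy branching ratio: SUSY loops grow with tan(beta) and decouple with
    heavier scalars. Not the paper's calculation."""
    sm_amp = 3.2e-4
    susy = sign_mu * tan_beta * 20.0 / (m0 + 2.0 * m_half)   # toy scaling
    return sm_amp * (1.0 + susy) ** 2

for m0 in np.linspace(100, 1000, 4):
    for m_half in np.linspace(100, 500, 3):
        br = b_to_s_gamma(m0, m_half)
        status = "allowed" if B_LO <= br <= B_HI else "excluded"
        print(f"m0={m0:6.0f}  m1/2={m_half:5.0f}  B={br:.2e}  {status}")
```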