    Probing the Top-Quark Electric Dipole Moment at a Photon Linear Collider

    We probe the top-quark electric dipole moment (EDM) in top-quark pair production via photon-photon fusion at a photon linear collider. We show how linearly-polarized photon beams can be used to extract information on the top-quark EDM without the use of complicated angular correlations of top-quark decay products. If the luminosity of the laser back-scattered photon-photon collisions is comparable to that of the $e^+e^-$ collisions, then the measurement of the top-quark EDM obtained by counting top-quark-production events in photon fusion can be as accurate as the measurement obtained by studying the $t\bar{t}$ decay correlations in $e^+e^-$ collisions using a perfect detector. Comment: Latex, 11 pages, 1 figure (not included). One compressed postscript file of the paper available at ftp://ftp.kek.jp/kek/preprints/TH/TH-443/kekth443.ps.g
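
    For reference, the top-quark EDM enters through the conventional dimension-five operator below; this is the textbook form, and the normalization of the form factor $d_t^\gamma$ is our notational choice, not taken from the paper.

```latex
% Conventional top-quark EDM operator coupling to the photon field
% strength F_{\mu\nu}; d_t^\gamma is the form factor that event counting
% with polarized beams would bound.
\begin{equation}
  \mathcal{L}_{\mathrm{EDM}}
    = -\frac{i}{2}\, d_t^{\gamma}\,
      \bar{t}\, \sigma^{\mu\nu} \gamma_5\, t \, F_{\mu\nu}
\end{equation}
```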

    A polynomial training algorithm for calculating perceptrons of optimal stability

    Recomi (REpeated COrrelation Matrix Inversion) is a polynomially fast algorithm for searching for optimally stable solutions of the perceptron learning problem. For random unbiased and biased patterns it is shown that the algorithm is able to find optimal solutions, if any exist, in at worst O(N^4) floating point operations. Even beyond the critical storage capacity alpha_c the algorithm is able to find locally stable solutions (with negative stability) at the same speed. There are no divergent time scales in the learning process. A full proof of convergence cannot yet be given; only major constituents of a proof are shown. Comment: 11 pages, Latex, 4 EPS figures
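
    For orientation, the quantity an optimal-stability algorithm such as Recomi maximizes is the minimal pattern stability; a minimal NumPy sketch of evaluating it follows (this illustrates the objective only, not the repeated correlation-matrix-inversion iteration itself, whose details are in the paper).

```python
import numpy as np

def stabilities(w, patterns, labels):
    """Per-pattern stabilities Delta_mu = y_mu (w . x_mu) / ||w||.

    A solution of optimal stability maximizes the smallest Delta_mu;
    negative values mark patterns the weight vector misclassifies.
    """
    return labels * (patterns @ w) / np.linalg.norm(w)

# Toy usage: N = 50 inputs, P = 100 random unbiased +/-1 patterns.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(100, 50))
y = rng.choice([-1.0, 1.0], size=100)
w = X.T @ y                        # Hebbian starting point
print(stabilities(w, X, y).min())  # the objective an optimal-stability search raises
```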

    On-Line AdaTron Learning of Unlearnable Rules

    We study the on-line AdaTron learning of linearly non-separable rules by a simple perceptron. Training examples are provided by a perceptron with a non-monotonic transfer function which reduces to the usual monotonic relation in a certain limit. We find that, although the on-line AdaTron learning is a powerful algorithm for the learnable rule, it does not give the best possible generalization error for unlearnable problems. Optimization of the learning rate is shown to greatly improve the performance of the AdaTron algorithm, leading to the best possible generalization error for a wide range of the parameter which controls the shape of the transfer function. Comment: RevTeX 17 pages, 8 figures, to appear in Phys. Rev.
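
    A minimal sketch of the standard on-line AdaTron update, in which the correction on an error is proportional to the student's own local field; the paper's non-monotonic teacher and optimized learning-rate schedule are not reproduced here, and the toy monotonic teacher below is our assumption.

```python
import numpy as np

def adatron_step(w, x, teacher_label, eta=1.0):
    """One on-line AdaTron update for a simple perceptron student.

    On an error the correction is proportional to the student's own local
    field u = w.x, so with eta = 1 the updated field (w').x is exactly zero.
    """
    u = w @ x
    if teacher_label * u <= 0.0:           # student disagrees with teacher
        w = w - eta * (u / (x @ x)) * x    # Hebb-like step with amplitude |u|
    return w

# Toy usage: labels from a monotonic (learnable) teacher perceptron.
rng = np.random.default_rng(0)
N = 100
teacher = rng.standard_normal(N)
w = rng.standard_normal(N)
for _ in range(2000):
    x = rng.standard_normal(N)
    w = adatron_step(w, x, np.sign(teacher @ x))
print("overlap:", (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher)))
```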

    SYNAPSE-1: A High-Speed General Purpose Parallel Neurocomputer System

    This paper describes the general-purpose neurocomputer SYNAPSE-1, which has been developed in cooperation between Siemens Munich and the University of Mannheim. This system contains one of the most powerful processors available for neural algorithms, the neuro signal processor MA16. The prototype system executes a test algorithm 8,000 times as fast as a Sparc-2 workstation. This processing speed has been achieved by using a system architecture which is optimally adapted to the general structure of neural algorithms: a systolic array of MA16 processors embedded in a multiprocessor system of general-purpose microprocessors.
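
    The "general structure of neural algorithms" that the architecture exploits is essentially dense matrix arithmetic; the toy sketch below simulates, in software, the tiled dataflow a systolic array pipelines in hardware. Nothing in it is specific to the MA16.

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    """Tiled matrix product: the dataflow a systolic array pipelines.

    Each (i, j) output tile accumulates partial products as operand tiles
    stream past it, the way an array of processing cells sees the data.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):   # partial products stream through the cell
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

# Sanity check against the library product.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((8, 12)), rng.standard_normal((12, 8))
assert np.allclose(tiled_matmul(A, B), A @ B)
```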

    $b \to s\gamma$ Decay and Right-handed Top-bottom Charged Current

    We introduce an anomalous top-quark coupling (right-handed current) into the Standard Model Lagrangian. On this basis, a more complete calculation of $b \to s\gamma$ decay is given, including leading-log QCD corrections from $m_{top}$ to $M_W$ in addition to corrections from $M_W$ to $m_b$. The inclusive decay rate is found to be suppressed compared with the case without QCD running from $m_t$ to $M_W$, except for small values of $|f_R^{tb}|$; e.g., when $f_R^{tb} = -0.08$, it is only $1/10$ of the value given before. As $|f_R^{tb}|$ becomes smaller, this contribution becomes an enhancement, as in the Standard Model case. From the recent CLEO Collaboration experiment, strict restrictions on the parameters of this top-bottom quark coupling are found. Comment: 20 Pages, 2 figures (ps file, uuencoded)
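
    The anomalous right-handed coupling is conventionally parametrized as below; this is the standard form of such a term, and the overall normalization is our assumption rather than a quotation from the paper.

```latex
% Right-handed top-bottom charged current added to the SM Lagrangian;
% f_R^{tb} is the anomalous coupling constrained by b -> s gamma.
\begin{equation}
  \mathcal{L}_{\mathrm{RH}}
    = \frac{g}{\sqrt{2}}\, f_R^{tb}\,
      \bar{t}\, \gamma^{\mu} P_R\, b \, W_{\mu}^{+} + \mathrm{h.c.},
  \qquad P_R = \tfrac{1}{2}(1 + \gamma_5)
\end{equation}
```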

    Aircraft study of the impact of lake-breeze circulations on trace gases and particles during BAQS-Met 2007

    Highly time-resolved aircraft data, concurrent surface measurements, and air quality model simulations were explored to diagnose the processes influencing aerosol chemistry under the influence of lake-breeze circulations in a polluted region of southwestern Ontario, Canada. The analysis was based upon horizontal aircraft transects conducted at multiple altitudes across an entire lake-breeze circulation. Air mass boundaries due to lake-breeze fronts were identified in the aircraft meteorological and chemical data, and were consistent with the frontal locations determined from surface analyses. Observations and modelling support the interpretation of a lake-breeze circulation in which pollutants were lofted at a lake-breeze front, transported in the synoptic flow, caught in a downdraft over the lake, and then confined by onshore flow. The detailed analysis led to the development of conceptual models that summarize the complex 3-D circulation patterns and their interaction with the synoptic flow. The identified air mass boundaries, the interpretation of the lake-breeze circulation, and the air parcel circulation time in the lake-breeze circulation (3.0 to 5.0 h) enabled formation rates of organic aerosol (OA/ΔCO) and SO₄²⁻ to be determined. The formation rate for OA (relative to excess CO in ppmv) was found to be 11.6–19.4 μg m⁻³ ppmv⁻¹ h⁻¹, and the SO₄²⁻ formation rate was 5.0–8.8% h⁻¹. These formation rates are enhanced relative to regional background rates, implying that lake-breeze circulations play an important role in the formation of SO₄²⁻ and secondary organic aerosol. The presence of cumulus clouds associated with the lake-breeze fronts suggests that these enhancements could be due to cloud processes. Additionally, the effective confinement of pollutants along the shoreline may have limited pollutant dilution, leading to elevated oxidant concentrations.
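
    As a quick plausibility check on the quoted numbers, the few lines below propagate the SO₄²⁻ formation rate over the 3.0 to 5.0 h circulation time; the simple exponential rate law is our illustrative assumption, not the paper's analysis.

```python
import numpy as np

# Quoted SO4^2- formation rates (fraction per hour) and circulation times (h).
rates = [0.050, 0.088]
times = [3.0, 5.0]

# Fractional SO4^2- increase if the rate acts exponentially over one circulation.
for r in rates:
    for t in times:
        growth = np.exp(r * t) - 1.0
        print(f"rate {r:.3f}/h over {t:.1f} h -> +{100 * growth:.0f}% SO4")
```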

    Diffusion with random distribution of static traps

    The random walk problem is studied in two and three dimensions in the presence of a random distribution of static traps. An efficient Monte Carlo method, based on a mapping onto a polymer model, is used to measure the survival probability P(c,t) as a function of the trap concentration c and the time t. Theoretical arguments are presented, based on earlier work of Donsker and Varadhan and of Rosenstock, for why in two dimensions one expects a data collapse if -ln[P(c,t)]/ln(t) is plotted as a function of (lambda t)^{1/2}/ln(t) (with lambda = -ln(1-c)), whereas in three dimensions one expects a data collapse if -t^{-1/3} ln[P(c,t)] is plotted as a function of t^{2/3} lambda. These arguments are supported by the Monte Carlo results. Both data collapses show a clear crossover from the early-time Rosenstock behavior to Donsker-Varadhan behavior at long times. Comment: 4 pages, 6 figures
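
    The survival probability itself can be estimated with a direct, brute-force Monte Carlo, sketched below on a 2-D lattice; note the paper uses a far more efficient method based on the polymer mapping, which this sketch does not reproduce.

```python
import numpy as np

STEPS = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def survival(c, t_max, samples=2_000, L=101, seed=0):
    """Estimate P(c, t): probability a walker on an L x L periodic lattice
    with trap concentration c is still alive after t steps, averaged over
    trap configurations and walks (naive direct sampling)."""
    rng = np.random.default_rng(seed)
    alive = np.zeros(t_max)
    for _ in range(samples):
        traps = rng.random((L, L)) < c   # fresh static trap configuration
        x = y = L // 2
        traps[x, y] = False              # release the walker on a trap-free site
        for t in range(t_max):
            dx, dy = STEPS[rng.integers(4)]
            x, y = (x + dx) % L, (y + dy) % L
            if traps[x, y]:
                break                    # absorbed by a static trap
            alive[t] += 1
    return alive / samples

print(survival(c=0.05, t_max=100)[-1])   # P(0.05, 100) on a toy run
```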

    Generalizing with perceptrons in case of structured phase- and pattern-spaces

    We investigate the influence of different kinds of structure on the learning behaviour of a perceptron performing a classification task defined by a teacher rule. The underlying pattern distribution is permitted to have spatial correlations, and the prior distribution for the teacher coupling vectors is itself assumed to be nonuniform, so classification tasks of quite different difficulty are included. As learning algorithms we discuss Hebbian learning, Gibbs learning, and Bayesian learning with different priors, using methods from statistics and the replica formalism. We find that the Hebb rule is quite sensitive to the structure of the actual learning problem, failing asymptotically in most cases. In contrast, the behaviour of the more sophisticated Gibbs and Bayes learning methods is influenced by the spatial correlations only in an intermediate regime of $\alpha$, where $\alpha$ specifies the size of the training set. For the Bayesian case we show how enhanced prior knowledge improves the performance. Comment: LaTeX, 32 pages with eps-figs, accepted by J Phys
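
    For the simplest of the three rules, the sketch below runs Hebb learning in the standard teacher-student setup with isotropic patterns and reads off the generalization error from the teacher-student overlap via $\epsilon_g = \arccos(R)/\pi$; the structured pattern and prior distributions analysed in the paper are not modelled here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 2000                      # input dimension, training set size

teacher = rng.standard_normal(N)
X = rng.standard_normal((P, N))       # isotropic patterns: no spatial correlations
y = np.sign(X @ teacher)              # teacher-rule labels

w = (y[:, None] * X).sum(axis=0)      # Hebb rule: sum of label-weighted patterns

# Teacher-student overlap R determines the generalization error.
R = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
eps_g = np.arccos(R) / np.pi
print(f"alpha = {P / N:.1f}, overlap R = {R:.3f}, eps_g = {eps_g:.3f}")
```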

    Storage capacity of a constructive learning algorithm

    Upper and lower bounds for the typical storage capacity of a constructive algorithm, the Tilinglike Learning Algorithm for the Parity Machine [M. Biehl and M. Opper, Phys. Rev. A {\bf 44}, 6888 (1991)], are determined in the asymptotic limit of large training set sizes. The properties of a perceptron with threshold, learning a training set of patterns having a biased distribution of targets, needed as an intermediate step in the capacity calculation, are determined analytically. The lower bound for the capacity, determined with a cavity method, is proportional to the number of hidden units. The upper bound, obtained with the hypothesis of replica symmetry, is close to the one predicted by Mitchison and Durbin [Biol. Cybern. {\bf 60}, 345 (1989)]. Comment: 13 pages, 1 figure
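
    For reference, the parity machine whose capacity is bounded here computes the product of the signs of its K hidden-unit fields (standard definition):

```latex
% A K-hidden-unit parity machine: the output on input \xi is the parity
% (product of signs) of the K perceptron fields w_k . \xi.
\begin{equation}
  \sigma(\boldsymbol{\xi})
    = \prod_{k=1}^{K} \operatorname{sign}\!\bigl(\mathbf{w}_k \cdot \boldsymbol{\xi}\bigr)
\end{equation}
```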