    The Stochastic Advance-Retreat Course: An Approach to Analyse Social-Economic Evolution

    The paper presents the basic theory and a conceptual model of the advance-retreat course, develops an analytic model of the stochastic advance-retreat course together with a method for solving it, and discusses how endogenous resistance and the growth of subject interests relate to the periodic vibration of an advance-retreat course. Several results are obtained: appropriately raising the risk-free interest rate favors stable growth of subject interests; high-speed growth of interests leads to a rapid build-up of resistance; and progress at an appropriate pace can yield lasting growth of interests and a return to a higher level of interests. Finally, an empirical study of the stochastic advance-retreat model on US GDP (chained) price index data shows that the model can describe the US economic development process over the past 65 years.
    Keywords: economic process; advance-retreat course; basic theory; analytic model
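
    The abstract does not give the model's equations, so the following is a purely illustrative sketch, not the paper's method: a toy stochastic process, simulated with an Euler-Maruyama step, in which resistance accumulates with the speed of interest growth and feeds back to slow the advance. The dynamics and all parameter names (a, b, c, mu, sigma) are hypothetical; the sketch merely reproduces the qualitative behaviour claimed above, i.e. fast growth builds resistance quickly and the system settles into a damped advance-retreat vibration around a path of lasting interest increase.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical toy dynamics, NOT the paper's model: resistance builds
        # with the speed of interest growth and damps the advance, giving a
        # damped periodic advance-retreat vibration (oscillatory when c^2 < 4ab).
        T, dt = 65.0, 0.01                  # horizon in "years", Euler step
        n = int(T / dt)
        a, b, c, mu, sigma = 1.0, 1.0, 0.2, 0.03, 0.01

        interest = np.empty(n)              # subject interests (index level)
        growth = np.empty(n)                # instantaneous growth rate
        resistance = np.empty(n)            # endogenous resistance
        interest[0], growth[0], resistance[0] = 1.0, 0.08, 0.0

        for t in range(1, n):
            dW = rng.normal(0.0, np.sqrt(dt))   # Brownian shock
            growth[t] = growth[t-1] + a * (mu - resistance[t-1]) * dt
            resistance[t] = resistance[t-1] + (b * growth[t-1] - c * resistance[t-1]) * dt
            interest[t] = interest[t-1] * (1.0 + growth[t-1] * dt + sigma * dW)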

    Some Remarks on the Final State Interactions in $B \to \pi K$ Decays

    Careful discussions are given of several points that arise in the study of B-decay final-state interactions, taking the $B^0 \to \pi^+ K^-$ process as an example. We point out that $\pi$-exchange rescatterings are not important, whereas for $D^*$ and $D^{**}$ exchanges, since the $B^0 \to D^+ D_s^-$ decay has a large branching ratio, their contributions may be large enough to enhance the $B \to \pi K$ branching ratio significantly. However, our estimates fail to predict a large enhancement.
    Comment: 5 pages, uses elsart.sty. The previous version was erroneous in explaining the "charm penguin" effects. No large enhancement to $B \to \pi K$ is found through $D^+ D_s^-$ intermediate states.

    Generalized Batch Normalization: Towards Accelerating Deep Neural Networks

    Utilizing recently introduced concepts from statistics and quantitative risk management, we present a general variant of Batch Normalization (BN) that offers accelerated convergence of neural network training compared to conventional BN. In general, we show that the mean and standard deviation are not always the most appropriate choices for the centering and scaling procedure within the BN transformation, particularly if a ReLU follows the normalization step. We present a Generalized Batch Normalization (GBN) transformation, which can utilize a variety of alternative deviation measures for scaling and alternative statistics for centering, choices that arise naturally from the theory of generalized deviation measures and risk theory in general. When used in conjunction with the ReLU non-linearity, the underlying risk theory suggests natural, arguably optimal choices for the deviation measure and statistic. Using the suggested deviation measure and statistic, we show experimentally that training is accelerated beyond what conventional BN achieves, often with an improved error rate as well. Overall, we propose a more flexible BN transformation supported by a complementary theoretical framework that can potentially guide design choices.
    Comment: accepted at AAAI-19
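
    The centering and scaling recipe can be made concrete with a small sketch. The following NumPy fragment is a minimal illustration, not the paper's implementation: the learnable affine parameters, running statistics, and the specific risk-theoretic deviation measures the authors recommend are omitted, and the median centering with mean-absolute-deviation scaling shown here is just one hypothetical instance of the generalized transform (conventional BN is recovered with mean centering and standard-deviation scaling).

        import numpy as np

        def generalized_batch_norm(x, center_stat, deviation_fn, eps=1e-5):
            # Normalize each feature of a batch with a pluggable centering
            # statistic and deviation measure (the core idea of GBN).
            c = center_stat(x, axis=0)      # centering statistic per feature
            d = deviation_fn(x - c)         # deviation of the centered batch
            return (x - c) / (d + eps)

        def std_dev(z):                     # conventional BN scaling
            return np.std(z, axis=0)

        def mean_abs_dev(z):                # one alternative deviation measure
            return np.mean(np.abs(z), axis=0)

        x = np.random.randn(128, 16)        # batch of 128 samples, 16 features
        bn_out = generalized_batch_norm(x, np.mean, std_dev)
        gbn_out = generalized_batch_norm(x, np.median, mean_abs_dev)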