
    Mind The Gap

    We discuss an optimisation criterion for the exact renormalisation group based on the inverse effective propagator, which displays a gap. We show that a simple extremisation of the gap stabilises the flow, leading to better convergence of approximate solutions towards the physical theory. This improves the reliability of truncations, which is most relevant for any high-precision computation. These ideas are closely linked to the removal of a spurious scheme dependence and a minimum sensitivity condition. The issue of predictive power and a link to the Polchinski RG are discussed as well. We illustrate our findings by computing critical exponents for the Ising universality class.
    Comment: 6 pages, talk presented at the 2nd Conference on Exact Renormalization Group (ERG2000), Rome, Italy, 18-22 Sep 200
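    The gap criterion the abstract refers to can be sketched in standard ERG notation; the formulas below follow the optimised cutoff commonly used in this context and are an assumption, not quoted from the talk:

```latex
% Sketch of the gap criterion (standard ERG notation assumed).
% Regulated inverse propagator at vanishing field, and its gap:
\[
  P_k(q^2) = q^2 + R_k(q^2), \qquad
  C_k \equiv \min_{q^2 \ge 0} P_k(q^2) > 0 .
\]
% Extremising (maximising) the gap over regulator schemes is solved,
% for example, by the optimised regulator
\[
  R_k^{\mathrm{opt}}(q^2) = \left(k^2 - q^2\right)\theta\!\left(k^2 - q^2\right)
  \;\Rightarrow\; P_k(q^2) = k^2 \ \ \text{for } q^2 \le k^2 ,
\]
% which keeps the regulated propagator flat (maximally gapped) for all
% momenta below the cutoff scale k.
```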

    Effect of Multiphase Radiation on Coal Combustion in a Pulverized Coal Jet Flame

    The accurate modeling of coal combustion requires detailed radiative heat transfer models for both gaseous combustion products and solid coal particles. A multiphase Monte Carlo ray tracing (MCRT) radiation solver is developed in this work to simulate a laboratory-scale pulverized coal flame. The MCRT solver considers radiative interactions between coal particles and three major combustion products (CO2, H2O, and CO). A line-by-line spectral database for the gas phase and a size-dependent nongray correlation for the solid phase are employed to account for the nongray effects. The flame structure is significantly altered by considering nongray radiation, and the lift-off height of the flame increases by approximately 35% compared to the simulation without radiation. Radiation is also found to affect the evolution of coal particles considerably, as it takes over as the dominant mode of heat transfer for medium-to-large coal particles downstream of the flame. To investigate the respective effects of spectral models for the gas and solid phases, a Planck-mean-based gray gas model and a size-independent gray particle model are applied in a frozen-field analysis of a steady-state snapshot of the flame. The gray gas approximation considerably underestimates the radiative source terms for both the gas phase and the solid phase. The gray coal approximation also leads to under-prediction of the particle emission and absorption, though the under-prediction is not as significant as that from the gray gas model. Finally, the effect of the spectral properties of ash on radiation is also investigated and found to be insignificant for the present target flame.
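    The gray emission source terms contrasted in the frozen-field analysis can be illustrated with textbook formulas: the Planck-mean gray-gas emission per unit volume, and the total emission of a monodisperse cloud of gray spherical particles. The numbers below are illustrative placeholders, not values from the paper:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def gas_emission(kappa_planck, T):
    # Gray-gas emission source per unit volume: 4 * kappa_P * sigma * T^4
    return 4.0 * kappa_planck * SIGMA * T**4

def particle_emission(d, eps, T, number_density):
    # Emission per unit volume of a monodisperse cloud of gray spheres:
    # each sphere radiates eps * sigma * T^4 over its surface area pi*d^2.
    return number_density * eps * math.pi * d**2 * SIGMA * T**4

# Illustrative numbers for a hot flame region (not from the paper).
q_gas = gas_emission(kappa_planck=0.3, T=1800.0)        # W/m^3
q_coal = particle_emission(d=60e-6, eps=0.85, T=1600.0,
                           number_density=1e9)          # W/m^3
print(f"gas emission  {q_gas:.3e} W/m^3")
print(f"coal emission {q_coal:.3e} W/m^3")
```

A nongray model would replace the single Planck-mean coefficient with a spectral (here, line-by-line) integration, which is where the gray approximations in the paper lose accuracy.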

    A Fast Integrated Planning and Control Framework for Autonomous Driving via Imitation Learning

    For safe and efficient planning and control in autonomous driving, we need a driving policy which can achieve desirable driving quality over a long-term horizon with guaranteed safety and feasibility. Optimization-based approaches, such as Model Predictive Control (MPC), can provide such optimal policies, but their computational complexity is generally unacceptable for real-time implementation. To address this problem, we propose a fast integrated planning and control framework that combines learning- and optimization-based approaches in a two-layer hierarchical structure. The first layer, defined as the "policy layer", is established by a neural network which learns the long-term optimal driving policy generated by MPC. The second layer, called the "execution layer", is a short-term optimization-based controller that tracks the reference trajectories given by the "policy layer" with guaranteed short-term safety and feasibility. Moreover, with efficient and highly representative features, a small neural network is sufficient in the "policy layer" to handle many complicated driving scenarios. This enables online imitation learning with Dataset Aggregation (DAgger), so that the performance of the "policy layer" can be improved rapidly and continuously online. Several example driving scenarios are demonstrated to verify the effectiveness and efficiency of the proposed framework.
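    The DAgger loop described above can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: a hypothetical 1-D "driving" state stands in for the feature vector, a scripted linear expert stands in for the MPC policy, and a one-parameter linear model stands in for the policy-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert(state):
    # Hypothetical expert (stand-in for MPC): steer the state toward zero.
    return -0.5 * state

class LinearPolicy:
    # Stand-in for the small policy-layer network: one learned gain.
    def __init__(self):
        self.w = 0.0
    def act(self, state):
        return self.w * state
    def fit(self, states, actions):
        # Least-squares fit on the aggregated dataset.
        s = np.asarray(states)
        a = np.asarray(actions)
        self.w = float(s @ a / (s @ s))

def rollout(policy, steps=20):
    # Run the learner's own policy and record the states it visits.
    state, visited = 1.0, []
    for _ in range(steps):
        visited.append(state)
        state = state + policy.act(state) + 0.01 * rng.normal()
    return visited

policy, data_s, data_a = LinearPolicy(), [], []
for _ in range(5):                  # DAgger iterations
    for s in rollout(policy):       # states visited under the current policy
        data_s.append(s)            # aggregate those states...
        data_a.append(expert(s))    # ...labelled by the expert
    policy.fit(data_s, data_a)      # retrain on the full aggregate

print(round(policy.w, 2))  # learned gain converges to the expert's -0.5
```

The key DAgger ingredient is that labels are queried on states the *learner* visits, so the training distribution tracks the policy actually deployed; online aggregation is what lets the policy layer keep improving during operation.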

    Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control

    Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks. However, the majority of autonomous RL algorithms require a large number of interactions with the environment, which may be impractical in many real-world applications, such as robotics; moreover, many practical systems have to obey limitations in the form of state space or control constraints. To reduce the number of system interactions while simultaneously handling constraints, we propose a model-based RL framework based on probabilistic Model Predictive Control (MPC). In particular, we propose to learn a probabilistic transition model using Gaussian Processes (GPs) to incorporate model uncertainty into long-term predictions, thereby reducing the impact of model errors. We then use MPC to find a control sequence that minimises the expected long-term cost. We provide theoretical guarantees for first-order optimality in the GP-based transition models with deterministic approximate inference for long-term planning. We demonstrate that our approach not only achieves state-of-the-art data efficiency but is also a principled way to do RL in constrained environments.
    Comment: Accepted at AISTATS 2018
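    The model-based loop above can be sketched in miniature. This is an assumption-laden toy, not the paper's method: a 1-D linear system stands in for the robot, the GP is reduced to its posterior mean with an RBF kernel, and random shooting replaces both the paper's deterministic approximate inference and its gradient-based MPC solver:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(x, u):
    # Hypothetical 1-D plant (unknown to the learner).
    return 0.9 * x + 0.2 * u

# --- GP regression (posterior mean only) on inputs z = (x, u) ---
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

Z = rng.uniform(-2, 2, size=(30, 2))      # training inputs (x, u)
y = true_dynamics(Z[:, 0], Z[:, 1])       # observed next states
K = rbf(Z, Z) + 1e-6 * np.eye(len(Z))     # kernel matrix + jitter
alpha = np.linalg.solve(K, y)

def gp_predict(x, u):
    z = np.array([[x, u]])
    return float((rbf(z, Z) @ alpha)[0])  # GP posterior mean

# --- MPC by random shooting over the learned model ---
def plan(x0, horizon=5, n_samples=200):
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        us = rng.uniform(-1, 1, horizon)
        x, cost = x0, 0.0
        for u in us:
            x = gp_predict(x, u)          # roll out the learned model
            cost += x**2 + 0.01 * u**2    # quadratic stage cost
        if cost < best_cost:
            best_cost, best_u = cost, us[0]
    return best_u                         # receding horizon: first action

x = 1.5
for _ in range(10):
    x = true_dynamics(x, plan(x))
print(f"final state x = {x:.3f}")  # MPC drives the state toward the origin
```

Data efficiency comes from planning against the learned model (here 30 transitions) instead of sampling the real system; the paper additionally propagates the GP's predictive uncertainty through the rollout, which this mean-only sketch omits.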