1,012 research outputs found

    Senior Recital


    On the Steady Nature of Line-Driven Disk Winds: Application to Cataclysmic Variables

    We apply the semi-analytical analysis of the steady nature of line-driven winds presented in two earlier papers to disk winds driven by the flux distribution of a standard Shakura & Sunyaev (1973) disk for typical cataclysmic variable (CV) parameters. We find that the wind critical point tends to lie closer to the disk surface towards the inner disk regions. Our main conclusion, however, is that a line-driven wind arising from the steady flux distribution of a standard Shakura-Sunyaev disk capable of locally supplying the corresponding mass flow is itself steady. These results confirm the findings of an earlier paper that studied "simple" flux distributions, which are more readily analyzable than those treated here, and they are consistent with the steady outflow velocities observationally inferred for both CVs and quasi-stellar objects (QSOs). We find good agreement with the 2.5D CV disk wind models of Pereyra and collaborators. These results suggest that the likely scenario to account for the wind outflows commonly observed in CVs is the line-driven accretion disk wind, as suggested early on by Cordova & Mason (1982). For QSOs, they show that the line-driven accretion disk wind remains a promising scenario for the outflows detected in broad absorption line (BAL) QSOs, as suggested early on by Turnshek (1984) and analyzed in detail by Murray et al. (1995).

    Comment: 35 pages, 20 figures
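    For reference (not derived in the abstract itself), the locally emitted flux of a standard Shakura & Sunyaev (1973) disk that drives such a wind follows the familiar radial profile, with M the accretor mass, \dot{M} the accretion rate, and R_* the inner disk radius:

```latex
% Standard Shakura & Sunyaev (1973) radial flux profile (textbook form, not
% quoted from the paper); \sigma is the Stefan-Boltzmann constant.
F(r) \;=\; \sigma T_{\mathrm{eff}}^{4}(r)
      \;=\; \frac{3\,G M \dot{M}}{8\pi r^{3}}
      \left[\,1-\left(\frac{R_{*}}{r}\right)^{1/2}\right]
```

    At radii well outside R_*, F(r) falls off roughly as r^{-3}, so the driving flux is strongest toward the inner disk, the same region where the abstract finds the wind critical point closest to the disk surface.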

    APRIL: Active Preference-learning based Reinforcement Learning

    This paper focuses on reinforcement learning (RL) with limited prior knowledge. In the domain of swarm robotics, for instance, the expert can hardly design a reward function or demonstrate the target behavior, which rules out both standard RL and inverse reinforcement learning. Even with limited expertise, however, the human expert is often able to express preferences and rank the agent's demonstrations. Earlier work presented an iterative preference-based RL framework: expert preferences are exploited to learn an approximate policy return, thus enabling the agent to perform direct policy search. Iteratively, the agent selects a new candidate policy and demonstrates it; the expert ranks the new demonstration relative to the previous best one; the ranking feedback enables the agent to refine the approximate policy return; and the process is iterated. In this paper, preference-based reinforcement learning is combined with active ranking in order to decrease the number of ranking queries to the expert needed to yield a satisfactory policy. Experiments on the mountain car and cancer treatment testbeds show that a couple of dozen rankings suffice to learn a competent policy.
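    A minimal sketch of the iterative loop described above, assuming a linear utility over hand-crafted trajectory features and a simulated expert; names such as rollout_features and expert_prefers are illustrative stand-ins, not the paper's code:

```python
# Sketch of preference-based RL with active ranking, under the assumptions
# stated above; the "expert" is simulated by a hidden utility for the demo.
import numpy as np

rng = np.random.default_rng(0)
feature_dim = 5                          # trajectory feature size (assumed)
true_w = rng.normal(size=feature_dim)    # hidden expert utility (demo only)

def rollout_features(policy_params):
    """Stand-in for executing a policy and summarizing its trajectory as a
    feature vector; a real testbed (e.g. mountain car) would go here."""
    return np.tanh(policy_params) + 0.05 * rng.normal(size=feature_dim)

def expert_prefers(feat_new, feat_best):
    """Simulated expert: ranks the new demonstration against the best one."""
    return true_w @ feat_new > true_w @ feat_best

# Approximate policy return: linear utility fitted to ranking feedback.
w_hat = np.zeros(feature_dim)
best_params = rng.normal(size=feature_dim)
best_feat = rollout_features(best_params)
preferences = []                         # (winner_features, loser_features)

for query in range(25):                  # "a couple of dozen" ranking queries
    # Active ranking: among random candidate policies, pick the one whose
    # duel against the current best is most ambiguous under w_hat.
    candidates = [best_params + 0.5 * rng.normal(size=feature_dim)
                  for _ in range(20)]
    feats = [rollout_features(p) for p in candidates]
    scores = [abs(w_hat @ (f - best_feat)) for f in feats]
    i = int(np.argmin(scores))
    cand_params, cand_feat = candidates[i], feats[i]

    # Query the expert: new demonstration vs. previous best.
    if expert_prefers(cand_feat, best_feat):
        preferences.append((cand_feat, best_feat))
        best_params, best_feat = cand_params, cand_feat
    else:
        preferences.append((best_feat, cand_feat))

    # Refine the approximate policy return: perceptron-style updates enforcing
    # w_hat . winner >= w_hat . loser on all collected preferences.
    for winner, loser in preferences:
        if w_hat @ (winner - loser) <= 0:
            w_hat += winner - loser

print("estimated utility direction:",
      np.round(w_hat / (np.linalg.norm(w_hat) + 1e-12), 2))
```

    The "active" step here spends the ranking budget on the candidate whose comparison with the current best is most uncertain under the learned utility, which is one simple way to reduce the number of expert queries.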

    Arabidopsis Root-Type Ferredoxin:NADP(H) Oxidoreductase 2 is Involved in Detoxification of Nitrite in Roots

    This work was supported by RIKEN [Special Postdoctoral Researchers (SPDR) fellowship to T.H.].