
    Flux Modulation from the Rossby Wave Instability in microquasars accretion disks: toward a HFQPO model

    Context. There has been a long string of efforts to understand the source of the variability observed in microquasars, especially concerning the elusive High-Frequency Quasi-Periodic Oscillation (HFQPO). These oscillations are among the fastest phenomena that affect matter in the vicinity of stellar black holes and therefore could be used as probes of strong-field general relativity. Nevertheless, no model has yet gained wide acceptance. Aims. The aim of this article is to investigate the model derived from the occurrence of the Rossby wave instability at the inner edge of the accretion disk. In particular, our goal here is to demonstrate the capacity of this instability to modulate the observed flux in agreement with the observed results. Methods. We use the AMRVAC hydrodynamical code to model the instability in a 3D optically thin disk. The GYOTO ray-tracing code is then used to compute the associated light curve. Results. We show that the 3D Rossby wave instability is able to modulate the flux well within the observed limits. We highlight that 2D simulations allow us to obtain the same general characteristics of the light curve as 3D calculations. With the time resolution we adopted in this work, three-dimensional simulations do not give rise to any new observable features that could be detected by current instrumentation or archival data. Comment: 10 pages, 10 figures, accepted by A&A
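    The modulation that such light curves are tested against is commonly summarized as a fractional rms amplitude. The short sketch below (Python, illustrative only: the toy light curve and the helper name are assumptions, not output of the AMRVAC/GYOTO pipeline described above) shows how such an amplitude can be read off a simulated light curve.

    # Hedged sketch: fractional rms modulation of a light curve.
    # Illustrative only; the paper's light curves come from GYOTO ray tracing
    # of AMRVAC disk simulations, not from this toy signal.
    import numpy as np

    def fractional_rms(flux):
        """Fractional rms amplitude: std(flux) / mean(flux)."""
        flux = np.asarray(flux, dtype=float)
        return flux.std() / flux.mean()

    t = np.linspace(0.0, 1.0, 1000)                     # arbitrary time units
    flux = 1.0 + 0.03 * np.sin(2 * np.pi * 150.0 * t)   # 3% quasi-periodic wobble
    print(f"fractional rms = {fractional_rms(flux):.3f}")  # about 0.021 for a sine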

    Finite-Blocklength Bounds for Wiretap Channels

    This paper investigates the maximal secrecy rate over a wiretap channel subject to reliability and secrecy constraints at a given blocklength. New achievability and converse bounds are derived, which are shown to be tighter than existing bounds. The bounds also lead to the tightest second-order coding rate for discrete memoryless and Gaussian wiretap channels. Comment: extended version of a paper submitted to ISIT 201
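    For orientation, second-order coding rate results of this kind are usually stated as a normal approximation. A generic, hedged form (the precise dispersion term and the way the reliability constraint $\epsilon$ and the secrecy constraint $\delta$ enter are specific to the bounds derived in the paper) is

    $$ R^*(n,\epsilon,\delta) \;\approx\; C_S - \sqrt{\tfrac{V}{n}}\, Q^{-1}(\epsilon + \delta) + O\!\left(\tfrac{\log n}{n}\right), $$

    where $n$ is the blocklength, $C_S$ the secrecy capacity, $V$ a channel dispersion parameter, and $Q^{-1}$ the inverse Gaussian tail function.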

    QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations

    The paper considers a class of multi-agent Markov decision processes (MDPs), in which the network agents respond differently (as manifested by the instantaneous one-stage random costs) to a global controlled state and the control actions of a remote controller. The paper investigates a distributed reinforcement learning setup with no prior information on the global state transition and local agent cost statistics. Specifically, with the agents' objective consisting of minimizing a network-averaged infinite horizon discounted cost, the paper proposes a distributed version of Q-learning, QD-learning, in which the network agents collaborate by means of local processing and mutual information exchange over a sparse (possibly stochastic) communication network to achieve the network goal. Under the assumption that each agent is only aware of its local online cost data and the inter-agent communication network is \emph{weakly} connected, the proposed distributed scheme is shown to asymptotically yield, almost surely (a.s.), the desired value function and the optimal stationary control policy at each network agent. The analytical techniques developed in the paper to address the mixed time-scale stochastic dynamics of the \emph{consensus + innovations} form, which arise as a result of the proposed interactive distributed scheme, are of independent interest. Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages
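    A minimal numerical sketch of a consensus + innovations Q-update of the kind described above (the function name, the toy network, and the step-size values are assumptions for illustration, not the paper's implementation):

    # Hedged sketch: one QD-learning-style update for agent i.
    # Mix the local Q-table toward the neighbors' tables (consensus) and apply
    # a local temporal-difference correction on the observed cost (innovation).
    import numpy as np

    def qd_update(Q_i, Q_neighbors, x, u, x_next, cost_i, gamma, alpha, beta):
        """Return agent i's Q-table after one consensus + innovations step."""
        consensus = sum(Q_i[x, u] - Q_j[x, u] for Q_j in Q_neighbors)
        innovation = cost_i + gamma * Q_i[x_next].min() - Q_i[x, u]
        Q_new = Q_i.copy()
        Q_new[x, u] += -beta * consensus + alpha * innovation
        return Q_new

    # Toy usage: 3 states, 2 actions, two neighboring agents.
    rng = np.random.default_rng(0)
    Q_i = rng.random((3, 2))
    neighbors = [rng.random((3, 2)), rng.random((3, 2))]
    Q_i = qd_update(Q_i, neighbors, x=0, u=1, x_next=2,
                    cost_i=0.5, gamma=0.95, alpha=0.1, beta=0.05)

    In consensus + innovations schemes of this kind, the two step sizes typically decay at different rates (the mixed time scale mentioned in the abstract), with the consensus weight decaying more slowly so that the agents are driven toward agreement while they learn.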

    Distributed Linear Parameter Estimation: Asymptotically Efficient Adaptive Strategies

    The paper considers the problem of distributed adaptive linear parameter estimation in multi-agent inference networks. Local sensing model information is only partially available at the agents and inter-agent communication is assumed to be unpredictable. The paper develops a generic mixed time-scale stochastic procedure consisting of simultaneous distributed learning and estimation, in which the agents adaptively assess their relative observation quality over time and fuse the innovations accordingly. Under rather weak assumptions on the statistical model and the inter-agent communication, it is shown that, by properly tuning the consensus potential with respect to the innovation potential, the asymptotic information rate loss incurred in the learning process may be made negligible. As such, it is shown that the agent estimates are asymptotically efficient, in that their asymptotic covariance coincides with that of a centralized estimator (the inverse of the centralized Fisher information rate for Gaussian systems) with perfect global model information and access to all observations at all times. The proof techniques are mainly based on convergence arguments for non-Markovian mixed time-scale stochastic approximation procedures. Several approximation results developed in the process are of independent interest. Comment: Submitted to the SIAM Journal on Control and Optimization. Initial Submission: Sept. 2011. Revised: Aug. 201
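    As a rough numerical illustration of the consensus + innovations structure (the two-agent network, local observation matrices, and gain schedules below are assumptions for a toy example, not the paper's setup):

    # Hedged sketch: mixed time-scale consensus + innovations estimation.
    # Each agent observes only part of the parameter (y_i = H_i @ theta + noise)
    # and fuses a consensus term over neighbor estimates with a local innovation,
    # using separately decaying gains. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    theta = np.array([1.0, -2.0])                          # unknown parameter
    H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]   # local sensing matrices
    neighbors = {0: [1], 1: [0]}                           # two-agent connected network

    x = [np.zeros(2), np.zeros(2)]                         # local estimates
    for t in range(1, 5001):
        alpha = 1.0 / t            # innovation gain
        beta = 1.0 / t ** 0.6      # consensus gain (decays more slowly)
        y = [H[i] @ theta + 0.1 * rng.standard_normal(1) for i in range(2)]
        x = [x[i]
             - beta * sum(x[i] - x[j] for j in neighbors[i])
             + alpha * H[i].T @ (y[i] - H[i] @ x[i])
             for i in range(2)]
    print(x[0], x[1])   # both local estimates should approach theta = [1, -2]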