
    A smartphone agent for QoE evaluation and user classification over mobile networks

    Get PDF
    The continuous growth of mobile users and bandwidth-consuming applications, together with the shortage of radio resources, poses a serious challenge: how to efficiently exploit existing networks while simultaneously improving Quality of Experience (QoE). One of the most relevant problems for network operators is thus to find an explicit relationship between QoS and QoE, for the purpose of maximizing the latter while saving precious resources. To accomplish this challenging task, we present TeleAbarth, an innovative Android application developed entirely at Telecom Italia Laboratories, able to simultaneously collect network measurements and end-users' quality feedback on the use of smartphone applications. We deployed TeleAbarth in a field experiment to study the relationship between QoS and QoE for video streaming applications, in terms of downstream bandwidth and video loading time. On the basis of the results obtained, we propose a technique to classify user behavior in terms of reliability, sensitivity and fairness.
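    The abstract gives no formulas for the reliability/sensitivity/fairness scores, so the following Python sketch is purely illustrative: it pairs per-session QoS measurements (downstream bandwidth, video loading time) with 1-5 QoE ratings, checks that ratings track the measured QoS, and computes one plausible concordance-style "reliability" score. All field names, sample values and scoring choices are assumptions, not TeleAbarth's actual schema.

```python
# Hypothetical pairing of QoS measurements with QoE feedback.
# Schema and values are invented for illustration.
from statistics import correlation  # Python 3.10+

# (downstream bandwidth in Mbit/s, video loading time in s, rating 1-5)
sessions = [
    (8.0, 1.2, 5), (5.5, 2.0, 4), (2.1, 4.5, 2),
    (1.0, 7.8, 1), (6.3, 1.8, 4), (3.2, 3.9, 3),
]
bandwidth = [s[0] for s in sessions]
loading   = [s[1] for s in sessions]
rating    = [s[2] for s in sessions]

# An explicit QoS/QoE relationship: ratings should rise with bandwidth
# and fall with loading time.
print("corr(rating, bandwidth):", round(correlation(rating, bandwidth), 2))
print("corr(rating, loading):  ", round(correlation(rating, loading), 2))

# One conceivable "reliability" score: the fraction of session pairs in
# which the user's rating ordering agrees with the bandwidth ordering.
pairs = [(i, j) for i in range(len(sessions)) for j in range(i + 1, len(sessions))]
agree = sum((rating[i] - rating[j]) * (bandwidth[i] - bandwidth[j]) > 0
            for i, j in pairs)
print("reliability:", round(agree / len(pairs), 2))
```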

    Digitizing signals - a short tutorial guide

    No full text
    Converting the analogue signal, as captured from a patient, into digital format is known as digitizing, or analogue-to-digital conversion. This is a vital first step for digital signal processing. The acquisition of high-quality data requires appropriate choices of system and parameters (sampling rate, anti-alias filter, amplification, number of ‘bits’). This tutorial aims to provide a practical guide to making these choices, and explains the underlying principles (rather than the mathematical theory and proofs) and potential pitfalls. Illustrative examples from different physiological signals are provided.
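    None of the tutorial's specific examples are reproduced here, but two of the choices it names, number of bits and sampling rate, can be sketched numerically. The NumPy snippet below (all values assumed for illustration) shows the quantization error shrinking with bit depth, and a 50 Hz tone sampled at 60 Hz folding to 10 Hz, which is why the anti-alias filter must sit before the converter.

```python
# Illustrative digitizing choices: bit depth and sampling rate.
import numpy as np

f_signal = 50.0                    # test tone, Hz (assumed)
fs_good, fs_bad = 500.0, 60.0      # sampling rates, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs_good)
x = np.sin(2 * np.pi * f_signal * t)

def quantize(x, n_bits):
    """Round to the nearest level of an n-bit converter spanning +/-1 V."""
    step = 2.0 / 2 ** n_bits
    return np.round(x / step) * step

for n in (4, 8, 12):
    err = x - quantize(x, n)
    print(f"{n:2d} bits: RMS quantization error = {np.sqrt(np.mean(err ** 2)):.6f}")

# Aliasing: sampled at 60 Hz, the 50 Hz tone folds to |60 - 50| = 10 Hz
# (with a sign flip), so the two are indistinguishable after sampling.
t_bad = np.arange(0, 0.2, 1 / fs_bad)
x_bad = np.sin(2 * np.pi * f_signal * t_bad)
x_fold = -np.sin(2 * np.pi * (fs_bad - f_signal) * t_bad)
print("max |50 Hz sampled - folded 10 Hz tone|:",
      f"{np.max(np.abs(x_bad - x_fold)):.1e}")
```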

    Steady nearly incompressible vector fields in 2D: chain rule and renormalization

    Full text link
    Given a bounded vector field $b : \mathbb{R}^d \to \mathbb{R}^d$, a scalar field $u : \mathbb{R}^d \to \mathbb{R}$ and a smooth function $\beta : \mathbb{R} \to \mathbb{R}$, we study the characterization of the distribution $\mathrm{div}(\beta(u)b)$ in terms of $\mathrm{div}\, b$ and $\mathrm{div}(ub)$. In the case of $BV$ vector fields $b$ (and under some further assumptions), such a characterization was obtained by L. Ambrosio, C. De Lellis and J. Malý, up to an error term which is a measure concentrated on the so-called tangential set of $b$. We answer some questions posed in their paper concerning the properties of this term. In particular, we construct a nearly incompressible $BV$ vector field $b$ and a bounded function $u$ for which this term is nonzero. For steady nearly incompressible vector fields $b$ (and under some further assumptions), in the case $d=2$ we provide a complete characterization of $\mathrm{div}(\beta(u)b)$ in terms of $\mathrm{div}\, b$ and $\mathrm{div}(ub)$. Our approach relies on the structure of level sets of Lipschitz functions on $\mathbb{R}^2$ obtained by G. Alberti, S. Bianchini and G. Crippa. Extending our technique, we obtain new sufficient conditions under which any bounded weak solution $u$ of $\partial_t u + b \cdot \nabla u = 0$ is renormalized, i.e. also solves $\partial_t \beta(u) + b \cdot \nabla \beta(u) = 0$ for any smooth function $\beta : \mathbb{R} \to \mathbb{R}$. As a consequence we obtain a new uniqueness result for this equation. Comment: 50 pages, 8 figures.
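    For orientation, the identity being characterized is the one produced by the formal chain rule when $b$ and $u$ are smooth; the paper's contribution is to control (or exhibit) the failure of this computation in the nearly incompressible $BV$ setting, where the error term lives on the tangential set.

```latex
% Formal chain rule for smooth b and u. From the product rule,
%   div(u b) = b . grad(u) + u div(b),
% one eliminates b . grad(u) and obtains
\[
  \mathrm{div}\bigl(\beta(u)\,b\bigr)
    = \beta'(u)\, b \cdot \nabla u + \beta(u)\,\mathrm{div}\, b
    = \beta'(u)\,\mathrm{div}(u\,b)
      + \bigl(\beta(u) - u\,\beta'(u)\bigr)\,\mathrm{div}\, b .
\]
```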

    Jet grooming through reinforcement learning

    Get PDF
    We introduce a novel implementation of a reinforcement learning (RL) algorithm which is designed to find an optimal jet grooming strategy, a critical tool for collider experiments. The RL agent is trained with a reward function constructed to optimize the resulting jet properties, using both signal and background samples in a simultaneous multi-level training. We show that the grooming algorithm derived from the deep RL agent can match state-of-the-art techniques used at the Large Hadron Collider, resulting in improved mass resolution for boosted objects. Given a suitable reward function, the agent learns a policy that optimally removes soft wide-angle radiation, allowing for a modular grooming technique that can be applied in a wide range of contexts. These results are accessible through the corresponding GroomRL framework. Comment: 11 pages, 10 figures, code available at https://github.com/JetsGame/GroomRL, updated to match published version.
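    The linked GroomRL repository defines the actual framework; the sketch below is only a toy standing in for the idea, with a soft-drop-like parametric policy over a branching's momentum fraction z and opening angle ΔR, a reward that peaks when the groomed mass hits a signal mass, and a crude random policy search in place of deep RL. Every name, distribution and constant here is an assumption, not GroomRL's API.

```python
# Toy stand-in for reward-driven jet grooming -- NOT the GroomRL API.
# A "state" is one branching of the clustering tree, summarized by its
# momentum fraction z and opening angle delta_r; the action is whether
# to drop that branch.
import random

random.seed(0)

def make_jet():
    # Hard, collinear branchings carry the true mass (~80 in total);
    # soft, wide-angle branchings add contamination. All toy numbers.
    hard = [(random.uniform(0.3, 0.5), random.uniform(0.05, 0.3), 40.0)
            for _ in range(2)]
    soft = [(random.uniform(0.0, 0.1), random.uniform(0.5, 1.0),
             random.expovariate(1 / 8.0)) for _ in range(4)]
    return hard + soft

def drop(z, delta_r, theta):
    # Soft-drop-like policy: remove the branch when z < z_cut * delta_r**beta.
    z_cut, beta = theta
    return z < z_cut * delta_r ** beta

def groomed_mass(jet, theta):
    return sum(m for (z, dr, m) in jet if not drop(z, dr, theta))

def reward(mass, target=80.0, width=5.0):
    # Peaks when the groomed mass sits on the signal mass; the paper's
    # reward also uses background samples, which this toy omits.
    return 1.0 / (1.0 + ((mass - target) / width) ** 2)

# Crude random search over policy parameters, standing in for deep RL.
jets = [make_jet() for _ in range(300)]
best = max(((random.uniform(0.0, 0.5), random.uniform(0.0, 2.0))
            for _ in range(500)),
           key=lambda th: sum(reward(groomed_mass(j, th)) for j in jets))
print("learned (z_cut, beta):", tuple(round(v, 2) for v in best))
```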