
    Content Based Status Updates

    Consider a stream of status updates generated by a source, where each update is of one of two types: high priority or ordinary (low priority). These updates are to be transmitted through a network to a monitor. However, the transmission policy for each packet depends on the stream it belongs to. For the low priority stream, we analyze and compare the performance of two transmission schemes: in (i), ordinary updates are served in a First-Come-First-Served (FCFS) fashion, whereas in (ii), ordinary updates are transmitted according to an M/G/1/1 with preemption policy. In both schemes, high priority updates are transmitted according to an M/G/1/1 with preemption policy and receive preferential treatment: an arriving high priority update discards and replaces any high priority update currently in service, and preempts (with eventual resume in scheme (i)) any ordinary update. We model the arrival processes of the two kinds of updates, in both schemes, as independent Poisson processes. For scheme (i), we find the arrival and service rates under which the system is stable and give closed-form expressions for the average peak age and a lower bound on the average age of the ordinary stream. For scheme (ii), we derive closed-form expressions for the average age and average peak age of the high priority and low priority streams. We finally show that, if the service time is exponentially distributed, the M/M/1/1 with preemption policy leads to a higher average age for the low priority stream than the FCFS scheme. Hence, in the presence of a higher priority stream, the M/M/1/1 with preemption policy applied to the low priority stream is no longer the optimal transmission policy from an age point of view.
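    As a rough illustration of the age metric studied here (a minimal sketch, not the paper's two-stream analysis), the following simulates a single M/M/1/1 stream with preemption in service; the rates, horizon, and seed are arbitrary choices. For this policy the time-average age is known in closed form to be 1/λ + 1/μ.

    ```python
    import random

    def avg_age_preemptive(lam, mu, horizon=200_000.0, seed=1):
        """Simulate a single M/M/1/1 stream with preemption in service:
        a fresh arrival replaces the update being served.  Returns the
        time-average age at the monitor."""
        rng = random.Random(seed)
        t = age = area = 0.0
        gen = None             # generation time of the in-service update
        rem = float("inf")     # remaining service time (inf when idle)
        while t < horizon:
            dt = rng.expovariate(lam)        # time to the next arrival
            if dt < rem:                     # arrival first: preempt / start service
                area += age * dt + 0.5 * dt * dt
                age += dt
                t += dt
                rem = rng.expovariate(mu)
                gen = t
            else:                            # service completes first
                area += age * rem + 0.5 * rem * rem
                t += rem
                age = t - gen                # delivered update resets the age
                rem = float("inf")           # idle until the next arrival
                # interarrival times are memoryless, so redrawing dt is valid
        return area / t

    # With lam = mu = 1 the known closed form 1/lam + 1/mu = 2 should emerge.
    ```

    The simulation tracks the age as a piecewise-linear function (growing at rate 1, resetting on delivery) and integrates it exactly over each inter-event interval.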

    Probabilistic Cross-Identification of Cosmic Events

    We discuss a novel approach to identifying cosmic events in separate and independent observations. Our focus is on true events, such as supernova explosions, that happen only once and whose measurements are therefore not repeatable. Their classification and analysis have to make the best use of all the available data. Bayesian hypothesis testing is used to associate streams of events in space and time. Probabilities are assigned to the matches by studying their rates of occurrence. A case study of Type Ia supernovae illustrates how to use lightcurves in the cross-identification process. Constraints from realistic lightcurves happen to be well-approximated by Gaussians in time, which makes the matching process very efficient. Model-dependent associations are computationally more demanding but can further boost our confidence.
    Comment: 5 pages, 2 figures, accepted to Ap
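    A schematic version of the Gaussian-in-time matching step (the coincidence window length and the prior match probability are illustrative assumptions, not values from the paper):

    ```python
    import math

    def bayes_factor_time(t1, s1, t2, s2, window):
        """Bayes factor for the hypothesis that two detections at times
        t1, t2 (with Gaussian uncertainties s1, s2) come from the same
        event, against chance coincidence anywhere in a window of the
        given length."""
        var = s1 * s1 + s2 * s2
        return (window / math.sqrt(2.0 * math.pi * var)
                * math.exp(-(t1 - t2) ** 2 / (2.0 * var)))

    def match_probability(bf, prior):
        """Posterior probability of a true match from the Bayes factor
        and a prior probability set by the event rates."""
        odds = bf * prior / (1.0 - prior)
        return odds / (1.0 + odds)
    ```

    The Bayes factor rewards temporal proximity relative to the combined measurement uncertainty; the rate-based prior then converts it into a match probability.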

    Investment under ambiguity with the best and worst in mind

    Recent literature on optimal investment has stressed the difference between the impact of risk and the impact of ambiguity - also called Knightian uncertainty - on investors' decisions. In this paper, we show that a decision maker's attitude towards ambiguity is similarly crucial for investment decisions. We capture the investor's individual ambiguity attitude by applying alpha-MEU preferences to a standard investment problem. We show that the presence of ambiguity often leads to an increase in the subjective project value, making entrepreneurs more eager to invest. Our investment model thereby helps to explain differences in investment behavior in situations which are objectively identical.
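    The alpha-MEU criterion itself is easy to state: the decision maker mixes the worst-case and best-case expected utility over a set of priors, with the weight alpha capturing ambiguity aversion. A toy sketch (the payoffs and priors below are invented for illustration):

    ```python
    def alpha_meu_value(alpha, priors, payoffs, utility=lambda x: x):
        """alpha-MEU evaluation of an act: alpha * worst-case expected
        utility + (1 - alpha) * best-case, over a finite set of priors."""
        exp_utils = [sum(p * utility(x) for p, x in zip(prior, payoffs))
                     for prior in priors]
        return alpha * min(exp_utils) + (1.0 - alpha) * max(exp_utils)

    # Two candidate priors over {bad, good} project outcomes.
    priors = [(0.7, 0.3), (0.4, 0.6)]
    payoffs = (-10.0, 30.0)
    ```

    Setting alpha = 1 recovers maxmin expected utility (pure pessimism), alpha = 0 the maxmax rule; intermediate alphas interpolate between the best and worst case, which is the attitude the paper's title alludes to.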

    Relation lifting, with an application to the many-valued cover modality

    We introduce basic notions and results about relation liftings on categories enriched in a commutative quantale. We derive two necessary and sufficient conditions for a 2-functor T to admit a functorial relation lifting: the first is the existence of a distributive law of T over the "powerset monad" on categories, the second is the preservation by T of "exactness" of certain squares. Both characterisations are generalisations of the "classical" results known for set functors: the first characterisation generalises the existence of a distributive law over the genuine powerset monad, the second generalises preservation of weak pullbacks. The results presented in this paper enable us to compute predicate liftings of endofunctors of, for example, generalised (ultra)metric spaces. We illustrate this by studying the coalgebraic cover modality in this setting.
    Comment: 48 pages, accepted for publication in LMC
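    In the classical Set-based case that the abstract generalises, the relation lifting of the powerset functor is the Egli-Milner lifting. A small executable check of that classical definition (the quantale-enriched generalisation developed in the paper is not captured here):

    ```python
    def egli_milner(rel, xs, ys):
        """Classical relation lifting of the (finite) powerset functor:
        subsets xs and ys are related iff every x in xs has an
        R-successor in ys and every y in ys has an R-predecessor in xs."""
        forth = all(any((x, y) in rel for y in ys) for x in xs)
        back = all(any((x, y) in rel for x in xs) for y in ys)
        return forth and back

    # A sample relation between {1, 2} and {'a', 'b'}.
    R = {(1, 'a'), (2, 'b')}
    ```

    The two clauses (forth and back) are exactly what collapses to bisimilarity when the relation is a bisimulation between transition systems.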

    Revisiting Synthesis for One-Counter Automata

    We study the (parameter) synthesis problem for one-counter automata with parameters. One-counter automata are obtained by extending classical finite-state automata with a counter whose value can range over non-negative integers and be tested for zero. The updates and tests applicable to the counter can further be made parametric by introducing a set of integer-valued variables called parameters. The synthesis problem for such automata asks whether there exists a valuation of the parameters such that all infinite runs of the automaton satisfy some omega-regular property. Lechner showed that (the complement of) the problem can be encoded in a restricted one-alternation fragment of Presburger arithmetic with divisibility. In this work (i) we argue that said fragment, called ∀∃RPAD+, is unfortunately undecidable. Nevertheless, by a careful re-encoding of the problem into a decidable restriction of ∀∃RPAD+, (ii) we prove that the synthesis problem is decidable in general and in N2EXP for several fixed omega-regular properties. Finally, (iii) we give a polynomial-space algorithm for the special case of the problem where parameters can only be used in tests, and not updates, of the counter.
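    To fix notation, here is a minimal Python model of a parametric one-counter automaton, checking a finite run under a concrete parameter valuation (the transition encoding and the example automaton are our own illustrative choices, not the paper's formalism):

    ```python
    def check_run(transitions, valuation, run, counter=0):
        """Check a run of a one-counter automaton with parametric updates
        and zero tests.  `transitions` maps (state, state) pairs to
        (op, arg): op 'add' updates the counter by arg (an int or a
        parameter name), op 'test0' requires the counter to be zero.
        The counter must stay non-negative throughout."""
        for src, dst in zip(run, run[1:]):
            op, arg = transitions[(src, dst)]
            if op == 'add':
                delta = valuation[arg] if isinstance(arg, str) else arg
                counter += delta
                if counter < 0:
                    return False
            elif op == 'test0' and counter != 0:
                return False
        return True

    # Counter goes up by parameter p, down by 3, then faces a zero test,
    # so only the valuation p = 3 admits this run.
    T = {('q0', 'q1'): ('add', 'p'),
         ('q1', 'q2'): ('add', -3),
         ('q2', 'q2'): ('test0', None)}
    ```

    Synthesis asks the much harder question of whether some valuation makes *all* infinite runs satisfy an omega-regular property; this sketch only evaluates one run under one valuation.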

    Sensitivity of the Eisenberg-Noe clearing vector to individual interbank liabilities

    We quantify the sensitivity of the Eisenberg-Noe clearing vector to estimation errors in the bilateral liabilities of a financial system in a stylized setting. The interbank liabilities matrix is a crucial input to the computation of the clearing vector. However, in practice central bankers and regulators must often estimate this matrix because complete information on bilateral liabilities is rarely available. As a result, the clearing vector may suffer from estimation errors in the liabilities matrix. We quantify the clearing vector's sensitivity to such estimation errors and show that its directional derivatives are, like the clearing vector itself, solutions of fixed point equations. We describe estimation errors utilizing a basis for the space of matrices representing permissible perturbations and derive analytical solutions to the maximal deviations of the Eisenberg-Noe clearing vector. This allows us to compute upper bounds for the worst case perturbations of the clearing vector in our simple setting. Moreover, we quantify the probability of observing clearing vector deviations of a certain magnitude, for uniformly or normally distributed errors in the relative liability matrix. Applying our methodology to a dataset of European banks, we find that perturbations to the relative liabilities can result in economically sizeable differences that could lead to an underestimation of the risk of contagion. Our results are a first step towards allowing regulators to quantify errors in their simulations.
    Comment: 37 page
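    The clearing vector whose sensitivity is studied here is the fixed point of the Eisenberg-Noe map: each bank pays the minimum of its total obligations and its available assets (outside assets plus incoming payments). A minimal Picard-iteration sketch (the three-bank figures are invented for illustration):

    ```python
    def clearing_vector(L, e, tol=1e-12, max_iter=10_000):
        """Eisenberg-Noe clearing vector by Picard iteration.
        L[i][j] is the nominal liability of bank i to bank j,
        e[i] the outside assets of bank i."""
        n = len(L)
        pbar = [sum(row) for row in L]            # total obligations
        # relative liabilities matrix Pi[i][j] = L[i][j] / pbar[i]
        Pi = [[(L[i][j] / pbar[i]) if pbar[i] > 0 else 0.0
               for j in range(n)] for i in range(n)]
        p = pbar[:]                               # start from full payment
        for _ in range(max_iter):
            new = [min(pbar[i], e[i] + sum(Pi[j][i] * p[j] for j in range(n)))
                   for i in range(n)]
            if max(abs(a - b) for a, b in zip(new, p)) < tol:
                return new
            p = new
        return p

    # Three banks in a cycle; banks 1 and 2 default partially.
    L = [[0.0, 2.0, 0.0],
         [0.0, 0.0, 2.0],
         [1.0, 0.0, 0.0]]
    e = [0.5, 0.0, 0.0]
    ```

    Because the iteration starts from full payment and the map is monotone, the sequence decreases to the greatest clearing vector; it is this fixed point whose directional derivatives under perturbations of L the paper characterises.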

    Interactive Programs and Weakly Final Coalgebras in Dependent Type Theory (Extended Version)

    We reconsider the representation of interactive programs in dependent type theory that the authors proposed in earlier papers. Whereas in previous versions the type of interactive programs was introduced in an ad hoc way, it is here defined as a weakly final coalgebra for a general form of polynomial functor. There are two versions: in the first the interface with the real world is fixed, while in the second the potential interactions can depend on the history of previous interactions. The second version may be appropriate for working with specifications of interactive programs. We focus on command-response interfaces, and consider both client and server programs, which run on opposite sides of such an interface. We give formation/introduction/elimination/equality rules for these coalgebras. These are explored in two dimensions: coiterative versus corecursive, and monadic versus non-monadic. We also comment upon the relationship of the corresponding rules with guarded induction. It turns out that the introduction rules are nothing but a slightly restricted form of guarded induction. However, the form in which we write guarded induction is not recursive equations (which would break normalisation -- we show that type checking becomes undecidable), but instead involves an elimination operator in a crucial way.
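    As a very loose operational analogy (Python generators standing in for the type-theoretic coalgebras; none of this reflects the paper's formal rules), a client over a command-response interface can be written so that each unfolding step is guarded by exactly one command:

    ```python
    def counter_client(start):
        """Corecursive-style client: every step issues one command and
        continues from the server's response (the analogue of
        guardedness: productivity of one command per unfolding)."""
        n = start
        while True:
            resp = yield ("double", n)   # command out, response in
            n = resp + 1

    def doubling_server(cmd):
        """A server for the one-command interface used above."""
        op, arg = cmd
        return 2 * arg

    def run(client, server, steps):
        """Drive a client against a server for a fixed number of steps,
        returning the trace of commands issued."""
        trace = []
        cmd = client.send(None)          # prime the generator
        for _ in range(steps):
            trace.append(cmd)
            cmd = client.send(server(cmd))
        return trace
    ```

    The generator's suspension at each `yield` plays the role of the coalgebraic observation: the client is defined by what it outputs and how it continues, never by a recursive equation that could be unfolded without producing a command.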