35 research outputs found

    Multilayer parking with screening on a random tree

    In this paper we present a multilayer particle deposition model on a random tree. We derive the time-dependent densities of the first and second layer analytically and show that on all trees the limiting density of the first layer exceeds the density in the second layer. We also provide a procedure to calculate higher-layer densities and prove that random trees have a higher limiting density in the first layer than regular trees. Finally, we compare densities between the first and second layer and between regular and random trees.
    Comment: 15 pages, 2 figures
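    The deposition dynamics can be sketched with a short Monte Carlo simulation. The rule implemented below is only one reading of the abstract (vertex-plus-nearest-neighbour exclusion within a layer, and screening of the first layer by a car already parked above it at the same vertex); the Poisson-offspring random tree, the time horizon T, and all parameter values are illustrative assumptions rather than the paper's construction.

```python
# Monte Carlo sketch of two-layer parking with screening on a random tree.
# The deposition rule is a hedged reading of the abstract, not the paper's exact definition:
#   * cars arrive at every vertex at rate one (approximated by ~T uniform attempts per vertex),
#   * a car parks in layer 1 if the vertex and all its neighbours are free in layer 1
#     and the vertex is not screened by a car already parked above it in layer 2,
#   * otherwise it tries layer 2 under the same nearest-neighbour exclusion,
#   * if both attempts fail, the attempt is discarded.
# The "random tree" is a finite tree grown with Poisson offspring, an illustrative stand-in
# for the random trees studied in the paper.
import random
from collections import deque

def poisson(rng, lam):
    """Knuth's method for a Poisson(lam) variate (adequate for small lam)."""
    threshold, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def random_tree(n_target, mean_offspring=2.0, seed=0):
    """Grow a tree breadth-first, each vertex getting Poisson(mean_offspring) children."""
    rng = random.Random(seed)
    adj = {0: []}
    frontier = deque([0])
    while frontier and len(adj) < n_target:
        v = frontier.popleft()
        for _ in range(poisson(rng, mean_offspring)):
            if len(adj) >= n_target:
                break
            u = len(adj)                # next free vertex label
            adj[u] = [v]
            adj[v].append(u)
            frontier.append(u)
    return adj

def simulate(adj, T=20.0, seed=1):
    """Approximate the t -> infinity layer densities by ~T arrival attempts per vertex."""
    rng = random.Random(seed)
    verts = list(adj)
    layer1, layer2 = set(), set()
    for _ in range(int(T * len(verts))):
        v = rng.choice(verts)
        # layer 1: vertex and neighbours free in layer 1, and no screening car above
        if (v not in layer1 and all(u not in layer1 for u in adj[v])
                and v not in layer2):
            layer1.add(v)
            continue
        # layer 2: same nearest-neighbour exclusion, one layer up
        if v not in layer2 and all(u not in layer2 for u in adj[v]):
            layer2.add(v)
    n = len(verts)
    return len(layer1) / n, len(layer2) / n

if __name__ == "__main__":
    tree = random_tree(20_000)
    rho1, rho2 = simulate(tree)
    print(f"layer-1 density ~ {rho1:.3f}, layer-2 density ~ {rho2:.3f}")
```

    The simulated ordering of the two layer densities can then be compared with the analytic result described in the abstract.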

    A second row Parking Paradox

    We consider two variations of the discrete car parking problem where at every vertex of the integers a car arrives with rate one, now allowing for parking in two lines. a) The car parks in the first line whenever the vertex and all of its nearest neighbors are not yet occupied. It can reach the first line only if it is not obstructed by cars already parked in the second line (screening). b) The car parks according to the same rules, but parking in the first line cannot be obstructed by parked cars in the second line (no screening). In both models, a car that cannot park in the first line will attempt to park in the second line. If it is obstructed in the second line as well, the attempt is discarded. We show that both models are solvable in terms of finite-dimensional ODEs. We numerically compare the limiting first- and second-line densities as time goes to infinity. While it is not surprising that model a) has a higher limiting density in the second line than in the first, more remarkably this is also true for model b), albeit in a less pronounced way.
    Comment: 11 pages, 4 figures
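    As a rough numerical cross-check of the comparison between the two models, the sketch below runs both variants on a ring Z/NZ as a finite stand-in for the integers. The parking rules follow one reading of the abstract (in particular, "screening" is taken to mean blocking only by the car directly above the same vertex), and the ring size and time horizon are illustrative choices; the paper itself works with finite-dimensional ODEs rather than simulation.

```python
# Monte Carlo sketch of the two second-row parking models on a ring Z/NZ.
# Rules as read from the abstract (a hedged interpretation, not the paper's ODE analysis):
#   * in either line a car needs its vertex and both nearest neighbours free in that line,
#   * model a) (screening): line 1 is reachable only if the spot above it in line 2 is empty,
#   * model b) (no screening): line 1 is always reachable,
#   * a car blocked in line 1 tries line 2; if blocked there as well, the attempt is discarded.
# Rate-one arrivals are approximated by ~T uniform arrival attempts per vertex.
import random

def simulate(N=50_000, T=25.0, screening=True, seed=0):
    rng = random.Random(seed)
    line1 = [False] * N
    line2 = [False] * N
    for _ in range(int(T * N)):
        v = rng.randrange(N)
        left, right = (v - 1) % N, (v + 1) % N
        free1 = not (line1[v] or line1[left] or line1[right])
        reachable1 = (not line2[v]) if screening else True
        if free1 and reachable1:
            line1[v] = True
            continue
        if not (line2[v] or line2[left] or line2[right]):
            line2[v] = True
    return sum(line1) / N, sum(line2) / N

if __name__ == "__main__":
    for label, scr in (("a) screening", True), ("b) no screening", False)):
        rho1, rho2 = simulate(screening=scr)
        print(f"model {label}: line-1 density ~ {rho1:.3f}, line-2 density ~ {rho2:.3f}")
```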

    Daniel Bernoulli and the St. Petersburg paradox

    No full text

    Universal approximation in p-mean by neural networks

    A feedforward neural net with $d$ input neurons and a single hidden layer of $n$ neurons is given by $x \mapsto \sum_{j=1}^{n} a_j\,\psi\big(\sum_{i=1}^{d} w_{ji}x_i + \theta_j\big)$, where $a_j, \theta_j, w_{ji} \in \mathbb{R}$ and $\psi$ is the activation function. In this paper we study the approximation of arbitrary functions $f\colon \mathbb{R}^d \to \mathbb{R}$ by such a neural net in an $L^p(\mu)$ norm for some finite measure $\mu$ on $\mathbb{R}^d$. We prove that, under natural moment conditions, a neural net with a non-polynomial activation function can approximate any given function.
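    The approximation property itself is easy to illustrate numerically (this is an illustration of the statement, not of the paper's proof). In the sketch below, a one-hidden-layer net with a non-polynomial activation is fitted to a target function and its $L^p(\mu)$ error is estimated by Monte Carlo; the choice of $\mu$ as a standard Gaussian on $\mathbb{R}^d$, the activation $\tanh$, the particular target $f$, and the random-feature least-squares fit of the outer weights are all illustrative assumptions.

```python
# Numerical illustration: a one-hidden-layer net  x -> sum_j a_j * psi(<w_j, x> + theta_j)
# with psi = tanh drives the L^p(mu) error down as the number of hidden neurons n grows.
# All modelling choices (Gaussian mu, tanh activation, random frozen inner weights with a
# least-squares fit of the outer weights, the target f) are assumptions for the sketch only.
import numpy as np

def lp_error(f, net, d, p=2, n_mc=20_000, seed=123):
    """Monte Carlo estimate of ||f - net||_{L^p(mu)} with mu = standard Gaussian on R^d."""
    x = np.random.default_rng(seed).standard_normal((n_mc, d))
    return np.mean(np.abs(f(x) - net(x)) ** p) ** (1.0 / p)

def fit_random_feature_net(f, d, n_hidden, n_train=20_000, seed=0):
    """Freeze random inner weights w_ji and biases theta_j; fit the outer weights a_j."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, n_hidden))          # inner weights w_ji
    theta = rng.standard_normal(n_hidden)           # biases theta_j
    x = rng.standard_normal((n_train, d))           # training sample drawn from mu
    H = np.tanh(x @ W + theta)                      # hidden-layer activations
    a, *_ = np.linalg.lstsq(H, f(x), rcond=None)    # outer weights a_j by least squares
    return lambda z: np.tanh(z @ W + theta) @ a

if __name__ == "__main__":
    d = 3
    f = lambda x: np.sin(x[:, 0]) * np.exp(-x[:, 1] ** 2) + 0.5 * np.cos(x[:, 2])
    for n in (10, 100, 1000):
        net = fit_random_feature_net(f, d, n)
        print(f"n = {n:4d}:  estimated L^2(mu) error ~ {lp_error(f, net, d):.4f}")
```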