
    Infinite Dimensional Pathwise Volterra Processes Driven by Gaussian Noise -- Probabilistic Properties and Applications

    We investigate the probabilistic and analytic properties of Volterra processes constructed as pathwise integrals of deterministic kernels with respect to the Hölder continuous trajectories of Hilbert-valued Gaussian processes. To this end, we extend the Volterra sewing lemma from \cite{HarangTindel} to the two-dimensional case, in order to construct two-dimensional operator-valued Volterra integrals of Young type. We prove that the covariance operator associated to infinite dimensional Volterra processes can be represented by such a two-dimensional integral, which extends the current notion of representation for such covariance operators. We then discuss a series of applications of these results, including the construction of a rough path associated to a Volterra process driven by Gaussian noise with possibly irregular covariance structures, as well as a description of the irregular covariance structure arising from Gaussian processes time-shifted along irregular trajectories. Furthermore, we consider an infinite dimensional fractional Ornstein-Uhlenbeck process driven by Gaussian noise, which can be seen as an extension of the volatility model proposed by Rosenbaum et al. in \cite{ElEuchRosenbaum}. Comment: 38 pages
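    A minimal scalar sketch of the objects this abstract describes (the paper itself works in Hilbert space; the kernel below is an illustrative assumption, not taken from the paper) is the pathwise Volterra integral of a deterministic kernel against a Gaussian driver:

    \[
    X_t = \int_0^t k(t,s)\, \mathrm{d}W_s, \qquad k(t,s) = (t-s)^{H - \frac{1}{2}},
    \]

    where $W$ is a Brownian motion and $H \in (0, \tfrac{1}{2})$ yields the rough fractional kernel used in the rough volatility literature cited here (\cite{ElEuchRosenbaum}); the covariance of $X$ then takes the form of a two-dimensional integral of $k$ against the covariance of $W$, which is the structure the sewing lemma extension is built to handle.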

    Advocacy groups in the wake of Hurricane Katrina: who shapes coverage of wetlands loss

    Louisiana’s coastal wetlands provide a habitat for diverse wildlife, recreational opportunities for Louisiana residents and tourists, and an important natural buffer between communities and powerful hurricanes. Because they are disappearing at a rapid rate, coastal wetlands issues have been prominent in south Louisiana for decades. The catastrophic hurricanes of 2005 and 2008 have given the discussion an increased sense of urgency. Through this paper, I explore coverage of wetlands loss in local south Louisiana daily newspapers. Specifically, I try to determine how these papers frame the issue and illuminate how sources present in these stories participate in the construction of those frames. I then discuss the advocacy group America’s WETLAND’s role as a newspaper source, how the group developed and maintains its message, and the relationship between that message and the group’s sponsors. Finally, I interview journalists who cover the issue for newspapers in south Louisiana and the managing director of America’s WETLAND

    The Portrait of Friederike Voß and Its Reinterpretation as Christiane Vulpius: An Examination Based on the Sources

    Hardly any publication on Goethe's life, his family, his wife, his child, and his grandchildren has so far appeared without reproducing a portrait of a lady that has been passed off since 1885 as depicting Christiane Vulpius, but which in fact portrays the Weimar actress Friederike Voß. This was no oversight and no mix-up, nor a faulty reading of the sources, but simply a deliberate reinterpretation. It took place in the last quarter of the 19th century and reflected the will of the Carl Alexander era to press what had been handed down and inherited into the service of an idea. [...] The double portrait invented in this way lastingly shaped the 20th century's visual image of the life Goethe and Christiane shared. It is time to restore its identity to the surviving portrait of Friederike Margarete Voß.

    Magnificent Minified Models

    This paper concerns itself with the task of taking a large trained neural network and 'compressing' it to be smaller by deleting parameters or entire neurons, with minimal decreases in the resulting model accuracy. We compare various methods of parameter and neuron selection: dropout-based neuron damage estimation, neuron merging, absolute-value based selection, random selection, and OBD (Optimal Brain Damage). We also compare a variation on the classic OBD method that slightly outperformed all other parameter and neuron selection methods in our tests with substantial pruning, which we call OBD-SD. We compare these methods against quantization of parameters. We also compare these techniques (all applied to a trained neural network) with neural networks trained from scratch (random weight initialization) on various pruned architectures. Our results are only barely consistent with the Lottery Ticket Hypothesis, in that fine-tuning a parameter-pruned model does slightly better than retraining a similarly pruned model from scratch with randomly initialized weights. For neuron-level pruning, retraining from scratch did much better in our experiments. Comment: We wrote this in 2021 but didn't get around to putting it up on arXiv. State of the art has advanced a bit since then, but I think the experiments we ran are still quite interesting and useful
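    Of the selection methods the abstract lists, absolute-value based selection is the simplest to make concrete. The sketch below is not the paper's implementation; it is a minimal NumPy illustration (function name and tie-breaking behavior are my assumptions) of zeroing out the smallest-magnitude fraction of a weight array:

    ```python
    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Absolute-value based parameter selection: zero out the smallest-|w|
        fraction of entries. Ties at the threshold are pruned together."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)  # number of parameters to remove
        if k == 0:
            return weights.copy()
        # k-th smallest absolute value becomes the pruning threshold
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
        return weights * mask

    w = np.array([[0.5, -0.01],
                  [0.003, -2.0]])
    pruned = magnitude_prune(w, 0.5)  # removes the two smallest-magnitude weights
    ```

    In the paper's comparison, a pruned model like this would then either be fine-tuned (the Lottery-Ticket-style setting) or its masked architecture retrained from scratch with fresh random weights.
    
    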