10,725 research outputs found

    A Distributed Procedure for Computing Stochastic Expansions with Mathematica

    Full text link
    The solution of a (stochastic) differential equation can be locally approximated by a (stochastic) expansion. If the vector field of the differential equation is a polynomial, the corresponding expansion is a linear combination of iterated integrals of the drivers and can be calculated using Picard iterations. However, such expansions grow exponentially fast in their number of terms, due to their specific algebra, which limits their practical use. We present a Mathematica procedure that addresses this issue by re-parametrising the polynomials and distributing the load into parts as small as possible, which can be processed and manipulated independently, thus alleviating large memory requirements and making the procedure well suited to parallelized computation. We also present an iterative implementation of the shuffle product (as opposed to the more usual recursive one), as well as a fast way of calculating the expectation of iterated Stratonovich integrals of Brownian motion. Comment: 15 pages, 2 figures. Submitted.
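As an illustration (not the paper's Mathematica code), the shuffle product of two words can be enumerated iteratively rather than recursively: each interleaving corresponds to choosing which positions of the result carry letters of the first word, so a single loop over position subsets replaces the usual recursion.

```python
from itertools import combinations

def shuffle_product(u, v):
    """Iteratively enumerate the shuffle product of two words.

    Each term of the shuffle corresponds to a choice of which positions
    in the result hold letters of u (in order); v fills the rest.
    Repeated words are returned with multiplicity, as in the shuffle algebra.
    """
    m, n = len(u), len(v)
    words = []
    for positions in combinations(range(m + n), m):
        pos_set = set(positions)
        iu, iv = iter(u), iter(v)
        word = tuple(next(iu) if k in pos_set else next(iv)
                     for k in range(m + n))
        words.append(word)
    return words
```

For example, `shuffle_product("ab", "c")` yields the three interleavings `('a','b','c')`, `('a','c','b')`, `('c','a','b')`; in general two words of lengths m and n produce C(m+n, m) terms.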

    Focal Plane Wavefront Sensing using Residual Adaptive Optics Speckles

    Get PDF
    Optical imperfections, misalignments, aberrations, and even dust can significantly limit sensitivity in high-contrast imaging systems such as coronagraphs. An upstream deformable mirror (DM) in the pupil can be used to correct or compensate for these flaws, either to enhance the Strehl ratio or to suppress the residual coronagraphic halo. Measurement of the phase and amplitude of the starlight halo at the science camera is essential for determining the DM shape that compensates for any non-common-path (NCP) wavefront errors. Using DM displacement ripples to create a series of probe and anti-halo speckles in the focal plane has been proposed for space-based coronagraphs and successfully demonstrated in the lab. We present the theory and first on-sky demonstration of a technique to measure the complex halo using the rapidly changing residual atmospheric speckles at the 6.5m MMT telescope with the Clio mid-IR camera. The AO system's wavefront sensor (WFS) measurements are used to estimate the residual wavefront, allowing us to approximately compute the rapidly evolving phase and amplitude of the speckle halo. When combined with relatively short, synchronized science camera images, the complex speckle estimates can be used to analyze the images interferometrically, leading to an estimate of the static diffraction halo with NCP effects included. In an operational system, this information could be collected continuously and used to iteratively correct quasi-static NCP errors or suppress imperfect coronagraphic halos. Comment: Astrophysical Journal (accepted). 26 pages, 21 figures.
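The interferometric idea can be sketched per pixel: the measured intensity is |E_s + E_r(t)|², where E_s is the unknown static field and E_r(t) the known (WFS-estimated) residual speckle field, so the cross term makes E_s recoverable by linear least squares. The toy below is a noise-free, single-pixel illustration under that assumption, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single focal-plane pixel: unknown static complex field E_s
# plus a rapidly changing residual speckle field E_r(t), assumed known
# exactly from the wavefront-sensor telemetry (an idealization).
E_s = 0.7 - 0.3j
E_r = rng.normal(size=200) + 1j * rng.normal(size=200)

# Short synchronized exposures record only intensities:
# I = |E_s|^2 + |E_r|^2 + 2 Re(E_s) Re(E_r) + 2 Im(E_s) Im(E_r)
I = np.abs(E_s + E_r) ** 2

# Linear model in the unknowns (|E_s|^2, Re E_s, Im E_s).
A = np.column_stack([np.ones_like(I), 2 * E_r.real, 2 * E_r.imag])
coef, *_ = np.linalg.lstsq(A, I - np.abs(E_r) ** 2, rcond=None)

E_s_hat = coef[1] + 1j * coef[2]  # recovered static complex halo
```

With noisy intensities and imperfect WFS estimates the same regression still applies, but the recovery is only approximate, which is why the paper accumulates many short exposures.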

    Linear Estimation of Location and Scale Parameters Using Partial Maxima

    Full text link
    Consider an i.i.d. sample X^*_1,X^*_2,...,X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence, X^*_{1:1},X^*_{2:2},...,X^*_{n:n}, where X^*_{j:j}=max{X^*_1,...,X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd's (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n) for a wide class of distributions. Comment: This article is devoted to the memory of my six-year-old little daughter, Dionyssia, who left us on August 25, 2010, at Cephalonia isl. (26 pages, to appear in Metrika)
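A minimal numerical sketch of the setting (illustration only, not the paper's estimator): the partial maxima sequence is a running maximum, and the information it carries is concentrated in its record values, whose expected count grows only like log n; this slow growth is the intuition behind a variance of order O(1/log n) rather than O(1/n).

```python
import numpy as np

rng = np.random.default_rng(1)

def partial_maxima(x):
    # X*_{j:j} = max(X*_1, ..., X*_j): a nondecreasing running maximum.
    return np.maximum.accumulate(x)

n = 10_000
x = rng.standard_normal(n)
pm = partial_maxima(x)

# Only "record" epochs (where the running maximum jumps) carry fresh
# information; their expected number is the harmonic number H_n ~ log n.
records = 1 + np.count_nonzero(np.diff(pm) > 0)
```

For instance, `partial_maxima([3, 1, 4, 1, 5])` gives `[3, 3, 4, 4, 5]`, with records at positions 1, 3, and 5.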

    Predicting plankton net community production in the Atlantic Ocean

    Get PDF
    We present, test and implement two contrasting models to predict euphotic zone net community production (NCP), which are based on 14C primary production (PO14CP) to NCP relationships over two latitudinal (ca. 30°S–45°N) transects traversing highly productive and oligotrophic provinces of the Atlantic Ocean (NADR, CNRY, BENG, NAST-E, ETRA and SATL, Longhurst et al., 1995 [An estimation of global primary production in the ocean from satellite radiometer data. Journal of Plankton Research 17, 1245–1271]). The two models include similar ranges of PO14CP and community structure, but differ in the relative influence of allochthonous organic matter in the oligotrophic provinces. Both models were used to predict NCP from PO14CP measurements obtained during 11 local and three seasonal studies in the Atlantic, Pacific and Indian Oceans, and from satellite-derived estimates of PO14CP. Comparison of these NCP predictions with concurrent in situ measurements and geochemical estimates of NCP showed that geographic and annual patterns of NCP can only be predicted when the relative trophic importance of local vs. distant processes is similar in both modeled and predicted ecosystems. The system-dependent ability of our models to predict NCP seasonality suggests that trophic-level dynamics are stronger than differences in hydrodynamic regime, taxonomic composition and phytoplankton growth. The regional differences in the predictive power of both models confirm the existence of biogeographic differences in the scale of trophic dynamics, which impede the use of a single generalized equation to estimate global marine plankton NCP. This paper shows the potential of a systematic empirical approach to predict plankton NCP from local and satellite-derived P estimates.
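The kind of empirical P-to-NCP relationship such models rest on can be sketched as a simple regression fitted on stations where both quantities were measured, then applied to satellite-derived production estimates. The numbers below are synthetic and purely illustrative; the paper's actual model forms and coefficients differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical station data: 14C primary production (PO14CP) and measured
# NCP for 40 stations (synthetic values, illustration only).
po14cp = rng.uniform(10.0, 120.0, size=40)
ncp = -5.0 + 0.4 * po14cp + rng.normal(scale=3.0, size=40)

# Fit a linear P-to-NCP relationship on the in situ stations...
b, a = np.polyfit(po14cp, ncp, 1)

# ...then predict NCP wherever only PO14CP is available,
# e.g. from satellite-derived production estimates.
def predict_ncp(p):
    return a + b * p
```

The abstract's central caveat maps directly onto this sketch: a fit of this kind transfers only to regions where the balance of local vs. allochthonous inputs resembles that of the calibration transects.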

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    Full text link
    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms to aid in the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I will describe two recent examples---one having to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other having to do with selecting good clusters or communities from a data graph (representing a social or information network)---that drew on ideas from both areas and that may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems. Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201
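Column selection of the kind described in the first example is commonly driven by statistical leverage scores, computed from the top singular subspace of the data matrix. A minimal numpy sketch of deterministic top-k leverage-score selection (randomized sampling proportional to the scores is the usual alternative; the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data matrix (e.g. individuals x SNPs): low-rank signal plus noise.
A = (rng.normal(size=(100, 8)) @ rng.normal(size=(8, 50))
     + 0.01 * rng.normal(size=(100, 50)))

k = 8
# Leverage score of column j: squared norm of column j of the top-k
# right singular vectors. The scores are in [0, 1] and sum to k.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
leverage = np.sum(Vt[:k] ** 2, axis=0)

# Keep the k columns with the largest leverage scores.
cols = np.argsort(leverage)[::-1][:k]
```

High-leverage columns are the ones most "influential" on the best rank-k fit, which is what makes them good candidates for interpretable feature (SNP) selection.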