    The numéraire property and long-term growth optimality for drawdown-constrained investments

    © 2014 Wiley Periodicals, Inc. We consider the portfolio choice problem for a long-run investor in a general continuous semimartingale model. We combine the decision criterion of pathwise growth optimality with a flexible specification of attitude toward risk, encoded by a linear drawdown constraint imposed on admissible wealth processes. We define the constrained numéraire property through the notion of expected relative return and prove that the drawdown-constrained numéraire portfolio exists and is unique, but may depend on the investment horizon. However, when sampled at the times of its maximum, and asymptotically as the time horizon becomes distant, the drawdown-constrained numéraire portfolio is given explicitly through a model-independent transformation of the unconstrained numéraire portfolio. The asymptotically growth-optimal strategy is obtained as the limit of numéraire strategies on finite horizons.
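    The linear drawdown constraint above has a standard form worth recording; in our notation (the parameter α is illustrative, not taken from the paper), a wealth process V is admissible only if it never falls below a fixed fraction of its running maximum:

    ```latex
    V_t \;\ge\; \alpha \sup_{0 \le s \le t} V_s \quad \text{for all } t \ge 0, \qquad \alpha \in [0, 1).
    ```

    Setting α = 0 recovers the unconstrained problem.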

    On the existence of sure profits via flash strategies

    © Applied Probability Trust 2019. We introduce and study the notion of sure profits via flash strategies, consisting of a high-frequency limit of buy-and-hold trading strategies. In a fully general setting, without imposing any semimartingale restriction, we prove that there are no sure profits via flash strategies if and only if asset prices do not exhibit predictable jumps. This result relies on the general theory of processes and provides the most general formulation of the well-known fact that, in an arbitrage-free financial market, asset prices (including dividends) should not exhibit jumps of a predictable direction or magnitude at predictable times. We furthermore show that any price process is always right-continuous in the absence of sure profits. Our results are robust under small transaction costs and imply that, under minimal assumptions, price changes occurring at scheduled dates should only be due to unanticipated information releases.
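    Schematically (our illustration, not the paper's exact construction), a flash strategy at a time τ is the limit of buy-and-hold positions opened just before τ and closed at τ:

    ```latex
    \text{P\&L}_n = h \,\bigl(S_\tau - S_{\tau_n}\bigr) \;\xrightarrow[n \to \infty]{}\; h \,\Delta S_\tau, \qquad \tau_n \uparrow \tau,
    ```

    so a jump ΔS_τ of predictable sign at a predictable time would let an investor choose the position h to lock in a sure profit; the theorem says predictable jumps are the only source of such profits.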

    A compact ultrahigh-vacuum system for the in situ investigation of III/V semiconductor surfaces

    A compact ultrahigh vacuum (UHV) system has been built to study the growth and properties of III/V semiconductor surfaces and nanostructures. The system allows one to grow III/V semiconductor surfaces by molecular beam epitaxy (MBE) and to analyze them by a variety of surface analysis techniques. The geometric structure is examined by scanning tunneling microscopy (STM), low-energy electron diffraction, and reflection high-energy electron diffraction. The electronic properties of the surfaces are studied by angle-resolved photoemission, either in the laboratory using a helium discharge lamp or at the Berlin Synchrotron Radiation Facility BESSY. In order to meet the space restrictions at BESSY, the system dimensions are kept very small. A detailed description of the apparatus and the sample handling system is given. For the UHV-STM (Park Scientific Instruments, VP2), a new, versatile tip handling mechanism has been developed. It allows the transfer of tips out of the chamber and, furthermore, in situ tip cleaning by electron annealing. In addition, another, more reliable in situ tip-preparation technique, operating the STM in the field emission regime, is described. The capability of the system is demonstrated by an atomically resolved STM image of the c(4×4)-reconstructed GaAs(001) surface.

    Reduction and reconstruction of stochastic differential equations via symmetries

    An algorithmic method to exploit a general class of infinitesimal symmetries for reducing stochastic differential equations is presented, and a natural definition of reconstruction, inspired by the classical reconstruction by quadratures, is proposed. As a side result, the well-known solution formula for linear one-dimensional stochastic differential equations is obtained within this symmetry approach. The complete procedure is applied to several examples of both theoretical and applied relevance.
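    For reference, the classical formula alluded to above, in standard notation (ours, not necessarily the paper's): the linear one-dimensional SDE

    ```latex
    dX_t = \bigl(a(t)\,X_t + b(t)\bigr)\,dt + \bigl(c(t)\,X_t + d(t)\bigr)\,dW_t
    ```

    is solved by variation of constants around the fundamental solution of the homogeneous equation:

    ```latex
    \Phi_t = \exp\!\left( \int_0^t \Bigl( a(s) - \tfrac{1}{2} c(s)^2 \Bigr)\,ds + \int_0^t c(s)\,dW_s \right),
    \qquad
    X_t = \Phi_t \left( X_0 + \int_0^t \Phi_s^{-1} \bigl( b(s) - c(s)\,d(s) \bigr)\,ds + \int_0^t \Phi_s^{-1} d(s)\,dW_s \right).
    ```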

    Origami constraints on the initial-conditions arrangement of dark-matter caustics and streams

    In a cold-dark-matter universe, cosmological structure formation proceeds in rough analogy to origami folding. Dark matter occupies a three-dimensional 'sheet' of free-fall observers, non-intersecting in six-dimensional velocity-position phase space. At early times, the sheet was flat like an origami sheet, i.e. velocities were essentially zero, but as time passes, the sheet folds up to form cosmic structure. The present paper further illustrates this analogy and clarifies a Lagrangian definition of caustics and streams: caustics are two-dimensional surfaces in this initial sheet along which it folds, tessellating Lagrangian space into a set of three-dimensional regions, i.e. streams. The main scientific result of the paper is that streams may be colored by only two colors, with no two neighbouring streams (i.e. streams on either side of a caustic surface) colored the same. The two colors correspond to positive and negative parities of local Lagrangian volumes. This is a severe restriction on the connectivity and therefore the arrangement of streams in Lagrangian space, since arbitrarily many colors can be necessary to color a general arrangement of three-dimensional regions. This stream two-colorability has consequences from graph theory, which we explain. Then, using N-body simulations, we test how these caustics correspond in Lagrangian space to the boundaries of haloes, filaments and walls. We also test how well outer caustics correspond to a Zel'dovich-approximation prediction. Comment: Clarifications and slight changes to match the version accepted to MNRAS. 9 pages, 5 figures.
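    The parity two-coloring is straightforward to compute from simulation output. Below is a minimal numpy sketch (our code, not the paper's) that assumes Eulerian particle positions stored on a uniform periodic Lagrangian grid and assigns each mass element the sign of the Jacobian determinant of the Lagrangian-to-Eulerian map:

    ```python
    import numpy as np

    def parity_coloring(x, box_size):
        """Two-color mass elements by the sign of det(dx/dq).

        x has shape (N, N, N, 3): Eulerian positions of particles indexed
        by their Lagrangian grid coordinates q; derivatives are taken with
        periodic central differences on the Lagrangian grid."""
        N = x.shape[0]
        dq = box_size / N
        # Displacement field Psi(q) = x - q, wrapped into [-L/2, L/2).
        q = np.indices((N, N, N)).transpose(1, 2, 3, 0) * dq
        psi = (x - q + box_size / 2) % box_size - box_size / 2
        # Jacobian J_ij = delta_ij + dPsi_i/dq_j.
        J = np.zeros(x.shape[:3] + (3, 3))
        for i in range(3):
            for j in range(3):
                grad = (np.roll(psi[..., i], -1, axis=j)
                        - np.roll(psi[..., i], 1, axis=j)) / (2 * dq)
                J[..., i, j] = grad + (1.0 if i == j else 0.0)
        # det > 0 and det < 0 are the two "colors"; neighbouring streams
        # across a caustic differ in sign.
        return np.sign(np.linalg.det(J))
    ```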

    Quantifying distortions of the Lagrangian dark-matter mesh in cosmology

    We examine the Lagrangian divergence of the displacement field, arguably a more natural object than the density in a Lagrangian description of cosmological large-scale structure. This quantity, which we denote ψ, quantifies the stretching and distortion of the initially homogeneous lattice of dark-matter particles in the universe. ψ encodes similar information as the density, but the correspondence has subtleties. It corresponds better to the log-density A than to the overdensity δ. A Gaussian distribution in ψ produces a distribution in A with slight skewness; in δ, we find that in many cases the skewness is further increased by 3. A local spherical-collapse-based (SC) fit found by Bernardeau gives a formula for ψ's particle-by-particle behavior that works quite well, better than applying Lagrangian perturbation theory (LPT) at first or second (2LPT) order. In 2LPT, there is a roughly parabolic relation between initial and final ψ that can give overdensities in deep voids, so low-redshift, high-resolution 2LPT realizations should be used with caution. The SC fit excels at predicting ψ until streams cross; then, for particles forming haloes, ψ plummets as in a waterfall to -3. This gives a new method for producing N-particle realizations. Compared to LPT realizations, such SC realizations give reduced stream-crossing, and better visual and 1-point-PDF correspondence to the results of full gravity. LPT, on the other hand, predicts large-scale flows and the large-scale power-spectrum amplitude better, unless an empirical correction is added to the SC formula. Comment: Changes in presentation to match the MNRAS-accepted version, 14 pages, 15 figures.
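    In the usual Lagrangian notation (ours, consistent with the abstract), with x(q) = q + Ψ(q) the map from initial to final position,

    ```latex
    \psi \equiv \nabla_q \cdot \boldsymbol{\Psi},
    \qquad
    1 + \delta = \frac{1}{\bigl| \det\bigl( \delta_{ij} + \partial \Psi_i / \partial q_j \bigr) \bigr|},
    ```

    so to linear order ψ ≈ -δ. The waterfall value -3 also follows from this picture: for a particle collapsing toward a point, ∂x/∂q → 0, hence ∂Ψ/∂q → -I and ψ = tr(∂Ψ/∂q) → -3.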

    Only the Lonely: H I Imaging of Void Galaxies

    Void galaxies, residing within the deepest underdensities of the Cosmic Web, present an ideal population for the study of galaxy formation and evolution in an environment undisturbed by the complex processes modifying galaxies in clusters and groups, and they provide an observational test for theories of cosmological structure formation. We have completed a pilot survey for the HI imaging aspects of a new Void Galaxy Survey (VGS), imaging 15 void galaxies in HI in local (d < 100 Mpc) voids. HI masses range from 3.5 x 10^8 to 3.8 x 10^9 M_sun, with one nondetection with an upper limit of 2.1 x 10^8 M_sun. Our galaxies were selected using a structural and geometric technique to produce a sample that is purely environmentally selected and uniformly represents the void galaxy population. In addition, we use a powerful new backend of the Westerbork Synthesis Radio Telescope that allows us to probe a large volume around each targeted galaxy, simultaneously providing an environmentally constrained control sample of foreground and background galaxies while still resolving individual galaxy kinematics and detecting faint companions in HI. This small sample makes up a surprisingly interesting collection of perturbed and interacting galaxies, all with small stellar disks. Four galaxies have significantly perturbed HI disks, five have previously unidentified companions at distances ranging from 50 to 200 kpc, two are in interacting systems, and one was found to have a polar HI disk. Our initial findings suggest void galaxies are a gas-rich, dynamic population that presents evidence of ongoing gas accretion, major and minor interactions, and filamentary alignment despite the surrounding underdense environment. Comment: 53 pages, 18 figures, accepted for publication in AJ. High resolution available at http://www.astro.columbia.edu/~keejo/kreckel2010.pd

    The fully connected N-dimensional skeleton: probing the evolution of the cosmic web

    A method to compute the full hierarchy of the critical subsets of a density field is presented. It is based on a watershed technique and uses a probability propagation scheme to improve the quality of the segmentation by circumventing the discreteness of the sampling. It can be applied within spaces of arbitrary dimensions and geometry. This recursive segmentation of space yields, for a d-dimensional space, a succession of d-1 subspaces of dimensions n = d-1, ..., 1 that fully characterize the topology of the density field. The final 1D manifold of the hierarchy is the fully connected network of the primary critical lines of the field: the skeleton. It corresponds to the subset of lines linking maxima to saddle points, and provides a definition of the filaments that compose the cosmic web as a precise physical object, which makes it possible to compute any of its properties, such as its length, curvature, connectivity, etc. When the skeleton extraction is applied to the initial conditions of cosmological N-body simulations and to their present-day non-linear counterparts, it is shown that the time evolution of the cosmic web, as traced by the skeleton, is well accounted for by the Zel'dovich approximation. Comparing this skeleton to the initial skeleton undergoing the Zel'dovich mapping shows that two effects are competing during the formation of the cosmic web: on the one hand, a general dilation of the larger filaments that is captured by a simple deformation of the skeleton of the initial conditions; on the other hand, the shrinking, fusion and disappearance of the more numerous smaller filaments. Other applications of the N-dimensional skeleton and its peak patch hierarchy are discussed. Comment: Accepted for publication in MNRAS.
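    The probability-propagation scheme is the paper's contribution, but the plain watershed step it refines is easy to demonstrate. A minimal scikit-image sketch (our code, with an illustrative random field in place of a real density) that segments a field into the peak patches of its local maxima:

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_density(density):
        """Watershed segmentation of a density field into peak patches:
        each patch is the basin of attraction of one local maximum."""
        # One marker label per local maximum of the field.
        peaks = peak_local_max(density, min_distance=2)
        markers = np.zeros(density.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # Flood the negated density so basins grow downhill from maxima.
        return watershed(-density, markers)

    # Toy example: a smoothed Gaussian random field.
    rng = np.random.default_rng(0)
    density = ndimage.gaussian_filter(rng.standard_normal((64, 64)), sigma=4)
    labels = segment_density(density)
    ```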

    Non-intersecting squared Bessel paths and multiple orthogonal polynomials for modified Bessel weights

    We study a model of n non-intersecting squared Bessel processes in the confluent case: all paths start at time t = 0 at the same positive value x = a, remain positive, and are conditioned to end at time t = T at x = 0. In the limit n → ∞, after appropriate rescaling, the paths fill out a region in the tx-plane that we describe explicitly. In particular, the paths initially stay away from the hard edge at x = 0, but at a certain critical time t* the smallest paths hit the hard edge and from then on are stuck to it. For t ≠ t* we obtain the usual scaling limits from random matrix theory, namely the sine, Airy, and Bessel kernels. A key fact is that the positions of the paths at any time t constitute a multiple orthogonal polynomial ensemble, corresponding to a system of two modified Bessel-type weights. As a consequence, there is a 3 × 3 matrix-valued Riemann-Hilbert problem characterizing this model, which we analyze in the large n limit using the Deift-Zhou steepest descent method. There are some novel ingredients in the Riemann-Hilbert analysis that are of independent interest. Comment: 59 pages, 11 figures.
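    For orientation (standard facts, not specific to the paper): a squared Bessel process of dimension α ≥ 0 solves

    ```latex
    dX_t = \alpha\,dt + 2\sqrt{X_t}\,dB_t, \qquad X_t \ge 0,
    ```

    and its paths can reach the level x = 0 only when α < 2; for α ≥ 2 they stay strictly positive.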

    OpenDF - A Dataflow Toolset for Reconfigurable Hardware and Multicore Systems

    This paper presents the OpenDF framework and recalls that dataflow programming was once invented to address the problem of parallel computing. We discuss the problems with an imperative, von Neumann programming style and present what we believe are the advantages of using a dataflow programming model. The CAL actor language is briefly presented, and its role in the ISO/MPEG standard is discussed. The Dataflow Interchange Format (DIF) and related tools can be used for the analysis of actors and networks, demonstrating the advantages of a dataflow approach. Finally, an overview of a case study implementing an MPEG-4 decoder is given.
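    Details of CAL are beyond the abstract, but the dataflow execution model it embodies is easy to sketch. In the Python toy below (all names are ours; nothing here is OpenDF API), actors interact only through FIFO channels and fire only when every input queue holds a token, which is what lets independent actors run in parallel:

    ```python
    from collections import deque

    class Actor:
        """A dataflow actor: fires when every input FIFO has a token,
        consuming one token per input and producing one per output."""
        def __init__(self, fn, inputs, outputs):
            self.fn, self.inputs, self.outputs = fn, inputs, outputs

        def fireable(self):
            return all(self.inputs)  # every input deque non-empty

        def fire(self):
            results = self.fn(*(q.popleft() for q in self.inputs))
            for q, token in zip(self.outputs, results):
                q.append(token)

    # A two-input adder network: C = A + B, token by token.
    a, b, c = deque([1, 2, 3]), deque([10, 20, 30]), deque()
    adder = Actor(lambda x, y: (x + y,), inputs=[a, b], outputs=[c])
    while adder.fireable():
        adder.fire()
    print(list(c))  # [11, 22, 33]
    ```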