    On random number generators and practical market efficiency

    Modern mainstream financial theory is underpinned by the efficient market hypothesis, which posits the rapid incorporation of relevant information into asset pricing. Limited prior studies in the operational research literature have investigated the use of tests designed for random number generators to check for these informational efficiencies. Treating binary daily returns as a hardware random number generator analogue, tests of overlapping permutations have indicated that these time series feature idiosyncratic recurrent patterns. Contrary to prior studies, we split our analysis into two streams at the annual and company level, and investigate longer-term efficiency over a larger time frame for Nasdaq-listed public companies to diminish the effects of trading noise and allow the market to realistically digest new information. Our results demonstrate that information efficiency varies across different years and reflects large-scale market impacts such as financial crises. We also show the proximity of our results to those of a logistic map comparison, discuss the distinction between theoretical and practical market efficiency, and find that whether stock-separated returns statistically qualify as supporting the efficient market hypothesis hinges on small inefficient subsets that skew market-wide assessments. Comment: Preprint, accepted for publication in the Journal of the Operational Research Society.
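    The overlapping-pattern idea can be illustrated with a small sketch. The following is not the paper's test battery: it simply binarises daily returns (up = 1, down = 0) and compares the frequencies of overlapping m-bit patterns against a uniform null with an approximate chi-square statistic; the synthetic data, window length m, and the chi-square approximation are all assumptions made for illustration.

```python
# Minimal sketch: treat binarised daily returns as a bit stream and test
# overlapping m-bit patterns against a uniform null (approximate chi-square).
import numpy as np
from scipy.stats import chi2

def overlapping_pattern_test(returns, m=3):
    """Chi-square test of overlapping m-bit patterns in binarised returns."""
    bits = (np.asarray(returns) > 0).astype(int)       # 1 = up day, 0 = down day
    n = len(bits) - m + 1                              # number of overlapping windows
    counts = np.zeros(2 ** m)
    for i in range(n):                                 # slide an m-bit window along the series
        idx = int("".join(map(str, bits[i:i + m])), 2)
        counts[idx] += 1
    expected = n / 2 ** m                              # uniform null: all patterns equally likely
    stat = ((counts - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=2 ** m - 1)             # overlap induces mild dependence; treat as approximate
    return stat, p_value

# Example with synthetic returns (replace with real daily returns per stock or year)
rng = np.random.default_rng(0)
print(overlapping_pattern_test(rng.normal(size=2500)))
```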

    Gaussbock: Fast parallel-iterative cosmological parameter estimation with Bayesian nonparametrics

    We present and apply Gaussbock, a new embarrassingly parallel iterative algorithm for cosmological parameter estimation designed for an era of cheap parallel computing resources. Gaussbock uses Bayesian nonparametrics and truncated importance sampling to accurately draw samples from posterior distributions with an orders-of-magnitude speed-up in wall time over alternative methods. Contemporary problems in this area often suffer both from increased computational costs due to high-dimensional parameter spaces, with consequent excessive time requirements, and from the need for fine-tuning of proposal distributions or sampling parameters. Gaussbock is designed specifically with these issues in mind. We explore and validate the performance and convergence of the algorithm on a fast approximation to the Dark Energy Survey Year 1 (DES Y1) posterior, finding reasonable scaling behavior with the number of parameters. We then test on the full DES Y1 posterior using large-scale supercomputing facilities, and recover reasonable agreement with previous chains, although the algorithm can underestimate the tails of poorly-constrained parameters. Additionally, we discuss and demonstrate how Gaussbock recovers complex posterior shapes very well in lower dimensions, but faces challenges in doing so for such distributions in higher dimensions. Finally, we provide the community with a user-friendly software tool for accelerated cosmological parameter estimation based on the methodology described in this paper. Comment: 19 pages, 10 figures, accepted for publication in ApJ.
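    As a rough illustration of the kind of iteration the abstract describes (not the Gaussbock package itself), the sketch below fits a Bayesian Gaussian mixture to the current sample, proposes new points from it, and reweights them with truncated importance sampling against a placeholder posterior; the target density, mixture settings, and truncation rule are assumptions.

```python
# Hedged sketch of an iterative mixture-fit / truncated-importance-sampling step.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def log_posterior(theta):
    """Placeholder target: a correlated 2-D Gaussian posterior."""
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * np.einsum("ij,jk,ik->i", theta, np.linalg.inv(cov), theta)

def iterative_step(samples, n_draw=2000, n_components=10):
    mix = BayesianGaussianMixture(n_components=n_components, max_iter=500).fit(samples)
    proposals, _ = mix.sample(n_draw)                          # draw from the nonparametric fit
    log_w = log_posterior(proposals) - mix.score_samples(proposals)
    w = np.exp(log_w - log_w.max())
    w = np.minimum(w, w.mean() * np.sqrt(len(w)))              # truncate large weights (Ionides-style cap)
    w /= w.sum()
    keep = np.random.choice(len(proposals), size=n_draw, p=w)  # importance resampling
    return proposals[keep]

samples = np.random.uniform(-3, 3, size=(2000, 2))             # crude initial guess of the posterior region
for _ in range(5):                                             # iterate until the sample stabilises
    samples = iterative_step(samples)
print(samples.mean(axis=0), np.cov(samples.T))
```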

    Physics-informed neural networks in the recreation of hydrodynamic simulations from dark matter

    Physics-informed neural networks have emerged as a coherent framework for building predictive models that combine statistical patterns with domain knowledge. The underlying notion is to enrich the optimization loss function with known relationships to constrain the space of possible solutions. Hydrodynamic simulations are a core constituent of modern cosmology, but the required computations are both expensive and time-consuming. At the same time, the comparatively fast simulation of dark matter requires fewer resources, which has led to the emergence of machine learning algorithms for baryon inpainting as an active area of research; here, recreating the scatter found in hydrodynamic simulations is an ongoing challenge. This paper presents the first application of physics-informed neural networks to baryon inpainting by combining advances in neural network architectures with physical constraints, injecting theory on baryon conversion efficiency into the model loss function. We also introduce a punitive prediction comparison based on the Kullback-Leibler divergence, which enforces scatter reproduction. By simultaneously extracting the complete set of baryonic properties for the Simba suite of cosmological simulations, our results demonstrate improved accuracy of baryonic predictions based on dark matter halo properties, successful recovery of the fundamental metallicity relation, and retrieval of scatter that traces the target simulation's distribution.
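    A hedged sketch of a composite loss in this spirit is given below (PyTorch, not the authors' code): a standard regression term plus a Kullback-Leibler penalty between Gaussian approximations of the predicted and target distributions, which punishes collapsed scatter. The batch-level Gaussian approximation is an assumption, and the paper's baryon conversion efficiency term is omitted.

```python
# Sketch: pointwise accuracy term plus a KL penalty that discourages scatter collapse.
import torch

def scatter_aware_loss(pred, target, kl_weight=0.1):
    mse = torch.mean((pred - target) ** 2)               # pointwise accuracy term
    p = torch.distributions.Normal(pred.mean(), pred.std() + 1e-6)
    q = torch.distributions.Normal(target.mean(), target.std() + 1e-6)
    kl = torch.distributions.kl_divergence(p, q)          # penalise mismatched spread (scatter)
    return mse + kl_weight * kl

# Toy usage: an under-dispersed prediction is penalised even if its mean is right
target = torch.randn(1024)
pred = 0.2 * torch.randn(1024)                             # correct mean, too little scatter
print(scatter_aware_loss(pred, target))
```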

    Predictive intraday correlations in stable and volatile market environments: Evidence from deep learning

    Standard methods and theories in finance can be ill-equipped to capture highly non-linear interactions in financial prediction problems based on large-scale datasets, with deep learning offering a way to gain insights into correlations in markets as complex systems. In this paper, we apply deep learning to econometrically constructed gradients to learn and exploit lagged correlations among S&P 500 stocks, to compare model behaviour in stable and volatile market environments, and under the exclusion of target stock information for predictions. In order to measure the effect of time horizons, we predict intraday and daily stock price movements in varying interval lengths and gauge the complexity of the problem at hand with a modification of our model architecture. Our findings show that accuracies, while remaining significant and demonstrating the exploitability of lagged correlations in stock markets, decrease with shorter prediction horizons. We discuss implications for modern finance theory and our work's applicability as an investigative tool for portfolio managers. Lastly, we show that our model's performance is consistent in volatile markets by exposing it to the environment of the recent financial crisis of 2007/2008. Comment: 15 pages, 6 figures, preprint submitted to Physica A.
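    A minimal illustration of such a prediction setup is sketched below, with plain lagged returns standing in for the paper's econometrically constructed gradients and a small feed-forward classifier standing in for its architecture; the synthetic data, lag length, and network size are assumptions.

```python
# Sketch: classify the next-interval movement of a target stock from lagged
# returns of the other stocks, using a chronological train/test split.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_steps, n_stocks, lag = 5000, 30, 3
returns = rng.normal(scale=0.01, size=(n_steps, n_stocks))    # stand-in for intraday returns

# Features: lagged returns of all *other* stocks; label: target stock up/down at step t
target = 0
X = np.stack([returns[t - lag:t, np.arange(n_stocks) != target].ravel()
              for t in range(lag, n_steps - 1)])
y = (returns[lag:n_steps - 1, target] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
model.fit(X_train, y_train)
print("directional accuracy:", model.score(X_test, y_test))   # ~0.5 on noise; higher if lagged correlations exist
```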

    Filaments of crime: Informing policing via thresholded ridge estimation

    Objectives: We introduce a new method for reducing crime in hot spots and across cities through ridge estimation. In doing so, our goal is to explore the application of density ridges to hot spots and patrol optimization, and to contribute to the policing literature on police patrolling and crime reduction strategies. Methods: We make use of the subspace-constrained mean shift algorithm, a recently introduced approach for ridge estimation further developed in cosmology, which we modify and extend for geospatial datasets and hot spot analysis. Our experiments extract density ridges of Part I crime incidents from the City of Chicago during the year 2018 and early 2019 to demonstrate the application to current data. Results: Our results demonstrate nonlinear mode-following ridges in agreement with broader kernel density estimates. Using early 2019 incidents with predictive ridges extracted from 2018 data, we create multi-run confidence intervals and show that our patrol templates cover around 94% of incidents for 0.1-mile envelopes around ridges, quickly rising to near-complete coverage. We also develop and provide researchers, as well as practitioners, with user-friendly and open-source software for fast geospatial density ridge estimation. Conclusions: We show that ridges following crime report densities can be used to enhance patrolling capabilities. Our empirical tests show the stability of ridges based on past data, offering an accessible way of identifying routes within hot spots instead of patrolling epicenters. We suggest further research into the application and efficacy of density ridges for patrolling. Comment: 17 pages, 3 figures.
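    A compact, illustrative subspace-constrained mean shift for one-dimensional ridges in two-dimensional point data is sketched below; the bandwidth, iteration counts, and the thresholding used in the paper's tool are not reproduced.

```python
# Sketch: project mesh points onto density ridges by constraining mean-shift
# steps to the eigendirection of strongest negative curvature of the KDE.
import numpy as np

def scms_ridges(data, mesh, bandwidth=0.3, iters=100):
    """Iteratively project mesh points onto density ridges of `data`."""
    h2 = bandwidth ** 2
    mesh = mesh.copy()
    for _ in range(iters):
        for j, x in enumerate(mesh):
            diff = data - x                                   # (n, 2) displacements to each sample
            w = np.exp(-0.5 * (diff ** 2).sum(axis=1) / h2)   # Gaussian kernel weights
            hess = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0) / h2 ** 2 \
                   - np.eye(2) * w.sum() / h2                 # density Hessian (up to a constant)
            _, vecs = np.linalg.eigh(hess)
            v = vecs[:, [0]]                                  # eigenvector of the smallest eigenvalue
            shift = (w[:, None] * data).sum(axis=0) / w.sum() - x   # ordinary mean-shift step
            mesh[j] = x + (v @ v.T @ shift)                   # constrain the step to that subspace
    return mesh

# Toy example: points scattered around a curved "hot spot" filament
rng = np.random.default_rng(1)
t = rng.uniform(0, np.pi, 500)
data = np.column_stack([t, np.sin(t)]) + rng.normal(scale=0.1, size=(500, 2))
ridge = scms_ridges(data, data[rng.choice(500, 100, replace=False)])
```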

    Photometric Redshift Uncertainties in Weak Gravitational Lensing Shear Analysis: Models and Marginalization

    Recovering credible cosmological parameter constraints in a weak lensing shear analysis requires an accurate model that can be used to marginalize over nuisance parameters describing potential sources of systematic uncertainty, such as the uncertainties on the sample redshift distribution n(z). Due to the challenge of running Markov chain Monte Carlo (MCMC) in the high-dimensional parameter spaces in which the n(z) uncertainties may be parameterized, it is common practice to simplify the n(z) parameterization or to combine MCMC chains that each have a fixed n(z) resampled from the n(z) uncertainties. In this work, we propose a statistically principled Bayesian resampling approach for marginalizing over the n(z) uncertainty using multiple MCMC chains. We self-consistently compare the new method to existing ones from the literature in the context of a forecasted cosmic shear analysis for the HSC three-year shape catalog, and find that these methods recover similar cosmological parameter constraints, implying that using the most computationally efficient of the approaches is appropriate. However, we find that for datasets with the constraining power of the full HSC survey dataset (and, by implication, upcoming surveys with even tighter constraints), the choice among the several methods from the literature for marginalizing over the n(z) uncertainty may significantly impact the statistical uncertainties on cosmological parameters, and careful model selection is needed to ensure credible parameter intervals. Comment: 15 pages, 8 figures, submitted to MNRAS.
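    One simple way to realise a chain-combination scheme of this kind is sketched below (an assumption-laden stand-in, not the paper's exact method): chains run at fixed n(z) realisations are pooled with per-chain weights, here taken proportional to estimated marginal likelihoods, so the pooled sample approximates the n(z)-marginalised posterior.

```python
# Sketch: pool per-realisation MCMC chains with evidence-based mixture weights.
import numpy as np

def combine_chains(chains, log_evidences, n_out=10000, seed=0):
    """chains: list of (n_i, n_params) arrays; log_evidences: one per chain."""
    rng = np.random.default_rng(seed)
    log_w = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                         # per-chain mixture weights
    which = rng.choice(len(chains), size=n_out, p=w)     # pick a chain for each output sample
    rows = [chains[k][rng.integers(len(chains[k]))] for k in which]
    return np.vstack(rows)

# Toy usage: three chains whose means drift with the n(z) realisation
chains = [np.random.default_rng(k).normal(loc=0.1 * k, size=(5000, 2)) for k in range(3)]
combined = combine_chains(chains, log_evidences=[0.0, -0.5, -2.0])
print(combined.mean(axis=0), combined.std(axis=0))       # broader than any single chain
```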

    On the road to percent accuracy II: calibration of the non-linear matter power spectrum for arbitrary cosmologies

    We introduce an emulator approach to predict the non-linear matter power spectrum for broad classes of beyond-ΛCDM cosmologies, using only a suite of ΛCDM N-body simulations. By including a range of suitably modified initial conditions in the simulations, and rescaling the resulting emulator predictions with analytical 'halo model reactions', accurate non-linear matter power spectra for general extensions to the standard ΛCDM model can be calculated. We optimise the emulator design by substituting the simulation suite with non-linear predictions from the standard halofit tool. We review the performance of the emulator for artificially generated departures from the standard cosmology as well as for theoretically motivated models, such as f(R) gravity and massive neutrinos. For the majority of cosmologies we have tested, the emulator can reproduce the matter power spectrum with errors ≲1% deep into the highly non-linear regime. This work demonstrates that with a well-designed suite of ΛCDM simulations, extensions to the standard cosmological model can be tested in the non-linear regime without any reliance on expensive beyond-ΛCDM simulations. Comment: 16 pages, 13 figures, accepted for publication in MNRAS.
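    A compact way to state the rescaling step, using the reaction definition common in the halo model reaction literature rather than anything quoted from the paper, is

\[
P^{\mathrm{beyond}}_{\mathrm{NL}}(k,z) \simeq \mathcal{R}(k,z)\, P^{\mathrm{pseudo}}_{\mathrm{NL}}(k,z),
\qquad
\mathcal{R}(k,z) = \frac{P^{\mathrm{real}}_{\mathrm{HM}}(k,z)}{P^{\mathrm{pseudo}}_{\mathrm{HM}}(k,z)},
\]

    where the pseudo cosmology is a ΛCDM model whose linear power spectrum matches the beyond-ΛCDM target, the emulated non-linear pseudo spectrum plays the role of P^pseudo_NL, and the ratio of halo model predictions with and without the beyond-ΛCDM physics supplies the analytic reaction.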

    Stress testing the dark energy equation of state imprint on supernova data

    This work determines the degree to which a traditional analysis of the standard model of cosmology (ΛCDM) based on type Ia supernovae can identify deviations from a cosmological constant in the form of a redshift-dependent dark energy equation of state w(z). We introduce and apply a novel random curve generator to simulate instances of w(z) from constraint families with increasing distinction from a cosmological constant. After producing a series of mock catalogs of binned type Ia supernovae corresponding to each w(z) curve, we perform a standard ΛCDM analysis to estimate the corresponding posterior densities of the absolute magnitude of type Ia supernovae, the present-day matter density, and the equation of state parameter. Using the Kullback-Leibler divergence between posterior densities as a difference measure, we demonstrate that a standard type Ia supernova cosmology analysis has limited sensitivity to extensive redshift dependencies of the dark energy equation of state. In addition, we report that larger redshift-dependent departures from a cosmological constant do not necessarily manifest in more easily detectable incompatibilities with the ΛCDM model. Our results suggest that physics beyond the standard model may simply be hidden in plain sight.
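    The forward model behind such mock catalogues can be sketched with standard flat-universe expressions (not the authors' pipeline): distance moduli of type Ia supernovae for an arbitrary equation of state w(z), with illustrative values assumed for H0 and the matter density.

```python
# Sketch: distance modulus mu(z) in a flat universe with dark energy w(z).
import numpy as np
from scipy.integrate import quad

C_KMS, H0, OMEGA_M = 299792.458, 70.0, 0.3                # illustrative fiducial values

def e_of_z(z, w_of_z):
    """Dimensionless Hubble rate E(z) for a flat universe with dark energy w(z)."""
    de_exponent, _ = quad(lambda zp: (1 + w_of_z(zp)) / (1 + zp), 0.0, z)
    return np.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M) * np.exp(3 * de_exponent))

def distance_modulus(z, w_of_z):
    comoving, _ = quad(lambda zp: 1.0 / e_of_z(zp, w_of_z), 0.0, z)
    d_l = (1 + z) * (C_KMS / H0) * comoving               # luminosity distance in Mpc
    return 5 * np.log10(d_l) + 25                          # distance modulus mu(z)

# Compare a cosmological constant with a mildly evolving equation of state
for z in (0.1, 0.5, 1.0):
    print(z, distance_modulus(z, lambda zp: -1.0), distance_modulus(z, lambda zp: -0.9 + 0.1 * zp))
```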