
    Variational Inference for GARCH-family Models

    The Bayesian estimation of GARCH-family models has typically been addressed through Monte Carlo sampling. Variational Inference is gaining popularity and attention as a robust approach for Bayesian inference in complex machine learning models; however, its adoption in econometrics and finance is limited. This paper discusses the extent to which Variational Inference constitutes a reliable and feasible alternative to Monte Carlo sampling for Bayesian inference in GARCH-like models. Through a large-scale experiment involving the constituents of the S&P 500 index, several Variational Inference optimizers, a variety of volatility models, and a case study, we show that Variational Inference is an attractive, remarkably well-calibrated, and competitive method for Bayesian learning.
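
    To make the two estimation strategies concrete, here is a minimal sketch (not the paper's implementation, and with a flat prior assumed for brevity): a Gaussian GARCH(1,1) likelihood, and a Monte Carlo estimate of the evidence lower bound (ELBO) under a mean-field Gaussian approximation in unconstrained parameter space.

```python
import numpy as np

def garch11_neg_loglik(params, returns):
    """Negative Gaussian log-likelihood of a GARCH(1,1); params = (omega, alpha, beta)."""
    omega, alpha, beta = params
    returns = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                         # initialise with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns ** 2 / sigma2)

def elbo_estimate(mu, log_std, returns, n_samples=200, seed=0):
    """Monte Carlo ELBO under a mean-field Gaussian q(z) on unconstrained parameters;
    a softplus maps z to the positive (omega, alpha, beta). Flat prior assumed for brevity."""
    rng = np.random.default_rng(seed)
    z = mu + rng.standard_normal((n_samples, 3)) * np.exp(log_std)
    theta = np.log1p(np.exp(z))                       # softplus enforces positivity
    loglik = np.array([-garch11_neg_loglik(th, returns) for th in theta])
    entropy = np.sum(log_std) + 1.5 * np.log(2 * np.pi * np.e)
    return loglik.mean() + entropy                    # to be maximised over (mu, log_std)
```

    Maximising this estimate over (mu, log_std) with any stochastic optimizer yields the approximate posterior; the paper benchmarks several dedicated VI optimizers against Monte Carlo sampling, which this sketch does not attempt to reproduce.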

    Bayesian bilinear neural network for predicting the mid-price dynamics in limit-order book markets

    The prediction of financial markets is a challenging yet important task. In modern electronically driven markets, traditional time-series econometric methods often appear incapable of capturing the true complexity of the multilevel interactions driving the price dynamics. While recent research has established the effectiveness of traditional machine learning (ML) models in financial applications, their intrinsic inability to deal with uncertainties, which is a great concern in econometrics research and real business applications, constitutes a major drawback. Bayesian methods naturally appear as a suitable remedy, combining the predictive ability of ML methods with the probabilistically oriented practice of econometric research. By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention, suitable for the challenging time-series task of predicting mid-price movements in ultra-high-frequency limit-order book markets. We thoroughly compare our Bayesian model with traditional ML alternatives by addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts. Our results underline the feasibility of the Bayesian deep-learning approach and its predictive and decisional advantages in complex econometric tasks, prompting future research in this direction.
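
    As a rough illustration of the model class only (the layer shapes and names below are hypothetical, not the authors' architecture), a bilinear layer maps a time-by-feature order-book snapshot through two small weight matrices, temporal attention weights the resulting rows, and Bayesian predictions are obtained by averaging class probabilities over sampled weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilinear_attention_forward(X, W_time, W_feat, w_attn, W_out):
    """X: (T, D) limit-order-book snapshot (T time steps, D book features)."""
    H = W_time @ X @ W_feat                  # (T', K) bilinear hidden representation
    scores = softmax(H @ w_attn)             # (T',) attention weights over the time axis
    context = scores @ H                     # (K,) attention-weighted summary
    return softmax(context @ W_out)          # class probabilities (down, flat, up)

def bayesian_predict(X, weight_samples):
    """Average the predictive distribution over posterior weight samples (MC integration)."""
    return np.mean([bilinear_attention_forward(X, *w) for w in weight_samples], axis=0)

rng = np.random.default_rng(4)
T, D, Tp, K = 100, 40, 20, 16                # toy dimensions for a LOB snapshot
X = rng.standard_normal((T, D))
samples = [(rng.standard_normal((Tp, T)) * 0.1, rng.standard_normal((D, K)) * 0.1,
            rng.standard_normal(K) * 0.1, rng.standard_normal((K, 3)) * 0.1)
           for _ in range(32)]               # stand-ins for posterior weight draws
print(bayesian_predict(X, samples))          # averaged (down, flat, up) probabilities
```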

    Predictive modeling of suitable habitats for threatened marine invertebrates and implications for conservation assessment in Brazil

    Spatial analysis and modeling tools were employed to predict suitable habitat distribution for threatened marine invertebrates and to estimate the overlap between highly suitable areas for these species and the Brazilian marine protected areas (MPAs). Occurrence records were obtained from the collections included in the Ocean Biogeographic Information System (OBIS-Brazil), with additional records culled from the literature. The distribution data of 16 out of 33 threatened species, each with at least ten occurrences in the available records, were selected for modeling with the Maxent algorithm (Maximum Entropy Modeling) based on environmental variables (temperature, salinity, bathymetry, and their derivatives). The resulting maps were filtered with a fixed threshold of 0.5 (to distinguish only the highly suitable areas) and superimposed on MPA digital maps. The algorithm produced reasonable predictions of the species' potential distributions, showing that the patterns predicted by the model are largely consistent with current knowledge of the species. The distribution of the highly suitable areas showed little overlap with Brazilian MPAs. This study showed how habitat suitability for threatened species can be assessed using GIS applications and modeling tools.
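
    The thresholding-and-overlap step lends itself to a short sketch (toy arrays, not the study's GIS workflow): keep cells whose Maxent suitability is at least 0.5 and report the fraction of those cells that fall inside protected areas.

```python
import numpy as np

def protected_fraction(suitability, mpa_mask, threshold=0.5):
    """suitability: 2-D array of Maxent scores in [0, 1];
    mpa_mask: boolean array of the same shape, True inside marine protected areas."""
    highly_suitable = suitability >= threshold          # keep only highly suitable cells
    n_suitable = np.count_nonzero(highly_suitable)
    if n_suitable == 0:
        return 0.0
    return np.count_nonzero(highly_suitable & mpa_mask) / n_suitable

rng = np.random.default_rng(1)
suit = rng.random((100, 100))                           # toy suitability raster
mpas = rng.random((100, 100)) < 0.1                     # toy mask, ~10% of cells protected
print(protected_fraction(suit, mpas))
```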

    Quantifying cross-scale patch contributions to spatial connectivity

    Context: Connectivity between habitat patches is vital for ecological processes at multiple scales. Traditional metrics do not measure the scales at which individual habitat patches contribute to the overall ecological connectivity of the landscape. Connectivity has previously been evaluated at several different scales based on the dispersal capabilities of particular organisms, but these approaches are data-heavy and conditioned on just a few species. Objectives: Our objective was to improve cross-scale measurement of connectivity by developing and testing a new landscape metric, cross-scale centrality. Methods: Cross-scale centrality (CSC) integrates over measurements of patch centrality at different scales (hypothetical dispersal distances) to quantify the cross-scale contribution of each individual habitat patch to overall landscape or seascape connectivity. We tested CSC against an independent metapopulation simulation model and demonstrated its potential application in conservation planning by comparison to an alternative approach that used individual dispersal data. Results: CSC correlated significantly with total patch occupancy across the entire landscape in our metapopulation simulation, while being much faster and easier to calculate. Standard conservation planning software (Marxan) using dispersal data was weaker than CSC at capturing locations with high cross-scale connectivity. Conclusions: Metrics that measure pattern across multiple scales are much faster and more efficient than full simulation models and more rigorous and interpretable than ad hoc incorporation of connectivity into conservation plans. In reality, connectivity matters for many different organisms across many different scales. Metrics like CSC that quantify landscape pattern across multiple different scales can make a valuable contribution to multi-scale landscape measurement, planning, and management.
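
    A minimal sketch of the idea behind CSC (ours, not the published metric's code): compute a patch centrality, here plain degree centrality in a distance-threshold graph, at several hypothetical dispersal distances and average the per-patch scores across those scales.

```python
import numpy as np

def cross_scale_centrality(coords, scales):
    """coords: (N, 2) patch centroid coordinates; scales: dispersal distances to integrate over.
    Returns one score per patch: degree centrality averaged across all scales."""
    scales = list(scales)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                         # a patch is not its own neighbour
    csc = np.zeros(len(coords))
    for s in scales:
        degree = (d <= s).sum(axis=1)                   # neighbours reachable at this scale
        csc += degree / max(len(coords) - 1, 1)         # normalised degree centrality
    return csc / len(scales)

patches = np.random.default_rng(0).random((50, 2)) * 100.0
print(cross_scale_centrality(patches, scales=[5, 10, 20, 40]))
```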

    Cumulative human impacts on coral reefs: assessing risk and management implications for Brazilian coral reefs

    Effective management of coral reefs requires strategies tailored to cope with cumulative disturbances from human activities. In Brazil, where coral reefs are a priority for conservation, intensifying threats from local and global stressors are of paramount concern to management agencies. Using a cumulative impact assessment approach, our goal was to inform management actions for coral reefs in Brazil by assessing their exposure to multiple stressors (fishing, land-based activities, coastal development, mining, aquaculture, shipping, and global warming). We calculated an index of the risk of cumulative impacts: (i) assuming uniform sensitivity of coral reefs to stressors; and (ii) using impact weights to reflect varying tolerance levels of coral reefs to each stressor. We also predicted the index in both the presence and absence of global warming. We found that 16% and 37% of coral reefs had a high to very high risk of cumulative impacts, without and with information on sensitivity respectively, and 42% of reefs had a low risk of cumulative impacts from both local and global stressors. Our outputs are the first comprehensive spatial dataset of cumulative impact on coral reefs in Brazil, and show that the areas requiring attention mostly corresponded to those closer to population centres. We demonstrate how the relationships between risks from local and global stressors can be used to derive strategic management actions.
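
    The two index variants reduce to a weighted sum of rescaled stressor layers; the sketch below uses hypothetical layer names and weights rather than the study's calibrated sensitivities.

```python
import numpy as np

def cumulative_impact(stressors, weights=None):
    """stressors: dict of name -> 2-D intensity array, each rescaled to [0, 1].
    weights: dict of name -> sensitivity weight; None reproduces the uniform-sensitivity case."""
    names = list(stressors)
    if weights is None:
        weights = {n: 1.0 for n in names}               # variant (i): uniform sensitivity
    return sum(weights[n] * stressors[n] for n in names)

rng = np.random.default_rng(2)
layers = {n: rng.random((50, 50)) for n in ["fishing", "land_based", "shipping", "warming"]}
uniform = cumulative_impact(layers)                                      # variant (i)
weighted = cumulative_impact(layers, {"fishing": 0.8, "land_based": 0.6,
                                      "shipping": 0.3, "warming": 1.0})  # variant (ii)
```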

    Predicting the state of synchronization of financial time series using cross recurrence plots

    Cross-correlation analysis is a powerful tool for understanding the mutual dynamics of time series. This study introduces a new method for predicting the future state of synchronization of the dynamics of two financial time series. To this end, we use cross recurrence plot analysis as a nonlinear method for quantifying the multidimensional coupling in the time domain of two time series and for determining their state of synchronization. We adopt a deep learning framework for methodologically addressing the prediction of the synchronization state based on features extracted from dynamically sub-sampled cross recurrence plots. We provide extensive experiments on several stocks, major constituents of the S&P 100 index, to empirically validate our approach. We find that the task of predicting the state of synchronization of two time series is in general rather difficult, but for certain pairs of stocks it is attainable with very satisfactory performance (84% F1-score, on average).
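
    The cross recurrence plot itself is straightforward to compute; a minimal sketch (our notation and parameter choices, not the paper's pipeline) with time-delay embedding of the two series and a fixed distance threshold:

```python
import numpy as np

def embed(x, dim=3, delay=1):
    """Time-delay embedding of a 1-D series into (n_vectors, dim) state vectors."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def cross_recurrence_plot(x, y, dim=3, delay=1, eps=0.1):
    """Binary CRP: R[i, j] = 1 when the embedded states of x and y are closer than eps."""
    X, Y = embed(x, dim, delay), embed(y, dim, delay)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (d < eps).astype(np.uint8)

rng = np.random.default_rng(3)
a, b = rng.standard_normal(300), rng.standard_normal(300)
crp = cross_recurrence_plot(a, b, eps=0.5)
print(crp.shape, crp.mean())                          # recurrence rate of the plot
```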

    Characterizing SL2S galaxy groups using the Einstein radius

    We analyzed the Einstein radius, $\theta_E$, in our sample of SL2S galaxy groups, and compared it with $R_A$ (the distance from the arcs to the center of the lens), using three different approaches: 1) the velocity dispersion obtained from weak lensing assuming a Singular Isothermal Sphere profile ($\theta_{E,I}$); 2) a strong lensing analytical method ($\theta_{E,II}$) combined with a velocity dispersion-concentration relation derived from numerical simulations designed to mimic our group sample; 3) strong lensing modeling ($\theta_{E,III}$) of eleven groups (with four new models presented in this work) using HST and CFHT images. Finally, $R_A$ was analyzed as a function of redshift $z$ to investigate possible correlations with L, N, and the richness-to-luminosity ratio (N/L). We found a correlation between $\theta_E$ and $R_A$, but with large scatter. We estimate $\theta_{E,I} = (2.2 \pm 0.9) + (0.7 \pm 0.2)\,R_A$, $\theta_{E,II} = (0.4 \pm 1.5) + (1.1 \pm 0.4)\,R_A$, and $\theta_{E,III} = (0.4 \pm 1.5) + (0.9 \pm 0.3)\,R_A$ for each method, respectively. We found weak evidence of an anti-correlation between $R_A$ and $z$, with $\log R_A = (0.58 \pm 0.06) - (0.04 \pm 0.1)\,z$, suggesting a possible evolution of the Einstein radius with $z$, as reported previously by other authors. Our results also show that $R_A$ is correlated with L and N (more luminous and richer groups have greater $R_A$), and there is a possible correlation between $R_A$ and the N/L ratio. Our analysis indicates that $R_A$ is correlated with $\theta_E$ in our sample, making $R_A$ useful to characterize properties like L and N (and possibly N/L) in galaxy groups. Additionally, we present evidence suggesting that the Einstein radius evolves with $z$. Comment: Accepted for publication in Astronomy & Astrophysics. Typos corrected.
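
    For reference, the first approach rests on the standard singular isothermal sphere relation $\theta_E = 4\pi (\sigma_v / c)^2 \, D_{ls}/D_s$; a small sketch follows, with the angular diameter distance ratio supplied as an input rather than computed from a specific cosmology.

```python
import numpy as np

C_KM_S = 299_792.458                                   # speed of light in km/s

def sis_einstein_radius_arcsec(sigma_v_km_s, d_ls_over_d_s):
    """Einstein radius of a singular isothermal sphere:
    theta_E = 4 * pi * (sigma_v / c)**2 * D_ls / D_s, converted to arcseconds."""
    theta_rad = 4.0 * np.pi * (sigma_v_km_s / C_KM_S) ** 2 * d_ls_over_d_s
    return np.degrees(theta_rad) * 3600.0

# e.g. a group-scale lens with sigma_v ~ 500 km/s and D_ls/D_s ~ 0.5
print(sis_einstein_radius_arcsec(500.0, 0.5))          # roughly a few arcseconds
```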

    Galaxy properties from J-PAS narrow-band photometry

    We study the consistency of the physical properties of galaxies retrieved from SED-fitting as a function of spectral resolution and signal-to-noise ratio (SNR). Using a selection of physically motivated star formation histories, we set up a control sample of mock galaxy spectra representing observations of the local universe in high-resolution spectroscopy, and in 56 narrow-band and 5 broad-band photometry. We fit the SEDs at these spectral resolutions and compute the corresponding stellar mass, mass- and luminosity-weighted age and metallicity, and dust extinction. We study the biases, correlations, and degeneracies affecting the retrieved parameters and explore the role of the spectral resolution and the SNR in regulating these degeneracies. We find that narrow-band photometry and spectroscopy yield similar trends in the physical properties derived, the former being considerably more precise. Using a galaxy sample from the SDSS, we compare more realistically the results obtained from high-resolution and narrow-band SEDs (synthesized from the same SDSS spectra) following the same spectral fitting procedures. We use results from the literature as a benchmark for our spectroscopic estimates and show that the prior PDFs, commonly adopted in parametric methods, may introduce biases not accounted for in a Bayesian framework. We conclude that narrow-band photometry yields the same trend in the age-metallicity relation as in the literature, provided it is affected by the same biases as spectroscopy, albeit the precision achieved with the latter is generally twice as large as with the narrow-band, at SNR values typical of the different kinds of data. Comment: 26 pages, 15 figures. Accepted for publication in MNRAS.
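
    To make the narrow-band fitting step concrete, a toy sketch (hypothetical template grid and top-hat filters, not the J-PAS pipeline): synthetic fluxes are obtained by averaging a model spectrum over each filter's wavelength range, and the best-fitting template minimises chi-square with a free flux scale.

```python
import numpy as np

def synthetic_photometry(wave, flux, filter_edges):
    """Mean flux of a spectrum through simple top-hat filters defined by (lo, hi) edges."""
    return np.array([flux[(wave >= lo) & (wave < hi)].mean() for lo, hi in filter_edges])

def best_template(obs_flux, obs_err, wave, templates, filter_edges):
    """Return the index of the template with the lowest chi-square, allowing a free scale."""
    chi2 = []
    for tmpl in templates:
        model = synthetic_photometry(wave, tmpl, filter_edges)
        scale = np.sum(obs_flux * model / obs_err**2) / np.sum(model**2 / obs_err**2)
        chi2.append(np.sum(((obs_flux - scale * model) / obs_err) ** 2))
    return int(np.argmin(chi2))

wave = np.linspace(3500.0, 9000.0, 2000)                    # toy wavelength grid (Angstrom)
edges = [(lo, lo + 100.0) for lo in np.arange(3780.0, 8980.0, 100.0)]
templates = [np.exp(-((wave - c) / 1500.0) ** 2) for c in (4500.0, 6000.0, 7500.0)]
obs_err = np.full(len(edges), 0.05)
obs = synthetic_photometry(wave, 2.0 * templates[1], edges) \
      + np.random.default_rng(5).normal(0.0, 0.05, len(edges))
print(best_template(obs, obs_err, wave, templates, edges))  # expect index 1
```

    A full analysis would draw the templates from stellar population synthesis models and propagate posterior distributions over age, metallicity, stellar mass, and extinction rather than reporting a single best chi-square template.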