    Big Data as a Technology-to-think-with for Scientific Literacy

    This research aimed to identify indications of scientific literacy resulting from a didactic and investigative interaction with the Google Trends Big Data software by first-year students at a high school in Novo Hamburgo, Southern Brazil. Both the teaching strategies and the research interpretations rest on four theoretical foundations. First, Bunge's epistemology, which provides a thorough characterization of Science that was central to our study. Second, the conceptual framework of scientific literacy of Fives et al., which makes our teaching focus precise and concise and supports one of our methodological tools: the SLA (scientific literacy assessment). Third, the "crowdledge" construct from dos Santos, which gives meaning to our study by tying the development of scientific literacy to attention to contemporary sociotechnological and epistemological phenomena. Finally, the learning principles of Papert's Constructionism inspired our educational activities. Our educational actions consisted of students, divided into two classes, investigating phenomena of their own choosing. A triangulation process integrated quantitative and qualitative methods in analyzing the assessment results. The experimental design used post-tests only, and the experimental variable was the mode of access to the world. The experimental group interacted with the world by analyzing temporal and regional plots of interest in terms or topics searched on Google. The control class carried out 'placebo' interactions with the world through on-site observations of bryophytes, fungi, and other organisms in the schoolyard. Overall, a constructionist environment based on Big Data analysis proved a richer strategy for developing scientific literacy than free schoolyard exploration.
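
    The classroom activities centered on temporal and regional plots of Google Trends search interest. As a minimal sketch of how such data can be pulled programmatically, assuming the unofficial pytrends client (the search term and parameters below are illustrative, not taken from the study):

        from pytrends.request import TrendReq

        # Connect to Google Trends; host language and timezone are illustrative.
        pytrends = TrendReq(hl="en-US", tz=180)

        # Hypothetical search term over five years, restricted to Brazil,
        # where the study took place.
        pytrends.build_payload(["photosynthesis"], timeframe="today 5-y", geo="BR")

        # Temporal plot data: weekly interest over time on a 0-100 scale.
        interest = pytrends.interest_over_time()

        # Regional plot data: interest by Brazilian state.
        by_region = pytrends.interest_by_region(resolution="REGION")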

    Business Time and New Credit Risk Models

    This paper examines a new model of credit risk measurement, the Variance Gamma-Merton model, which seems adequate for describing single default occurrence and default correlation in turbulent times. It is based on the notion of business time. Business time runs faster than calendar time when the market is very active and a lot of information arrives; it runs at a slower pace than calendar time when little information arrives. We report a calibration to US spread data, which shows the accuracy of the model at the single default level; we also compare its performance with respect to a traditional structural model at the joint default level.
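
    The notion of business time can be made concrete by running a Brownian motion on a gamma clock, which yields the Variance Gamma process on which the model builds. A minimal simulation sketch, with all parameter values illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def variance_gamma_path(T=1.0, n=252, theta=-0.1, sigma=0.2, nu=0.3):
            """Variance Gamma path: Brownian motion with drift run on a gamma clock."""
            dt = T / n
            # Business-time increments: gamma with mean dt and variance nu * dt,
            # so the clock runs faster or slower than calendar time.
            dG = rng.gamma(shape=dt / nu, scale=nu, size=n)
            # Conditionally Gaussian log-return increments given business time.
            dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n)
            return np.cumsum(dX)

        path = variance_gamma_path()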

    A Generalized Normal Mean Variance Mixture for Return Processes in Finance

    Time-changed Brownian motions are extensively applied as mathematical models for asset returns in Finance. The time change is interpreted as a switch to trade-related business time, different from calendar time. Time-changed Brownian motions can be generated by infinitely divisible normal mixtures. The standard multivariate normal mean variance mixtures assume a common mixing variable. This corresponds to a multidimensional return process with a unique change of time for all assets under examination. The economic counterpart is uniqueness of trade or business time, which is not in line with empirical evidence. In this paper we propose a new multivariate definition of normal mean-variance mixtures with a flexible dependence structure, based on the economic intuition of both a common and an idiosyncratic component of business time. We analyze both the distribution and the related process. We use the above construction to introduce a multivariate generalized hyperbolic process with generalized hyperbolic margins. We conclude with a stock market example to show the ease of calibration of the model.
    Keywords: multivariate normal mean variance mixtures, multivariate generalized hyperbolic distributions, Lévy processes, multivariate subordinators
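
    The common-plus-idiosyncratic intuition can be sketched by giving each asset a mixing variable that adds a shared gamma subordinator to an asset-specific one; returns are then conditionally Gaussian given the mix. The additive construction and all parameters below are illustrative, not the paper's exact specification:

        import numpy as np

        rng = np.random.default_rng(1)
        n_assets, n_samples = 3, 10_000

        # Common and idiosyncratic components of business time (gamma mixing variables).
        G_common = rng.gamma(shape=1.0, scale=1.0, size=n_samples)
        G_idio = rng.gamma(shape=0.5, scale=1.0, size=(n_assets, n_samples))
        G = 0.7 * G_common + G_idio            # each asset's total business time

        # Normal mean-variance mixture: returns are Gaussian conditional on G.
        theta = np.array([-0.05, 0.02, 0.00])[:, None]   # skewness parameters
        sigma = np.array([0.20, 0.15, 0.25])[:, None]    # volatilities
        Z = rng.standard_normal((n_assets, n_samples))
        returns = theta * G + sigma * np.sqrt(G) * Z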

    Copulas in finance and insurance

    Copulas provide a potentially useful modeling tool to represent the dependence structure among variables and to generate joint distributions by combining given marginal distributions. Simulations play a relevant role in finance and insurance. They are used to replicate efficient frontiers or extremal values, to price options, to estimate joint risks, and so on. Using copulas, it is easy to construct and simulate from multivariate distributions based on almost any choice of marginals and any type of dependence structure. In this paper we outline recent contributions of statistical modeling using copulas in finance and insurance. We review issues related to the notion of copulas, copula families, copula-based dynamic and static dependence structure, copulas and latent factor models, and simulation of copulas. Finally, we outline hot topics in copulas with a special focus on model selection and goodness-of-fit testing.
    Keywords: dependence structure, extremal values, copula modeling, copula review
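
    In practice, coupling arbitrary marginals through a chosen dependence structure takes only a few steps. A minimal sketch using a Gaussian copula to join an exponential and a Student-t marginal (the marginals and the correlation are illustrative):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        rho, n = 0.6, 10_000

        # Step 1: sample a bivariate normal with the target correlation.
        cov = np.array([[1.0, rho], [rho, 1.0]])
        z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

        # Step 2: map to uniforms through the standard normal CDF (the copula).
        u = stats.norm.cdf(z)

        # Step 3: push the uniforms through the inverse CDFs of any marginals.
        x = stats.expon(scale=2.0).ppf(u[:, 0])   # exponential marginal
        y = stats.t(df=4).ppf(u[:, 1])            # Student-t marginal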

    Valuation of asset and volatility derivatives using decoupled time-changed Lévy processes

    In this paper we propose a general derivative pricing framework which employs decoupled time-changed (DTC) Lévy processes to model the underlying asset of contingent claims. A DTC Lévy process is a generalized time-changed Lévy process whose continuous and pure-jump parts are allowed to follow separate random time scalings; we devise the martingale structure for a DTC Lévy-driven asset and revisit many popular models which fall under this framework. Postulating different time changes for the underlying Lévy decomposition allows us to introduce asset price models consistent with the assumption of a correlated pair of continuous and jump market activities; we study one illustrative DTC model having this property by assuming that the instantaneous activity rates follow the so-called Wishart process. The theory developed is applied to pricing not only claims that depend on the price or the volatility of an underlying asset, but also more sophisticated derivatives that pay off on the joint performance of these two financial variables, such as the target volatility option (TVO). We solve the pricing problem through a Fourier-inversion method; numerical computations validating our technique are provided.
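
    The decoupling admits a compact statement. In notation of our own choosing (not necessarily the paper's), the log-price runs the continuous and jump parts on separate stochastic clocks driven by instantaneous activity rates:

        X_t = \mu t + B_{T^{c}_t} + J_{T^{j}_t}, \qquad
        T^{c}_t = \int_0^t a^{c}_s \, ds, \qquad
        T^{j}_t = \int_0^t a^{j}_s \, ds,

    where B is a Brownian motion, J is a pure-jump Lévy process, and the activity rates a^c and a^j may be correlated (e.g. components of a Wishart process), which is what lets continuous and jump market activities move together.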

    No-Arbitrage Semi-Martingale Restrictions for Continuous-Time Volatility Models subject to Leverage Effects, Jumps and i.i.d. Noise: Theory and Testable Distributional Implications

    We develop a sequential procedure to test the adequacy of jump-diffusion models for return distributions. We rely on intraday data and nonparametric volatility measures, along with a new jump detection technique and appropriate conditional moment tests, for assessing the import of jumps and leverage effects. A novel robust-to-jumps approach is utilized to alleviate microstructure frictions for realized volatility estimation. Size and power of the procedure are explored through Monte Carlo methods. Our empirical findings support the jump-diffusive representation for S&P500 futures returns but reveal it is critical to account for leverage effects and jumps to maintain the underlying semi-martingale assumption.
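
    The key nonparametric contrast is between realized variance, which absorbs jumps, and bipower variation, which is robust to them. A minimal sketch of that comparison for one day of intraday returns (a textbook-style statistic in the spirit of the paper, not its exact sequential procedure):

        import numpy as np

        def jump_contribution(r):
            """Split one day's realized variance into continuous and jump parts.

            r: array of intraday log returns. Realized variance sums squared
            returns; bipower variation, built from products of adjacent absolute
            returns, is robust to jumps and estimates the continuous part.
            """
            rv = np.sum(r ** 2)
            bv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
            return rv, bv, max(rv - bv, 0.0) / rv   # relative jump contribution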

    "Testing Profit Rate Equalization in the U.S. Manufacturing Sector: 1947-1998"

    Long-run differentials in interindustrial profitability are relevant for several areas of theoretical and applied economics because they characterize the overall nature of competition in a capitalist economy. This paper argues that the existing empirical models of competition in the industrial organization literature suffer from serious flaws. An alternative framework, based on recent advances in the econometric modeling of the long run, is developed for estimating the size of long-run profit rate differentials. It is shown that this framework generates separate, industry-specific estimates of two potential components of long-run profit rate differentials identified in economic theory. One component, the noncompetitive differential, stems from factors that do not depend directly on the state of competition; these factors are generally characterized as risk and other premia. The other component, the competitive differential, is due to factors that depend directly on the state of competition (factors such as degree of concentration and economies of scale). Estimates provided here show that during the period under study, the group of industries with statistically insignificant competitive differentials accounted for 72 percent of manufacturing profits and 75 percent of manufacturing capital stock, which is interpreted as lending support to the theories of competition advanced by the classical economists and their modern followers.
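
    Operationally, the framework turns on whether each industry's profit-rate differential reverts to a constant in the long run. A minimal sketch of that kind of test on a simulated differential series, using an augmented Dickey-Fuller test (this illustrates the general idea only, not the paper's estimator):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)

        # Illustrative stand-in for an industry's profit-rate differential
        # (industry profit rate minus the manufacturing average), annual,
        # 1947-1998: mean-reverting around a 2-point noncompetitive premium.
        n, premium, phi = 52, 0.02, 0.6
        diff = np.empty(n)
        diff[0] = premium
        for t in range(1, n):
            diff[t] = premium + phi * (diff[t - 1] - premium) + 0.01 * rng.standard_normal()

        # A stationary differential is consistent with profit rates equalizing
        # up to a constant (noncompetitive) premium.
        stat, pvalue, *_ = adfuller(diff)
        print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")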

    Multifractal-spectral features enhance classification of anomalous diffusion

    Anomalous diffusion processes pose a unique challenge in classification and characterization. Previously (Mangalam et al., 2023, Physical Review Research 5, 023144), we established a framework for understanding anomalous diffusion using multifractal formalism. The present study delves into the potential of multifractal spectral features for effectively distinguishing anomalous diffusion trajectories from five widely used models: fractional Brownian motion, scaled Brownian motion, continuous time random walk, annealed transient time motion, and Lévy walk. To accomplish this, we generate extensive datasets comprising 10^6 trajectories from these five anomalous diffusion models and extract multiple multifractal spectra from each trajectory. Our investigation entails a thorough analysis of neural network performance, encompassing features derived from varying numbers of spectra. Furthermore, we explore the integration of multifractal spectra into traditional feature datasets, enabling us to assess their impact comprehensively. To ensure a statistically meaningful comparison, we categorize features into concept groups and train neural networks using features from each designated group. Notably, several feature groups demonstrate similar levels of accuracy, with the highest performance observed in groups utilizing moving-window characteristics and p-variation features. Multifractal spectral features, particularly those derived from three spectra involving different timescales and cutoffs, closely follow, highlighting their robust discriminatory potential. Remarkably, a neural network exclusively trained on features from a single multifractal spectrum exhibits commendable performance, surpassing other feature groups. Our findings underscore the diverse and potent efficacy of multifractal spectral features in enhancing the classification of anomalous diffusion.
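
    One way to extract a multifractal spectrum from a single trajectory is to estimate the scaling of q-order moments of its increments and Legendre-transform the resulting exponents. A compact sketch of that pipeline (the lag grid, q-range, and direct Legendre transform are illustrative choices, not necessarily the paper's):

        import numpy as np

        def multifractal_spectrum(x, qs=np.linspace(-3, 3, 13),
                                  lags=(1, 2, 4, 8, 16, 32)):
            """Estimate the multifractal spectrum f(alpha) of a 1D trajectory x.

            Assumes increments are nonzero so negative-q moments stay finite.
            """
            lags = np.asarray(lags)
            # q-order structure functions S_q(tau) = < |x(t+tau) - x(t)|^q >.
            S = np.array([[np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
                          for q in qs])
            # Scaling exponents zeta(q) from log-log fits of S_q against tau.
            zeta = np.array([np.polyfit(np.log(lags), np.log(row), 1)[0] for row in S])
            # Legendre transform: alpha = dzeta/dq, f(alpha) = q*alpha - zeta + 1.
            alpha = np.gradient(zeta, qs)
            return alpha, qs * alpha - zeta + 1.0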