
    A minimal noise trader model with realistic time series properties

    Simulations of agent-based models have shown that the stylized facts of financial markets (unit roots, fat tails, and volatility clustering) have a possible explanation in the interactions among agents. However, the complexity originating from the presence of non-linearity and interactions often limits the analytical approach to the dynamics of these models. In this paper we show that even a very simple model of a financial market with heterogeneous interacting agents is capable of reproducing realistic statistical properties of returns, in close quantitative accordance with the empirical analysis. The simplicity of the system also permits some analytical insights using concepts from statistical mechanics and physics. In our model, the traders are divided into two groups: fundamentalists and chartists, and their interactions are based on a variant of the herding mechanism introduced by Kirman [22]. The statistical analysis of our simulated data shows long-term dependence in the auto-correlations of squared and absolute returns and hyperbolic decay in the tail of the distribution of the raw returns, both with estimated decay parameters in the same range as empirical data. Theoretical analysis, however, excludes the possibility of 'true' scaling behavior because of the Markovian nature of the underlying process and the finite set of possible realized returns. The model, therefore, only mimics power law behavior. As with the phenomenological volatility models analyzed in LeBaron [25], the usual statistical tests are not able to distinguish between true and pseudo-scaling laws in the dynamics of our artificial market. --Herd Behavior, Speculative Dynamics, Fat Tails, Volatility Clustering
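
    The abstract does not give the model equations, so the following is only a minimal, hedged sketch in Python of a Kirman-type herding market with fundamentalists and chartists in the spirit described above; all parameter values, the switching probabilities, and the price-impact rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Minimal sketch of a Kirman-style herding market (illustrative parameters only).
rng = np.random.default_rng(0)

N = 100          # number of traders
eps = 0.002      # autonomous switching propensity
delta = 0.01     # herding (recruitment) propensity
p_f = 10.0       # fundamental value
p = p_f          # market price
n_c = N // 2     # number of chartists (the remainder are fundamentalists)

prices = [p]
for t in range(10_000):
    # Kirman switching: agents change type spontaneously or by meeting
    # an agent of the opposite type.
    to_chartist = (N - n_c) * (eps + delta * n_c) / N
    to_fundamentalist = n_c * (eps + delta * (N - n_c)) / N
    u = rng.random()
    if u < to_chartist:
        n_c += 1
    elif u < to_chartist + to_fundamentalist:
        n_c -= 1

    # Excess demand: fundamentalists push the price toward p_f,
    # chartists extrapolate the most recent price change.
    trend = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
    excess_demand = (N - n_c) * (p_f - p) + n_c * trend + rng.normal(0.0, 0.1)
    p += 0.001 * excess_demand
    prices.append(p)

returns = np.diff(np.log(prices))
print("excess kurtosis of returns:",
      ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3)
```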

    Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario

    A variety of methods is available to quantify uncertainties arising within the modeling of flow and transport in carbon dioxide storage, but there is a lack of thorough comparisons. Usually, raw data from such storage sites can hardly be described by theoretical statistical distributions since only very limited data is available. Hence, exact information on distribution shapes for all uncertain parameters is very rare in realistic applications. We discuss and compare four different methods tested for data-driven uncertainty quantification based on a benchmark scenario of carbon dioxide storage. In the benchmark, for which we provide data and code, carbon dioxide is injected into a saline aquifer modeled by the nonlinear capillarity-free fractional flow formulation for two incompressible fluid phases, namely carbon dioxide and brine. To cover different aspects of uncertainty quantification, we incorporate various sources of uncertainty such as uncertainty of boundary conditions, of conceptual model definitions, and of material properties. We consider recent versions of the following non-intrusive and intrusive uncertainty quantification methods: arbitrary polynomial chaos, spatially adaptive sparse grids, kernel-based greedy interpolation, and hybrid stochastic Galerkin. The performance of each approach is demonstrated by assessing the expectation value and standard deviation of the carbon dioxide saturation against a reference statistic based on Monte Carlo sampling. We compare the convergence of all methods, reporting accuracy with respect to the number of model runs and resolution. Finally, we offer suggestions about the methods' advantages and disadvantages that can guide the modeler for uncertainty quantification in carbon dioxide storage and beyond.
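
    The reference against which the four methods are compared is a plain Monte Carlo estimate of the mean and standard deviation of the carbon dioxide saturation. As a rough, hedged sketch of that baseline, the stand-in simulator `co2_saturation` and the input distributions below are illustrative placeholders, not the benchmark's actual model or data.

```python
import numpy as np

def co2_saturation(mobility, porosity, flux, n_cells=50):
    # Stand-in for the two-phase flow simulator used in the benchmark
    # (which solves a nonlinear, capillarity-free fractional-flow problem).
    # Returns a synthetic 1-D saturation profile with a sharper or flatter front.
    x = np.linspace(0.0, 1.0, n_cells)
    front = np.clip(flux * mobility / porosity, 0.05, 1.0)
    return np.clip(1.0 - x / front, 0.0, 1.0)

rng = np.random.default_rng(42)
n_samples = 1_000

samples = []
for _ in range(n_samples):
    # Uncertain inputs drawn from illustrative (not the benchmark's) distributions.
    mobility = rng.lognormal(mean=-2.0, sigma=0.3)   # relative mobility factor
    porosity = rng.uniform(0.15, 0.25)               # porosity [-]
    flux = rng.normal(1.0, 0.1)                      # boundary-condition scaling
    samples.append(co2_saturation(mobility, porosity, flux))

samples = np.asarray(samples)                   # shape: (n_samples, n_cells)
mean_saturation = samples.mean(axis=0)          # reference expectation value
std_saturation = samples.std(axis=0, ddof=1)    # reference standard deviation
print(mean_saturation[:5], std_saturation[:5])
```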

    A general theory of intertemporal decision-making and the perception of time

    Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier versus later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory, Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR), that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window; this window includes a past integration interval (over which the experienced reward rate is estimated) and the expected delay to future reward. Using this theory, we derive a mathematical expression for the subjective representation of time. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the nonlinearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.
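
    The decision rule described above lends itself to a compact sketch: an option's value is the reward rate computed over a window spanning the past integration interval plus the delay to that option's reward. The exact functional form, symbols, and numbers below are illustrative assumptions, not taken from the paper.

```python
def timerr_value(reward, delay, past_reward_rate, T_past):
    """Reward rate over the combined window: past integration interval plus
    the delay to the offered reward (a sketch of the rule summarized in the
    abstract; the exact functional form is an assumption)."""
    return (past_reward_rate * T_past + reward) / (T_past + delay)

def choose(option_a, option_b, past_reward_rate, T_past):
    """Pick the option (reward, delay) with the higher windowed reward rate."""
    va = timerr_value(*option_a, past_reward_rate, T_past)
    vb = timerr_value(*option_b, past_reward_rate, T_past)
    return ("A", round(va, 3)) if va >= vb else ("B", round(vb, 3))

# Example: smaller-sooner (2 units in 2 s) vs larger-later (5 units in 10 s).
# With a low background reward rate, a short past integration interval favors
# the sooner reward (steep discounting); a long interval favors the larger,
# later reward (shallow discounting).
print(choose((2.0, 2.0), (5.0, 10.0), past_reward_rate=0.05, T_past=2.0))   # -> A
print(choose((2.0, 2.0), (5.0, 10.0), past_reward_rate=0.05, T_past=60.0))  # -> B
```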

    A noise trader model as a generator of apparent financial power laws and long memory

    In various agent-based models the stylized facts of financial markets (unit roots, fat tails, and volatility clustering) have been shown to emerge from the interactions of agents. However, the complexity of these models often limits their analytical accessibility. In this paper we show that even a very simple model of a financial market with heterogeneous interacting agents is capable of reproducing these ubiquitous statistical properties. The simplicity of our approach permits us to derive some analytical insights using concepts from statistical mechanics. In our model, traders are divided into two groups: fundamentalists and chartists, and their interactions are based on a variant of the herding mechanism introduced by Kirman [1993]. The statistical analysis of simulated data points toward long-term dependence in the auto-correlations of squared and absolute returns and hyperbolic decay in the tail of the distribution of raw returns, both with estimated decay parameters in the same range as those of empirical data. Theoretical analysis, however, excludes the possibility of 'true' scaling behavior because of the Markovian nature of the underlying process and the boundedness of returns. The model, therefore, only mimics power law behavior. As with the phenomenological volatility models analyzed in LeBaron [2001], the usual statistical tests are not able to distinguish between true and pseudo-scaling laws in the dynamics of our artificial market. --Herd Behavior, Speculative Dynamics, Fat Tails, Volatility Clustering
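
    The hyperbolic tail decay mentioned above is quantified by a tail index estimated from the largest absolute returns. The abstract does not say which estimator the authors use; the Hill estimator below is one standard choice, shown here on synthetic heavy-tailed data rather than on the model's output.

```python
import numpy as np

def hill_tail_index(returns, k=200):
    """Hill estimator of the tail index alpha from the k largest absolute returns.
    (A standard estimator for hyperbolic tail decay; the paper's choice of
    estimator and of k is not specified in the abstract.)"""
    x = np.sort(np.abs(np.asarray(returns)))[::-1]   # descending order statistics
    tail, threshold = x[:k], x[k]
    return 1.0 / np.mean(np.log(tail) - np.log(threshold))

# Example on heavy-tailed synthetic returns: Student-t with 3 degrees of freedom,
# whose true tail index is 3.
rng = np.random.default_rng(1)
r = rng.standard_t(df=3, size=100_000)
print(hill_tail_index(r, k=500))   # should come out near 3
```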

    Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs

    Laplacian mixture models identify overlapping regions of influence in unlabeled graph and network data in a scalable and computationally efficient way, yielding useful low-dimensional representations. By combining Laplacian eigenspace and finite mixture modeling methods, they provide probabilistic or fuzzy dimensionality reductions or domain decompositions for a variety of input data types, including mixture distributions, feature vectors, and graphs or networks. Provably optimal recovery by the algorithm is shown analytically for a nontrivial class of cluster graphs. Heuristic approximations for scalable high-performance implementations are described and empirically tested. Connections to PageRank and community detection in network analysis demonstrate the wide applicability of this approach. The origins of fuzzy spectral methods, beginning with generalized heat or diffusion equations in physics, are reviewed and summarized. Comparisons to other dimensionality reduction and clustering methods for challenging unsupervised machine learning problems are also discussed.
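
    As a hedged illustration of the eigenspace-plus-mixture combination described above (not the paper's specific algorithm), one can embed a graph with the leading eigenvectors of its normalized Laplacian and fit a Gaussian mixture in that space to obtain fuzzy memberships; `scipy` and `scikit-learn` are assumed here purely for convenience.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.mixture import GaussianMixture

def laplacian_mixture(adjacency, n_components, n_eigvecs=None):
    """Embed the graph in a low-dimensional Laplacian eigenspace and fit a
    finite (Gaussian) mixture there, returning soft membership probabilities.
    A generic eigenspace-plus-mixture sketch, not the paper's exact algorithm."""
    n_eigvecs = n_eigvecs or n_components
    L = laplacian(np.asarray(adjacency, dtype=float), normed=True)
    eigvals, eigvecs = np.linalg.eigh(L)            # ascending eigenvalues
    embedding = eigvecs[:, 1:n_eigvecs + 1]         # skip the trivial eigenvector
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(embedding)
    return gmm.predict_proba(embedding)             # fuzzy node-to-region memberships

# Example: two 10-node cliques joined by a single bridging edge.
A = np.zeros((20, 20))
A[:10, :10] = 1
A[10:, 10:] = 1
A[0, 10] = A[10, 0] = 1
np.fill_diagonal(A, 0)
print(laplacian_mixture(A, n_components=2).round(2))
```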

    Source finding, parametrization and classification for the extragalactic Effelsberg-Bonn HI Survey

    Context. Source extraction for large-scale HI surveys currently involves large amounts of manual labor. For the data volumes expected from future HI surveys with upcoming facilities, this approach is no longer feasible. Aims. We describe the implementation of a fully automated source finding, parametrization, and classification pipeline for the Effelsberg-Bonn HI Survey (EBHIS). With future radio astronomical facilities in mind, we want to explore the feasibility of a completely automated approach to source extraction for large-scale HI surveys. Methods. Source finding is implemented using wavelet denoising methods, which previous studies have shown to be a powerful tool, especially in the presence of data defects. For parametrization, we automate baseline fitting, mask optimization, and other tasks based on well-established algorithms that are currently used interactively. For the classification of candidates, we implement an artificial neural network which is trained on a candidate set comprised of false positives from real data and simulated sources. Using simulated data, we perform a thorough analysis of the algorithms implemented. Results. We compare the results from our simulations to the parametrization accuracy of the HI Parkes All-Sky Survey (HIPASS). Even though HIPASS is more sensitive than EBHIS in its current state, the parametrization accuracy and classification reliability match or surpass the manual approach used for HIPASS data.
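
    As a rough sketch of the wavelet-denoising step on a single spectrum: the library choice (PyWavelets), the thresholding scheme, and the parameters below are assumptions for illustration; the EBHIS pipeline's actual implementation is not described in the abstract.

```python
import numpy as np
import pywt  # PyWavelets; an assumed stand-in, the abstract names no library

def wavelet_denoise_and_find(spectrum, wavelet="db4", sigma_clip=3.0):
    """Denoise a 1-D HI spectrum by soft-thresholding its wavelet coefficients,
    then flag channels that stand out above the residual noise.
    A generic sketch of wavelet-based source finding, not the EBHIS pipeline."""
    coeffs = pywt.wavedec(spectrum, wavelet)
    # Robust noise estimate from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(spectrum)]
    mask = denoised > sigma_clip * sigma          # candidate source channels
    return denoised, mask

# Example: a faint Gaussian line buried in unit-variance noise.
rng = np.random.default_rng(2)
x = np.arange(1024)
spectrum = rng.normal(0.0, 1.0, x.size) + 5.0 * np.exp(-0.5 * ((x - 500) / 15.0) ** 2)
denoised, mask = wavelet_denoise_and_find(spectrum)
print("candidate channels:", np.flatnonzero(mask)[:10], "...")
```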

    The Methodologies of Neuroeconomics

    We critically review the methodological practices of two research programs which are jointly called 'neuroeconomics'. We defend the first of these, termed 'neurocellular economics' (NE) by Ross (2008), from an attack on its relevance by Gul and Pesendorfer (2008) (GP). This attack arbitrarily singles out some but not all processing variables as unimportant to economics, is insensitive to the realities of empirical theory testing, and ignores the central importance to economics of 'ecological rationality' (Smith 2007). GP ironically share this last attitude with advocates of 'behavioral economics in the scanner' (BES), the other, and better known, branch of neuroeconomics. We consider grounds for skepticism about the accomplishments of this research program to date, based on its methodological individualism, its ad hoc econometrics, its tolerance for invalid reverse inference, and its inattention to the difficulties involved in extracting temporally lagged data if people's anticipation of reward causes pre-emptive blood flow.

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 204

    This bibliography lists 140 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1980.