
    Collective behaviours in the stock market -- A maximum entropy approach

    Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definitions of risk, etc.). This lack of any characteristic scale, and such elaborate behaviours, have their origin in the theory of complex systems. Several mechanisms generate scale invariance, but maximum entropy models can explain both scale invariance and collective behaviours. The structure and collective modes of financial markets are attracting increasing attention. Some agent-based models have been shown to reproduce certain stylized facts, but despite their partial success, the design of their rules remains a problem. In this work, we used a statistical inverse approach to model the structure and co-movements of financial markets; inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state, which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The data sets considered did not reveal a persistent critical state but rather oscillations between order and disorder. In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are poor candidates for modelling crashes. The analysis also suggests a genuine adaptive process, since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide clues to crash precursors and may shed light on how a shock spreads through a financial network and whether it will lead to a crash. A natural continuation of the present work would be the study of such a mechanism.
    Comment: 146 pages, PhD Thesis
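A pairwise maximum entropy model of the kind described above can be sketched in a few lines. The following is an illustrative toy (not the thesis code): it binarizes synthetic "returns" of a handful of assets into up/down signs, then fits the fields and couplings of an Ising-like model by exact gradient ascent on the log-likelihood, which is tractable here because all 2^N configurations can be enumerated. All data, sizes and learning-rate choices are invented for illustration.

```python
# Toy sketch: fit a pairwise maximum entropy (Ising-like) model to synthetic
# binarized returns by matching empirical first and second moments.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 2000                      # assets, time steps (N small so states can be enumerated)
# Correlated +-1 signs via a common factor (entirely synthetic data)
s = np.sign(rng.standard_normal((T, N)) + 0.3 * rng.standard_normal((T, 1)))
s[s == 0] = 1

states = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N configurations

def model_moments(h, J):
    """Exact <s_i> and <s_i s_j> under p(s) proportional to exp(h.s + s.J.s/2)."""
    E = states @ h + 0.5 * np.einsum("ki,ij,kj->k", states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    m = p @ states                  # magnetizations <s_i>
    C = (states.T * p) @ states     # correlations <s_i s_j>
    return m, C

m_data = s.mean(axis=0)             # empirical moments to match
C_data = s.T @ s / T

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(500):                # gradient ascent ("Boltzmann learning")
    m, C = model_moments(h, J)
    h += 0.1 * (m_data - m)
    dJ = 0.1 * (C_data - C)
    np.fill_diagonal(dJ, 0.0)       # no self-couplings
    J += dJ

m_fit, C_fit = model_moments(h, J)
print("max moment mismatch:", np.abs(m_fit - m_data).max())
```

For realistic numbers of assets, exact enumeration is impossible and one would resort to mean-field approximations or Monte Carlo sampling of the model moments instead.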

    The application of functional data analysis to force signatures in on-water single sculling

    Biomechanics as a discipline of sports science has played an important role in on-water rowing over the last 150 years. Substantial focus has been placed on understanding the kinetic variables acting around the oar-boat-rower system, and how these variables interact to change boat velocity. Of these variables, propulsive force applied at the oar has received considerable attention, and rowing instrumentation systems capable of measuring propulsive force have enabled coaches and sport scientists to assess and descriptively understand characteristics of rowing technique and performance. Propulsive force is observed through different continuous graphs (e.g. force-time graphs and force-oar angle graphs), with the shapes of these graphs often referred to as force profiles. Large variations are present between athletes in the shape characteristics of force profiles, and these differences have led to propulsive force patterns being referred to as a rower’s ‘signature’. The overarching aim of this thesis was to build a more thorough understanding of the differences in force profiles between rowers, using novel statistical approaches from the area of functional data analysis (FDA). Following a review of the literature, two FDA techniques, functional principal components analysis (fPCA) and bivariate fPCA (bfPCA), were explored as potential statistical approaches for application to force profiles. Subsequently, an experimental study applied bfPCA to the force-angle profiles of highly skilled male and female sculling rowers, demonstrating that differences in the patterns of force-angle profiles could be attributed to both rower gender and boat-side. A following experimental study controlled for gender and boat-side, and used bfPCA to explore differences between rowers relative to performance measures such as competition level and boat velocity; different force profile patterns were attributed to each measure of performance. A final experimental chapter explored whether patterns of continuous force asymmetry were associated with better rowing performance (assessed using rower competition level). This study demonstrated that international level rowers perform the skill with what appear to be deliberate and intentional asymmetries.
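The core fPCA idea behind the work above can be illustrated compactly. When curves are sampled on a common grid, fPCA reduces to principal components analysis of the discretized curves. The sketch below (not the thesis pipeline; all data synthetic) generates half-sine "force profiles" whose peak height and timing vary between rowers, then extracts the dominant modes of variation via an SVD of the centred curves.

```python
# Illustrative fPCA sketch on synthetic force-time profiles sampled on a
# shared grid: centre the curves, take an SVD, read off modes of variation.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)                  # normalized stroke time
n = 60                                          # number of rowers (synthetic)

# Profiles: half-sine stroke with rower-specific peak height and timing shift
peak = 1.0 + 0.2 * rng.standard_normal(n)
shift = 0.05 * rng.standard_normal(n)
X = peak[:, None] * np.sin(np.pi * np.clip(t[None, :] - shift[:, None], 0.0, 1.0))

Xc = X - X.mean(axis=0)                         # centre across rowers
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / (S**2).sum()                 # variance explained per component
scores = U * S                                  # per-rower fPC scores
print("variance explained by first two fPCs:", explained[:2].round(3))
```

Because only two sources of variation were simulated (amplitude and timing), the first two components capture nearly all the variance; with real force profiles, smoothing and registration steps would typically precede the decomposition.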

    Monte Carlo methods for light propagation in biological tissues

    Light propagation in turbid media is governed by the radiative transfer equation. We give a formal probabilistic representation of its solution in the setting of biological tissues and implement algorithms based on Monte Carlo methods in order to estimate the quantity of light received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis–Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang–Prahl (or Wang) method. Finally, the formal representation allows us to derive a non-linear optimization algorithm, close to Levenberg–Marquardt, that is used to estimate the scattering and absorption coefficients of the tissue from measurements.
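The basic Monte Carlo mechanism referred to above can be sketched with a toy photon random walk (this is not the paper's estimator, and the coefficients are invented): photons take exponentially distributed free paths with rate mu_t = mu_a + mu_s, deposit a fraction (1 - albedo) of their weight at each interaction ("implicit capture"), and scatter isotropically until they leave a homogeneous slab or their weight becomes negligible.

```python
# Toy photon transport in a homogeneous slab: estimate the absorbed fraction.
import numpy as np

rng = np.random.default_rng(2)
mu_a, mu_s = 0.1, 2.0            # absorption / scattering coefficients (illustrative units)
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
n_photons, slab = 5000, 2.0      # photon count, slab thickness

absorbed = 0.0
for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])      # launched normally into the slab
    weight = 1.0
    while True:
        step = -np.log(rng.random()) / mu_t    # free path ~ Exp(mu_t)
        pos = pos + step * direction
        if pos[2] < 0.0 or pos[2] > slab:      # photon escaped the slab
            break
        absorbed += (1.0 - albedo) * weight    # implicit capture at the interaction
        weight *= albedo
        if weight < 1e-4:                      # crude cutoff (Russian roulette would be unbiased)
            break
        # isotropic scattering: draw a uniform random direction
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        r = np.sqrt(1.0 - z * z)
        direction = np.array([r * np.cos(phi), r * np.sin(phi), z])

print("absorbed fraction (estimate):", absorbed / n_photons)
```

A real implementation would add anisotropic phase functions (e.g. Henyey–Greenstein), boundary reflection, and the variance reduction techniques the paper studies.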

    Review of the Australian Core Skills Framework and Digital Literacy Skills Framework and relevant assessment tools. Final report

    This is the final report of a project commissioned in 2020 by the then Department of Education, Skills and Employment to review the Australian Core Skills Framework (ACSF) and the Digital Literacy Skills Framework (DLSF), along with the tools available to support assessment using these frameworks. The review explored the development and history of the ACSF and DLSF and their use and application in the Australian context, and examined features of selected international adult skills frameworks and curricula. Through this desktop research and a range of consultation activities with stakeholders, the Australian Council for Educational Research (ACER) investigated current use of the ACSF and DLSF and identified issues that indicate a need for framework reform or change. This report documents the findings from the review and proposes recommendations to better position the frameworks to support agile responses to swiftly changing skill needs. The first main recommendation is to consider the Digital Capability Framework (when available) as a replacement for the DLSF. The second is to continue to maintain the ACSF as the pre-eminent Australian framework for language, literacy and numeracy (LLN).

    Bayesian Multi-Model Frameworks - Properly Addressing Conceptual Uncertainty in Applied Modelling

    We use models to understand or predict a system. Often, there are multiple plausible but competing model concepts. Modelling is therefore associated with conceptual uncertainty, i.e., the question of how to properly handle such model alternatives. For mathematical models, it is possible to quantify their plausibility based on data and to rate them accordingly. Bayesian probability calculus offers several formal multi-model frameworks to rate the models in a finite set and to quantify their conceptual uncertainty as model weights. These frameworks are Bayesian model selection and averaging (BMS/BMA), Pseudo-BMS/BMA and Bayesian Stacking. The goal of this dissertation is to facilitate the proper use of these Bayesian multi-model frameworks. They follow different principles in rating models, which is why the derived model weights have to be interpreted differently, too. These principles always concern the model setting, i.e., how the models in the set relate to one another and to the true model of the system that generated the observed data. This relation is formalized in model scores that are used for model weighting within each framework. The scores represent framework-specific compromises between the ability of a model to fit the data and the model complexity required to do so. First, the scores are therefore investigated systematically with regard to their respective treatment of model complexity and placed within a newly developed classification scheme. This shows that BMS/BMA always seeks to identify the true model in the set, that Pseudo-BMS/BMA searches for the model with the largest predictive power when none of the models is the true one, and that, under the same condition, Bayesian Stacking seeks reliability in prediction by combining the predictive distributions of multiple models.
    An application example with numerical models illustrates these behaviours and demonstrates which misinterpretations of model weights can arise if a framework is applied to a model setting it does not suit. With regard to applied modelling, first, a new setting is proposed that makes it possible to identify a ``quasi-true'' model in a set. Second, Bayesian Bootstrapping is employed to account for the fact that the rating of predictive capability is based on only limited data. To ensure that the Bayesian multi-model frameworks are employed properly and in a goal-oriented way, a guideline is set up. Starting from a clearly defined modelling goal and the allocation of the available models to the respective setting, it leads to the suitable multi-model framework. Besides the three investigated frameworks, this guideline contains an additional one that makes it possible to identify a (quasi-)true model if it is composed of a linear combination of the model alternatives in the set. The insights gained enable a broad range of users in science and practice to properly employ Bayesian multi-model frameworks in order to quantify and handle conceptual uncertainty. Thus, maximum reliability in system understanding and prediction with multiple models can be achieved. Further, the insights pave the way for systematic model development and improvement.
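The mechanics of BMS/BMA weighting described above are simple to state: posterior model probabilities are proportional to each model's marginal likelihood times its prior probability, and the BMA predictive is the weight-mixture of the models' predictives. A minimal sketch (not the dissertation's code; all numbers invented):

```python
# Hedged sketch: BMS/BMA model weights from log marginal likelihoods, and a
# BMA predictive mean as the weighted mixture of per-model predictive means.
import numpy as np

def bma_weights(log_ml, log_prior=None):
    """Posterior model probabilities p(M_k | data) via a numerically stable softmax."""
    log_ml = np.asarray(log_ml, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_ml)      # uniform prior over the model set
    z = log_ml + log_prior
    z = z - z.max()                            # subtract max to avoid overflow
    w = np.exp(z)
    return w / w.sum()

# Invented log marginal likelihoods for three competing model concepts
log_ml = [-104.2, -101.7, -110.9]
w = bma_weights(log_ml)
print("model weights:", w.round(3))

# BMA predictive mean = weighted mixture of the models' predictive means (invented)
pred_means = np.array([3.1, 2.8, 4.0])
print("BMA predictive mean:", float(w @ pred_means))
```

Note how strongly the weights concentrate: a difference of a few log-units in marginal likelihood already pushes almost all weight onto one model, which is why the choice of framework (and its implied model setting) matters for interpretation.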

    Theory and Application of Dynamic Spatial Time Series Models

    Stochastic economic processes are often characterized by dynamic interactions between variables that are dependent in both space and time. Analyzing these processes raises a number of questions about the econometric methods used that are both practically and theoretically interesting. This work studies econometric approaches to analyze spatial data that evolves dynamically over time. The book provides a background on least squares and maximum likelihood estimators, and discusses some of the limits of basic econometric theory. It then discusses the importance of addressing spatial heterogeneity in policies. The next chapters cover parametric modeling of linear and nonlinear spatial time series, non-parametric modeling of nonlinearities in panel data, modeling of multiple spatial time series variables that exhibit long and short memory, and probabilistic causality in spatial time series settings
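The kind of process the book studies can be sketched with a minimal dynamic spatial lag model, y_t = rho*W*y_t + gamma*y_{t-1} + eps_t, on an invented ring of regions. This is illustrative only (not the book's code): it simulates from the reduced form and then fits the coefficients by ordinary least squares, which is simple but inconsistent here because the spatial lag W*y_t is endogenous; a real analysis would use maximum likelihood or IV/GMM, as the book discusses.

```python
# Illustrative sketch: simulate and (crudely) estimate a dynamic spatial lag
# process on a ring of regions with a row-standardized contiguity matrix W.
import numpy as np

rng = np.random.default_rng(3)
n, T = 30, 400                   # regions, time steps (invented)
rho, gamma = 0.4, 0.5            # true spatial and temporal lag coefficients

# Row-standardized contiguity matrix: each region's two ring neighbours
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

# Reduced form: y_t = (I - rho*W)^{-1} (gamma*y_{t-1} + eps_t)
A_inv = np.linalg.inv(np.eye(n) - rho * W)
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = A_inv @ (gamma * Y[t - 1] + rng.standard_normal(n))

# Stack regressors [W y_t, y_{t-1}] across time and fit by least squares.
# Caution: OLS is biased/inconsistent for rho because W y_t is endogenous.
y = Y[1:].ravel()
X = np.column_stack([(Y[1:] @ W.T).ravel(), Y[:-1].ravel()])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS estimates of (rho, gamma):", coef.round(3))
```

The gap between these naive estimates and the true (0.4, 0.5) is one concrete motivation for the likelihood-based estimators whose properties the book develops.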