
    The Relationship Between the Random Walk of the Returns of Financial Market Indices and Market Efficiency: an Analytical Study of the Indicators of a Sample of Arab Financial Markets

    Purpose: The aim of this article is to test, through a sample of Arab financial markets (Iraq, Kuwait, Dubai), whether the returns of the markets' stock indices follow a random walk.   Theoretical framework: Financial market indices and market efficiency were treated as a complex, multi-tiered system, and the theory of capital market functioning was employed in the study.   Design/methodology/approach: At the weak level of efficiency, the research examined the returns of the daily market indices over the period from January 5, 2021 to December 1, 2021.   Findings: Three tests were used: a test of the normal distribution of the studied observations (the Kolmogorov-Smirnov test); a stationarity test of the time series, known as the unit root test, using the augmented Dickey-Fuller test; and a serial autocorrelation test (Q-statistic), all as part of the financial market efficiency test, which bears on whether the conscious investor can benefit from achieving extraordinary returns in those markets.   Research, Practical & Social implications: We suggest a future research agenda and highlight the contributions made to executives and financial markets.   Originality/value: The research concluded that the random walk hypothesis was accepted, that the stock indices reflect all historical information in the researched markets, and hence that the studied markets are efficient at the weak level.

    Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio

    This paper reviews the most common situations where one or more regularity conditions which underlie classical likelihood-based parametric inference fail. We identify three main classes of problems: boundary problems, indeterminate parameter problems -- which include non-identifiable parameters and singular information matrices -- and change-point problems. The review focuses on the large-sample properties of the likelihood ratio statistic. We emphasize analytical solutions and acknowledge software implementations where available. We furthermore give summary insight into the possible tools for deriving the key results. Other approaches to hypothesis testing and connections to estimation are listed in the annotated bibliography of the Supplementary Material.
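As a concrete instance of a boundary problem (a standard textbook example, not taken from the paper): testing H0: mu = 0 against H1: mu >= 0 for N(mu, 1) data places the null on the boundary of the parameter space, so the likelihood ratio statistic follows the chi-bar-square mixture 0.5*chi2_0 + 0.5*chi2_1 rather than chi2_1. A short simulation makes this visible.

```python
# Boundary problem: LR statistic for H0: mu = 0 vs H1: mu >= 0 in N(mu, 1).
# The constrained MLE is max(xbar, 0), so half the time (xbar < 0) the LR
# statistic is exactly 0, producing the mixture 0.5*chi2_0 + 0.5*chi2_1.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 20000
xbar = rng.normal(0.0, 1.0 / np.sqrt(n), reps)   # sample means under H0
mle = np.maximum(xbar, 0.0)                      # MLE clipped at the boundary
lr = n * mle**2                                  # likelihood ratio statistic

prob_zero = np.mean(lr == 0.0)   # point mass at 0, approximately 0.5
tail = np.mean(lr > 2.706)       # = 0.5 * P(chi2_1 > 2.706) ~ 0.5 * 0.10
```

Using the naive chi2_1 critical value here would halve the actual size of the test, which is exactly the kind of failure the review catalogues.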

    Corporate Social Responsibility: the institutionalization of ESG

    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance, as it relates to industries reliant on technological innovation, is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand essentially as a standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their nonreporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this contributes to the literature by filling gaps in the nature of the impact that ESG has on firm performance, particularly from a management perspective.

    Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics

    Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts. In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited. In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R^2 in least squares regression. In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
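The pool-adjacent-violators (PAV) algorithm behind the isotonic-regression estimators can be sketched in a few lines of Python; this is a generic illustration on synthetic forecasts, not the thesis code.

```python
# Pool-adjacent-violators: least-squares isotonic fit of binary outcomes on
# sorted forecast probabilities; the fitted values are recalibrated forecasts
# that trace out a monotone reliability curve.
import numpy as np

def pav(y):
    """Isotonic (non-decreasing) least-squares fit to the sequence y."""
    means, weights, sizes = [], [], []
    for v in y:
        means.append(float(v)); weights.append(1.0); sizes.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(means) > 1 and means[-2] > means[-1]:
            m, w, s = means.pop(), weights.pop(), sizes.pop()
            means[-1] = (weights[-1] * means[-1] + w * m) / (weights[-1] + w)
            weights[-1] += w
            sizes[-1] += s
    return np.repeat(means, sizes)

rng = np.random.default_rng(1)
p = np.sort(rng.uniform(0, 1, 1000))                   # forecast probabilities
y = (rng.uniform(0, 1, 1000) < p**1.3).astype(float)   # miscalibrated outcomes

p_cal = pav(y)                              # recalibrated (monotone) forecasts
gap = np.mean((p - p_cal) ** 2)             # mean squared recalibration gap
```

Plotting p_cal against p gives the empirical counterpart of a reliability diagram: deviations from the diagonal indicate miscalibration.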

    Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond

    This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model need to reflect the true proportions present in the set to which those probabilities are assigned; otherwise, the model is useless in practice. When this holds, we say that a model is perfectly calibrated. In this thesis three ways are explored to provide more calibrated models. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques. A cost function is introduced that resolves this decalibration, taking as a starting point ideas derived from decision making with Bayes' rule. Second, it is shown how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as a prior distribution in a Bayesian inference problem. Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582

    Development of in-vitro in-silico technologies for modelling and analysis of haematological malignancies

    Worldwide, haematological malignancies are responsible for roughly 6% of all cancer-related deaths. Leukaemias are one of the most severe types of cancer, as only about 40% of patients have an overall survival of 10 years or more. Myelodysplastic Syndrome (MDS), a pre-leukaemic condition, is a blood disorder characterized by the presence of dysplastic, irregular, immature cells, or blasts, in the peripheral blood (PB) and in the bone marrow (BM), as well as multi-lineage cytopenias. We have created a detailed, lineage-specific, high-fidelity in-silico erythroid model that incorporates known biological stimuli (cytokines and hormones) and a competing diseased haematopoietic population, correctly capturing crucial biological checkpoints (EPO-dependent CFU-E differentiation) and replicating the in-vivo erythroid differentiation dynamics. In parallel, we have also proposed a long-term, cytokine-free 3D cell culture system for primary MDS cells, which was first optimized using easily accessible healthy controls. This system enabled long-term (24-day) maintenance in culture with high (>75%) cell viability, promoting spontaneous expansion of erythroid phenotypes (CD71+/CD235a+) without the addition of any exogenous cytokines. Lastly, we have proposed a novel in-vitro in-silico framework using GC-MS metabolomics for the metabolic profiling of BM and PB plasma, aiming not only to discriminate between haematological conditions but also to sub-classify MDS patients, potentially based on candidate biomarkers. Unsupervised multivariate statistical analysis showed clear intra- and inter-disease separation of samples from 5 distinct haematological malignancies, demonstrating the potential of this approach for disease characterization. The work herein presented paves the way for the development of in-vitro in-silico technologies to better characterize, diagnose, model and target haematological malignancies such as MDS and AML.

    Three Essays on Housing, Credit and Uncertainty

    This thesis comprises three essays in macroeconomics. The key aim of our work is to quantify the link between housing, credit and uncertainty. In the first chapter we investigate the propagation mechanism of a temporary uncertainty shock for the UK. We adopt a factor augmented VAR model which facilitates an examination of variables which have not been included in previous studies. Our empirical analysis establishes that while the ‘traditional’ channels generally hold, there are asymmetric responses across different sectors. For example, precautionary behaviour implies that agents cut back on consumption and increase saving in order to mitigate the risks associated with uncertainty. However, decomposing consumption, we provide evidence that for food and fuel markets the impact of an increase in uncertainty is close to zero. This follows because the expected fall in demand does not materialize when the consumption good is a necessity. In terms of housing and credit we propose a new housing uncertainty channel which is closely linked to growth option theory. The idea is that a second moment uncertainty shock extends the tails of the distributions and thus increases the potential payoffs. This in turn leads to an increase in investment. For those who are able to access credit, we capture an increase in housing investment and a corresponding expansion in mortgage credit which contributes to a reduction in the negative impacts of uncertainty shocks. The second paper extends the discussion in Chapter 1 by examining the transmission of uncertainty shocks in the US. Specifically, we quantify the linear and non-linear impacts of uncertainty. For the linear analysis, we estimate a proxy SVAR using narrative identification, net of first moment shocks, and provide supporting evidence of the housing uncertainty channel. The interaction between housing and credit is shown to be crucial, reinforcing the findings we present in Chapter 1.
The intuition for our non-linear analysis builds upon the idea that once uncertainty has reached a certain level, any additional increases in uncertainty are unlikely to have any impact on macroeconomic aggregates. In order to test this conjecture, we propose an instrument to identify uncertainty shocks, which is constructed by isolating the variation in the price of gold around events associated with uncertainty. We argue that the change in the price of gold accurately represents uncertainty, because it is perceived as a safe haven asset. When faced with the additional risk associated with uncertainty, agents invest in gold. This reflects a flight to safety. We adopt a threshold VAR model which isolates responses dependent on regimes of high and low uncertainty. We show that in a low uncertainty regime, uncertainty propagates similarly to the linear case. In contrast, there is a clear distinction in a high uncertainty regime, driven by impatient behaviour. In our final chapter, we propose a DSGE model which is consistent with the empirical evidence we provide in the previous chapters. We choose to order the chapters in such a way that we first establish the empirical facts of uncertainty shocks and then use these to inform our theoretical model. The key empirical takeaway following a shock to uncertainty is a co-movement between consumption and investment. We demonstrate that a vanilla housing real business cycle model is not consistent with these empirical facts. In order to match the theoretical model to the empirics, we extend the baseline model by including both banks and financial frictions. We document first a credit channel which limits access to external funds for the credit-dependent sector of the economy. Second, we find a housing demand channel, which leads to tighter constraints for households and entrepreneurs and lowers the return on capital. Together, both channels amplify precautionary saving for household borrowers.
The credit channel creates a real option channel for entrepreneurs, while the housing demand channel impacts household savers by amplifying their reduction in investment. In the absence of credit constraints, the housing uncertainty channel dominates behaviour. However, this channel is reversed when agents face difficulty in accessing credit, consistent with Chapter 1.

    On the Education of a Psychiatrist: Notes from the Field

    The overarching borrowed question that frames the work of this PhD asks, "What does an education in psychiatry do to a psychiatrist?" Early in my practice of child and adolescent psychiatry, the "know how" in the custodianship of care neither readily nor easily translated into "show how," resulting in a pedagogical conundrum that belatedly registered as uncomfortable emotional symptoms about my education. This nidus of professional confusion and uncertainty creates the context for my inquiry into the complexities and dilemmas of contemporary matters of medical education, specifically as they pertain to my identity as a psychiatrist. To probe these queries, three non-traditional, blended methodologies are relied upon. John Forrester's "thinking in cases" is utilized in reading memoirs and critical histories in psychiatry, such that the thesis can be read as a case of many educational cases. I stay close to the reading of Oliver Sacks' memoir, whose work in neurology also grapples with questions of the mind; an idea which becomes a leitmotif in my own autoethnographic reflections for re-constructing my education in psychiatry and its potential beginnings as a trainee and educator in both Canada and Uganda. Weaving in and out of historical observations made by Foucault about psychiatry and linking them to Sacks' recall of numerous medical institutional encounters, I tackle the problem of matricide in an educational arena wary of newness, and how this deadly curriculum can be generative in its intent. Through attempts at engaging a decolonizing discourse about my experiences as a clinician educator in Uganda, I describe the concept of an educational void: how it was ruthlessly encountered as a situational dilemma, yet underwent a transformation in thought to be understood as a survival tactic.
Psychoanalytic orientations are heavily leaned upon in my interpretations, highlighting the emotional logic inherent in the transference sites constituting the human work of medical practice and education. Broad themes emerge focusing on history, place, gender, and positioning of the body as educational markers speaking to a different kind of experiential pedagogy predicated on somatic revelations to make the mind intelligible in its relevance to the temporality of education. I arrive at the fault lines of education, difficult knowledge, and the uncertainties, including the frailty of my own self as a resource for the mind, that form educational myths needed to tackle obstacles to learning. Through this process, a personal and professional awakening occurs.

    How to Be a God

    When it comes to questions concerning the nature of Reality, Philosophers and Theologians have the answers. Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong. Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice. That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer. The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves? How should we be gods?