
    Holographic reconstruction of spacetime and renormalization in the AdS/CFT correspondence

    We develop a systematic method for renormalizing the AdS/CFT prescription for computing correlation functions. This involves regularizing the bulk on-shell supergravity action in a covariant way, computing all divergences, adding counterterms to cancel them, and then removing the regulator. We explicitly work out the case of pure gravity up to six dimensions and of gravity coupled to scalars. The method can also be viewed as providing a holographic reconstruction of the bulk spacetime metric, and of bulk fields on this spacetime, out of conformal field theory data. Knowing which sources are turned on is sufficient to obtain an asymptotic expansion of the bulk metric and of the bulk fields near the boundary, to high enough order that all infrared divergences of the on-shell action are obtained. To continue the holographic reconstruction of the bulk fields one needs new CFT data: the expectation value of the dual operator. In particular, to obtain the bulk metric one needs to know the expectation value of the stress-energy tensor of the boundary theory. We provide completely explicit formulae for the holographic stress-energy tensors up to six dimensions. We show that both the gravitational and matter conformal anomalies of the boundary theory are correctly reproduced. We also obtain the conformal transformation properties of the boundary stress-energy tensors.
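    For orientation, the structure described above can be written in the standard near-boundary (Fefferman-Graham) form; the following is a schematic sketch with the AdS radius set to one, not the paper's exact formulae.

        % Near-boundary (Fefferman-Graham) form of the bulk metric,
        % AdS radius set to one; d is the boundary dimension.
        \[
          ds^2 = \frac{d\rho^2}{4\rho^2}
                 + \frac{1}{\rho}\, g_{ij}(x,\rho)\, dx^i dx^j ,
        \]
        \[
          g(x,\rho) = g_{(0)} + \rho\, g_{(2)} + \cdots
                      + \rho^{d/2}\bigl(g_{(d)} + h_{(d)}\log\rho\bigr) + \cdots
        \]
        % The source g_{(0)} (the boundary metric) fixes all coefficients
        % below order d; the log term appears only for even d. The
        % normalizable mode g_{(d)} is the extra CFT datum, entering the
        % one-point function schematically as
        \[
          \langle T_{ij} \rangle = \frac{d}{16\pi G_N}\, g_{(d)ij}
                                   + X_{ij}\bigl[g_{(n)},\, n<d\bigr],
        \]
        % where X_{ij} is a local functional of the lower-order
        % coefficients (it carries the conformal anomaly in even d).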

    Causal mediation analysis with multiple mediators

    In diverse fields of empirical research, including many in the biological sciences, attempts are made to decompose the effect of an exposure on an outcome into its effects via a number of different pathways. For example, we may wish to separate the effect of heavy alcohol consumption on systolic blood pressure (SBP) into effects via body mass index (BMI), via gamma-glutamyl transpeptidase (GGT), and via other pathways. Much progress has been made, mainly due to contributions from the field of causal inference, in understanding the precise nature of statistical estimands that capture such intuitive effects, the assumptions under which they can be identified, and statistical methods for estimating them. These contributions have focused almost entirely on settings with a single mediator, or a set of mediators considered en bloc; in many applications, however, researchers attempt a much more ambitious decomposition into numerous path-specific effects through many mediators. In this article, we give counterfactual definitions of such path-specific estimands in settings with multiple mediators, when earlier mediators may affect later ones, showing that there are many ways in which the decomposition can be done. We discuss the strong assumptions under which the effects are identified, suggesting a sensitivity analysis approach when a particular subset of the assumptions cannot be justified. These ideas are illustrated using data on alcohol consumption, SBP, BMI, and GGT from the Izhevsk Family Study. We aim to bridge the gap from "single mediator theory" to "multiple mediator practice," highlighting the ambitious nature of this endeavor and giving practical suggestions on how to proceed.
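    A minimal sketch of how such nested counterfactual (path-specific) quantities can be estimated by Monte Carlo g-computation, using hypothetical variable names, simple linear working models, and two ordered mediators (exposure A, mediators M1 then M2, outcome Y); it illustrates the general idea only, not the paper's estimators, and it relies on the strong identification assumptions the abstract warns about.

        # Toy g-computation for path-specific effects with two ordered
        # mediators (A -> M1 -> M2 -> Y). Data and coefficients are
        # hypothetical, purely for illustration.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 20_000

        A = rng.binomial(1, 0.5, n).astype(float)                # exposure
        M1 = 0.8 * A + rng.normal(size=n)                        # e.g. BMI
        M2 = 0.5 * A + 0.6 * M1 + rng.normal(size=n)             # e.g. GGT
        Y = 0.3 * A + 0.4 * M1 + 0.7 * M2 + rng.normal(size=n)   # e.g. SBP

        # Working models for each mediator and the outcome.
        f1 = LinearRegression().fit(A[:, None], M1)
        f2 = LinearRegression().fit(np.column_stack([A, M1]), M2)
        fy = LinearRegression().fit(np.column_stack([A, M1, M2]), Y)
        sd1 = np.std(M1 - f1.predict(A[:, None]))
        sd2 = np.std(M2 - f2.predict(np.column_stack([A, M1])))

        def nested_mean(a_y, a_m1, a_m2, draws=500_000):
            """Monte Carlo E[ Y(a_y, M1(a_m1), M2(a_m2, M1(a_m1))) ]."""
            m1 = (f1.predict(np.full((draws, 1), float(a_m1)))
                  + sd1 * rng.normal(size=draws))
            m2 = (f2.predict(np.column_stack([np.full(draws, float(a_m2)), m1]))
                  + sd2 * rng.normal(size=draws))
            return fy.predict(
                np.column_stack([np.full(draws, float(a_y)), m1, m2])).mean()

        # One of the many possible decompositions of the total effect:
        total  = nested_mean(1, 1, 1) - nested_mean(0, 0, 0)
        direct = nested_mean(1, 0, 0) - nested_mean(0, 0, 0)  # A -> Y only
        via_m1 = nested_mean(1, 1, 0) - nested_mean(1, 0, 0)  # paths through M1
        via_m2 = nested_mean(1, 1, 1) - nested_mean(1, 1, 0)  # A -> M2 -> Y
        print(total, direct, via_m1, via_m2)  # the three pieces sum to total

    Finer splits (e.g. separating A -> M1 -> Y from A -> M1 -> M2 -> Y) require additional exposure indices in the nested counterfactual, which is one source of the multiplicity of decompositions the abstract mentions.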


    Forecasting Player Behavioral Data and Simulating in-Game Events

    Understanding player behavior is fundamental in game data science. Video games evolve as players interact with them, so being able to foresee player experience helps ensure successful game development. In particular, game developers need to evaluate the impact of in-game events beforehand, and simulation optimization of these events is crucial to increasing player engagement and maximizing monetization. We present an experimental analysis of several methods for forecasting game-related variables, with two main aims: to obtain accurate predictions of in-app purchases and playtime in an operational production environment, and to perform simulations of in-game events in order to maximize sales and playtime. Our ultimate purpose is to take a step towards the data-driven development of games. The results suggest that, even though traditional approaches such as ARIMA still perform better, the outcomes of state-of-the-art techniques like deep learning are promising. Deep learning emerges as a well-suited general model that could be used to forecast a variety of time series with different dynamic behaviors.
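    As a minimal sketch of the kind of ARIMA baseline the abstract refers to, the following fits an ARIMA model to a synthetic, hypothetical daily revenue series and produces a short-horizon forecast; the paper's actual pipeline, features, and model orders are not specified here.

        # ARIMA baseline for a daily in-app purchase series. The data are
        # synthetic and the (p, d, q) order is illustrative; in practice
        # the order would be chosen by model selection (e.g. AIC).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(42)

        # Synthetic daily revenue: trend + weekly seasonality + noise.
        days = pd.date_range("2024-01-01", periods=180, freq="D")
        t = np.arange(len(days))
        revenue = (100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 7)
                   + rng.normal(0, 5, len(days)))
        series = pd.Series(revenue, index=days)

        train, test = series[:-14], series[-14:]      # hold out two weeks

        model = ARIMA(train, order=(2, 1, 2)).fit()   # illustrative order
        forecast = model.forecast(steps=14)

        mae = np.mean(np.abs(forecast.values - test.values))
        print(f"14-day holdout MAE: {mae:.2f}")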

    Antifungal activity of bacterial strains from the rhizosphere of Stachytarpheta crassifolia

    This study evaluated the antifungal action of biomolecules produced by the secondary metabolism of bacterial strains found in the rhizosphere of semi-arid plants against the human pathogen Candida albicans. Crude extracts were obtained using ethyl acetate as an organic solvent, and bioactivity was assessed with a bioautography technique. The results showed that two bacterial strains, Alcaligenes faecalis MRbS12 and Bacillus cereus MRbS26, produced compounds with antifungal bioactivity. The largest inhibition zones for both compounds were located on spots with Rf values below 0.500, indicating that the molecules probably have polar characteristics. These results suggest that microorganisms found in the environment could have bioprospecting potential.
    Key words: biomolecules, bioautography, antifungal activity, Alcaligenes faecalis, Bacillus cereus, Candida albicans
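    For readers unfamiliar with the Rf values mentioned above, the following is a tiny sketch of the standard thin-layer chromatography definition; the distances are hypothetical, not taken from the study.

        # Retention factor (Rf) in TLC/bioautography:
        # Rf = distance travelled by the compound
        #      / distance travelled by the solvent front.
        def retention_factor(spot_distance_cm: float,
                             solvent_front_cm: float) -> float:
            """Return the Rf value for a TLC spot."""
            return spot_distance_cm / solvent_front_cm

        # An Rf below 0.500 (as reported for the active spots) means the
        # compound moved less than halfway to the front; on a polar
        # stationary phase such as silica, that points to a polar molecule.
        print(retention_factor(spot_distance_cm=2.1, solvent_front_cm=8.0))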

    Financial Distress Prediction with Stacking Ensemble Learning

    Previous studies have used financial ratios extensively to build predictive models of financial distress. The Altman ratios are the most often used for prediction, especially in academic studies. However, the Altman ratios depend heavily on the validity of the data in financial statements, so other variables are needed to assess the possibility that the statements have been manipulated. None of the previous studies combined the five Altman ratios with the Beneish M-Score. We use stacking ensemble learning to classify distressed companies and perform a comprehensive analysis. This insight helps the investing public make lending decisions by combining all of the financial indicator information and assessing it carefully based on long-term and short-term conditions and possible manipulation of financial statements.
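    A minimal sketch of the stacking setup the abstract describes, assuming stand-in data for the five Altman ratio components plus the Beneish M-Score and generic scikit-learn base learners; the paper's actual learners, features, and data are not given here.

        # Stacking ensemble for financial distress classification.
        # Features stand in for 5 Altman ratio components + 1 Beneish
        # M-Score; base learners and meta-learner are illustrative.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=1_000, n_features=6,
                                   n_informative=5, n_redundant=1,
                                   random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)

        stack = StackingClassifier(
            estimators=[
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
            ],
            final_estimator=LogisticRegression(),  # meta-learner
            cv=5,   # out-of-fold base predictions to avoid leakage
        )
        stack.fit(X_train, y_train)
        print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")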

    Electroproduction of nucleon resonances

    The unitary isobar model MAID has been extended and used for a partial-wave analysis of pion photo- and electroproduction in the resonance region W < 2 GeV. Older data from the world data base and more recent experimental results from Mainz, Bates, Bonn, and JLab for Q^2 up to 4.0 (GeV/c)^2 have been analyzed, and the Q^2 dependence of the helicity amplitudes has been extracted for a series of four-star resonances. We compare single-Q^2 analyses with a superglobal fit in a new parametrization, Maid2003, together with predictions of the hypercentral constituent quark model. As a result we find that the helicity amplitudes and transition form factors of constituent quark models should be compared with the analysis of bare resonances, where the pion cloud contributions have been subtracted.
    Comment: 6 pages, LaTeX, including 5 figures; invited talk at the ICTP 4th International Conference on Perspectives in Hadronic Physics, Trieste, Italy, 12-16 May 200
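    For readers outside the field, the quantities being extracted can be written schematically as follows; normalization conventions are deliberately omitted, so this is a sketch of the structure only.

        % The gamma* N -> N* transition is characterized by two transverse
        % helicity amplitudes and one longitudinal (scalar) amplitude,
        % schematically (conventions omitted):
        \[
          A_{1/2},\; A_{3/2} \;\propto\;
            \langle N^*,\, J_z{=}\lambda \,\vert\, \epsilon^{(+)}_\mu J^\mu
            \,\vert\, N,\, J_z{=}\lambda{-}1 \rangle,
          \qquad \lambda = \tfrac12,\, \tfrac32,
        \]
        \[
          S_{1/2} \;\propto\;
            \langle N^*,\, J_z{=}\tfrac12 \,\vert\, \epsilon^{(0)}_\mu J^\mu
            \,\vert\, N,\, J_z{=}\tfrac12 \rangle .
        \]
        % The comparison advocated above amounts to splitting the extracted
        % (dressed) amplitudes as
        \[
          A^{\text{dressed}}_\lambda(Q^2)
            = A^{\text{bare}}_\lambda(Q^2) + A^{\pi\text{-cloud}}_\lambda(Q^2),
        \]
        % and confronting quark-model predictions with the bare piece.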

    P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data

    The P-splines of Eilers and Marx (1996) combine a B-spline basis with a discrete quadratic penalty on the basis coefficients to produce a reduced-rank, spline-like smoother. P-splines have three properties that make them very popular as reduced-rank smoothers: i) the basis and the penalty are sparse, enabling efficient computation, especially for Bayesian stochastic simulation; ii) it is possible to flexibly 'mix and match' the order of the B-spline basis and the penalty, rather than the order of the penalty controlling the order of the basis as in spline smoothing; iii) it is very easy to set up the B-spline basis functions and penalties. The discrete penalties are somewhat less interpretable in terms of function shape than the traditional derivative-based spline penalties, but they tend towards penalties proportional to traditional spline penalties in the limit of large basis size. However, part of the point of P-splines is not to use a large basis size. In addition, the spline basis functions arise from solving functional optimization problems involving derivative-based penalties, so moving to discrete penalties for smoothing may not always be desirable. The purpose of this note is to point out that the three properties of basis-penalty sparsity, mix-and-match penalization, and ease of setup are readily obtainable with B-splines subject to derivative-based penalization. The penalty setup typically requires a few lines of code, rather than the two lines typically required for P-splines, but this one-off disadvantage seems to be the only one associated with using derivative-based penalties. As an example application, it is shown how basis-penalty sparsity enables efficient computation with tensor product smoothers of scattered data.
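    A rough sketch of the construction described above: build a cubic B-spline design matrix, assemble the derivative-based penalty S with entries S[j,l] = integral of B_j''(x) B_l''(x) dx by quadrature, and solve the penalized least-squares problem. Knot placement, smoothing-parameter selection, sparse storage, and the tensor-product extension are all simplified away here.

        # B-spline smoother with a second-derivative penalty, fitted by
        # penalized least squares: minimize ||y - B beta||^2
        #                                   + lam * beta' S beta.
        import numpy as np
        from scipy.interpolate import BSpline

        a, b, k = 0.0, 1.0, 3                       # domain; cubic splines
        t = np.r_[[a] * k, np.linspace(a, b, 12), [b] * k]  # clamped knots
        n_basis = len(t) - k - 1

        def basis(x, deriv=0):
            """All n_basis B-splines (or their deriv-th derivative) at x."""
            B = np.empty((len(x), n_basis))
            for j in range(n_basis):
                c = np.zeros(n_basis)
                c[j] = 1.0
                bj = BSpline(t, c, k)
                B[:, j] = (bj.derivative(deriv) if deriv else bj)(x)
            return B

        # Penalty S[j,l] = int B_j''(x) B_l''(x) dx via fine-grid trapezoid
        # quadrature (a few lines of code, as noted above; an exact scheme
        # would use Gauss-Legendre points within each inter-knot interval).
        xq = np.linspace(a, b, 2001)
        w = np.full(xq.size, (b - a) / (xq.size - 1))
        w[[0, -1]] *= 0.5
        B2 = basis(xq, deriv=2)
        S = (B2 * w[:, None]).T @ B2

        # Fit to unevenly distributed data; lam fixed for illustration
        # (in practice chosen by GCV or REML).
        rng = np.random.default_rng(1)
        x = np.sort(rng.uniform(a, b, 200))
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)
        B = basis(x)
        lam = 1e-4
        beta = np.linalg.solve(B.T @ B + lam * S, B.T @ y)
        fit = B @ beta                               # smoothed values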