
    On fitting the Pareto-Levy distribution to stock market index data: selecting a suitable cutoff value

    The so-called Pareto-Levy or power-law distribution has been successfully used as a model for the probabilities associated with extreme variations of worldwide stock market index data, and it has the form Pr(X > x) ~ x^(-alpha) for gamma < x < infinity. The selection of the threshold parameter gamma from empirical data, and consequently the determination of the exponent alpha, is often done with a simple graphical method on a log-log scale, where a power-law probability plot appears as a straight line whose slope equals the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter gamma. In this work a more objective procedure is presented, based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution. The technique is illustrated for data sets from the New York Stock Exchange Index and the Mexican Stock Market Index (IPC). Comment: Econophysics paper. 5 pages, 9 figures
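The discrepancy-based cutoff selection can be sketched numerically. The following is a minimal illustration only, assuming the Kolmogorov-Smirnov distance as the discrepancy measure and a maximum-likelihood fit for alpha; the paper's exact statistic and scan may differ.

```python
import numpy as np

def fit_alpha(tail, gamma):
    """Maximum-likelihood exponent under Pr(X > x) = (x / gamma)^(-alpha)."""
    return len(tail) / np.sum(np.log(tail / gamma))

def ks_distance(tail, gamma):
    """Kolmogorov-Smirnov distance between the empirical tail CDF and the
    fitted Pareto-Levy CDF, together with the fitted exponent."""
    alpha = fit_alpha(tail, gamma)
    x = np.sort(tail)
    emp = np.arange(1, len(x) + 1) / len(x)      # empirical CDF at each point
    model = 1.0 - (x / gamma) ** (-alpha)        # fitted Pareto CDF
    return np.max(np.abs(emp - model)), alpha

def select_cutoff(data, n_candidates=50, min_tail=50):
    """Scan candidate cutoffs and keep the one whose fitted distribution
    is closest to the empirical tail."""
    best = None
    for gamma in np.quantile(data, np.linspace(0.0, 0.9, n_candidates)):
        tail = data[data >= gamma]
        if len(tail) < min_tail:                 # too few points to fit reliably
            continue
        d, alpha = ks_distance(tail, gamma)
        if best is None or d < best[0]:
            best = (d, gamma, alpha)
    return best  # (discrepancy, gamma, alpha)
```

On a synthetic Pareto sample the selected exponent recovers the generating one; on index data the scan would be applied to the tail of absolute returns.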

    A comparison of an enzyme-linked immunosorbent assay and counter current electrophoresis for the detection of bovine serum albumin in virus vaccines.

    A monoclonal antibody directed against bovine serum albumin (BSA) has been developed and used in an enzyme-linked immunosorbent assay (ELISA) system for the detection of BSA in virus vaccines. The results correlated well with those obtained with a counter current electrophoresis system which has been employed routinely for this purpose. The ELISA was slightly more sensitive and more readily applicable to the screening of large numbers of samples, but could not be used in the presence of certain stabilizers.

    No Black Box and No Black Hole: from Social Capital to Gift Exchange

    In this paper, we draw on the literature about gift exchange to suggest a conceptualization of the emergence, maintenance and use of social capital (SK). We thus open up the black box of how social relations are established, and are able to indicate what can be meaningfully ascribed to social capital. Social capital as a concept cannot be invoked at will to explain situations that are primarily perceived as favorable. Instead, once the way in which social capital emerges, is maintained and is used is conceptually clarified, it becomes clear that situations perceived as unfavorable can be ascribed to SK as well, and that SK cannot be drawn on at will by just anybody. SK resides in what we call a social capital community.

    Ranking Templates for Linear Loops

    We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin's Transposition Theorem instead of Farkas' Lemma to transform the generated \exists\forall-constraint into an \exists-constraint. Comment: TACAS 201
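For intuition, here is a toy stand-in for template-based synthesis: instead of the Farkas/Motzkin constraint transformation, it brute-forces small integer coefficients of a linear template f(x, y) = a*x + b*y + c against sampled transitions of an invented loop. A sampled check only yields candidates; the paper's method proves validity for all states.

```python
from itertools import product

# Invented toy loop: while x > 0: (x, y) = (x - 1, y + 2)
def guard(x, y):
    return x > 0

def step(x, y):
    return (x - 1, y + 2)

def is_candidate(a, b, c, samples):
    """f(x, y) = a*x + b*y + c must be bounded below (f >= 0 inside the
    guard) and strictly decreasing (f' <= f - 1) on every sampled transition."""
    for x, y in samples:
        if not guard(x, y):
            continue
        f = a * x + b * y + c
        x2, y2 = step(x, y)
        f2 = a * x2 + b * y2 + c
        if f < 0 or f2 > f - 1:
            return False
    return True

samples = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
found = [(a, b, c) for a, b, c in product(range(-2, 3), repeat=3)
         if is_candidate(a, b, c, samples)]
```

Since y is unbounded in both directions on the guard, every surviving instantiation ignores y (b = 0), matching the intuition that x alone ranks this loop.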

    Proving Termination Starting from the End

    We present a novel technique for proving program termination which introduces a new dimension of modularity. Existing techniques use the program to incrementally construct a termination proof. While the proof keeps changing, the program remains the same. Our technique goes a step further. We show how to use the current partial proof to partition the transition relation into those behaviors known to be terminating from the current proof, and those whose status (terminating or not) is not known yet. This partition enables a new and unexplored dimension of incremental reasoning on the program side. In addition, we show that our approach naturally applies to conditional termination, which searches for a precondition ensuring termination. We further report on a prototype implementation that advances the state-of-the-art on the grounds of termination and conditional termination. Comment: 16 pages
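The partition the abstract describes can be pictured on a finite toy relation, assuming the partial proof is a single candidate ranking function f (an illustration of the idea, not the paper's algorithm):

```python
def partition(transitions, f):
    """Split a transition relation by a partial termination proof: a
    transition (s, t) is proven terminating if f is bounded below at s
    and strictly decreases along the transition; otherwise its status
    remains unknown and needs further proof search."""
    proven, unknown = set(), set()
    for s, t in transitions:
        if f(s) >= 0 and f(t) <= f(s) - 1:
            proven.add((s, t))
        else:
            unknown.add((s, t))
    return proven, unknown

# Toy relation over integer states: five decrementing transitions plus
# one self-loop that no ranking function can cover.
transitions = {(x, x - 1) for x in range(1, 6)} | {(5, 5)}
proven, unknown = partition(transitions, f=lambda s: s)
```

Only the self-loop survives into `unknown`, so subsequent reasoning can focus on that remaining behavior rather than on the whole program.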

    Electrical field stimulation causes oxidation of exogenous histamine in Krebs-Henseleit buffer: A potential source of error in studies of isolated airways

    Electric field stimulation (EFS) relaxes human histamine-precontracted airways in vitro. This relaxation is only partly neurally mediated. Nonneural relaxation has also been shown in blood vessels and is due to the generation of oxygen radicals by EFS. In isolated airways the origin of the nonneural component of the relaxation is not clear. Because exogenous catecholamines are oxidized during EFS of carbogenated Krebs-Henseleit (K-H) buffer, we questioned whether this is also the case for exogenous histamine. Human airways precontracted with histamine or methacholine were exposed to either EFS-stimulated carbogenated K-H buffer that also contained histamine or methacholine, or to unstimulated buffer. Airways exposed to EFS-stimulated buffer that contained histamine relaxed, whereas airways exposed to buffer containing methacholine or exposed to unstimulated buffer did not. It appeared that the histamine concentrations in the organ baths decreased during 30 min of EFS. This decrease was significantly reduced in the presence of ascorbic acid. We conclude that EFS causes oxidation of histamine in carbogenated K-H buffer, and this may at least partly explain the nonneural component of EFS-induced relaxations of precontracted human isolated airways. Therefore, histamine should not be used to induce precontraction in EFS experiments.

    Resistance distance, information centrality, node vulnerability and vibrations in complex networks

    We discuss three seemingly unrelated quantities that have been introduced in different fields of science for complex networks. The three quantities are the resistance distance, the information centrality and the node displacement. We first prove various relations among them. Then we focus on the node displacement, showing its usefulness as an index of node vulnerability. We argue that the node displacement has a better resolution as a measure of node vulnerability than the degree and the information centrality.
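The resistance distance mentioned above has a compact closed form via the Moore-Penrose pseudoinverse L+ of the graph Laplacian, R_ij = L+_ii + L+_jj - 2*L+_ij. A minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def resistance_distance(adj):
    """Pairwise resistance distances of a connected graph from the
    pseudoinverse of its Laplacian: R_ij = L+_ii + L+_jj - 2*L+_ij."""
    L = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    Lp = np.linalg.pinv(L)                      # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

# Path graph 1-2-3: two unit resistors in series, so R(1, 3) = 2.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
R = resistance_distance(adj)
```

The same matrix L+ underlies the other two quantities in the abstract, which is what makes the proved relations possible.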

    Development of a clinical prediction model for an international normalised ratio ≥ 4·5 in hospitalised patients using vitamin K antagonists

    Vitamin K antagonists (VKAs), used for the prevention and treatment of thromboembolic disease, increase the risk of bleeding complications. We developed and validated a model to predict the risk of an international normalised ratio (INR) ≥ 4·5 during a hospital stay. Adult patients admitted to a tertiary hospital and treated with VKAs between 2006 and 2010 were analysed. Bleeding risk was operationalised as an INR value ≥ 4·5. Multivariable logistic regression analysis was used to assess the association between potential predictors and an INR ≥ 4·5, and the model was validated in an independent cohort of patients from the same hospital between 2011 and 2014. We identified 8996 admissions of patients treated with VKAs, of which 1507 (17%) involved an INR ≥ 4·5. The final model included the following predictors: gender, age, concomitant medication and several biochemical parameters. Temporal validation showed a c statistic of 0·71. We developed and validated a clinical prediction model for an INR ≥ 4·5 in VKA-treated patients admitted to our hospital. The model includes factors that are collected during routine care and are extractable from electronic patient records, enabling easy use of this model to predict an increased bleeding risk in clinical practice.
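The modeling pipeline described above (multivariable logistic regression, discrimination summarized by a c statistic) can be sketched on synthetic data. Everything below, including the predictors and effect sizes, is invented for illustration; it is not the study's model or cohort.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression by plain gradient descent (no regularization)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted event probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def c_statistic(score, y):
    """Probability that a random event outranks a random non-event (AUC)."""
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

# Hypothetical standardized predictors (e.g. age, co-medication, a lab value).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.0, -1.0, 0.5])             # invented effect sizes
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)

w, b = fit_logistic(X[:500], y[:500])           # development cohort
c = c_statistic(X[500:] @ w + b, y[500:])       # held-out validation
```

A c statistic around 0·7, like the 0·71 reported above, corresponds to moderate discrimination: a randomly chosen event receives a higher score than a randomly chosen non-event about 70% of the time.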

    Alternating runtime and size complexity analysis of integer programs

    We present a modular approach to automatic complexity analysis. Based on a novel alternation between finding symbolic time bounds for program parts and using these to infer size bounds on program variables, we can restrict each analysis step to a small part of the program while maintaining a high level of precision. Extensive experiments with the implementation of our method demonstrate its performance and power in comparison with other tools.
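The alternation can be made concrete on a toy two-loop program, assuming unit cost per loop iteration (an invented example, not one of the paper's benchmarks): a time bound x0 for the first loop yields a size bound y0 + 2*x0 on y, which in turn yields a time bound for the second loop.

```python
def run(x, y):
    """Interpret the toy program, counting loop iterations.
    Program:  while x > 0: x -= 1; y += 2
              while y > 0: y -= 1
    """
    steps = 0
    while x > 0:
        x, y = x - 1, y + 2
        steps += 1
    while y > 0:
        y -= 1
        steps += 1
    return steps

def runtime_bound(x0, y0):
    """Alternating analysis: a time bound for loop 1 feeds a size bound
    for y, which feeds a time bound for loop 2."""
    t1 = max(x0, 0)                # loop 1 iterates at most x0 times
    y_size = max(y0, 0) + 2 * t1   # each iteration grows y by 2
    t2 = y_size                    # loop 2 iterates at most y_size times
    return t1 + t2
```

For x0 = 3, y0 = 1 the program takes 10 steps and the inferred bound is exactly 10; each analysis step only looked at one loop at a time.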