862 research outputs found

    Shapes with a technical function: an ever-expanding exclusion?


    Association between asthma control and bronchial hyperresponsiveness and airways inflammation: a cross-sectional study in daily practice.

    Summary
    Background: The primary end-point in the management of asthma is to obtain optimal control. The aim of this study was to assess the relationships between markers of airway inflammation (sputum eosinophilia and exhaled nitric oxide), bronchial hyperresponsiveness (BHR) and asthma control.
    Methods: One hundred and thirty-four patients were recruited from our asthma clinic between January 2004 and September 2005 [mean age: 42 years; mean forced expiratory volume in 1 s (FEV(1)): 86% predicted]. Eighty-six of them were treated with inhaled corticosteroids, 99 were atopic and 23 were current smokers. They all underwent detailed investigations, including fractional exhaled nitric oxide (FE(NO)) measurement, sputum induction and a methacholine challenge when FEV(1) was >70% predicted, and filled in a validated asthma control questionnaire (ACQ6, Juniper).
    Results: When dividing patients into three groups according to their level of asthma control determined by the ACQ (well-controlled, borderline and uncontrolled asthma, the latter defined as ACQ ≥ 1.5), uncontrolled asthmatics showed greater BHR to methacholine and greater sputum eosinophilia than controlled asthmatics. For discriminating uncontrolled asthma (ACQ ≥ 1.5) from controlled and borderline asthma (ACQ < 1.5), sputum eosinophilia and methacholine responsiveness were more accurate than FE(NO) (area under the curve: 0.72, 0.72 and 0.59, respectively).
    Conclusion: In the broad spectrum of asthmatics encountered in clinical practice, sputum eosinophilia and methacholine bronchial hyperresponsiveness, but not FE(NO), are associated with uncontrolled asthma.
    Peer reviewed
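    The ROC comparison reported in the results can be illustrated with a small, purely hypothetical sketch (simulated marker values and scikit-learn's roc_auc_score; none of the numbers below are study data): the AUC quantifies how well a marker separates uncontrolled (ACQ ≥ 1.5) from controlled/borderline patients.

```python
# Hypothetical illustration of the ROC/AUC comparison described in the abstract.
# All values are simulated, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 134                                   # cohort size from the abstract
acq = rng.uniform(0.0, 3.0, n)            # hypothetical ACQ6 scores
uncontrolled = (acq >= 1.5).astype(int)   # 1 = uncontrolled asthma

# Hypothetical marker values, shifted upwards in uncontrolled patients
sputum_eos = rng.gamma(2.0, 2.0, n) + 3.0 * uncontrolled   # sputum eosinophils (%)
feno = rng.gamma(3.0, 10.0, n) + 5.0 * uncontrolled        # FE(NO) (ppb)

for name, marker in [("sputum eosinophils", sputum_eos), ("FE(NO)", feno)]:
    auc = roc_auc_score(uncontrolled, marker)   # area under the ROC curve
    print(f"{name}: AUC = {auc:.2f}")
```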

    Multi-Horizon Forecast Comparison

    We introduce tests for multi-horizon superior predictive ability. Rather than comparing forecasts of different models at multiple horizons individually, we propose to jointly consider all horizons of a forecast path. We define the concepts of uniform and average superior predictive ability. The former entails superior performance at each individual horizon, while the latter allows inferior performance at some horizons to be compensated by others. The paper illustrates how the tests lead to more coherent conclusions, and how they are better able to differentiate between models than the single-horizon tests. We provide an extension of the previously introduced Model Confidence Set to allow for multi-horizon comparison of more than two models. Simulations demonstrate appropriate size and high power. An illustration of the tests on a large set of macroeconomic variables demonstrates the empirical benefits of multi-horizon comparison.
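    The two notions can be made concrete with a rough sketch (not the paper's actual test statistics, which use bootstrap critical values and standard errors robust to serial correlation): given loss differentials between two models across forecast origins and horizons, uniform superiority looks at the worst horizon, while average superiority looks at the path average. The iid standard errors below are a simplifying assumption.

```python
# Hedged sketch of "uniform" vs "average" multi-horizon superior predictive ability.
# d[t, h] = loss of model A minus loss of model B at forecast origin t, horizon h;
# negative values favour model A. Not the paper's exact procedure.
import numpy as np

def multi_horizon_stats(d):
    """d: (T, H) array of loss differentials loss_A - loss_B."""
    T, H = d.shape
    mean_h = d.mean(axis=0)                       # per-horizon mean differential
    se_h = d.std(axis=0, ddof=1) / np.sqrt(T)     # naive (iid) standard errors
    t_h = mean_h / se_h                           # per-horizon t-statistics

    uniform_stat = t_h.max()      # uniform SPA: even the worst horizon must favour A
    avg_d = d.mean(axis=1)        # path-average differential per forecast origin
    average_stat = avg_d.mean() / (avg_d.std(ddof=1) / np.sqrt(T))
    return uniform_stat, average_stat

# Simulated example: model A is better on average, slightly worse at the longest horizon.
rng = np.random.default_rng(1)
d = rng.normal(loc=[-0.2, -0.1, 0.05], scale=1.0, size=(500, 3))
print(multi_horizon_stats(d))
```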

    Risk and uncertainty


    Hedging Long-Term Liabilities

    Pension funds and life insurers face interest rate risk arising from the duration mismatch of their assets and liabilities. With the aim of hedging long-term liabilities, we estimate variations of a Nelson–Siegel model using swap returns with maturities up to 50 years. We consider versions with three and five factors, as well as constant and time-varying factor loadings. We find that we need either five factors or time-varying factor loadings in the three-factor model to accommodate the long end of the yield curve. The resulting factor hedge portfolios perform poorly due to strong multicollinearity of the factor loadings in the long end, and are easily beaten by a robust, near Mean-Squared-Error-optimal hedging strategy that concentrates its weight on the longest available liquid bond.
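    The multicollinearity problem can be seen directly from the standard three-factor Nelson–Siegel loadings. The sketch below (a fixed decay parameter and illustrative maturities, not the paper's estimated model) shows that the slope and curvature loadings become nearly proportional at very long maturities, so the loading matrix used to solve for factor hedge weights is ill-conditioned.

```python
# Sketch: condition number of Nelson–Siegel loading matrices at short vs long maturities.
# Assumes the textbook three-factor loadings with a fixed decay parameter lam.
import numpy as np

def nelson_siegel_loadings(tau, lam=0.5):
    """Return a (len(tau), 3) matrix of level, slope and curvature loadings."""
    tau = np.asarray(tau, dtype=float)
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    level = np.ones_like(tau)
    return np.column_stack([level, slope, curvature])

short_end = nelson_siegel_loadings([1, 2, 5, 10])
long_end = nelson_siegel_loadings([30, 40, 50])

# A large condition number signals near-collinear loadings, i.e. poorly identified
# hedge weights when the hedge instruments all sit in the long end of the curve.
print("condition number, short end:", np.linalg.cond(short_end))
print("condition number, long end:", np.linalg.cond(long_end))
```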

    Maximum Independent Set: Self-Training through Dynamic Programming

    This work presents a graph neural network (GNN) framework for solving the maximum independent set (MIS) problem, inspired by dynamic programming (DP). Specifically, given a graph, we propose a DP-like recursive algorithm based on GNNs that first constructs two smaller sub-graphs, predicts the one with the larger MIS, and then uses it in the next recursive call. To train our algorithm, we require annotated comparisons of different graphs concerning their MIS size. Annotating the comparisons with the output of our algorithm leads to a self-training process that results in more accurate self-annotation of the comparisons, and vice versa. We provide numerical evidence showing the superiority of our method over prior methods on multiple synthetic and real-world datasets. Comment: Accepted at NeurIPS 2023.
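    One plausible reading of the recursion, following the classic DP identity MIS(G) = max(MIS(G − v), 1 + MIS(G − N[v])), is sketched below with networkx. The `compare` function is only a stand-in heuristic for the trained GNN comparator, and the whole sketch is illustrative rather than the authors' implementation.

```python
# Hedged sketch of a DP-style MIS recursion: pick a vertex v, build the two
# subgraphs of the classic recursion, ask a comparator which branch should have
# the larger independent set, and recurse on that branch. In the paper the
# comparator is a trained GNN; here it is a crude placeholder (fewer edges wins).
import networkx as nx

def compare(g_exclude, g_include):
    """Stand-in for the GNN comparator: True if the 'include v' branch is preferred."""
    return g_include.number_of_edges() <= g_exclude.number_of_edges()

def greedy_mis(graph):
    graph = graph.copy()
    independent_set = set()
    while graph.number_of_nodes() > 0:
        v = next(iter(graph.nodes))
        neighbors = list(graph.neighbors(v))
        g_exclude = graph.copy()
        g_exclude.remove_node(v)                       # branch: leave v out
        g_include = graph.copy()
        g_include.remove_nodes_from([v] + neighbors)   # branch: take v, drop its neighbours
        if compare(g_exclude, g_include):
            independent_set.add(v)
            graph = g_include
        else:
            graph = g_exclude
    return independent_set

print(greedy_mis(nx.cycle_graph(7)))   # toy example: a maximum independent set of C7 has size 3
```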