
    Consumer finance: challenges for operational research

    Consumer finance has become one of the most important areas of banking, both because of the amount of money being lent and the impact of such credit on the global economy, and because of the realisation that the credit crunch of 2008 was partly due to incorrect modelling of the risks in such lending. This paper reviews the development of credit scoring, the way of assessing risk in consumer finance, and what is meant by a credit score. It then outlines 10 challenges for Operational Research to support modelling in consumer finance. Some of these involve developing more robust risk assessment systems, whereas others expand the use of such modelling to deal with the current objectives of lenders and the new decisions they have to make in consumer finance.
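As a minimal sketch of the kind of scorecard model the abstract refers to, the snippet below fits a logistic regression and maps its log-odds onto a points scale. The data, feature names and score scaling are illustrative assumptions, not taken from the paper.

```python
# Minimal credit-scorecard sketch on synthetic data.
# Features, coefficients and the points scale are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(18, 75, n),        # applicant age (years)
    rng.uniform(0, 1, n),           # credit utilisation ratio
    rng.integers(0, 5, n),          # number of past delinquencies
])
# Synthetic "bad" outcome: more likely with high utilisation/delinquencies
logit = -2.0 + 2.5 * X[:, 1] + 0.8 * X[:, 2] - 0.02 * X[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
log_odds_good = -model.decision_function(X)   # log-odds of repaying
# Common presentation convention: 600 points at even odds,
# 20 points doubles the odds of repayment
score = 600 + 20 / np.log(2) * log_odds_good
print(score[:5].round(0))
```

The points convention (a fixed score at even odds, a fixed number of points doubling the odds) is one common way lenders present log-odds as a credit score; the specific anchors here are arbitrary.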


    Identifying which septic patients have increased mortality risk using severity scores: a cohort study

    Background: Early aggressive therapy can reduce the mortality associated with severe sepsis, but this relies on prompt recognition, which is hindered by variation among published severity criteria. Our aim was to test the performance of different severity scores in predicting mortality among a cohort of hospital inpatients with sepsis. Methods: We anonymously linked routine outcome data to a cohort of prospectively identified adult hospital inpatients with sepsis, and used logistic regression to identify associations between mortality and demographic variables, clinical factors including blood culture results, and six sets of severity criteria. We calculated performance characteristics, including the area under the receiver operating characteristic curve (AUROC), of each set of severity criteria in predicting mortality. Results: Overall mortality was 19.4% (124/640) at 30 days after sepsis onset. In adjusted analysis, older age (odds ratio 5.79 (95% CI 2.87-11.70) for ≥80y versus <60y), having been admitted as an emergency (OR 3.91 (1.31-11.70) versus electively), and a longer inpatient stay prior to sepsis onset (OR 2.90 (1.41-5.94) for >21d versus <4d) were associated with increased 30-day mortality. Being in a surgical or orthopaedic ward, versus a medical ward, was associated with lower mortality (OR 0.47 (0.27-0.81) and 0.26 (0.11-0.63), respectively). Blood culture results (positive vs. negative) were not significantly associated with mortality. All severity scores predicted mortality, but performance varied. The CURB65 community-acquired pneumonia severity score had the best performance characteristics (sensitivity 81%, specificity 52%, positive predictive value 29%, negative predictive value 92% for 30-day mortality), including the largest AUROC (0.72, 95% CI 0.67-0.77). Conclusions: The CURB65 pneumonia severity score outperformed five other severity scores in predicting risk of death among a cohort of hospital inpatients with sepsis. The utility of the CURB65 score for risk-stratifying patients with sepsis in clinical practice will depend on replicating these findings in a validation cohort including patients with sepsis on admission to hospital.
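The evaluation the abstract describes can be sketched as follows: computing AUROC, sensitivity and specificity for a CURB65-style 0-5 severity score against a binary mortality outcome. The data below are synthetic, and the ≥2 cut-off is an illustrative assumption, not the study's analysis.

```python
# Sketch: performance characteristics of a severity score for mortality,
# on synthetic data (the risk gradient and cut-off are assumptions).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 640                               # cohort size matching the abstract
score = rng.integers(0, 6, n)         # CURB65-style score, 0-5
# Synthetic outcome: mortality risk rises with the score
died = rng.random(n) < 0.05 + 0.08 * score

auroc = roc_auc_score(died, score)    # discrimination across all cut-offs

predicted_high_risk = score >= 2      # an example binary cut-off
tp = np.sum(predicted_high_risk & died)
fn = np.sum(~predicted_high_risk & died)
tn = np.sum(~predicted_high_risk & ~died)
fp = np.sum(predicted_high_risk & ~died)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```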

    Deriving a preference-based measure for cancer using the EORTC QLQ-C30: a confirmatory versus exploratory approach

    Background: To derive preference-based measures from various condition-specific descriptive health-related quality of life (HRQOL) measures, a general 2-stage method has evolved: 1) an item from each domain of the HRQOL measure is selected to form a health state classification system (HSCS); 2) a sample of health states is valued and an algorithm derived for estimating the utility of all possible health states. The aim of this analysis was to determine whether confirmatory or exploratory factor analysis (CFA, EFA) should be used to derive a cancer-specific utility measure from the EORTC QLQ-C30. Methods: Data were collected with the QLQ-C30v3 from 356 patients receiving palliative radiotherapy for recurrent or metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter based on a conceptual model (the established domain structure of the QLQ-C30: physical, role, emotional, social and cognitive functioning, plus several symptoms) and clinical considerations (views of both patients and clinicians about issues relevant to HRQOL in cancer). The dimensions determined by each method were then subjected to item response theory, including Rasch analysis. Results: CFA results generally supported the proposed conceptual model, with residual correlations requiring only minor adjustments (namely, introduction of two cross-loadings) to improve model fit (increment χ²(2) = 77.78, p < 0.001). In the Rasch analysis, some items had floor effects (> 75% of observations at the lowest score), 6 exhibited misfit to the Rasch model (fit residual > 2.5), none exhibited disordered item response thresholds, and 4 exhibited DIF by gender or cancer site. Upon inspection of the remaining items, three were considered relatively less clinically important than the remaining nine. Conclusions: CFA appears more appropriate than EFA, given the well-established structure of the QLQ-C30 and its clinical relevance. Further, the confirmatory approach produced more interpretable results than the exploratory approach. Other aspects of the general method remain largely the same. The revised method will be applied to a large number of data sets as part of the international and interdisciplinary project to develop a multi-attribute utility instrument for cancer (MAUCa).
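The exploratory half of the comparison can be sketched with an ordinary factor-analysis fit on synthetic questionnaire items; a confirmatory model would instead fix the loading pattern in advance (typically with an SEM package), which this sketch does not do. The two-factor item structure below is an illustrative assumption, not the QLQ-C30's.

```python
# EFA sketch on synthetic questionnaire data: two latent traits drive
# six items; FactorAnalysis recovers loadings without a prespecified
# pattern (unlike CFA, where the pattern is fixed a priori).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 356                                # sample size from the abstract
latent = rng.normal(size=(n, 2))       # e.g. physical, emotional functioning
loadings = np.array([
    [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],   # items driven by factor 1
    [0.0, 0.9], [0.1, 0.8], [0.0, 0.7],   # items driven by factor 2
])
items = latent @ loadings.T + 0.3 * rng.normal(size=(n, 6))

efa = FactorAnalysis(n_components=2).fit(items)
print(np.round(efa.components_, 2))    # estimated loadings per factor
```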

    Advancing impact assessments of non-native species: strategies for strengthening the evidence-base

    The numbers and impacts of non-native species (NNS) continue to grow. Multiple ranking protocols have been developed to identify and manage the most damaging species. However, existing protocols differ considerably in the type of impact they consider, the way evidence of impacts is included and scored, and the way the precautionary principle is applied. These differences may lead to inconsistent impact assessments. Since these protocols are considered a main policy tool to promote mitigation efforts, such inconsistencies are undesirable, as they can affect our ability to reliably identify the most damaging NNS and can erode public support for NNS management. Here we propose a broadly applicable framework for building a transparent NNS impact evidence base. First, we advise separating the collection of evidence of impacts from the act of scoring the severity of these impacts. Second, we propose mapping the collected evidence along a set of distinguishing criteria: where it is published, which methodological approach was used to obtain it, the relevance of the geographical area from which it originates, and the direction of the impact. This procedure produces a transparent and reproducible evidence base which can subsequently be used for different scoring protocols, and which should be made public. Finally, we argue that the precautionary principle should only be used at the risk management stage. Conditional upon the evidence presented in an impact assessment, decision-makers may use the precautionary principle for NNS management under scientific uncertainty regarding the likelihood and magnitude of NNS impacts. Our framework paves the way for an improved application of impact assessment protocols, reducing inconsistencies and ultimately enabling more effective NNS management.
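The separation the framework proposes, collecting evidence mapped along the four criteria first and scoring severity only afterwards, can be sketched as a small data structure. The field names, scoring weights and example record below are illustrative assumptions, not a schema from the paper.

```python
# Sketch: evidence records carry the paper's four mapping criteria
# (publication venue, method, geographic relevance, impact direction);
# severity scoring is a separate step applied to the evidence base.
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    species: str
    source: str          # e.g. "peer-reviewed", "grey literature"
    method: str          # e.g. "experimental", "observational", "anecdotal"
    geography: str       # relevance of the region to the assessment area
    direction: str       # "deleterious", "neutral", "beneficial"
    description: str

def score_severity(record: EvidenceRecord) -> int:
    """One hypothetical scoring protocol applied to the evidence base."""
    base = {"deleterious": 2, "neutral": 0, "beneficial": -1}[record.direction]
    weight = {"experimental": 2, "observational": 1, "anecdotal": 0}[record.method]
    return base * weight

evidence = [
    EvidenceRecord("D. villosus", "peer-reviewed", "experimental",
                   "same region", "deleterious",
                   "predation on native amphipods"),
]
print([score_severity(r) for r in evidence])   # -> [4]
```

Because the records and the scoring function are decoupled, the same evidence base can be re-scored under different protocols, which is the reproducibility point the abstract makes.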

    An update on statistical boosting in biomedicine

    Statistical boosting algorithms have triggered a great deal of research during the last decade. They combine a powerful machine-learning approach with classical statistical modelling, offering practical advantages such as automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (the target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments in statistical boosting regarding variable selection, functional regression and advanced time-to-event modelling. Additionally, we provide a short overview of relevant applications of statistical boosting in biomedicine.
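The combination of base-learners, a loss function, and implicit variable selection can be sketched with componentwise L2-boosting: each iteration fits simple univariate linear base-learners to the residuals under squared-error loss and updates only the best-fitting component. This is a hand-rolled illustration of the general idea, not the API of any particular boosting package.

```python
# Componentwise L2-boosting sketch: linear base-learners, squared-error
# loss, and implicit variable selection (only the component that best
# reduces the loss is updated each iteration).
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * rng.normal(size=n)

coef = np.zeros(p)
nu = 0.1                          # small step length acts as regularisation
for _ in range(300):
    resid = y - X @ coef
    # Least-squares fit of each univariate base-learner to the residuals
    fits = np.array([X[:, j] @ resid / (X[:, j] @ X[:, j]) for j in range(p)])
    losses = [np.sum((resid - fits[j] * X[:, j]) ** 2) for j in range(p)]
    j_best = int(np.argmin(losses))
    coef[j_best] += nu * fits[j_best]   # update only the selected component

print(np.round(coef, 2))          # informative features dominate
```

Stopping the loop early leaves uninformative coefficients at (or near) zero, which is the implicit variable selection and shrinkage the abstract mentions; swapping the loss or base-learner changes the regression setting.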

    Clinical review: Can we predict which patients are at risk of complications following surgery?

    A vast number of operations are carried out every year, with a small proportion of patients at the highest risk of mortality and morbidity. There has been considerable work to identify these high-risk patients. In this paper, we look in detail at the commonly used perioperative risk prediction models. Finally, we look at the evolution of, and evidence for, functional assessment and the National Surgical Quality Improvement Program (in the USA), both topical and exciting areas of perioperative prediction.