
    Integrated comparative validation tests as an aid for building simulation tool users and developers

    Published validation tests developed within major research projects have been an invaluable aid for program developers checking their programs. This paper sets out how selected ASHRAE Standard 140-2004 and European CEN standards validation tests have been incorporated into the ESP-r simulation program so that they can be easily run by users, and it also discusses some of the issues associated with compliance checking. Embedding the tests within a simulation program allows program developers to check routinely whether updates to the simulation program have led to significant changes in predictions and to run sensitivity tests to check the impact of alternative algorithms. Importantly, it also allows other users to undertake the tests to check that their installation is correct and to give them, and their clients, confidence in results. This paper also argues that validation tests should characterize some of the significant heat transfer processes (particularly internal surface convection) in greater detail in order to reduce the acceptance bands for program predictions. This approach is preferred to one in which validation tests are overly prescriptive (e.g., specifying fixed internal convection coefficients), as these do not reflect how programs are used in practice.
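
    As a sketch only, the kind of routine acceptance-band check described above could look like the following; the test-case identifiers, predictions and ranges are invented placeholders, not values from ASHRAE Standard 140-2004 or ESP-r.

        # Hypothetical acceptance-band check run after an embedded test suite.
        # Case IDs, predictions and ranges below are illustrative only.
        ACCEPTANCE_RANGES = {
            "case_600": (4.30, 5.71),   # (lower, upper) bound for annual heating load, MWh
            "case_900": (1.17, 2.04),
        }

        def check_predictions(predictions: dict) -> list:
            """Flag any test-case prediction that falls outside its acceptance band."""
            report = []
            for case, value in predictions.items():
                low, high = ACCEPTANCE_RANGES[case]
                status = "OK" if low <= value <= high else "OUT OF RANGE"
                report.append(f"{case}: {value:.2f} MWh [{low}-{high}] {status}")
            return report

        # e.g. predictions produced by the current build of the simulation program
        print("\n".join(check_predictions({"case_600": 4.95, "case_900": 2.20})))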

    What does validation of cases in electronic record databases mean? The potential contribution of free text

    Electronic health records are increasingly used for research. The definition of cases or endpoints often relies on the use of coded diagnostic data, using a pre-selected group of codes. Validation of these cases, as ‘true’ cases of the disease, is crucial. There are, however, ambiguities in what is meant by validation in the context of electronic records. Validation usually implies comparison of a definition against a gold standard of diagnosis and the ability to identify false negatives (‘true’ cases which were not detected) as well as false positives (detected cases which did not have the condition). We argue that two separate concepts of validation are often conflated in existing studies. The first is whether the GP thought the patient was suffering from a particular condition (which we term confirmation or internal validation); the second is whether the patient really had the condition (external validation). Few studies have the ability to detect false negatives who have not received a diagnostic code. Natural language processing is likely to open up the use of free text within the electronic record, which will facilitate both the validation of the coded diagnosis and the search for false negatives.
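
    To make the distinction concrete, a toy calculation (with made-up counts, not figures from the paper) shows why code-list validation typically yields only a positive predictive value, while sensitivity requires the usually unobserved false negatives.

        # Toy counts illustrating internal vs. external validation of coded cases.
        true_positives = 90    # coded cases confirmed to have the condition
        false_positives = 10   # coded cases that did not have the condition
        false_negatives = 30   # true cases never given a diagnostic code (rarely known)

        ppv = true_positives / (true_positives + false_positives)
        sensitivity = true_positives / (true_positives + false_negatives)

        print(f"PPV (what most code-based validations report): {ppv:.2f}")
        print(f"Sensitivity (needs the usually unobserved false negatives): {sensitivity:.2f}")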

    Validation of suitable internal control genes for expression studies in aging.

    Quantitative gene expression data are often normalized to the transcription levels of housekeeping genes, on the assumption that the expression of these genes is highly uniform. This practice is being questioned as it becomes increasingly clear that housekeeping gene expression may vary considerably in certain biological samples. To date, the validation of reference genes in aging has received little attention and suitable reference genes have not yet been defined. Our aim was to evaluate the expression stability of frequently used reference genes in human peripheral blood mononuclear cells (PBMCs) with respect to aging. Using quantitative RT-PCR, we carried out an extensive evaluation of five housekeeping genes, i.e. 18S rRNA, ACTB, GAPDH, HPRT1 and GUSB, for stability of expression in samples from donors in the age range 35-74 years. The consistency of expression stability was quantified on the basis of the coefficient of variation and two algorithms, geNorm and NormFinder. Our results indicated GUSB to be the most suitable transcript and 18S the least suitable for accurate normalization in PBMCs. We also demonstrated that aging is a confounding factor with respect to the stability of 18S, HPRT1 and ACTB expression, which were particularly prone to variability in aged donors.
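
    A minimal sketch of the coefficient-of-variation screen mentioned above, assuming hypothetical relative expression values per donor; geNorm and NormFinder would be applied separately to the same data.

        # Rank candidate reference genes by coefficient of variation (CV).
        # Expression values are invented for illustration.
        import statistics

        expression = {
            "GUSB": [1.00, 1.05, 0.97, 1.02, 0.99],
            "ACTB": [1.00, 1.40, 0.75, 1.20, 0.90],
            "18S":  [1.00, 1.80, 0.55, 1.50, 0.70],
        }

        def coefficient_of_variation(values):
            return statistics.stdev(values) / statistics.mean(values)

        # Lower CV suggests more stable expression across donors
        for gene, values in sorted(expression.items(),
                                   key=lambda kv: coefficient_of_variation(kv[1])):
            print(f"{gene}: CV = {coefficient_of_variation(values):.1%}")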

    The Internal Validation of a National Model of Long Distance Traffic.

    During 1980/81, the Department of Transport developed a model for describing the distribution of private vehicle trips between 642 districts in Great Britain, using data from household and roadside interviews conducted in 1976 for the Regional Highways Traffic Model and a new formulation of the gravity model, called a composite approach, in which shorter-length movements were described at a finer level of zonal detail than longer movements. This report describes the results of an independent validation exercise conducted for the Department, in which the theoretical basis of the model and the quality of its fit to base year data were examined. The report discusses model specification; input data; calibration issues; and accuracy assessment. The main problems addressed included the treatment of intrazonal and terminal costs, which was thought to be deficient; the trip-end estimates to which the model was constrained, which were shown to have substantial variability and to be biased (though the cause of the latter could be readily removed), with some evidence of geographical under-specification; and the differences between roadside and household interview estimates. The report includes a detailed examination of the composite model specification and contains suggestions for improving the way in which such models are fitted. The main technical developments, for both theory and practice, are the methods developed for assessing the accuracy of the fitted model and for examining the quality of its fit with respect to the observed data, taking account of the variances and covariances of modelled and data values. Overall, the broad conclusion was that, whilst there appeared to be broad compatibility between modelled and observed data in observed cells, there was clear evidence of inadequacy in certain respects, such as the underestimation of intradistrict trips. This work was done in co-operation with Howard Humphreys and Partners and Transportation Planning Associates, who validated the model against independent external data; their work is reported separately.
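
    For readers unfamiliar with the model family, the following is a sketch of a doubly-constrained gravity model fitted by iterative (Furness) balancing; the zones, trip ends, costs and deterrence parameter are invented and are not taken from the report.

        # Doubly-constrained gravity model: T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij)
        import numpy as np

        origins = np.array([100.0, 200.0, 150.0])        # trip productions by zone
        destinations = np.array([180.0, 120.0, 150.0])   # trip attractions by zone
        cost = np.array([[1.0, 3.0, 5.0],
                         [3.0, 1.0, 4.0],
                         [5.0, 4.0, 1.0]])               # interzonal travel costs
        beta = 0.5                                       # deterrence parameter

        deterrence = np.exp(-beta * cost)
        A = np.ones(3)
        B = np.ones(3)
        for _ in range(50):                              # Furness balancing iterations
            A = 1.0 / (deterrence @ (B * destinations))
            B = 1.0 / (deterrence.T @ (A * origins))

        trips = (A * origins)[:, None] * (B * destinations)[None, :] * deterrence
        print(trips.round(1))
        print("row sums:", trips.sum(axis=1).round(1))   # should match origins
        print("col sums:", trips.sum(axis=0).round(1))   # should match destinations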

    A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest, such as the risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for this bias when no internal validation data are available.
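
    As an illustration of the general idea (not of the paper's specific method), regression calibration with external validation data can be sketched as follows; all data are simulated and the variable names are hypothetical.

        # Estimate a calibration model in an external study, then apply it to
        # the main-study self-reported exposure. Simulated data only.
        import numpy as np

        rng = np.random.default_rng(0)

        # External validation study: reference ("true") intake vs. self-report
        true_ext = rng.normal(50, 10, 500)
        reported_ext = true_ext + rng.normal(0, 8, 500)        # measurement error
        slope, intercept = np.polyfit(reported_ext, true_ext, 1)

        # Main study: only self-reported intake is available
        reported_main = rng.normal(50, 13, 2000)
        calibrated_main = intercept + slope * reported_main    # predicted true intake

        print(f"calibration slope ~= {slope:.2f}")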

    Calibration and validation of a combustion-cogeneration model

    This paper describes the calibration and validation of a combustion cogeneration model for whole-building simulation. As part of IEA Annex 42, we proposed a combustion cogeneration model for studying residential-scale cogeneration systems based on both Stirling and internal combustion engines. We implemented this model independently in the EnergyPlus, ESP-r and TRNSYS building simulation programs, and undertook a comprehensive effort to validate the model's predictions. Using established comparative testing and empirical validation principles, we vetted the model's theoretical basis and its software implementations. The results demonstrate acceptable-to-excellent agreement and suggest the calibrated model can be used with confidence.
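
    A rough sketch of the comparative-testing step, assuming two implementations of the same model are run on one test case; the output series and metrics below are illustrative placeholders, not results from the Annex 42 work.

        # Compare two implementations' predictions for one test case using
        # normalised mean bias and RMS difference. Values are invented.
        import math

        impl_a = [1.20, 1.35, 1.50, 1.42, 1.30]   # e.g. electrical output, kW
        impl_b = [1.18, 1.37, 1.49, 1.45, 1.28]

        n = len(impl_a)
        mean_ref = sum(impl_a) / n
        nmbe = sum(b - a for a, b in zip(impl_a, impl_b)) / (n * mean_ref)
        rmsd = math.sqrt(sum((b - a) ** 2 for a, b in zip(impl_a, impl_b)) / n)

        print(f"NMBE: {nmbe:+.2%}, RMS difference: {rmsd:.3f} kW")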

    Design of Anisotropic Diffusion Hardware Fiber Phantoms

    A gold standard for the validation of diffusion-weighted magnetic resonance imaging (DW-MRI) in brain white matter (WM) is essential for clinical purposes but is still not available. Synthetic anisotropic fiber bundles are proposed as phantoms for the validation of DW-MRI because of their well-known structure, their long preservability and the possibility of creating complex geometries such as curved bundles and fiber crossings. A crucial question is how the different material properties and sizes of the fiber phantoms influence the outcome of the DW-MRI experiment. Several fiber materials are compared in this study. The effect of surface relaxation and internal gradients on the SNR is evaluated. In addition, the dependence of the diffusion properties on fiber density and fiber radius is investigated.
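
    As background, the anisotropy of such a phantom would typically be summarized by the fractional anisotropy of a fitted diffusion tensor; the sketch below uses illustrative eigenvalues, not measurements from this study.

        # Fractional anisotropy (FA) from diffusion-tensor eigenvalues.
        import math

        def fractional_anisotropy(l1, l2, l3):
            mean_d = (l1 + l2 + l3) / 3.0
            num = (l1 - mean_d) ** 2 + (l2 - mean_d) ** 2 + (l3 - mean_d) ** 2
            den = l1 ** 2 + l2 ** 2 + l3 ** 2
            return math.sqrt(1.5 * num / den)

        # e.g. eigenvalues (in 10^-3 mm^2/s) for a tightly packed fiber bundle
        print(f"FA = {fractional_anisotropy(1.6, 0.35, 0.30):.2f}")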

    The Barthel Index: Italian translation, adaptation and validation

    The Barthel Index (BI) is widely used to measure disability in Italy as well, although a validated and culturally adapted Italian version of the BI has not yet been produced. This article describes the translation and cultural adaptation into Italian of the original 10-item version of the BI, and reports the procedures for testing its validity and reliability. The cultural adaptation and validation process was based on data from a cohort of disabled patients from two different rehabilitation centers in Rome, Italy. A forward and backward translation method was adopted, involving a qualified linguist and independent native English official translators. The resulting scale was reviewed by 20 experts in psychometric sciences. The Italian adapted version of the BI was then produced and validated. A total of 180 patients completed the adapted scale to test its acceptability and internal consistency. The mean completion time was 5 ± 2.6 minutes (range 3-10). Validation of the scale was performed by 7 trained professional therapists who administered both the translated and the adapted versions to a group of 62 clinically stable patients (t-test = -2.051, p = 0.05). Internal consistency, measured by Cronbach’s alpha, was 0.96. Test-retest intra-rater reliability was evaluated in 35 cases, with an ICC of 0.983 (95% CI: 0.967-0.992). This is the first study to report the translation, adaptation and validation of the BI into Italian. It provides professionals with a new tool for measuring functional disability when appraising Italian-speaking disabled patients in health and social care settings along the continuum of care.
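
    For reference, Cronbach's alpha (the internal-consistency statistic reported above) can be computed from an item-score matrix as sketched below; the small score matrix is made up and is not the study's data.

        # Cronbach's alpha from item scores (rows = patients, columns = items).
        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        example = np.array([[10, 10, 5,  5],
                            [ 5,  5, 0,  5],
                            [10,  5, 5, 10],
                            [ 0,  0, 0,  5]])
        print(f"alpha = {cronbach_alpha(example):.2f}")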

    Validating multi-photon quantum interference with finite data

    Multi-particle interference is a key resource for quantum information processing, as exemplified by Boson Sampling. Hence, given its fragile nature, an essential desideratum is a solid and reliable framework for its validation. However, while several protocols have been introduced to this end, the approach is still fragmented and fails to build a big picture for future developments. In this work, we propose an operational approach to validation that encompasses and strengthens the state of the art for these protocols. To this end, we consider Bayesian hypothesis testing and the statistical benchmark as the most favorable protocols for small- and large-scale applications, respectively. We numerically investigate their operation with finite sample size, extending previous tests to larger dimensions, and against two adversarial algorithms for classical simulation: the Mean-Field sampler and the Metropolized Independent Sampler. To evidence the actual need for refined validation techniques, we show how the assessment of numerically simulated data depends on the available sample size, as well as on the internal hyper-parameters and other practically relevant constraints. Our analyses provide general insights into the challenge of validation, and can inspire the design of algorithms with a measurable quantum advantage. Comment: 10 pages, 7 figures
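
    As a toy illustration of the Bayesian (likelihood-ratio) validation idea with finite data: accumulate the log-likelihood ratio between the target distribution and a mock-up sampler over a finite sample. The outcome probabilities below are invented and do not simulate Boson Sampling.

        # Bayesian hypothesis testing with a finite sample (toy distributions).
        import math
        import random

        random.seed(1)
        outcomes = ["a", "b", "c"]
        p_target = {"a": 0.60, "b": 0.30, "c": 0.10}   # hypothesis Q (target device)
        p_mockup = {"a": 0.34, "b": 0.33, "c": 0.33}   # hypothesis M (classical mock-up)

        # Draw a finite sample from the device, here assumed to follow Q
        sample = random.choices(outcomes, weights=[p_target[o] for o in outcomes], k=200)

        log_ratio = sum(math.log(p_target[o] / p_mockup[o]) for o in sample)
        posterior_q = 1.0 / (1.0 + math.exp(-log_ratio))   # assuming equal priors

        print(f"log-likelihood ratio after {len(sample)} events: {log_ratio:.1f}")
        print(f"posterior probability of the target hypothesis: {posterior_q:.3f}")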