89 research outputs found

    The effect of favourable and unfavourable frost on air cooling coil performance : a thesis presented in partial fulfilment of the requirements for the degree of Master of Technology, at Massey University

    The most common type of air cooling coil used in the refrigeration industry is the finned tube heat exchanger. The performance of such coils can be greatly hindered by frost formation, which occurs when the coil surface temperature is both below the dewpoint of the air passing over it and below 0°C. Frost reduces performance both through the increased thermal resistance of the frost layer and by reducing the air flow through the coil. Although frosting strongly affects coil performance, comparatively little information is available on the performance of finned tube heat exchangers under frosting conditions. Smith (1989) has proposed an "unfavourable" frost formation theory. The theory states that unfavourable frost formation occurs when the line representing the temperature and humidity of the air passing through the coil crosses the saturation line of the psychrometric chart. This criterion is more likely to be met under conditions of high relative humidity, low sensible heat ratio (SHR), and/or high refrigerant-to-air temperature difference (TD). Under unfavourable conditions it is suggested that the frost will be of particularly low density, causing coil performance to decline to a much greater extent for the same total frost accumulation than under "favourable" frosting conditions. The objectives of this study were to measure the change in performance of a cooling coil under frosting conditions and to assess the validity of the unfavourable frost formation theory. A calorimeter-style coil test facility was used that allowed coil performance to be measured as frost accumulated, in a manner consistent with coil operation in industrial practice (i.e. declining air flowrate and a wide range of SHRs). The data collected supported the concept of unfavourable frost formation, showing a more rapid decline in performance during operation at low SHR than at high SHR for the same total frost accumulation. Some recovery of coil performance was observed when operation at low SHR (with rapid performance deterioration) was followed by a period of high SHR operation. Equations were developed that allowed the theoretical conditions for the transition from favourable to unfavourable frosting to be quantified. The measured change in the rate of coil performance deterioration with frost buildup depended on air and coil conditions in a manner consistent with these equations. The transition between favourable and unfavourable frost formation appeared to be related to the lowest temperature on the coil surface rather than the mean surface temperature. Satisfactory predictions of frost formation type were obtained by using the refrigerant evaporation temperature as an approximation to the lowest coil surface temperature.
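    As a rough illustration of the criterion described above (not taken from the thesis), the following Python sketch tests whether a straight-line approximation of the coil process line on the psychrometric chart crosses the saturation curve. The Magnus-type saturation correlation, the straight-line process path, standard atmospheric pressure, and the example inlet/coil conditions are all simplifying assumptions made only for this sketch.

```python
# Rough sketch, not from the thesis: test the "unfavourable frost" criterion
# by checking whether the (assumed straight) coil process line crosses the
# psychrometric saturation curve.
import numpy as np

P_ATM = 101_325.0  # total pressure, Pa (assumed standard atmosphere)

def saturation_humidity_ratio(t_c):
    """Humidity ratio at saturation (kg water / kg dry air), Magnus approximation."""
    p_ws = 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))  # saturation vapour pressure, Pa
    return 0.622 * p_ws / (P_ATM - p_ws)

def crosses_saturation(t_air_in, w_air_in, t_coil_surface, n=200):
    """True if the straight process line from the entering air state to the
    saturated state at the coil surface enters the supersaturated region,
    i.e. unfavourable frost formation is predicted."""
    t_line = np.linspace(t_air_in, t_coil_surface, n)
    w_line = np.linspace(w_air_in, saturation_humidity_ratio(t_coil_surface), n)
    return bool(np.any(w_line > saturation_humidity_ratio(t_line)))

# Example (arbitrary values): humid air at 2 degC (90% of saturation) over a -10 degC coil
w_in = 0.9 * saturation_humidity_ratio(2.0)
print(crosses_saturation(2.0, w_in, -10.0))  # True -> unfavourable frosting expected
```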

    Why licensing authorities need to consider the net value of new drugs in assigning review priorities: Addressing the tension between licensing and reimbursement

    Pharmaceutical regulators and healthcare reimbursement authorities operate in different intellectual paradigms and adopt very different decision rules. As a result, drugs that have been licensed are often not available to all patients who could benefit, because reimbursement authorities judge that the cost of the therapies is greater than the health they produce. This creates uncertainty for pharmaceutical companies planning their research and development investment, as licensing is no longer a guarantee of market access. In this study, we propose that it would be consistent with the objectives of pharmaceutical regulators to use the Net Benefit Framework of reimbursement authorities to identify those therapies that should be subject to priority review, that it is feasible to do so, and that this would have several positive effects for patients, industry, and healthcare systems.
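    For orientation, the conventional net-benefit calculus used by reimbursement authorities can be written as below; the notation is the standard one and is not quoted from the paper, and the exact rule the authors propose for assigning review priorities may differ.

```latex
% Conventional incremental net monetary benefit (INMB); illustrative notation.
%   \lambda  : willingness-to-pay threshold per unit of health gain (e.g. per QALY)
%   \Delta E : incremental health gain;  \Delta C : incremental cost
\[
  \mathrm{INMB} = \lambda \,\Delta E - \Delta C .
\]
% Reimbursement bodies typically approve when the expected INMB is positive;
% the paper proposes drawing on this same framework when assigning review priorities.
```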

    The probability of cost-effectiveness

    BACKGROUND: The study of cost-effectiveness comparisons between competing medical interventions has led to a variety of proposals for quantifying cost-effectiveness. The differences between the various approaches can be subtle, and one purpose of this article is to clarify some important distinctions. DISCUSSION: We discuss alternative measures in the framework of individual, patient-level, incremental net benefits. In particular, we examine the probability of cost-effectiveness for an individual, proposed by Willan. SUMMARY: We argue that this is a useful addition to the range of cost-effectiveness measures, but that it will be of secondary interest to most decision makers. We also demonstrate that Willan's proposed estimate of this probability is logically flawed.
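    The distinction drawn above between an individual-level probability and the quantity most decision makers care about can be written compactly; the notation below is illustrative rather than taken verbatim from the paper.

```latex
% Illustrative notation: for patient i receiving the new treatment rather than
% the comparator, with incremental effect e_i, incremental cost c_i and
% willingness-to-pay threshold \lambda,
\[
  b_i = \lambda\, e_i - c_i ,
  \qquad
  p = \Pr(b_i > 0) \quad \text{(probability of cost-effectiveness for an individual)},
\]
% whereas the quantity of primary interest to most decision makers is whether
% the treatment is cost-effective on average, i.e. whether
\[
  \mathbb{E}[b_i] \;=\; \lambda\,\mathbb{E}[e_i] - \mathbb{E}[c_i] \;>\; 0 .
\]
```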

    Novel Statistical Model for a Piece-Wise Linear Radiocarbon Calibration Curve

    The process of calibrating radiocarbon determinations onto the calendar scale requires the setting of a specific statistical model for the calibration curve. This model specification is of fundamental importance for the resulting inference regarding the parameter of interest, namely, in general, the calendar age associated with the sample that has been 14C-dated. Traditionally, the 14C calibration curve has been modelled simply as the piece-wise linear curve joining the (internationally agreed) high-precision calibration data points, or, less frequently, by proposing spline functions in order to obtain a smoother curve. We present a model for the 14C calibration curve which, based on specific characteristics of the dating method, yields a piece-wise linear curve, but one which, rather than interpolating the data points, smooths them. We show that under this model, if a piece-wise linear curve is desired, an underlying random walk covariance structure is implied (and vice versa). Furthermore, by making comprehensive use of all the information provided by the calibration data, we achieve an improvement over current models, obtaining more realistic variance values for the calibration curve.
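    As a rough sketch of the kind of model described (the notation below is assumed for illustration and is not quoted from the paper), a random-walk prior on the calibration curve combined with Gaussian measurement error yields a posterior mean that is piece-wise linear between the calibration points yet smooths, rather than interpolates, the data.

```latex
% Sketch of a random-walk formulation (illustrative notation):
%   g(\theta)     : mean 14C age of the calibration curve at calendar age \theta
%   y_i, \sigma_i : calibration measurement and its error at known calendar age \theta_i
\[
  y_i \mid g \;\sim\; N\!\bigl(g(\theta_i),\, \sigma_i^{2}\bigr),
  \qquad
  g(\theta_{i+1}) - g(\theta_i) \;\sim\; N\!\bigl(0,\; \nu^{2}\,(\theta_{i+1}-\theta_i)\bigr).
\]
% Conditional on the calibration data, the posterior mean of g is piece-wise
% linear between the \theta_i but smooths, rather than interpolates, the y_i.
```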

    Calculating partial expected value of perfect information via Monte Carlo sampling algorithms

    Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition involves two nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample the parameters of interest and, conditional upon these, an inner loop to sample the remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed, and the mathematical conditions for their use are considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate (1) the bias due to maximization and the inaccuracy of shortcut algorithms, (2) the situation where correlated variables are present, and (3) the situation where there is nonlinearity in the net benefit functions. If even relatively small correlation or nonlinearity is present, the shortcut algorithm can be substantially inaccurate. Empirical investigation of the number of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level can be efficient, and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. Wider application of partial EVPI is recommended, both for greater understanding of decision uncertainty and for analyzing research priorities.
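    The nested two-level structure described above can be made concrete with a short sketch. The Python snippet below is an illustration rather than the authors' code: the toy net-benefit model, the parameter distributions, and the outer/inner sample sizes are assumptions chosen only so the example runs. The point is the structure of the outer loop over the parameters of interest, the inner conditional loop, and the maximisation step that introduces upward bias when the inner sample is small.

```python
# Illustrative two-level Monte Carlo estimator of partial EVPI for a toy model.
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(decision, phi, psi):
    """Toy net benefit of each decision given the parameter of interest (phi)
    and the remaining uncertain parameter (psi). Values are arbitrary."""
    if decision == 0:
        return 1000.0 + 0.0 * phi * psi          # comparator
    return 800.0 + 400.0 * phi - 150.0 * psi     # new intervention

def partial_evpi(n_outer=1000, n_inner=1000):
    decisions = (0, 1)

    # Expected net benefit with current information: max over decisions of E[NB].
    phi = rng.normal(1.0, 0.5, size=n_outer * n_inner)
    psi = rng.normal(1.0, 0.5, size=n_outer * n_inner)
    enb_current = max(net_benefit(d, phi, psi).mean() for d in decisions)

    # Outer loop: sample the parameter of interest phi.
    # Inner loop: conditional on phi, sample psi and average NB, then take the
    # maximum over decisions (the step that biases small-inner-sample estimates upward).
    outer_values = np.empty(n_outer)
    for i in range(n_outer):
        phi_i = rng.normal(1.0, 0.5)
        psi_inner = rng.normal(1.0, 0.5, size=n_inner)
        outer_values[i] = max(net_benefit(d, phi_i, psi_inner).mean()
                              for d in decisions)
    return outer_values.mean() - enb_current

print(f"partial EVPI estimate: {partial_evpi():.1f}")
```

In this toy model the estimate is positive because perfect knowledge of phi can reverse the adoption decision for sufficiently low values of phi.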

    The multiple sclerosis risk sharing scheme monitoring study - early results and lessons for the future

    Background: Risk sharing schemes represent an innovative and important approach to the problems of rationing and achieving cost-effectiveness in high cost or controversial health interventions. This study aimed to assess the feasibility of risk sharing schemes, looking at long term clinical outcomes, to determine the price at which high cost treatments would be acceptable to the NHS. Methods: This case study of the first NHS risk sharing scheme, a long term prospective cohort study of beta interferon and glatiramer acetate in multiple sclerosis (MS) patients in 71 specialist MS centres in UK NHS hospitals, recruited adults with relapsing forms of MS who met Association of British Neurologists (ABN) criteria for disease modifying therapy. Outcome measures were: success of recruitment and follow up over the first three years, analysis of baseline and initial follow up data, and the prospect of estimating the long term cost-effectiveness of these treatments. Results: Centres consented 5560 patients. Of the 4240 patients who had been in the study for at least one year, annual review data were available for 3730 (88.0%). Of the patients who had been in the study for at least two years and three years, subsequent annual review data were available for 2055 (78.5%) and 265 (71.8%) patients respectively. Baseline characteristics and a small but statistically significant progression of disease were similar to those reported in previous pivotal studies. Conclusion: Successful recruitment, follow up and early data analysis suggest that risk sharing schemes should be able to deliver their objectives. However, important issues of analysis, and political and commercial conflicts of interest, still need to be addressed.

    Protection of Rhesus Monkeys by a DNA Prime/Poxvirus Boost Malaria Vaccine Depends on Optimal DNA Priming and Inclusion of Blood Stage Antigens

    This work builds on a previously described multi-stage DNA prime/poxvirus boost vaccine against Plasmodium knowlesi (Pk) malaria, which includes two pre-erythrocytic antigens, PkCSP and PkSSP2(TRAP), and two erythrocytic antigens, PkAMA-1 and PkMSP-1(42kD). The present study reports three further experiments investigating the effects of DNA dose, timing, and formulation. We also compare vaccines utilizing only the pre-erythrocytic antigens with the four-antigen vaccine. In three experiments, rhesus monkeys were immunized with malaria vaccines using DNA plasmid injections followed by boosting with a poxvirus vaccine. A variety of parameters were tested, including formulation of the DNA on poly-lactic co-glycolide (PLG) particles, varying the number of DNA injections and the amount of DNA, varying the interval between the last DNA injection and the poxvirus boost from 7 to 21 weeks, and using vaccines with from one to four malaria antigens. Monkeys were challenged with Pk sporozoites given intravenously 2 to 4 weeks after the poxvirus injection, and parasitemia was measured by daily Giemsa-stained blood films. Immune responses in venous blood samples taken after each vaccine injection were measured by interferon-γ ELISpot and by ELISA. The main findings were: 1) the number of DNA injections, the formulation of the DNA plasmids, and the interval between the last DNA injection and the poxvirus injection are critical to vaccine efficacy, whereas the total dose used for DNA priming is not as important; 2) the blood stage antigens PkAMA-1 and PkMSP-1 were able to protect against high parasitemias as part of a genetic vaccine in which antigen folding is not well defined; 3) immunization with PkSSP2 DNA inhibited immune responses to PkCSP DNA even when the vaccinations were given into separate legs; and 4) in a counter-intuitive result, higher interferon-γ ELISpot responses to the PkCSP antigen correlated with earlier appearance of parasites in the blood, despite the fact that PkCSP vaccines had a protective effect.

    Exploiting a Rose Bengal-bearing, oxygen-producing nanoparticle for SDT and associated immune-mediated therapeutic effects in the treatment of pancreatic cancer

    Sonodynamic therapy (SDT) is an emerging stimulus-responsive approach for the targeted treatment of solid tumours. However, its ability to generate stimulus-responsive cytotoxic reactive oxygen species (ROS) is compromised by tumour hypoxia. Here we describe a robust means of preparing a pH-sensitive polymethacrylate-coated CaO2 nanoparticle that is capable of transiently alleviating tumour hypoxia. Systemic administration of the particles to animals bearing human xenograft BxPC3 pancreatic tumours increases oxygen partial pressures (pO2) to 20-50 mmHg for over 40 min. RT-qPCR analysis of the expression of selected tumour marker genes in treated animals suggests that this transient production of oxygen is sufficient to elicit effects at a molecular genetic level. Using particles labelled with the near-infrared (NIR) fluorescent dye indocyanine green, selective uptake of the particles by tumours was observed. Systemic administration of particles containing Rose Bengal (RB) at a loading of 0.1 mg per mg of particles elicited nanoparticle-induced, SDT-mediated antitumour effects in the BxPC3 human pancreatic tumour model in immunocompromised mice. Additionally, a potent abscopal effect was observed in off-target tumours in a syngeneic murine bilateral tumour model of pancreatic cancer, and an increase in tumour cytotoxic T cells (CD8+) together with a decrease in immunosuppressive tumour regulatory T cells [Treg (CD4+, FoxP3+)] was observed in both target and off-target tumours in SDT-treated animals. We suggest that this approach offers significant potential in the treatment of both focal and disseminated (metastatic) pancreatic cancer.
    • …