
    Higher dietary magnesium intake and higher magnesium status are associated with lower prevalence of coronary heart disease in patients with Type 2 Diabetes

    In type 2 diabetes mellitus (T2D), the handling of magnesium is disturbed. Magnesium deficiency may be associated with a higher risk of coronary heart disease (CHD). We investigated the associations of (1) dietary magnesium intake, (2) 24 h urinary magnesium excretion, and (3) plasma magnesium concentration with prevalent CHD in T2D patients. This cross-sectional analysis was performed on baseline data from the DIAbetes and LifEstyle Cohort Twente-1 (DIALECT-1, n = 450, age 63 ± 9 years, 57% men, and diabetes duration of 11 (7–18) years). Prevalence ratios (95% CI) of CHD by sex-specific quartiles of each magnesium indicator, as well as by magnesium intake per dietary source, were determined using multivariable Cox proportional hazards models. CHD was present in 100 (22%) subjects. Adjusted CHD prevalence ratios for the highest compared to the lowest quartiles were 0.40 (0.20, 0.79) for magnesium intake, 0.63 (0.32, 1.26) for 24 h urinary magnesium excretion, and 0.62 (0.32, 1.20) for plasma magnesium concentration. For every 10 mg increase of magnesium intake from vegetables, the prevalence of CHD was, statistically non-significantly, lower (0.75 (0.52, 1.08)). In this T2D cohort, higher magnesium intake, higher 24 h urinary magnesium excretion, and higher plasma magnesium concentration were associated with a lower prevalence of CHD.
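Prevalence ratios from cross-sectional data are commonly estimated by fitting a Cox model with a constant follow-up time and robust variance; the following is a minimal sketch of that general approach, not the study's own analysis code (file, column and covariate names are hypothetical):

```python
# Sketch: prevalence ratios from cross-sectional data via Cox regression
# with constant follow-up time (a standard trick; robust variance is
# needed for valid CIs). All column names below are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("dialect1_baseline.csv")  # hypothetical file

# Give every subject the same follow-up time; the "event" is prevalent
# CHD. The fitted hazard ratio then approximates the prevalence ratio.
df["time"] = 1.0
df["event"] = df["chd_prevalent"].astype(int)

# Sex-specific quartiles of magnesium intake (0 = lowest quartile);
# treated here as a numeric trend term for brevity.
df["mg_quartile"] = (
    df.groupby("sex")["mg_intake_mg"]
      .transform(lambda x: pd.qcut(x, 4, labels=False))
)

cph = CoxPHFitter()
cph.fit(
    df[["time", "event", "mg_quartile", "age", "sex", "bmi"]],
    duration_col="time",
    event_col="event",
    robust=True,  # sandwich variance, required with constant time
)
cph.print_summary()  # exp(coef) approximates the prevalence ratio
```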

    Effect of pooling samples on the efficiency of comparative studies using microarrays

    Many biomedical experiments are carried out by pooling individual biological samples. However, pooling samples can potentially hide biological variance and give false confidence concerning the significance of the data. In the context of microarray experiments for detecting differentially expressed genes, recent publications have addressed the problem of the efficiency of sample pooling, and some approximate formulas were provided for the power and sample size calculations. It is desirable to have exact formulas for these calculations and to have the approximate results checked against the exact ones. We show that the difference between the approximate and exact results can be large. In this study, we have characterized quantitatively the effect of pooling samples on the efficiency of microarray experiments for the detection of differential gene expression between two classes. We present exact formulas for calculating the power of microarray experimental designs involving sample pooling and technical replications. The formulas can be used to determine the total numbers of arrays and biological subjects required in an experiment to achieve the desired power at a given significance level. The conditions under which a pooled design becomes preferable to a non-pooled design can then be derived, given the unit cost associated with a microarray and that with a biological subject. This paper thus serves to provide guidance on sample pooling and cost effectiveness. The formulation in this paper is outlined in the context of performing microarray comparative studies, but its applicability is not limited to microarray experiments. It is also applicable to a wide range of biomedical comparative studies where sample pooling may be involved.

    Comment: 8 pages, 1 figure, 2 tables; to appear in Bioinformatics.
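The exact formulas are not reproduced in the abstract; the following is a minimal sketch of the standard normal-theory approximation for the power of a pooled two-class design, where sigma_b and sigma_e denote the biological and technical standard deviations. All parameter values are illustrative:

```python
# Approximate normal-theory power for detecting a mean log-expression
# difference delta between two classes using pooled samples. This is a
# sketch of the approximation discussed in the paper (the paper itself
# derives exact formulas). All numbers below are illustrative.
from scipy.stats import norm

def pooled_power(delta, sigma_b, sigma_e, n_pools, r_per_pool,
                 m_arrays=1, alpha=0.05):
    """Power of a two-sided two-sample z-test for a pooled design.

    delta      -- true mean difference between the two classes
    sigma_b    -- biological standard deviation between subjects
    sigma_e    -- technical (array) standard deviation
    n_pools    -- number of independent pools per class
    r_per_pool -- subjects pooled into each pool
    m_arrays   -- technical replicate arrays per pool
    """
    # Pooling averages r biological units (variance sigma_b^2 / r);
    # technical replication averages the array noise (sigma_e^2 / m).
    var_pool = sigma_b**2 / r_per_pool + sigma_e**2 / m_arrays
    se = (2 * var_pool / n_pools) ** 0.5
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / se - z) + norm.cdf(-delta / se - z)

# Non-pooled: 12 subjects per class, one array each.
print(pooled_power(1.0, sigma_b=0.8, sigma_e=0.4, n_pools=12, r_per_pool=1))
# Pooled: the same 12 subjects per class in 4 pools of 3, so only
# 4 arrays per class are needed, at some cost in power.
print(pooled_power(1.0, sigma_b=0.8, sigma_e=0.4, n_pools=4, r_per_pool=3))
```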

    Accounting for the effect of concentration fluctuations on toxic load for gaseous releases of carbon dioxide

    Research Highlights
• An approach to account for the effect of concentration fluctuations on toxic load is investigated in the context of land-use planning for major hazard sites.
• For momentum-dominated free jets of CO2 gas, the approach is shown to be conservative.
• For low-momentum dense CO2 plumes, the validity of the approach is uncertain.
• Recommendations are provided for additional analysis of experimental data and numerical simulations in order to address this uncertainty.
• Measurements of concentration fluctuations in large-scale CO2 release experiments would be beneficial.

Abstract
In Great Britain, advice on land-use planning decisions in the vicinity of major hazard sites and pipelines is provided to Local Planning Authorities by the Health and Safety Executive (HSE), based on quantified risk assessments of the risks to the public in the event of an accidental release. For potential exposures to toxic substances, the hazard and risk is estimated by HSE on the basis of a "toxic load". For carbon dioxide (CO2), this is calculated from the time-integral of the gas concentration raised to the power eight. As a consequence of this highly non-linear dependence of the toxic load on the concentration, turbulent concentration fluctuations that occur naturally in jets or plumes of CO2 may have a significant effect on the calculated hazard ranges. Most dispersion models used for QRA only provide estimates of the time- or ensemble-averaged concentrations. If only mean concentrations are used to calculate the toxic load, and the effects of concentration fluctuations are ignored, there is a danger that toxic loads and hence hazard ranges will be significantly under-estimated. This paper explores a simple and pragmatic modification to the calculation procedure for CO2 toxic load calculations. It involves the assumption that the concentration fluctuates by a factor of two with a prescribed square-wave variation over time. To assess the validity of this methodology, two simple characteristic flows are analysed: the free jet and the dense plume (or gravity current). In the former case, an empirical model is used to show that the factor-of-two approach provides conservative estimates of the hazard range. In the latter case, a survey of the literature indicates that there is at present insufficient information to come to any definite conclusions. Recommendations are provided for future work to investigate the concentration fluctuation behaviour in dense CO2 plumes. This includes further analysis of existing dense gas dispersion data, measurements of concentration fluctuations in ongoing large-scale CO2 release experiments, and numerical simulations.

Keywords: concentration fluctuations, carbon dioxide, toxic load, free jet, dense plume, land-use planning

Introduction
In Great Britain, advice on land-use planning decisions in the vicinity of major hazard sites and pipelines is provided to Local Planning Authorities by the Health and Safety Executive (HSE), based on Quantified Risk Assessments (QRA) for the risks to the public in the event of an accidental release. For potential exposures to toxic substances, QRA is based on estimates of individual or societal risk for exposures to amounts of substances that would result in certain levels of toxicity. The toxicological hazard is determined by HSE, based on the duration of exposure as specified according to the Toxic Load (TL) (HSE, 2008).
Risk estimates are based on the likelihood of a hypothetical individual receiving an exposure equal to or greater than a threshold level of TL known as the Specified Level of Toxicity (SLOT). The TL relating to the mortality of 50% of an exposed population is also specified by a threshold level known as the Significant Likelihood of Death (SLOD). Further information on the SLOT and SLOD concepts is available in published HSE guidance. To calculate the TL, HSE uses the well-known formula of ten Berge:

TL = \int_0^T c^n \, dt

where c is the instantaneous gas concentration at a point in space, T is the duration of exposure and n is the ten Berge toxic load exponent, which is specific to the particular substance released. Values of n together with SLOT and SLOD levels are provided for chemicals of major hazard interest by HSE (2008); reviews of alternative toxic load models are available in the literature. However, for substances where n is greater than unity, fluctuations in concentration over time can have a significant effect on the toxic load, an effect previously demonstrated in studies of chlorine releases. For carbon dioxide, the ten Berge exponent n is eight (HSE, 2008), reflecting the highly non-linear response to exposure. A factor of two increase in CO2 concentration therefore produces a factor of 256 increase in the toxic load. Any fluctuations in concentration above the mean level will very quickly tend to increase the toxic load. Basing TL calculations for CO2 solely on the mean concentration could therefore lead to a significant under-estimate of the hazard range. In practically all foreseeable releases of CO2, the dispersion of the gas will involve some fluctuations in concentration over time due to turbulence. Turbulence is produced from the strong shear layers induced by high-momentum jets, from frictional effects from a dense current rolling along the ground, or from turbulence already present in the atmosphere. Even if gas is produced at the source at a constant rate, an observer at some distance downstream will in nearly all circumstances be subjected to a time-varying concentration. This phenomenon is very well known and has been the subject of extensive study. Many dispersion models used for risk assessment purposes are unable to provide reliable estimates of the concentration fluctuations over time. A notable exception is the FROST software developed by GL Noble Denton (P. Cleaver, Personal Communication, January 2011), which assumes profiles for both peak and mean concentrations, and hence allows the effects of concentration fluctuations to be included in a simple manner; some other models, such as DRIFT, provide partial information of this kind, but such capabilities are not widespread. To provide a practical and simple means of moving forward, the present paper examines a simple and pragmatic modification to the calculation procedure for CO2 toxic load. It involves the assumption that the concentration fluctuates by a factor of two with a prescribed square-wave variation over time, i.e. it is assumed that the concentration is twice the mean for half of the time, and zero for the remaining time. This rudimentary approach is not intended to provide a realistic reflection of actual turbulent fluctuations, but is merely aimed at incorporating the effects of fluctuations on the toxic load to a very basic degree. The utility of such an approach is that it can be readily implemented in simple dispersion models and therefore provides a practical solution methodology at the present time. In the future, more robust scientifically-based models will no doubt be proposed for use in risk assessment.
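The amplification implied by the square-wave assumption follows directly from the ten Berge formula; a minimal worked sketch (concentration and duration values are illustrative):

```python
# Worked example of the ten Berge toxic load TL = integral of c^n dt
# under (a) the mean concentration only and (b) the factor-of-two
# square wave (twice the mean for half the time, zero otherwise).
n = 8          # ten Berge exponent for CO2 (HSE, 2008)
c_mean = 0.05  # mean concentration, volume fraction (illustrative)
T = 600.0      # exposure duration, s (illustrative)

tl_mean = c_mean**n * T                # fluctuations ignored
tl_square = (2 * c_mean)**n * (T / 2)  # square-wave assumption

print(tl_square / tl_mean)  # 2**(n-1) = 128 for n = 8
```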
The notion of a factor of two variation about the mean to account for turbulent concentration fluctuations is not a new concept. It is commonly used in the context of flammable vapour clouds, where hazard ranges are often defined as the location where the predicted average gas concentration reaches half of the Lower Flammability Limit (50% LFL). To analyse whether the methodology involving a factor-of-two variation about the mean is valid for CO2 toxic load calculations, two idealised scenarios are examined in the present work: the free jet and the dense plume (or gravity current).

Free Jets
The empirically-based toxic load model for free jets examined here is derived from a previously published flammability factor model. Calculating the toxic load requires two simple modifications of the flammability factor model. Firstly, the concentration Probability Distribution Function (PDF) is integrated between concentration volume fraction limits of zero and one (rather than just between the flammable limits) and, secondly, the concentration is raised to the ten Berge toxic load exponent, as follows:

TL = T \int_0^1 c^n \, \tilde{p}(c) \, dc

where \tilde{p}(c) is the concentration PDF, and the time-varying concentration, c, is expressed as a volume fraction (a numerical sketch of this integral is given below). In this model, the mean concentration along the jet centreline is determined from published empirical profiles. The present model was implemented in MATLAB and the integral evaluated numerically. The results show, as expected, that the smallest hazard range is produced if concentration fluctuations are ignored and only the mean concentration is used to determine the toxic load. Assuming a factor-of-two variation about the mean produces the largest hazard range. The distance from the jet source to the SLOT or SLOD is approximately 50% higher when the factor-of-two model is adopted, compared to the approach where concentration fluctuations are ignored. Results from the PDF model suggest that the factor-of-two approach is conservative in terms of the distance to the SLOT and SLOD on the centreline of the jet. Since the intensity of the fluctuations increases towards the periphery of the jet, the PDF model predicts the toxic effect to extend over a wider area near the base of the jet than the other two model results, which are based solely on the mean concentration contours. The PDF on which the present model is based was derived from experimental measurements of gas concentration in free jets. The results shown here are for free jets in a quiescent environment. Commonly, risk assessments consider releases in non-zero wind speeds; the present model is not valid under these conditions, although gas jets in a cross flow have been studied experimentally elsewhere.

Low-Momentum Dense Plumes
Although CO2 is likely to be stored and transported at high pressure, perhaps in the supercritical or dense-phase state, an accidental release will typically produce a cold, dense vapour cloud at atmospheric pressure. As the dense CO2 vapour cloud spreads along the ground away from the source it will entrain fresh air and dilute. However, unlike many toxic gases such as chlorine, which remains toxic down to very low concentrations, the short-term exposure levels for CO2 are relatively high. Its Immediately Dangerous to Life and Health (IDLH) concentration is 4% vol/vol (40,000 ppm), compared to just 0.001% vol/vol (10 ppm) for chlorine (NIOSH, 1995).
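Returning briefly to the free-jet model: the paper's actual concentration PDF comes from the flammability factor model and is not reproduced here, so the following sketch of the PDF-based toxic load integral assumes, purely for illustration, a beta-distribution PDF with moments matched to a hypothetical local mean and fluctuation intensity:

```python
# Sketch of the PDF-based toxic load at a single point:
#   TL = T * integral_0^1 c^n * p(c) dc
# A beta distribution stands in for the paper's PDF, with its mean and
# variance matched to an assumed local mean concentration and
# fluctuation intensity (all values illustrative).
from scipy.integrate import quad
from scipy.stats import beta

n = 8            # ten Berge exponent for CO2
T = 600.0        # exposure duration, s (illustrative)
c_mean = 0.05    # local mean concentration, volume fraction
intensity = 0.5  # rms fluctuation / mean (illustrative)

# Match beta parameters (a, b) to the chosen mean and variance.
var = (intensity * c_mean) ** 2
k = c_mean * (1 - c_mean) / var - 1
a, b = c_mean * k, (1 - c_mean) * k

tl, _ = quad(lambda c: c**n * beta.pdf(c, a, b), 0.0, 1.0)
tl *= T
print(tl, c_mean**n * T)  # PDF-based TL vs mean-only TL
```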
If it is assumed that the source of CO2 gas is at its sublimation temperature at atmospheric pressure (-78.5 °C), and the ambient temperature is 0 °C, then by the time that the CO2 has diluted to 4% vol/vol, the CO2-air mixture will have a density 5% greater than ambient. At a higher CO2 concentration of 10% vol/vol, which causes unconsciousness after 30 minutes' exposure (NORSOK, 2001), the gas mixture will be 11% denser than ambient. Therefore, over the range of concentrations of practical interest, it is likely that a large, low-momentum CO2 release will exhibit gravity effects. The CO2 cloud will not behave as a passive or neutral tracer gas. Dense gas clouds exhibit different dispersion behaviour to those of neutrally-buoyant gases. Gravitational forces act to accelerate the cloud, whilst the vertical density gradient tends to suppress turbulence and reduce dilution. Concentrations inside the spreading dense plume tend to be more uniform than those in the equivalent passive plume. Therefore, as the dense plume meanders it produces lower intensity fluctuations in the core and higher intensity fluctuations on the periphery as compared to equivalent passive plumes (see Britter, 1988). The bulk of research efforts to analyse concentration fluctuations in gas dispersion have been undertaken for passive, neutrally-buoyant plumes. Further work on concentration fluctuations has been undertaken by Wilson and co-workers at the University of Alberta, and a significant body of research has been built up over the last 30 years by Chatwin, Mole, Nielsen and co-workers at Sheffield University and Risø National Laboratory. Research into concentration fluctuations with more focus on practical models for risk assessment has also been undertaken, as has statistical analysis of concentration in dense gas clouds. None of the work examined as part of this literature review was found to provide a model for concentration fluctuations in dispersing dense gas clouds that could be used to calculate the toxic load. However, some previously reported studies provide anecdotal evidence of the potential magnitude of certain relevant parameters.

Future Directions
The literature survey has not revealed any further useful insight into concentration fluctuations in dense gas releases to validate the proposed factor-of-two model. Three potential avenues for future work are as follows. Firstly, a thorough assessment of previous field-scale dense gas dispersion experiments involving time-resolving gas concentration measurements could be undertaken. The measurements need to have been taken at a rate equal to, or faster than, the human breathing rate (approx. 0.3 Hz). In some cases, it may be possible to infer concentration values from thermocouple measurements of fluctuating temperature, using an approach similar to that adopted by Witcofski and Chirivella (1984). It has already been established that dense gas experiments exhibit a degree of scatter due to the stochastic nature of the flow. Secondly, if sufficient concentration fluctuation data does not yet exist, it could be generated by new field-scale experiments. There are difficulties in interpreting data from reduced-scale wind tunnel tests due to the need to scale dimensionless parameters for both buoyancy and turbulence simultaneously.
Often, wind tunnel tests are performed at lower Reynolds numbers, which do not feature the full range of turbulence scales, or the slow changes in conditions which are present in the atmosphere. At the present time, a number of field-scale CO2 releases are planned in order to support the risk assessment of planned carbon capture, transport and storage infrastructure. This includes the medium-scale and field-scale tests to be undertaken as part of the EU-funded CO2PipeHaz project (http://www.co2pipehaz.eu), and the large-scale tests to be undertaken as part of the National Grid COOLTRANS project. In view of this, it would be advantageous to maximise the potential benefits from these large and costly experiments by recording time-varying CO2 concentrations (or at least temperatures), which could subsequently be used to develop concentration PDFs. Thirdly, the matter could be investigated by numerical simulations, using methods in which time-varying concentrations are resolved. The most promising avenue is to use Large-Eddy Simulation (LES); this approach has previously been used to assess concentration fluctuations in passive and buoyant plumes. Once these analyses have been performed, it would be beneficial to revisit the factor-of-two square-wave model proposed here. If it were shown to be significantly under- or over-conservative, other alternatives could be investigated, such as the use of prescribed triangular or sinusoidal variations in concentration over time (see the sketch below).

Discussion and Conclusions
The present work has examined the validity of a simple approach to account for the effect of concentration fluctuations in calculating the toxic load for atmospheric CO2 releases. It is based on the assumption that the concentration at any point in space fluctuates by a factor of two with a prescribed square-wave variation over time. Analysis of free jets of CO2 using a PDF-based model originally derived to predict the ignition probability of flammable gas jets has shown that this factor-of-two approach produces conservative predictions of the hazard range, in terms of the maximum distance to the SLOT and SLOD. For low-momentum plumes of dense CO2 gas, a review of the literature has shown that, at present, it is not possible to establish the validity of the factor-of-two model. Suggestions have been provided for future work to address this matter, involving analysis of existing data, new field-scale measurements and numerical simulations using LES. It is clear from the literature review and analysis presented in the current work that if only mean concentrations are used to calculate the toxic load, hazard ranges for CO2 releases are likely to be significantly under-predicted. Given the current state of knowledge, it is unclear whether the proposed factor-of-two model will give rise to conservative predictions in all circumstances. However, at the very least this approach provides a step in the right direction, and incorporates the effect of fluctuations on the toxic load in a way that can be easily adopted using the current generation of quantified risk assessment models. Impact analysis will show whether or not the approach leads to untenable (over-conservative) hazard ranges in scenarios of practical interest. As scientific understanding develops, and more sophisticated, practical models are developed, it will be necessary to reassess this methodology.
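On the alternative waveforms mentioned above: their relative severity can be compared directly through the ten Berge formula. A minimal numerical sketch of the toxic-load amplification, relative to a mean-only calculation, for square, triangular and sinusoidal variations sharing the same mean:

```python
# Toxic-load amplification factor E[c(t)^n] / c_mean^n for three
# prescribed fluctuation shapes sharing the same mean concentration:
# a square wave (0 to 2*c_mean), a triangular wave (0 to 2*c_mean) and
# a sinusoid c_mean*(1 + sin(wt)). One period, mean normalised to 1.
import numpy as np

n = 8  # ten Berge exponent for CO2
t = np.linspace(0.0, 1.0, 200_000, endpoint=False)

waveforms = {
    "square":     np.where(t < 0.5, 2.0, 0.0),
    "triangular": 2.0 * np.abs(2.0 * t - 1.0),
    "sinusoidal": 1.0 + np.sin(2.0 * np.pi * t),
}

for name, c in waveforms.items():
    # Time-average of c^n over one period approximates E[c(t)^n].
    print(f"{name:>10s}: amplification = {np.mean(c**n):6.1f}")
# Expected for n = 8: square = 2**(n-1) = 128, triangular = 2**n/9
# (about 28.4), sinusoidal about 50.3; the square wave is the most
# conservative of the three.
```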

    “Cautiously Optimistic”: Older Parent-Carers of Adults with Intellectual Disabilities' Response to the Care Act 2014

    This paper discusses potential opportunities for best practice in the UK that may be brought about by the Care Act (2014). Carers in the UK were given new rights within this legislation, with a focus on needs-led assessment. The underpinning philosophy of the Care Act is to streamline previous legislation and offer a framework for carers and people in receipt of care that enables a more personalised approach to care and support.

    Thickness-Dependent Differential Reflectance Spectra of Monolayer and Few-Layer MoS2, MoSe2, WS2 and WSe2

    The research field of two-dimensional (2D) materials strongly relies on optical microscopy characterization tools to identify atomically thin materials and to determine their number of layers. Moreover, optical microscopy-based techniques have opened the door to studying the optical properties of these nanomaterials. We present a comprehensive study of the differential reflectance spectra of 2D semiconducting transition metal dichalcogenides (TMDCs), MoS2, MoSe2, WS2, and WSe2, with thickness ranging from one layer up to six layers. We analyze the thickness-dependent energy of the different excitonic features, indicating the change in the band structure of the different TMDC materials with the number of layers. Our work provides a route to employing differential reflectance spectroscopy for determining the number of layers of MoS2, MoSe2, WS2, and WSe2.

    Comment: Main text (3 figures) and Supp. Info. (23 figures).
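Differential reflectance is obtained from reflectance spectra measured on the flake and on the bare substrate; a minimal sketch follows, where file names, column layout and peak-finding thresholds are hypothetical rather than taken from the paper:

```python
# Minimal sketch: differential reflectance spectrum of a thin flake,
#   DR = (R_flake - R_substrate) / R_substrate,
# followed by a simple search for excitonic peaks. File names, column
# layout and the peak-prominence threshold are hypothetical.
import numpy as np
from scipy.signal import find_peaks

# Each file: two columns, photon energy (eV) and measured reflectance.
energy, r_flake = np.loadtxt("flake_reflectance.txt", unpack=True)
_, r_substrate = np.loadtxt("substrate_reflectance.txt", unpack=True)

dr = (r_flake - r_substrate) / r_substrate

# Excitonic features (e.g. the A and B excitons of MoS2) appear as
# peaks in the differential reflectance spectrum.
peaks, _ = find_peaks(dr, prominence=0.02)
for i in peaks:
    print(f"feature at {energy[i]:.3f} eV, DR = {dr[i]:.3f}")
```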

    The Unyvero P55 ‘sample-in, answer-out’ pneumonia assay: A performance evaluation

    Background: O’Neill’s recent Review on Antimicrobial Resistance expressed the view that by 2020 high-income countries should make it mandatory to support antimicrobial prescribing with rapid diagnostic evidence whenever possible. Methods: Routine microbiology diagnoses of 95 respiratory specimens from patients with severe infection were compared with those generated by the Unyvero P55 test, which detects 20 pathogens and 19 antimicrobial resistance markers. Supplementary molecular testing for antimicrobial resistance genes, comprehensive culture methodology and 16S rRNA sequencing were performed. Results: Unyvero P55 produced 85 valid results, 67% of which were concordant with those from the routine laboratory. Unyvero P55 identified more potential pathogens per specimen than routine culture (1.34 vs. 0.47 per specimen). Independent verification using 16S rRNA sequencing and culture (n = 10) corroborated 58% of the additional detections compared to routine microbiology. Overall, the average sensitivity for organism detection by Unyvero P55 was 88.8% and the specificity was 94.9%. While Unyvero P55 detected more antimicrobial resistance markers than routine culture, some instances of phenotypic resistance were missed. Conclusions: The Unyvero P55 is a rapid pathogen detection test for lower respiratory specimens which identifies a larger number of pathogens than routine microbiology. The clinical significance of these additional organisms is yet to be determined. Further studies are required to determine the effect of the test in practice on antimicrobial prescribing and patient outcomes.
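Per-organism sensitivity and specificity follow from the 2x2 agreement between the assay and the composite reference method; a minimal sketch with purely illustrative counts (not the study's data):

```python
# Minimal sketch: per-organism sensitivity and specificity of an assay
# against a reference standard (routine culture plus supplementary
# testing). The counts below are illustrative, not the study's data.
def sens_spec(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2x2 confusion counts."""
    return tp / (tp + fn), tn / (tn + fp)

# organism: (true pos, false pos, false neg, true neg) across specimens
results = {
    "S. aureus":     (20, 3, 2, 70),
    "P. aeruginosa": (11, 4, 2, 78),
}

for organism, counts in results.items():
    se, sp = sens_spec(*counts)
    print(f"{organism}: sensitivity {se:.1%}, specificity {sp:.1%}")
```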

    Urban tourism and population change: Gentrification in the age of mobilities

    The pre-pandemic unbridled growth of tourism has triggered a significant debate regarding the future of cities; several authors suggest that neighbourhood change produced by tourism should be conceived as a form of gentrification. Yet research on population shifts—a fundamental dimension of gentrification—in such neighbourhoods is scarce. Our exploration of the Gòtic area in Barcelona, using quantitative and qualitative techniques, reveals a process of population restructuring characterised by a decrease in long-term residents and inhabited dwellings, and the arrival of young and transnational gentrifiers who are increasingly mobile and form a transient population. We then use insights from the mobilities literature to make sense of these results. In the gentrification of the Gòtic, the attractiveness of the area for visitors and for a wider palette of transnational dwellers feed one another, resulting in an uneven negotiation whereby wealthier and more ‘footloose’ individuals gain access to and control of space and housing over less mobile and more dependent populations.

    A simple and robust method for connecting small-molecule drugs using gene-expression signatures

    Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm, these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties. The Connectivity Map was a novel concept and innovative tool first introduced by Lamb et al. to connect small molecules, genes, and diseases using genomic signatures [Lamb et al. (2006), Science 313, 1929-1935]. However, the Connectivity Map had some limitations; in particular, there was no effective safeguard against false connections if the observed connections were considered on an individual-by-individual basis. Furthermore, when several connections to the same small-molecule compound were viewed as a set, the implicit null hypothesis tested was not the most relevant one for the discovery of real connections. Here we propose a simple and robust method for constructing the reference gene-expression profiles and a new connection scoring scheme, which importantly allows the evaluation of the statistical significance of all the connections observed. We tested the new method with the two example gene signatures (HDAC inhibitors and estrogens) used by Lamb et al. and also with a new gene signature of immunosuppressive drugs. Our testing shows that the new method achieves a higher level of specificity and sensitivity than the original method. For example, our method successfully identified raloxifene and tamoxifen as having significant anti-estrogen effects, while Lamb et al.'s Connectivity Map failed to identify these. With these properties, our new method has potential use in drug development for the recognition of pharmacological and toxicological properties in new drug candidates.

    Comment: 8 pages, 2 figures, and 2 tables; supplementary data supplied as a ZIP file.
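The exact construction of the reference profiles and scoring scheme is given in the paper itself; purely as an illustration of the general idea, here is a generic signed-rank connection score with a permutation-based significance estimate (not the authors' exact scheme; gene identifiers are hypothetical):

```python
# Illustrative signed-rank connection score with a permutation p-value,
# in the spirit of gene-signature connection scoring. This is a generic
# sketch, not the authors' exact method. The reference profile is a
# ranking of genes from most up- to most down-regulated; the query
# signature tags genes +1 (expected up) or -1 (expected down).
import numpy as np

rng = np.random.default_rng(0)

def connection_score(ranked_genes, signature, n_perm=10_000):
    """Return (raw score, permutation p-value) for a query signature."""
    n = len(ranked_genes)
    pos = {g: i for i, g in enumerate(ranked_genes)}
    # Centred rank: genes near the top contribute positively, genes
    # near the bottom negatively.
    centred = {g: (n - 1) / 2 - pos[g] for g in signature if g in pos}
    score = sum(sign * centred[g] for g, sign in signature.items()
                if g in centred)

    # Null distribution: signature genes placed at random positions.
    signs = np.array([signature[g] for g in centred])
    null = np.array([
        np.sum(signs * ((n - 1) / 2
                        - rng.choice(n, size=len(signs), replace=False)))
        for _ in range(n_perm)
    ])
    p = np.mean(np.abs(null) >= abs(score))
    return score, p

# Toy example: a strongly concordant signature in a 1000-gene ranking.
ranking = [f"g{i}" for i in range(1000)]
sig = {"g1": +1, "g2": +1, "g998": -1, "g999": -1}
print(connection_score(ranking, sig))
```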

    Antibiotic prescribing decisions in intensive care: A qualitative study
