
    Milk whey protein concentration and mRNA associated with β-lactoglobulin phenotype

    Two common genetic variants of β-lactoglobulin (β-lg), A and B, exist as co-dominant alleles in dairy cattle (Aschaffenburg, 1968). Numerous studies have shown that cows homozygous for β-lg A have more β-lg and less α-lactalbumin (α-la) and casein in their milk than cows expressing only the B variant of β-lg (Ng-Kwai-Hang et al. 1987; Graml et al. 1989; Hill, 1993; Hill et al. 1995, 1997). These differences have a significant impact on the processing characteristics of the milk. For instance, the moisture-adjusted yield of Cheddar cheese is up to 10% higher using milk from cows of the β-lg BB phenotype than using milk from cows expressing only the A variant (Hill et al. 1997). All these studies, however, describe compositional differences associated with β-lg phenotype in established lactation only. No information is available on the first few weeks of lactation, when there are marked changes in the concentrations of β-lg and α-la (Pérez et al. 1990).

    Biotic analogies for self-organising cities

    Nature has inspired generations of urban designers and planners in pursuit of harmonious and functional built environments. Research on self-organisation has encouraged urbanists to consider the role of bottom-up approaches in generating urban order. However, the extent to which self-organisation-inspired approaches draw directly from nature is not always clear. Here, we examined the biological basis of urban research, focusing on self-organisation. We conducted a systematic literature search of self-organisation in urban design and biology, mapped the relationship between key biological terms across the two fields, and assessed the quality and validity of biological comparisons in the urban design literature. Finding deep inconsistencies in the mapping of central terms between the two fields, a preponderance of cross-level analogies, and comparisons that spanned molecules to ecosystems, we developed a biotic framework to visualise the analogical space and elucidate areas where new inspiration may be sought.

    γ-H2AX Kinetic Profile in Mouse Lymphocytes Exposed to the Internal Emitters Cesium-137 and Strontium-90

    In the event of a dirty bomb scenario or an industrial nuclear accident, a significant dose of volatile radionuclides such as 137Cs and 90Sr may be dispersed into the atmosphere as a component of fallout and inhaled or ingested by hundreds of thousands of people. To study the effects of prolonged exposure to ingested radionuclides, we performed long-term (30-day) internal-emitter mouse irradiations using injected soluble 137CsCl and 90SrCl2. The effect of ionizing radiation on the induction and repair of DNA double-strand breaks (DSBs) in peripheral mouse lymphocytes in vivo was determined using the γ-H2AX biodosimetry marker. Using a serial-sacrifice experimental design, whole-body radiation absorbed doses for 137Cs (0 to 10 Gy) and 90Sr (0 to 49 Gy) were delivered over 30 days following exposure to each radionuclide. The committed absorbed doses of the two internal emitters as a function of time post exposure were calculated from their retention parameters and derived dose coefficients for each specific sacrifice time. To measure the kinetic profile of γ-H2AX, peripheral blood samples were drawn at five timed dose points over the 30-day study period, and the total γ-H2AX nuclear fluorescence per lymphocyte was determined using image analysis software. A key finding was that a significant γ-H2AX signal was observed in vivo several weeks after a single radionuclide exposure. A mechanistically motivated model was used to analyze the temporal kinetics of γ-H2AX fluorescence. Exposure to either radionuclide produced two peaks of γ-H2AX: one within the first week, which may represent the death of mature, differentiated lymphocytes, and a second at approximately three weeks, which may represent the production of new lymphocytes from damaged progenitor cells. The complexity of the observed responses to internal irradiation is likely caused by the interplay between continual production and repair of DNA damage, cell cycle effects and apoptosis.
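    The committed-dose calculation described above (integrating a declining dose rate governed by the radionuclide's retention kinetics) can be sketched with a single-exponential retention model. This is a minimal illustration, not the study's dosimetry code: the effective half-life and initial dose rate below are hypothetical placeholders, not the retention parameters or dose coefficients the authors used.

```python
import numpy as np

# Minimal sketch of committed absorbed dose for an internal emitter,
# assuming single-exponential whole-body retention. Both constants are
# illustrative placeholders, not values from the study.
T_EFF_DAYS = 15.0        # hypothetical effective half-life (biological + physical)
D0_GY_PER_DAY = 0.5      # hypothetical initial whole-body dose rate

LAM = np.log(2) / T_EFF_DAYS

def committed_dose(t_days: float) -> float:
    """Dose accumulated from intake to time t (Gy): the integral of
    D0 * exp(-LAM * s) ds from 0 to t."""
    return D0_GY_PER_DAY / LAM * (1.0 - np.exp(-LAM * t_days))

# Cumulative dose at each sacrifice time over a 30-day study.
for t in (1, 7, 14, 21, 30):
    print(f"day {t:2d}: {committed_dose(t):5.2f} Gy")
```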

    The relationship between adverse neighborhood socioeconomic context and HIV continuum of care outcomes in a diverse HIV clinic cohort in the Southern United States

    Retention in care and viral suppression are critical to delaying HIV progression and reducing transmission. Neighborhood socioeconomic context (NSEC) may affect HIV care receipt. We therefore assessed NSEC's impact on retention and viral suppression in a diverse HIV clinical cohort. HIV-positive adults with ≥1 visit at the Vanderbilt Comprehensive Care Clinic and 5-digit ZIP code tabulation area (ZCTA) information between 2008 and 2012 contributed data. NSEC z-score indices used neighborhood-level socioeconomic indicators for poverty, education, labor-force participation, proportion of males, median age, and proportion of residents of black race by ZCTA. Retention was defined as ≥2 HIV care visits per calendar year, >90 days apart. Viral suppression was defined as an HIV-1 RNA <200 copies/mL at the last measurement per calendar year. Modified Poisson regression was used to estimate risk ratios (RR) and 95% confidence intervals (CI). Among the 2272 and 2541 adults included in the retention and viral suppression analyses, respectively, median age and CD4 count at enrollment were approximately 38 (1st and 3rd quartile: 30, 44) years and 351 (176, 540) cells/μL, respectively, while 24% were female and 39% were black. Across 243 ZCTAs, the median NSEC z-score was 0.09 (-0.66, 0.48). Overall, 79% of person-time contributed was retained and 74% was virally suppressed. In adjusted models, NSEC was not associated with retention, though being in the 4th vs. 1st NSEC quartile was associated with a lower probability of viral suppression (RR = 0.88; 95% CI: 0.80-0.97). Residing in the most adverse NSEC was thus associated with lack of viral suppression. Future studies are needed to confirm this finding.
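    The "modified Poisson regression" named above is Poisson regression for a binary outcome with a robust (sandwich) variance estimator, so exponentiated coefficients are risk ratios. A minimal sketch follows, using simulated data and hypothetical column names (suppressed, nsec_q4, age); it is not the study's dataset or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; column names are hypothetical.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "suppressed": rng.integers(0, 2, n),   # binary outcome (1 = virally suppressed)
    "nsec_q4": rng.integers(0, 2, n),      # 4th vs. 1st NSEC quartile indicator
    "age": rng.normal(38, 9, n),
})

# Poisson GLM with a robust (HC0) sandwich variance: "modified Poisson".
X = sm.add_constant(df[["nsec_q4", "age"]])
fit = sm.GLM(df["suppressed"], X, family=sm.families.Poisson()).fit(cov_type="HC0")

rr = np.exp(fit.params["nsec_q4"])          # risk ratio
ci = np.exp(fit.conf_int().loc["nsec_q4"])  # 95% CI on the RR scale
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```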

    Using liminality to understand mothers’ experiences of long-term breastfeeding: ‘Betwixt and between’, and ‘matter out of place’

    Breastmilk is widely considered the optimum nutrition source for babies and an important factor in both improving public health and reducing health inequalities. Current international and national policy supports long-term breastfeeding. UK breastfeeding initiation rates are high but decline rapidly, and the numbers breastfeeding into the second year and beyond are unknown. This study used the concept of liminality to explore the experiences of a group of women breastfeeding long-term in the United Kingdom, building on the work of Mahon-Daly and Andrews. Over 80 breastfeeding women were included in the study, which used micro-ethnographic methods (participant observation in breastfeeding support groups, face-to-face interviews and online asynchronous interviews via email). Findings about women's experiences are congruent with the existing literature, although that literature is mostly dated and from outside the United Kingdom. Liminality was found to be useful in providing insight into women's experiences of long-term breastfeeding in relation to both time and place. Understanding women's experience of breastfeeding beyond current norms can be used to inform work with breastfeeding mothers and to encourage more women to breastfeed for longer.

    Prospective validation of a checklist to predict short-term death in older patients after emergency department admission in Australia and Ireland

    Background: Emergency departments (EDs) are pressured environments where patients with supportive and palliative care needs may not be identified. We aimed to test the predictive ability of the CriSTAL (Criteria for Screening and Triaging to Appropriate aLternative care) checklist to flag patients at risk of death within 3 months who may benefit from timely end-of-life discussions. Methods: Prospective cohorts of patients aged over 65 years admitted for at least one night via EDs in five Australian hospitals and one Irish hospital. Purpose-trained nurses and medical students screened for frailty using two instruments concurrently and completed the other risk factors on the CriSTAL tool at admission. Post-discharge telephone follow-up was used to determine survival status. Logistic regression and bootstrapping techniques were used to test the predictive accuracy of CriSTAL for death within 90 days of admission as the primary outcome. Predictability of in-hospital death was the secondary outcome. Results: A total of 1,182 patients, with median age 76 to 80 years (Ireland and Australia, respectively), were included. The deceased had significantly higher mean CriSTAL scores, with an Australian mean of 8.1 (95% confidence interval [CI] = 7.7–8.6) versus 5.7 (95% CI = 5.1–6.2) and an Irish mean of 7.7 (95% CI = 6.9–8.5) versus 5.7 (95% CI = 5.1–6.2). The model with the Fried frailty score was optimal for the derivation (Australian) cohort, but prediction with the Clinical Frailty Scale (CFS) was also good (areas under the receiver-operating characteristic curve [AUROC] = 0.825 and 0.81, respectively). Values for the validation (Irish) cohort were AUROC = 0.70 with Fried and 0.77 using CFS. A minimum of five of the 29 variables were sufficient for accurate prediction, and a cut point of 7+ or 6+, depending on the cohort, was strongly indicative of risk of death. The most significant independent predictor of short-term death in both cohorts was frailty, carrying a twofold risk of death. CriSTAL's accuracy for in-hospital death prediction was also good (AUROC = 0.795 and 0.81 in Australia and Ireland, respectively), with high specificity and negative predictive values. Conclusions: The modified CriSTAL tool (with CFS instead of Fried's frailty instrument) had good discriminant power to improve the certainty of short-term mortality prediction in both health systems. The predictive ability of the models is anticipated to help clinicians gain confidence in initiating earlier end-of-life discussions. The practicalities of embedding screening for risk of death in routine practice warrant further investigation.
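    The validation strategy above (a logistic model for 90-day death, with bootstrapping to gauge discrimination) can be sketched generically. The data below are simulated and the score-to-risk relationship is invented for illustration; this is not the CriSTAL analysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated stand-in cohort: hypothetical checklist scores and outcomes.
rng = np.random.default_rng(1)
n = 600
score = rng.integers(0, 15, n)                     # hypothetical CriSTAL-like scores
p_true = 1 / (1 + np.exp(-(0.4 * score - 3.5)))    # invented: risk rises with score
died = rng.binomial(1, p_true)

# Logistic model for death within 90 days, then bootstrap the AUROC.
model = LogisticRegression().fit(score.reshape(-1, 1), died)
probs = model.predict_proba(score.reshape(-1, 1))[:, 1]

aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                    # resample with replacement
    if died[idx].min() == died[idx].max():
        continue                                   # AUROC needs both classes present
    aucs.append(roc_auc_score(died[idx], probs[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC = {roc_auc_score(died, probs):.3f} (bootstrap 95% CI {lo:.3f}-{hi:.3f})")
```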

    Economic evaluation of the NET intervention versus guideline dissemination for management of mild head injury in hospital emergency departments

    Background: Evidence-based guidelines for the management of mild traumatic brain injury (mTBI) in the emergency department (ED) are now widely available, and yet clinical practice remains inconsistent with the guidelines. The Neurotrauma Evidence Translation (NET) intervention was developed to increase the uptake of guideline recommendations and improve the management of mild head injury in Australian emergency departments (EDs). However, the adoption of this type of intervention typically entails an upfront investment that may or may not be fully offset by improvements in clinical practice, health outcomes and/or reductions in health service utilisation. The present study estimates the cost and cost-effectiveness of the NET intervention, as compared to the passive dissemination of the guideline, to evaluate whether any improvements in clinical practice or health outcomes due to the NET intervention can be obtained at an acceptable cost. Methods and findings: Study setting: the NET cluster randomised controlled trial [ACTRN12612001286831]. Study sample: seventeen EDs were randomised to the control condition and 14 to the intervention. One thousand nine hundred forty-three patients were included in the analysis of clinical practice outcomes (NET sample). A total of 343 patients from 14 control and 10 intervention EDs participated in follow-up interviews and were included in the analysis of patient-reported health outcomes (NET-Plus sample). Outcome measures: appropriate post-traumatic amnesia (PTA) screening in the ED (primary outcome). Secondary clinical practice outcomes: provision of written information on discharge (INFO) and safe discharge (defined as CT scan appropriately provided plus PTA plus INFO). Secondary patient-reported, post-discharge health outcomes: anxiety (Hospital Anxiety and Depression Scale), post-concussive symptoms (Rivermead), and preference-based health-related quality of life (SF-6D). Methods: trial-based economic evaluations from a health sector perspective, with time horizons set to coincide with the final follow-up for the NET sample (2 months post-intervention) and with 1 month post-discharge for the NET-Plus sample. Results: intervention and control groups were not significantly different in health service utilisation received in the ED/inpatient ward following the initial mTBI presentation (adjusted mean difference $23.86 per patient; 95% CI −$106, $153; p = 0.719) or over the longer follow-up in the NET-Plus sample (adjusted mean difference $341.78 per patient; 95% CI −$58, $742; p = 0.094). Savings from lower health service utilisation are therefore unlikely to offset the significantly higher upfront cost of the intervention (mean difference $138.20 per patient; 95% CI $135, $141; p < 0.001). Estimates of the net effect of the intervention on total cost (intervention cost net of health service utilisation) suggest that the intervention entails significantly higher costs than the control condition (adjusted mean difference $169.89 per patient; 95% CI $43, $297; p = 0.009). This effect is larger in absolute magnitude over the longer follow-up in the NET-Plus sample (adjusted mean difference $505.06; 95% CI $96, $915; p = 0.016), mostly due to additional health service utilisation. For the primary outcome, the NET intervention is more costly and more effective than passive dissemination, entailing an additional cost of $1246 per additional patient appropriately screened for PTA ($169.89/0.1363; Fieller's 95% CI $525, $2055). For NET to be considered cost-effective with 95% confidence, decision-makers would need to be willing to trade one quality-adjusted life year (QALY) for 25 additional patients appropriately screened for PTA. While these results reflect our best estimate of cost-effectiveness given the data, it is possible that a NET intervention that has been scaled and streamlined ready for wider roll-out may be more or less cost-effective than the NET intervention as delivered in the trial. Conclusions: While the NET intervention does improve the management of mTBI in the ED, it also entails a significant increase in cost and, as delivered in the trial, is unlikely to be cost-effective at currently accepted funding thresholds. There may be scope for a scaled-up and streamlined NET intervention to achieve a better balance between costs and outcomes. Trial registration: Australian New Zealand Clinical Trials Registry ACTRN12612001286831, registered 12 December 2012.
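    The incremental cost-effectiveness ratio quoted above is simply the incremental cost divided by the incremental effect: $169.89 / 0.1363 ≈ $1246 per additional patient appropriately screened. A Fieller-type confidence interval for such a ratio can be sketched as below; the standard errors are hypothetical stand-ins (the abstract does not report the underlying variances), and the cost-effect covariance is assumed to be zero for simplicity.

```python
import numpy as np

def fieller_ci(a, b, var_a, var_b, cov_ab=0.0, z=1.96):
    """Fieller 95% CI for the ratio a/b of two approximately normal
    estimates: solves (a - R*b)^2 = z^2 * var(a - R*b) as a quadratic in R."""
    A = b**2 - z**2 * var_b
    B = -2 * (a * b - z**2 * cov_ab)
    C = a**2 - z**2 * var_a
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        raise ValueError("interval is unbounded (denominator too noisy)")
    return sorted(((-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)))

# Point estimate from the abstract: incremental cost / incremental effect.
delta_cost, delta_effect = 169.89, 0.1363
print(f"ICER = ${delta_cost / delta_effect:,.0f} per additional patient screened")

# Hypothetical standard errors, for illustration only.
lo, hi = fieller_ci(delta_cost, delta_effect, var_a=64.0**2, var_b=0.03**2)
print(f"Fieller 95% CI: ${lo:,.0f} to ${hi:,.0f}")
```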

    Physical activity monitoring: Addressing the difficulties of accurately detecting slow walking speeds

    OBJECTIVE: To test the accuracy of a multi-sensor activity monitor (SWM) in detecting slow walking speeds in patients with chronic obstructive pulmonary disease (COPD). BACKGROUND: Concerns have been expressed regarding the use of pedometers in patient populations. Although activity monitors are more sophisticated devices, their accuracy at detecting the slow walking speeds common in patients with COPD has yet to be proven. METHODS: A prospective observational study design was employed. An incremental shuttle walk test (ISWT) was completed by 57 patients with COPD wearing an SWM. The ISWT was repeated by 20 patients wearing the same SWM. RESULTS: Differences in metabolic equivalents (METs) and in step count were identified across the five levels of the ISWT (p < 0.001). Good within-monitor reproducibility between the two ISWTs was identified for total energy expenditure and step count (p < 0.001). CONCLUSIONS: The SWM is able to detect slow (standardized) walking speeds and is an acceptable method for measuring physical activity in individuals disabled by COPD.
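    The across-level comparison reported above can be illustrated with a generic repeated-measures sketch. The Friedman test and the simulated METs values below are stand-ins for illustration only; the abstract does not state which statistical test the study used.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Simulated stand-in data: hypothetical METs per patient at each of the
# five ISWT levels, rising with walking speed plus between-patient noise.
rng = np.random.default_rng(2)
n_patients = 57
mets_by_level = [1.5 + 0.4 * lvl + rng.normal(0, 0.3, n_patients)
                 for lvl in range(1, 6)]

# Friedman test: nonparametric repeated-measures comparison across levels.
stat, p = friedmanchisquare(*mets_by_level)
print(f"Friedman chi-square = {stat:.1f}, p = {p:.2g}")
```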

    Geographic Variations in Retention in Care among HIV-Infected Adults in the United States

    Objective: To understand geographic variations in clinical retention, a central component of the HIV care continuum and key to improving individual- and population-level HIV outcomes. Design: We evaluated retention by US region in a retrospective observational study. Methods: Adults receiving care from 2000–2010 in 12 clinical cohorts of the North American AIDS Cohort Collaboration on Research and Design (NA-ACCORD) contributed data. Individuals were assigned to Centers for Disease Control and Prevention (CDC)-defined regions by residential data (10 cohorts) and clinic location as proxy (2 cohorts). Retention was ≥2 primary HIV outpatient visits within a calendar year, >90 days apart. Trends and regional differences were analyzed using modified Poisson regression with clustering, adjusting for time in care, age, sex, race/ethnicity, and HIV risk, and stratified by baseline CD4+ count. Results: Among 78,993 adults with 444,212 person-years of follow-up, median time in care was 7 years (interquartile range: 4–9). Retention increased from 2000 to 2010: from 73% (5,000/6,875) to 85% (7,189/8,462) in the Northeast, 75% (1,778/2,356) to 87% (1,630/1,880) in the Midwest, 68% (8,451/12,417) to 80% (9,892/12,304) in the South, and 68% (5,147/7,520) to 72% (6,401/8,895) in the West. In adjusted analyses, retention improved over time in all regions (p<0.01, trend), although the average percent retained lagged in the West and South vs. the Northeast (p<0.01). Conclusions: In our population, retention improved, though regional differences persisted even after adjusting for demographic and HIV risk factors. These data demonstrate regional differences in the US which may affect patient care, despite national care recommendations.
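    The retention definition used here (≥2 primary HIV visits within a calendar year, >90 days apart) is mechanical enough to express directly. The helper below is an illustrative sketch under that definition, not NA-ACCORD code.

```python
from collections import defaultdict
from datetime import date

def retained_years(visits: list[date]) -> set[int]:
    """Calendar years in which a patient counts as retained:
    >= 2 visits in the year, with at least one pair > 90 days apart."""
    by_year = defaultdict(list)
    for v in visits:
        by_year[v.year].append(v)
    retained = set()
    for year, dates in by_year.items():
        dates.sort()
        # With sorted dates, the widest gap is last - first.
        if len(dates) >= 2 and (dates[-1] - dates[0]).days > 90:
            retained.add(year)
    return retained

visits = [date(2009, 1, 15), date(2009, 6, 30), date(2010, 3, 1)]
print(retained_years(visits))   # {2009}: 2010 has only one visit
```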