
    Nile perch distribution in south-east Lake Victoria is more strongly driven by abiotic factors than by prey densities

    Abstract: We studied the effects of environmental driving factors (maximum depth, visibility, oxygen, temperature, and prey densities) on the distribution and diet composition of Nile perch (Lates niloticus) in south-east Lake Victoria from 2009 to 2011. We tested the hypotheses that (i) Nile perch distribution is regulated by the same environmental factors on a local scale (Mwanza Gulf) and on a regional scale (Mwanza Gulf, Speke Gulf and the open lake in Sengerema district), and (ii) driving factors act differently on different Nile perch size classes. Fish were sampled with gillnets. Nile perch densities were highest in the shallow part of the Mwanza Gulf and during the wet seasons, mainly owing to high densities of juveniles. The environmental driving factors explained Nile perch distributions on both regional and local scales in a similar way, often showing non-linear relationships. Maximum depth and temperature were the best predictors of Nile perch densities. Prey densities of shrimp and haplochromines did not strongly affect Nile perch distributions, but did explain Nile perch diet on both local and regional scales. We conclude that abiotic variables drive Nile perch distributions more strongly than prey densities and that feeding takes place opportunistically.
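
    The abstract does not state which regression framework was used; purely as an illustration of how non-linear effects of depth and temperature on catch counts could be modelled, the hypothetical sketch below fits a Poisson GLM with quadratic terms to invented survey data (all variable names, values, and the data-generating process are assumptions, not taken from the study).

        # Hypothetical example: non-linear environmental effects on catch counts,
        # modelled with a Poisson GLM and quadratic terms (not the study's actual model).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 400
        depth = rng.uniform(2, 40, n)       # station maximum depth (m)
        temp = rng.uniform(24, 29, n)       # water temperature (deg C)
        # Invented data-generating process: density peaks in shallow water at mid temperatures.
        mu = np.exp(2.0 - 0.08 * depth + 0.5 * (temp - 26) - 0.3 * (temp - 26) ** 2)
        catch = rng.poisson(mu)

        df = pd.DataFrame({"catch": catch, "depth": depth, "temp": temp})
        model = smf.glm("catch ~ depth + I(depth**2) + temp + I(temp**2)",
                        data=df, family=sm.families.Poisson())
        print(model.fit().summary())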

    Emergency ambulance service involvement with residential care homes in the support of older people with dementia: an observational study

    BACKGROUND: Older people resident in care homes have a limited life expectancy and approximately two-thirds have limited mental capacity. Despite initiatives to reduce unplanned hospital admissions for this population, little is known about the involvement of emergency services in supporting residents in these settings. METHODS: This paper reports on a longitudinal study that tracked the involvement of emergency ambulance personnel in the support of older people with dementia, resident in care homes with no on-site nursing providing personal care only. 133 residents with dementia across 6 care homes in the East of England were tracked for a year. The paper examines the frequency of and reasons for emergency ambulance call-outs, outcomes, and factors associated with emergency ambulance service use. RESULTS: 56% of residents used ambulance services. Less than half (43%) of all call-outs resulted in an unscheduled admission to hospital. In addition to trauma following a fall in the home, results suggest that a considerable proportion of ambulance contacts are for ambulatory care sensitive conditions. An emergency ambulance is no more likely to be called for older residents than for younger ones, or for women than for men. Length of residence does not influence use of emergency ambulance services among older people with dementia. Contact with primary care services and admission route into the care home were both significantly associated with emergency ambulance service use. The odds of using emergency ambulance services for residents admitted from a relative's home were 90% lower than for residents admitted from their own home. CONCLUSIONS: Emergency service involvement with this vulnerable population merits further examination. Future research on emergency ambulance service use by older people with dementia in care homes should account for important contextual factors, namely the presence or absence of on-site nursing, GP involvement, and access to residents' families, alongside resident health characteristics.

    The impact of pre-exposure prophylaxis (PrEP) on HIV epidemics in Africa and India: A simulation study

    Background: Pre-exposure prophylaxis (PrEP) is a promising new HIV prevention method, especially for women. An urgent demand for implementation of PrEP is expected as soon as efficacy has been demonstrated in clinical trials. We explored the long-term impact of PrEP on HIV transmission in different HIV epidemics. Methodology/Principal Findings: We used a mathematical model that distinguishes the general population, sex workers and their clients. PrEP scenarios varying in effectiveness, coverage and target group were modeled in the epidemiological settings of Botswana, Nyanza Province in Kenya, and Southern India. We also studied the effect of condom addition or condom substitution during PrEP use. The main outcome was the number of HIV infections averted over ten years of PrEP use. PrEP strategies with high effectiveness and high coverage can have a substantial impact in African settings. In Southern India, by contrast, the number of averted HIV infections in different PrEP scenarios would be much lower. The impact of PrEP may be strongly diminished or even reversed by behavioral disinhibition, especially in scenarios with low coverage and low effectiveness. However, additional condom use during low-coverage, low-effectiveness PrEP doubled the number of averted HIV infections. Conclusions/Significance: The public health impact of PrEP can be substantial. However, this impact may be diminished, or even reversed, by changes in risk behavior. Implementation of PrEP strategies should therefore come on top of current condom campaigns, not as a substitute for them.
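
    The abstract gives no model equations; as a rough, hypothetical illustration of the kind of compartmental transmission model such a simulation study might build on, the sketch below integrates a minimal susceptible-infected system in which a fraction of the population is covered by PrEP that reduces per-contact acquisition risk, and compares cumulative infections over ten years with and without PrEP. All parameter values are invented and the structure is far simpler than the authors' model (no sex workers, clients, or condom-use behaviour).

        # Minimal illustrative SI-type model with PrEP; parameters are hypothetical
        # and do not come from the study.
        import numpy as np
        from scipy.integrate import odeint

        def hiv_model(y, t, beta, mu, coverage, effectiveness):
            """S, Sp: susceptibles without / with PrEP; I: infected; C: cumulative infections."""
            S, Sp, I, C = y
            N = S + Sp + I
            foi = beta * I / N                          # force of infection
            new_unprotected = foi * S
            new_protected = (1 - effectiveness) * foi * Sp
            dS = mu * (1 - coverage) * N - new_unprotected - mu * S
            dSp = mu * coverage * N - new_protected - mu * Sp
            dI = new_unprotected + new_protected - mu * I
            dC = new_unprotected + new_protected        # running total of new infections
            return [dS, dSp, dI, dC]

        t = np.linspace(0, 10, 121)                     # ten years
        y0 = [0.85, 0.10, 0.05, 0.0]                    # initial population fractions
        base = odeint(hiv_model, y0, t, args=(0.4, 0.02, 0.0, 0.0))
        prep = odeint(hiv_model, y0, t, args=(0.4, 0.02, 0.6, 0.9))
        print("infections averted over 10 years:", base[-1, 3] - prep[-1, 3])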

    Using ordinal logistic regression to evaluate the performance of laser-Doppler predictions of burn-healing time

    Background: Laser-Doppler imaging (LDI) of cutaneous blood flow is beginning to be used by burn surgeons to predict the healing time of burn wounds; the predicted healing time is used to decide whether a wound is treated with dressings or with surgery. In this paper, we present a statistical analysis of the performance of the technique. Methods: We used data from a study carried out by five burn centers: LDI was done once between days 2 and 5 post burn, and healing was assessed at both 14 days and 21 days post burn. Random-effects ordinal logistic regression and other models, such as the continuation ratio model, were used to model healing time as a function of the LDI data and of demographic and wound history variables. Statistical methods were also used to study the false-color palette, which enables the laser-Doppler imager to be used by clinicians as a decision-support tool. Results: Overall, more than 90% of diagnoses were correct. Related questions addressed were which blood flow summary statistic was best and whether, given the blood flow measurements, demographic and observational variables (age, sex, race, % total body surface area burned (%TBSA), site and cause of burn, day of LDI scan, burn center) had any additional predictive power. It was found that mean laser-Doppler flux over a wound area was the best statistic, and that, given the same mean flux, women recover slightly more slowly than men. Further, the likely degradation in predictive performance on moving to a patient group with larger %TBSA than those in the data sample was studied and shown to be small. Conclusion: Modeling healing time is a complex statistical problem, with random effects due to multiple burn areas per individual, and censoring caused by patients missing hospital visits and undergoing surgery. This analysis applies state-of-the-art statistical methods such as the bootstrap and permutation tests to a medical problem of topical interest. New medical findings are that age and %TBSA are not important predictors of healing time when the LDI results are known, whereas gender does influence recovery time, even when blood flow is controlled for. The conclusion regarding the palette is that an optimum three-color palette can be chosen 'automatically', but the optimum choice of a five-color palette cannot be made solely by optimizing the percentage of correct diagnoses.
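
    As a hedged sketch of the core technique named in the abstract, the code below fits a plain ordinal logistic regression (proportional-odds model) to synthetic data with a three-level healing-time outcome; the variable names and data-generating process are invented, and the paper's random-effects and continuation-ratio extensions are not reproduced.

        # Illustrative ordinal logistic regression on synthetic data (not the study data).
        # Healing time is coded as an ordered outcome: <14 days, 14-21 days, >21 days.
        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(0)
        n = 300
        mean_flux = rng.normal(400, 150, n)     # mean laser-Doppler flux over the wound
        age = rng.uniform(18, 80, n)
        # Toy data-generating process: higher flux means faster healing.
        latent = -0.01 * mean_flux + 0.005 * age + rng.logistic(size=n)
        healing = pd.cut(latent, bins=[-np.inf, -4.5, -2.5, np.inf],
                         labels=["<14d", "14-21d", ">21d"])

        df = pd.DataFrame({"healing": healing, "mean_flux": mean_flux, "age": age})
        model = OrderedModel(df["healing"], df[["mean_flux", "age"]], distr="logit")
        print(model.fit(method="bfgs", disp=False).summary())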

    Standardized and Individualized Parenteral Nutrition Mixtures in a Pediatric Home Parenteral Nutrition Population

    OBJECTIVES: Studies evaluating the efficacy or safety of standardized parenteral nutrition (PN) versus individualized PN are lacking. We aimed to assess the effects on growth and the safety of standardized PN compared with individualized PN in our home PN group. METHODS: Descriptive cohort study in Dutch children on home PN, in which standardized PN was compared with individualized PN. Both groups received similar micronutrient supplementation. The primary outcome was growth over 2 years; secondary outcomes were electrolyte disturbances and biochemical abnormalities. Additionally, patients were matched for age to control for potentially confounding characteristics. RESULTS: Fifty patients (50% girls, median age 6.5 years) were included, of whom 16 (32%) received standardized PN mixtures. Age (11 vs 5 years), gestational age (39.2 vs 36.2 weeks) and PN duration (97 vs 39 months) were significantly higher in the group receiving standardized PN (P ≤ 0.001, 0.027, and 0.013, respectively). The standardized PN group showed an increase in weight-for-age (WFA), compared with a decrease in the individualized PN group (+0.38 SD vs -0.55 SD, P = 0.003). Electrolyte disturbances and biochemical abnormalities did not differ. After matching for age, resulting in comparable groups, no significant differences were demonstrated in WFA, height-for-age, or weight-for-height SD change. CONCLUSIONS: In children with chronic intestinal failure over 2.5 years of age, standardized PN mixtures show a comparable effect on weight, height, and weight-for-height when compared with individualized PN mixtures. Also, standardized PN mixtures (with added micronutrients) seem noninferior to individualized PN mixtures in terms of electrolyte disturbances and basic biochemical abnormalities. Larger studies are needed to confirm these conclusions. TRIAL REGISTRATION: Academic Medical Center medical ethics committee number W18_079 #18.103

    Framing Male Circumcision to Promote its Adoption in Different Settings

    The effectiveness of male circumcision in preventing transmission of HIV from females to males has been established. Those who are now advocating its widespread use face many challenges in convincing policy-makers and the public of circumcision’s value. We suggest that frames are a useful lens for communicating public health messages that may help promote adoption of circumcision. Frames relate to how individuals and societies perceive and understand the world. Existing frames are often hard to shift and should be borne in mind by advocates and program implementers as they attempt to promote male circumcision by invoking new frames. Frames differ across and within societies, and advocates must find ways of delivering resonant messages that take into account prior perceptions and use the most appropriate means of communicating the benefits and value of male circumcision to different audiences.

    The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    Genetic epidemiologists have taken up the challenge of identifying the genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with the available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that for genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, is likely to be a useful strategy for finding the important genes and interaction patterns involved in complex diseases.
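
    As a small, hypothetical illustration of one of the approaches discussed, the sketch below applies scikit-learn's random forest classifier to simulated SNP genotypes (coded 0/1/2 minor-allele counts) and ranks the predictors by variable importance; the data, the causal SNPs, and all parameter choices are invented, and the other methods (set association, MDR, CPM, RPM, GPNN) are not shown.

        # Hypothetical random forests example on simulated SNP data, illustrating
        # how many predictors can be reduced to a ranked subset.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        n_subjects, n_snps = 500, 200
        genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))   # 0/1/2 minor-allele counts

        # Simulate case-control status driven by two interacting SNPs plus noise.
        risk = (0.8 * genotypes[:, 10] + 0.6 * genotypes[:, 42]
                + 0.5 * genotypes[:, 10] * genotypes[:, 42])
        p = 1 / (1 + np.exp(-(risk - risk.mean())))
        case = rng.random(n_subjects) < p

        forest = RandomForestClassifier(n_estimators=500, random_state=0)
        forest.fit(genotypes, case)
        top = np.argsort(forest.feature_importances_)[::-1][:5]
        print("top-ranked SNP indices:", top)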

    Evaluation of biases present in the cohort multiple randomised controlled trial design: a simulation study

    Background: The cohort multiple randomised controlled trial (cmRCT) design provides an opportunity to incorporate the benefits of randomisation within clinical practice, thus reducing costs, integrating electronic healthcare records, and improving external validity. This study aims to address a key concern of the cmRCT design: refusal of treatment occurs only in the intervention arm, and this may lead to bias and reduce statistical power. Methods: We used simulation studies to assess the effect of this refusal, both random and related to event risk, on bias of the effect estimator and on statistical power. A series of simulations were undertaken representing a cmRCT with a time-to-event endpoint. Intention-to-treat (ITT), per-protocol (PP), and instrumental variable (IV) analysis methods (two-stage predictor substitution and two-stage residual inclusion) were compared for various refusal scenarios. Results: We found that the IV methods provide a less biased estimator of the causal effect when refusal is present in the intervention arm, with the two-stage residual inclusion method performing best with regard to minimum bias and sufficient power. We demonstrate that sample sizes should be adapted based on expected and actual refusal rates in order to be sufficiently powered for IV analysis. Conclusion: We recommend running both IV and ITT analyses in an individually randomised cmRCT, as it is expected that the effect size of interest, or the effect we would observe in clinical practice, would lie somewhere between those estimated with ITT and IV analyses. The optimal (in terms of bias and power) instrumental variable method was the two-stage residual inclusion method. We recommend using adaptive power calculations, updating them as refusal rates are collected in the trial recruitment phase, in order to be sufficiently powered for IV analysis.
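
    The abstract does not spell out the estimators; purely as a hedged sketch of the two-stage residual inclusion idea it refers to, the code below simulates a cmRCT-like setting in which refusal is related to event risk, uses randomised assignment as the instrument, and includes the first-stage residuals in the outcome model. The data are synthetic, and a simple binary-outcome logistic second stage stands in for the paper's time-to-event analysis.

        # Illustrative two-stage residual inclusion (2SRI) on synthetic data; the outcome
        # is binary here for simplicity rather than time-to-event as in the paper.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        assigned = rng.integers(0, 2, n)             # randomised offer of treatment (instrument)
        frailty = rng.normal(size=n)                 # unobserved driver of both refusal and risk
        # Sicker (frailer) patients offered treatment are more likely to refuse it.
        refuses = rng.random(n) < 1 / (1 + np.exp(-frailty))
        treated = ((assigned == 1) & ~refuses).astype(float)
        # True effect: treatment lowers event risk; frailty raises it.
        p_event = 1 / (1 + np.exp(-(-0.5 - 1.0 * treated + 0.8 * frailty)))
        event = (rng.random(n) < p_event).astype(float)

        # Stage 1: treatment received regressed on the instrument (randomised assignment).
        stage1 = sm.OLS(treated, sm.add_constant(assigned)).fit()
        residuals = treated - stage1.fittedvalues

        # Stage 2: outcome regressed on treatment received plus the first-stage residuals.
        X2 = sm.add_constant(np.column_stack([treated, residuals]))
        stage2 = sm.Logit(event, X2).fit(disp=False)
        print(stage2.params)   # second coefficient: adjusted effect of treatment received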