220 research outputs found

    Eating disorder symptoms among NCAA Division I female athletes in the Southeastern Conference

    The purposes of this study were to determine: a) eating disorder symptoms among National Collegiate Athletic Association (NCAA) Division I intercollegiate female athletes in the Southeastern Conference (SEC), b) whether lean sport athletes were more prone to eating disorder symptoms than non-lean sport athletes, c) what percentage of athletes at risk for eating disorders sought help or support, and d) what kind of assistance was offered by athletic institutions in the SEC to help student-athletes deal with eating disorder issues. The participants were NCAA Division I female intercollegiate athletes from four universities in the SEC. There were 325 participants, all between 18 and 23 years old and currently members of an intercollegiate sport in the SEC. Participants were selected based on the geographical location and close proximity of the Division I universities in the SEC; availability depended on which schools and athletes agreed to participate in this study. The universities that agreed to participate were sent questionnaires, which were completed and returned to the researcher for analysis. Instrumentation for this study involved two survey questionnaires and a clinical instrument. The first survey questionnaire served as the demographic questionnaire; its purpose was to determine the sport in which the participant was involved, along with the participant's age and year of eligibility. The second instrument used in this study was the Eating Disorder Inventory-2 (EDI-2) questionnaire (Garner, 1991). The EDI-2, a 91-item, 6-point Likert-type self-report questionnaire, was used to measure symptoms related to anorexia nervosa and bulimia nervosa. The final survey questionnaire was designed to ascertain information from the participants regarding the type(s) of support system(s) offered by their athletic institutions for eating disorders. 
The survey questionnaire also addressed the question: if there is not an organized support program available specifically for female college athletes with eating disorders, would the athletes like to see their university implement such a program? The study determined that there were no significant differences between lean and non-lean sport athletes in the prevalence of eating disorder symptoms on the Bulimia (BUL) subscale. There were, however, significant findings of lean sport athletes scoring higher than the general female college population for the prevalence of eating disorder symptoms; their scores were significantly higher than expected by chance at the .05 level of significance (Hays, 1973). The findings of the study also revealed that lean sport athletes were more prone to one type of eating disorder symptom than were non-lean athletes: the lean sport athletes scored in the C range on the Drive for Thinness (DT) subscale significantly more often than did non-lean sport athletes. This difference was also significant at the .05 level (Hays, 1973). Regarding the third question posed in the study, what percentage of athletes prone to eating disorders sought help or support, 17 participants (22.1%) sought help from an athletic institution that already had a program designed to help female athletes deal with eating disorders, depression, and/or body dissatisfaction. The final question posed in the purpose of the study was what kind of assistance was offered by athletic institutions in the SEC to help student-athletes deal with eating disorder issues: 223 participants (68.6%) stated that their athletic department provided an organized support program specifically designed for female athletes to attend to discuss personal issues. 
There were 212 participants (65.2%) who noted that their athletic department provided organized team talks regarding eating disorders and other personal issues; however, due to the confidentiality requirements of this study, it was not possible to identify which athletic institutions provided these types of programs.

    Direct Measurement of Perchlorate Exposure Biomarkers in a Highly Exposed Population: A Pilot Study

    Exposure to perchlorate is ubiquitous in the United States, and perchlorate has been found to be widespread in food and drinking water. People living in the lower Colorado River region may have perchlorate exposure because of perchlorate in ground water and locally grown produce. Relatively high doses of perchlorate can inhibit iodine uptake and impair thyroid function, and thus could impair neurological development in utero. We examined human exposures to perchlorate in the Imperial Valley among individuals consuming locally grown produce and compared perchlorate exposure doses to state and federal reference doses. We collected 24-hour urine specimens from a convenience sample of 31 individuals and measured urinary excretion rates of perchlorate, thiocyanate, nitrate, and iodide. In addition, drinking water and local produce were sampled for perchlorate. All but two of the water samples tested negative for perchlorate. Perchlorate levels in 79 produce samples ranged from non-detect to 1816 ppb. Estimated perchlorate doses ranged from 0.02 to 0.51 µg/kg of body weight/day. Perchlorate dose increased with the number of servings of dairy products consumed and with estimated perchlorate levels in produce consumed. The geometric mean perchlorate dose was 70% higher than that of the NHANES reference population. Our sample of 31 Imperial Valley residents had higher perchlorate dose levels than national reference ranges. Although none of our exposure estimates exceeded the U.S. EPA reference dose, three participants exceeded the acceptable daily dose as defined by the benchmark dose methods used by the California Office of Environmental Health Hazard Assessment.
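The dose estimates above can be reproduced from first principles: with a complete 24-hour urine collection, nearly all ingested perchlorate is assumed to be excreted unchanged, so the daily dose is simply the measured urinary excretion divided by body weight. A minimal sketch of this calculation — the specific numbers below are hypothetical, chosen only to fall within the study's reported 0.02-0.51 µg/kg/day range:

```python
def perchlorate_dose(urinary_excretion_ug_per_day: float,
                     body_weight_kg: float) -> float:
    """Estimate daily perchlorate dose in ug per kg body weight per day.

    Assumes a complete 24-hour urine collection and that essentially all
    ingested perchlorate is excreted unchanged in urine -- a common
    simplifying assumption in biomonitoring work, not a claim from this
    particular study's methods.
    """
    return urinary_excretion_ug_per_day / body_weight_kg

# Hypothetical participant: 30 ug excreted over 24 h, 70 kg body weight.
dose = perchlorate_dose(30.0, 70.0)
print(round(dose, 2))  # 0.43 ug/kg/day, within the study's reported range
```

This simple mass-balance approach also makes clear why the authors compare against reference doses expressed in the same µg/kg/day units.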

    On a theorem of Y. Miyashita

    Background: Portion size is an important driver of larger meals. However, its effects on food choice remain unclear. Objective: Our aim was to identify how portion size influences the effect of palatability and expected satiety on choice. Methods: In Study 1, adult participants (n = 24, 87.5% women) evaluated the palatability and expected satiety of 5 lunchtime meals and ranked them in order of preference. Separate ranks were elicited for equicaloric portions from 100 to 800 kcal (in 100-kcal steps). In Study 2, adult participants (n = 24, 75% women) evaluated 9 meals and ranked 100–600 kcal portions in 3 contexts (scenarios), believing that 1) the next meal would be at 1900, 2) they would receive only a bite of one food, and 3) a favorite dish would be offered immediately afterwards. Regression analysis was used to quantify predictors of choice. Results: In Study 1, the extent to which expected satiety and palatability predicted choice was highly dependent on portion size (P < 0.001). With smaller portions, expected satiety was a positive predictor, playing a role equal to palatability (100-kcal portions: expected satiety, β: 0.42; palatability, β: 0.46). With larger portions, palatability was a strong predictor (600-kcal portions: β: 0.53), and expected satiety was a poor or negative predictor (600-kcal portions: β: −0.42). In Study 2, this pattern was moderated by context (P = 0.024). Results from scenario 1 replicated Study 1. However, expected satiety was a poor predictor in both scenario 2 (expected satiety was irrelevant) and scenario 3 (satiety was guaranteed), and palatability was the primary driver of choice across all portions. Conclusions: In adults, expected satiety influences food choice, but only when small equicaloric portions are compared. Larger portions not only promote the consumption of larger meals but also encourage the adoption of food choice strategies motivated solely by palatability.

    Effect of extended morning fasting upon ad libitum lunch intake and associated metabolic and hormonal responses in obese adults

    Background/Objectives: Breakfast omission is positively associated with obesity and increased risk of disease. However, little is known about the acute effects of extended morning fasting upon subsequent energy intake and associated metabolic/regulatory factors in obese adults. Subjects/Methods: In a randomised cross-over design, 24 obese men (n=8) and women (n=16) extended their overnight fast by omitting breakfast consumption or ingesting a typical carbohydrate-rich breakfast of 2183±393 kJ (521±94 kcal), before an ad libitum pasta lunch 3 h later. Blood samples were obtained throughout the day until 3 h post lunch and analysed for hormones implicated in appetite regulation, along with metabolic outcomes and subjective appetite measures. Results: Lunch intake was unaffected by extended morning fasting (difference=218 kJ, 95% confidence interval −54 kJ, 490 kJ; P=0.1), resulting in lower total intake in the fasting trial (difference=−1964 kJ, 95% confidence interval −1645 kJ, −2281 kJ; P<0.01). Systemic concentrations of peptide tyrosine–tyrosine and leptin were lower during the afternoon following morning fasting (P≤0.06). Plasma acylated ghrelin concentrations were also lower following the ad libitum lunch in the fasting trial (P<0.05), but this effect was not apparent for total ghrelin (P≥0.1). Serum insulin concentrations were greater throughout the afternoon in the fasting trial (P=0.05), with plasma glucose also greater 1 h after lunch (P<0.01). Extended morning fasting did not result in greater appetite ratings after lunch, with some tendency for lower appetite 3 h post lunch (P=0.09). Conclusions: We demonstrate for the first time that, in obese adults, extended morning fasting does not cause compensatory intake during an ad libitum lunch, nor does it increase appetite during the afternoon. Morning fasting reduced satiety hormone responses to a subsequent lunch meal but, counterintuitively, also reduced concentrations of the appetite-stimulating hormone acylated ghrelin during the afternoon, relative to lunch consumed after breakfast.

    Continuous ambulatory peritoneal dialysis: pharmacokinetics and clinical outcome of paclitaxel and carboplatin treatment

    Purpose: Administration of chemotherapy in patients with renal failure treated with hemodialysis or continuous ambulatory peritoneal dialysis (CAPD) remains a challenge, and literature data are scarce. Here we present a case study of a patient on CAPD treated with weekly and three-weekly paclitaxel/carboplatin for recurrent ovarian cancer. Experimental: During the first, second and ninth cycles of treatment, blood, urine and CAPD samples were collected for pharmacokinetic analysis of paclitaxel and of total and unbound carboplatin-derived platinum. Results: Treatment was well tolerated by the patient. No excessive toxicity was observed and at the e

    Computational prediction and experimental validation associating FABP-1 and pancreatic adenocarcinoma with diabetes

    <p>Background</p> <p>Pancreatic cancer, composed principally of pancreatic adenocarcinoma (PaC), is the fourth leading cause of cancer death in the United States. PaC-associated diabetes may be a marker of early disease. We sought to identify molecules associated with PaC and PaC with diabetes (PaC-DM) using a novel translational bioinformatics approach. We identified fatty acid binding protein-1 (FABP-1) as one of several candidates. The primary aim of this pilot study was to experimentally validate the predicted association of FABP-1 with PaC and PaC with diabetes.</p> <p>Methods</p> <p>We searched public microarray measurements for genes that were specifically highly expressed in PaC. We then filtered for proteins with known involvement in diabetes. Validation of FABP-1 was performed via antibody immunohistochemistry on formalin-fixed, paraffin-embedded pancreatic tissue microarrays (FFPE TMA). The FFPE TMA were constructed using 148 cores of pancreatic tissue from 134 patients who underwent pancreatic surgery, collected between 1995 and 2002. Primary analysis was performed on 21 normal and 60 pancreatic adenocarcinoma samples, stratified by diabetes status. Clinical data on samples were obtained via retrospective chart review. Serial sections were cut per standard protocol. Antibody staining was graded by an experienced pathologist on a scale of 0-3. Bivariate and multivariate analyses were conducted to assess FABP-1 staining and clinical characteristics.</p> <p>Results</p> <p>Normal samples were significantly more likely to come from younger patients. PaC samples were significantly more likely to stain for FABP-1 when FABP-1 staining was considered a binary variable. Compared to normals, there was significantly increased staining in diabetic PaC samples (p = 0.004), and there was a trend towards increased staining in the non-diabetic PaC group (p = 0.07). 
In logistic regression modeling, FABP-1 staining was significantly associated with a diagnosis of PaC (OR 8.6, 95% CI 1.1-68, p = 0.04), though age was a confounder.</p> <p>Conclusions</p> <p>Compared to normal controls, there was a significant positive association between FABP-1 staining and PaC on FFPE TMA, strengthened by the presence of diabetes. Further studies with closely phenotyped patient samples are required to understand the true relationship between FABP-1, PaC and PaC-associated diabetes. A translational bioinformatics approach has the potential to identify novel disease associations and potential biomarkers in gastroenterology.</p>

    Observational analytic studies in multiple sclerosis: controlling bias through study design and conduct. The Australian Multicentre Study of Environment and Immune Function

    Rising multiple sclerosis incidence over the last 50 years and geographic patterns of occurrence suggest an environmental role in the causation of this multifactorial disease. Design options for epidemiological studies of environmental causes of multiple sclerosis are limited by the low incidence of the disease, possible diagnostic delay and budgetary constraints. We describe scientific and methodological issues considered in the development of the Australian Multicentre Study of Environment and Immune Function (the Ausimmune Study), which seeks, in particular, to better understand the causes of the well-known positive latitudinal gradient in MS. A multicentre, case-control design down the eastern seaboard of Australia allows the recruitment of sufficient cases for adequate study power and provides data on environmental exposures that vary by latitude. Cases are persons with an incident first demyelinating event (rather than prevalent multiple sclerosis), sourced from a population base using a two-tier notification system. Controls, matched on sex, age (within two years) and region of residence, are recruited from the general population. Biases common in case-control studies, e.g., prevalence-incidence bias, admission-rate bias, non-respondent bias, observer bias and recall bias, as well as confounding, have been carefully considered in the design and conduct of the Ausimmune Study.

    Malaria paediatric hospitalization between 1999 and 2008 across Kenya

    <p>Abstract</p> <p>Background</p> <p>Intervention coverage and funding for the control of malaria in Africa have increased in recent years; however, there are few descriptions of the changing disease burden, and the few reports available are from isolated, single-site observations or are reports at country level. Here we present a nationwide assessment of changes over 10 years in paediatric malaria hospitalization across Kenya.</p> <p>Methods</p> <p>Paediatric admission data on malaria and non-malaria diagnoses were assembled for the period 1999 to 2008 from in-patient registers at 17 district hospitals in Kenya that represent the diverse malaria ecology of the country. These data were then analysed using autoregressive moving average time series models, with malaria and all-cause admissions as the main outcomes, adjusted for rainfall, changes in service use and populations at risk within each hospital's catchment, to establish whether there had been a statistically significant decline in paediatric malaria hospitalization during the observation period.</p> <p>Results</p> <p>Among the 17 hospital sites, adjusted paediatric malaria admissions had significantly declined at 10 hospitals over the 10 years since 1999, had significantly increased at four hospitals, and remained unchanged at three hospitals. The overall estimated average reduction in malaria admission rates was 0.0063 cases per 1,000 children aged 0 to 14 years per month, representing an average percentage reduction of 49% across the 10 hospitals registering a significant decline by the end of 2008. Paediatric admissions for all causes had declined significantly, with a reduction in admission rates of greater than 0.0050 cases per 1,000 children aged 0 to 14 years per month, at 6 of 17 hospitals. Where malaria admissions had increased, three of the four sites were located in Western Kenya, close to Lake Victoria. 
Conversely, there was an indication that the areas with the largest declines in malaria admission rates were located along the Kenyan coast and at some sites in the highlands of Kenya.</p> <p>Conclusion</p> <p>A country-wide assessment of trends in malaria hospitalizations indicates that all is not equal: important variations exist in the temporal pattern of malaria admissions between sites, and these differences require more detailed investigation to understand what is required to promote a clinical transition across Africa.</p>

    Malaria Rapid Testing by Community Health Workers Is Effective and Safe for Targeting Malaria Treatment: Randomised Cross-Over Trial in Tanzania

    Early diagnosis and prompt, effective treatment of uncomplicated malaria is critical to prevent severe disease, death and malaria transmission. We assessed the impact of rapid malaria diagnostic tests (RDTs) used by community health workers (CHWs) on the provision of artemisinin-based combination therapy (ACT) and on health outcomes in fever patients. Twenty-two CHWs from five villages in Kibaha District, a high-malaria-transmission area in Coast Region, Tanzania, were trained to manage uncomplicated malaria using RDT-aided diagnosis or clinical diagnosis (CD) only. Each CHW was randomly assigned to use either RDT or CD the first week and thereafter alternated weekly. The primary outcome was provision of ACT, and the main secondary outcomes were referral rates and health status by days 3 and 7. The CHWs enrolled 2930 fever patients during five months, of whom 1988 (67.8%) presented within 24 hours of fever onset. ACT was provided to 775 of 1457 (53.2%) patients during RDT weeks and to 1422 of 1473 (96.5%) patients during CD weeks (odds ratio (OR) 0.039, 95% CI 0.029-0.053). The CHWs adhered to the RDT results in 1411 of 1457 (96.8%, 95% CI 95.8-97.6) patients. More patients were referred on the inclusion day during RDT weeks (10.0%) than during CD weeks (1.6%). Referral during days 1-7 and perceived non-recovery on days 3 and 7 were also more common after RDT-aided diagnosis. However, no fatal or severe malaria occurred among the 682 patients in the RDT group who were not treated with ACT, supporting the safety of withholding ACT from RDT-negative patients. RDTs in the hands of CHWs may safely improve early and well-targeted ACT treatment of malaria patients at the community level in Africa.
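The headline odds ratio can be sanity-checked from the counts reported above: ACT was given to 775 of 1457 patients during RDT weeks and 1422 of 1473 during CD weeks. A quick unadjusted calculation from a 2x2 table — the published OR of 0.039 differs slightly because it comes from the trial's full statistical model, which this sketch does not reproduce:

```python
def odds_ratio(a_events: int, a_total: int,
               b_events: int, b_total: int) -> float:
    """Unadjusted odds ratio of events in group A relative to group B.

    Computed from raw counts as (a/(a_total-a)) / (b/(b_total-b));
    confidence intervals and adjustment are out of scope here.
    """
    odds_a = a_events / (a_total - a_events)
    odds_b = b_events / (b_total - b_events)
    return odds_a / odds_b

# ACT provision: RDT weeks vs clinical-diagnosis weeks, counts from the abstract.
or_rdt_vs_cd = odds_ratio(775, 1457, 1422, 1473)
print(round(or_rdt_vs_cd, 3))  # ~0.041, close to the reported adjusted OR of 0.039
```

The large gap between the two groups' odds (roughly 1.1 vs 28) is what drives the very small ratio: almost every patient received ACT under clinical diagnosis, while RDTs screened out about half.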