The addition of locust bean gum but not water delayed the gastric emptying rate of a nutrient semisolid meal in healthy subjects
BACKGROUND: Most previous studies of gel-forming fibres have examined the gastric emptying of liquid or solid meals after the addition of pectin or guar gum. The influence of locust bean gum on the gastric emptying of nutrient semisolid meals in humans has been less well studied, despite its common occurrence in foods. Using a standardised ultrasound method, this study investigated whether gastric emptying in healthy subjects could be influenced by adding locust bean gum, a widely used thickening agent, or water directly to a nutrient semisolid test meal. METHODS: The viscosity of a basic test meal (300 g rice pudding, 330 kcal) was increased by adding Nestargel (6 g, 2.4 kcal), containing viscous dietary fibres (96.5%) provided as seed flour of locust bean gum, and decreased by adding 100 ml of water. Gastric emptying of these three test meals was evaluated in fifteen healthy non-smoking volunteers, using ultrasound measurements of the gastric antral area to estimate the gastric emptying rate (GER). RESULTS: The median GER with the basic test meal (rice pudding) was 63% (range 47 to 84%; first quartile 61%, third quartile 69%). Increasing the viscosity of the rice pudding by adding Nestargel resulted in significantly lower gastric emptying rates (p < 0.01): median GER 54% (range 7 to 71%; first quartile 48%, third quartile 60%). When the viscosity of the rice pudding was decreased (basic test meal diluted with water), the median GER of 65% (range 38 to 79%; first quartile 56%, third quartile 71%) did not differ significantly (p = 0.28) from that of the basic test meal.
CONCLUSIONS: We conclude that the addition of locust bean gum to a nutrient semisolid meal markedly delays the gastric emptying rate, whereas the addition of water to the same test meal has no influence on gastric emptying in healthy subjects.
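The abstract does not state the exact GER formula, but in antral-area ultrasound protocols the GER is commonly computed as the percentage reduction of the antral cross-sectional area between an early and a late post-prandial scan. A minimal sketch under that assumption, with purely illustrative values (the time points, areas, and subject count below are not data from the study):

```python
# Assumed convention for ultrasound-based GER (not given in the abstract):
#   GER (%) = (1 - final antral area / initial antral area) * 100
from statistics import median, quantiles

def gastric_emptying_rate(area_initial: float, area_final: float) -> float:
    """Percent reduction of the gastric antral cross-sectional area."""
    return (1.0 - area_final / area_initial) * 100.0

# Illustrative antral areas (cm^2) for three hypothetical subjects,
# e.g. measured 15 min and 90 min after the meal -- not study data.
measurements = [(10.0, 3.7), (12.5, 4.0), (9.0, 3.6)]
gers = [gastric_emptying_rate(a0, a1) for a0, a1 in measurements]

median_ger = median(gers)         # summary statistic, as the abstract reports
q1, _, q3 = quantiles(gers, n=4)  # first and third quartiles
```

Reporting the median with quartiles, as the abstract does, suits percentage data from small samples, which are often skewed.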
Ecto- and arbuscular mycorrhizal symbiosis can induce tolerance to toxic pulses of phosphorus in jarrah (Eucalyptus marginata) seedlings
In common with many plants native to low-P soils, jarrah (Eucalyptus marginata) develops toxicity symptoms upon exposure to elevated phosphorus (P). Jarrah plants can establish arbuscular mycorrhizal (AM) and ectomycorrhizal (ECM) associations, along with a recently described non-colonizing symbiosis. AM colonization is known to influence the expression pattern of genes required for P uptake in host plants, and our aim was to investigate this phenomenon in relation to P sensitivity. Therefore, we examined the effect on hosts of the presence of AM and ECM fungi in combination with toxic pulses of P and assessed possible correlations between the induced tolerance and the shoot P concentration. The P transport dynamics of AM (Rhizophagus irregularis and Scutellospora calospora), ECM (Scleroderma sp.), non-colonizing symbiosis (Austroboletus occidentalis), dual mycorrhizal (R. irregularis and Scleroderma sp.), and non-mycorrhizal (NM) seedlings were monitored following two pulses of P. The ECM and A. occidentalis associations significantly enhanced the shoot P content of jarrah plants growing under P-deficient conditions. In addition, S. calospora, A. occidentalis, and Scleroderma sp. all stimulated plant growth significantly. All inoculated plants showed significantly milder phytotoxicity symptoms than NM controls 7 days after addition of an elevated P dose (30 mg P kg−1 soil). Following exposure to toxicity-inducing levels of P, the shoot P concentration was significantly lower in R. irregularis-inoculated and dually inoculated plants than in NM controls. Although all inoculated plants had reduced toxicity symptoms and there was a positive linear relationship between rank and shoot P concentration, the protective effect was not necessarily explained by the type of fungal association or the extent of mycorrhizal colonization.
Exploiting bacterial DNA gyrase as a drug target: current state and perspectives
DNA gyrase is a type II topoisomerase that introduces negative supercoils into DNA at the expense of ATP hydrolysis. It is essential in all bacteria but absent from higher eukaryotes, making it an attractive target for antibacterials. The fluoroquinolones are examples of very successful gyrase-targeted drugs, but the rise in bacterial resistance to these agents means that we need to seek not only new compounds but also new modes of inhibition of this enzyme. We review known gyrase-specific drugs and toxins and assess the prospects for developing new antibacterials targeted to this enzyme.
Structural Insights into the Quinolone Resistance Mechanism of Mycobacterium tuberculosis DNA Gyrase
Mycobacterium tuberculosis DNA gyrase, an indispensable nanomachine involved in the regulation of DNA topology, is the only type II topoisomerase present in this organism and is hence the sole target of quinolones, crucial drugs active against multidrug-resistant tuberculosis. To understand at an atomic level the quinolone resistance mechanism, which emerges in extensively drug-resistant tuberculosis, we performed combined functional, biophysical and structural studies of the two individual domains constituting the catalytic DNA gyrase reaction core, namely the Toprim and the breakage-reunion domains. This allowed us to produce a model of the catalytic reaction core in complex with DNA and a quinolone molecule, identifying original mechanistic properties of quinolone binding and clarifying the relationships between amino acid mutations and the resistance phenotype of M. tuberculosis DNA gyrase. These results are compatible with our previous studies on quinolone resistance. Interestingly, the structure of the entire breakage-reunion domain revealed a new interaction, in which the Quinolone-Binding Pocket (QBP) is blocked by the N-terminal helix of a symmetry-related molecule. This interaction provides useful starting points for designing peptide-based inhibitors that target DNA gyrase to prevent its binding to DNA.
The challenges of measuring bleeding outcomes in clinical trials of platelet transfusions
Background: Many platelet (PLT) transfusion trials now use bleeding as a primary outcome; however, previous studies have shown a wide variation in the amount (5%-70%) and type of bleeding documented. Differences in the way bleeding has been identified, recorded, and graded may account for some of this variability. This study's aim was to compare the trials' methods of documenting and grading bleeding. Study Design and Methods: Data were collected via three methods: a review of study publications, study case report forms, and a questionnaire sent to the authors. Authors of randomized controlled trials of PLT transfusion that used bleeding as an outcome measure were identified from the searches reported by two recent systematic reviews. Twenty-four authors were contacted, and 13 agreed to participate. Data submitted were reviewed and summarized. Results: More recent studies with trained bleeding assessors, detailed documentation, and expanded grading systems have reported higher overall levels of bleeding. The World Health Organization grading system was widely used to grade bleeding, but there was no consistency in the bleeding grade definitions. For example, bleeding classified as Grade 2 in some studies (spreading petechiae) was classified as Grade 1 in other studies. Conclusions: This study has highlighted differences in the methods of recording and grading bleeding, which may account for some of the variation in reported bleeding rates. To ensure that differences between studies can be attributed to trial interventions or types of participant included, this study group is developing consensus bleeding definitions, a standardized approach to record and grade bleeding, and guidance notes to educate and train bleeding assessors. © 2013 American Association of Blood Banks
Variation in seedling growth of 11 perennial legumes in response to phosphorus supply
Phosphorus (P) deficiency is a major problem for Australian agriculture. Development of new perennial pasture legumes that acquire or use P more efficiently than the current major perennial pasture legume, lucerne (Medicago sativa L.), is urgent. A glasshouse experiment compared the responses of ten perennial herbaceous legume species to a series of P supplies ranging from 0 to 384 µg g−1 soil, with lucerne as the control. Under low-P conditions, several legumes produced more biomass than lucerne. Four species (Lotononis bainesii Baker, Kennedia prorepens F.Muell, K. prostrata R.Br, Bituminaria bituminosa (L.) C.H.Stirt) achieved maximum growth at 12 µg P g−1 soil, while the other species required 24 µg P g−1. In most tested legumes, biomass production was reduced when P supply was ≥192 µg g−1, due to P toxicity, while L. bainesii and K. prorepens showed reduced biomass when P was ≥24 µg g−1 and K. prostrata at ≥48 µg P g−1 soil. B. bituminosa and Glycine canescens F.J.Herm required less soil P to achieve 0.5 g dry mass than the other species did. Lucerne performed poorly with low P supply, and our results suggest that some novel perennial legumes may perform better on low-P soils.