136 research outputs found
A prospective, double-blind, randomized, controlled clinical trial comparing standard wound care with adjunctive hyperbaric oxygen therapy (HBOT) to standard wound care only for the treatment of chronic, non-healing ulcers of the lower limb in patients with diabetes mellitus: a study protocol
BACKGROUND: It has been suggested that adjunctive hyperbaric oxygen therapy improves the healing of diabetic foot ulcers and decreases the risk of lower-extremity amputation. Only a limited number of studies have used a double-blind approach to evaluate the efficacy of hyperbaric oxygen therapy in the treatment of diabetic ulcers. The primary aim of this study is to assess the efficacy of hyperbaric oxygen therapy plus standard wound care, compared with standard wound care alone, in preventing the need for major amputation in patients with diabetes mellitus and chronic ulcers of the lower limb. METHODS/DESIGN: One hundred and eighteen patients (59 per arm) with non-healing diabetic ulcers of the lower limb, referred to the Judy Dan Research and Treatment Centre, are being recruited if they are at least 18 years of age and have Type 1 or Type 2 diabetes with a Wagner grade 2, 3, or 4 lesion of the lower limb that has not healed for at least 4 weeks. Patients receive hyperbaric oxygen therapy every day for 6 weeks during the treatment phase and are provided ongoing wound care and weekly assessments. Patients are required to return to the study centre every week for an additional 6 weeks of follow-up for wound evaluation and management. The primary outcome is freedom from having, or meeting the criteria for, a major amputation (below-knee or metatarsal-level) up to 12 weeks after randomization. The decision to amputate is made by a vascular surgeon. Other outcomes include wound healing, effectiveness, safety, healthcare resource utilization, quality of life, and cost-effectiveness. The study will run for a total of about 3 years. DISCUSSION: The results of this study will provide detailed information on the efficacy of hyperbaric oxygen therapy for the treatment of non-healing ulcers of the lower limb. This will be the first double-blind randomized controlled trial of this health technology to evaluate the efficacy of hyperbaric oxygen therapy in preventing amputations in diabetic patients. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT00621608 (http://www.clinicaltrials.gov/ct2/show/NCT00621608)
Distinct Gene Number-Genome Size Relationships for Eukaryotes and Non-Eukaryotes: Gene Content Estimation for Dinoflagellate Genomes
The ability to predict gene content is highly desirable for the characterization of not-yet-sequenced genomes such as those of dinoflagellates. Using data from completely sequenced and annotated genomes from phylogenetically diverse lineages, we investigated the relationship between gene content and genome size using regression analyses. Distinct relationships between log10-transformed protein-coding gene number (Y′) and log10-transformed genome size (X′, genome size in kbp) were found for eukaryotes and non-eukaryotes. Eukaryotes best fit a logarithmic model, Y′ = ln(−46.200 + 22.678X′), whereas non-eukaryotes best fit a linear model, Y′ = 0.045 + 0.977X′, both with high significance (p < 0.001, R² > 0.91). Total gene number shows similar trends in both groups to the respective protein-coding regressions. The distinct correlations reflect lower and decreasing gene-coding percentages as genome size increases in eukaryotes (82%–1%), compared with higher and relatively stable percentages in prokaryotes and viruses (97%–47%). The eukaryotic regression models project that the smallest dinoflagellate genome (3 × 10⁶ kbp) contains 38,188 protein-coding genes (40,086 total) and the largest (245 × 10⁶ kbp) contains 87,688 protein-coding genes (92,013 total), corresponding to gene-coding percentages of 1.8% and 0.05%, respectively. These estimates likely do not represent extraordinarily high functional diversity of the encoded proteome, but rather highly redundant genomes, as evidenced by the high gene copy numbers documented for various dinoflagellate species.
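As a worked illustration of the two regression models, the following minimal Python sketch converts a genome size in kbp into a predicted protein-coding gene number. The helper names are ours, and the coefficients are those quoted in the abstract; because those coefficients are rounded, the outputs approximate, rather than exactly reproduce, the published estimates.

```python
import math

# Regression models as quoted in the abstract (log10-transformed variables):
#   eukaryotes:     Y' = ln(-46.200 + 22.678 * X')   (logarithmic model)
#   non-eukaryotes: Y' = 0.045 + 0.977 * X'          (linear model)
# where X' = log10(genome size in kbp) and Y' = log10(protein-coding gene number).

def predicted_genes_eukaryote(genome_size_kbp: float) -> float:
    """Predicted protein-coding gene number for a eukaryotic genome."""
    x = math.log10(genome_size_kbp)
    y = math.log(-46.200 + 22.678 * x)  # natural log, per the fitted model
    return 10 ** y

def predicted_genes_non_eukaryote(genome_size_kbp: float) -> float:
    """Predicted protein-coding gene number for a prokaryotic or viral genome."""
    x = math.log10(genome_size_kbp)
    y = 0.045 + 0.977 * x
    return 10 ** y

# Smallest (3 Gbp = 3e6 kbp) and largest (245 Gbp = 245e6 kbp) dinoflagellate genomes:
for size_kbp in (3e6, 245e6):
    print(f"{size_kbp:.0f} kbp -> ~{predicted_genes_eukaryote(size_kbp):,.0f} genes")
```

Run as written, this yields roughly 41,000 and 93,000 genes, of the same order as the reported projections.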
Spatial patterns of microbial diversity and activity in an aged creosote-contaminated site
Restoration of polluted sites via in situ bioremediation relies heavily on indigenous microbes and their activities. Spatial heterogeneity of microbial populations, contaminants, and soil chemical parameters on such sites is a major hurdle in optimizing and implementing an appropriate bioremediation regime. We performed grid-based sampling of an aged creosote-contaminated site, followed by geostatistical modelling, to illustrate the spatial patterns of microbial diversity and activity and to relate these patterns to the distribution of pollutants. The spatial distribution of bacterial groups revealed patterns of niche differentiation regulated by the patchy distribution of pollutants and an east-to-west pH gradient at the studied site. Proteobacteria clearly dominated in the hot spots of creosote pollution, whereas the abundance of Actinobacteria, TM7 and Planctomycetes was considerably reduced in the hot spots. The pH preferences of the proteobacterial groups dominating in polluted areas could be recognized by examining order- and family-level responses. Acidobacterial classes appeared to be generalists with respect to hydrocarbon pollution, their spatial distribution seemingly regulated solely by the pH gradient. Although community evenness decreased in the heavily polluted zones, basal respiration and fluorescein diacetate hydrolysis rates were higher there, indicating the adaptation of specific indigenous microbial populations to hydrocarbon pollution. Combining the information from the kriged maps of microbial and soil chemistry data provided a comprehensive understanding of the long-term impacts of creosote pollution on subsurface microbial communities. This study also highlights the prospect of interpreting taxa-specific spatial patterns and applying them as indicators or proxies for monitoring polluted sites.
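For readers unfamiliar with the geostatistical step, the sketch below shows ordinary kriging of point samples onto a regular grid, the operation behind the kriged maps mentioned above. The coordinates and measurements are synthetic, and PyKrige is one common library choice, not necessarily the one used in the study.

```python
# Minimal sketch: interpolate grid-based point samples into a kriged map.
# Requires `pip install pykrige`; all data below are illustrative stand-ins
# for a measured variable such as basal respiration.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Sample coordinates (m) and measured values at those points.
x = np.array([0.0, 10.0, 20.0, 0.0, 10.0, 20.0])
y = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0])
z = np.array([1.2, 3.4, 2.1, 0.8, 2.9, 1.7])

# Fit a variogram model and interpolate onto a regular grid; spherical or
# exponential variograms are common for soil data, linear is the simplest.
ok = OrdinaryKriging(x, y, z, variogram_model="linear")
grid_x = np.arange(0.0, 20.5, 0.5)
grid_y = np.arange(0.0, 10.5, 0.5)
z_interp, variance = ok.execute("grid", grid_x, grid_y)
# z_interp can be plotted as the kriged map; the kriging variance highlights
# poorly sampled regions where predictions are least reliable.
```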
Natural and anthropogenic changes to mangrove distributions in the Pioneer River Estuary (QLD, Australia)
We analyzed a time series of aerial photographs and Landsat satellite imagery of the Pioneer River Estuary (near Mackay, Queensland, Australia) to document both natural and anthropogenic changes in the area of mangroves available to filter river runoff between 1948 and 2002. Over 54 years, there was a net loss of 137 ha (22%) of tidal mangroves during four successive periods that were characterized by different driving mechanisms: (1) little net change (1948–1962); (2) net gain from rapid mangrove expansion (1962–1972); (3) net loss from clearing and tidal isolation (1972–1991); and (4) net loss from a severe species-specific dieback affecting over 50% of remaining mangrove cover (1991–2002). Manual digitization of aerial photographs was accurate for mapping changes in the boundaries of mangrove distributions, but this technique underestimated the total loss due to dieback. Regions of mangrove dieback were identified and mapped more accurately and efficiently after applying the Normalized Difference Vegetation Index (NDVI) to Landsat Thematic Mapper satellite imagery, and then monitoring changes to the index over time. These remote sensing techniques to map and monitor mangrove changes are important for identifying habitat degradation, both spatially and temporally, in order to prioritize restoration for management of estuarine and adjacent marine ecosystems.
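The NDVI step is simple to state explicitly: NDVI = (NIR − Red) / (NIR + Red), computed per pixel from Landsat TM band 4 (near-infrared) and band 3 (red). The sketch below uses synthetic arrays in place of real co-registered rasters, and the change threshold is illustrative, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for co-registered Landsat TM reflectance rasters
# (band 3 = red, band 4 = near-infrared) from two acquisition dates.
nir_1991 = rng.uniform(0.35, 0.55, (100, 100))
red_1991 = rng.uniform(0.04, 0.10, (100, 100))
nir_2002 = rng.uniform(0.10, 0.55, (100, 100))
red_2002 = rng.uniform(0.04, 0.15, (100, 100))

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); healthy canopy scores high."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# A strong drop in NDVI between dates flags candidate dieback pixels.
delta = ndvi(nir_1991, red_1991) - ndvi(nir_2002, red_2002)
dieback_mask = delta > 0.3  # threshold is illustrative, not from the study
print(f"flagged pixels: {dieback_mask.sum()} of {dieback_mask.size}")
```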
Remote detection of invasive alien species
The spread of invasive alien species (IAS) is recognized as the most severe threat to biodiversity after climate change and anthropogenic habitat destruction. IAS negatively impact ecosystems, local economies, and residents. They are especially problematic because, once established, they give rise to positive feedbacks that increase the likelihood of further invasions and spread. The integration of remote sensing (RS) into the study of invasions, in addition to contributing to our understanding of invasion processes and impacts on biodiversity, has enabled managers to monitor invasions and predict the spread of IAS, thus supporting biodiversity conservation and management action. This chapter focuses on RS capabilities to detect and monitor invasive plant species across terrestrial, riparian, aquatic, and human-modified ecosystems. Each of these environments has a unique species assemblage and its own optimal methodology for effective detection and mapping, which we discuss in detail.
FRAX™ and the assessment of fracture probability in men and women from the UK
SUMMARY: A fracture risk assessment tool (FRAX) is developed based on the use of clinical risk factors, with or without bone mineral density tests, applied to the UK. INTRODUCTION: The aim of this study was to apply an assessment tool for the prediction of fracture in men and women using clinical risk factors (CRFs) for fracture, with and without femoral neck bone mineral density (BMD). The clinical risk factors, identified from previous meta-analyses, comprised body mass index (BMI, as a continuous variable), a prior history of fracture, a parental history of hip fracture, use of oral glucocorticoids, rheumatoid arthritis and other secondary causes of osteoporosis, current smoking, and alcohol intake of 3 or more units daily. METHODS: Four models were constructed to compute fracture probabilities based on the epidemiology of fracture in the UK. The models comprised the ten-year probability of hip fracture, with and without femoral neck BMD, and the ten-year probability of a major osteoporotic fracture, with and without BMD. For each model, fracture and death hazards were computed as continuous functions. RESULTS: Each clinical risk factor contributed to fracture probability. In the absence of BMD, hip fracture probability in women with a fixed BMI (25 kg/m²) ranged from 0.2% at the age of 50 years for women without CRFs to 22% at the age of 80 years with a parental history of hip fracture (approximately a 100-fold range). In men, the probabilities were lower, as was the range (0.1% to 11% in the examples above). For a major osteoporotic fracture, the probabilities ranged from 3.5% to 31% in women and from 2.8% to 15% in men in the examples above. The presence of one or more risk factors increased probabilities in an incremental manner. The differences in probabilities between men and women were comparable at any given T-score and age, except in the elderly, where probabilities were higher in women than in men owing to the higher mortality of men. CONCLUSION: The models provide a framework which enhances the assessment of fracture risk in both men and women by the integration of clinical risk factors alone and/or in combination with BMD.
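To make the "continuous fracture and death hazards" step concrete: a ten-year fracture probability treats death as a competing risk, P = ∫₀¹⁰ h_f(t)·exp(−∫₀ᵗ (h_f + h_d) du) dt. The sketch below integrates this numerically with placeholder hazards; the actual FRAX hazard functions and coefficients are not given in the abstract, so everything numeric here is purely illustrative.

```python
import math

def ten_year_probability(h_fracture, h_death, years=10.0, steps=10000):
    """P(first fracture within `years`), with death as a competing risk:
    P = integral_0^T h_f(t) * exp(-integral_0^t (h_f + h_d) du) dt."""
    dt = years / steps
    cumulative = 0.0  # integral of the total hazard up to time t
    prob = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                   # midpoint rule
        hf, hd = h_fracture(t), h_death(t)
        prob += hf * math.exp(-cumulative) * dt
        cumulative += (hf + hd) * dt
    return prob

# Illustrative constant hazards (NOT FRAX values): fracture ~2%/yr, death ~6%/yr.
p = ten_year_probability(lambda t: 0.02, lambda t: 0.06)
print(f"10-year fracture probability: {p:.1%}")  # ~13.8%
```

The example also shows why probabilities fall with higher mortality: a larger death hazard shrinks the survival term, truncating the window in which a fracture can occur.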
Assessing the Quality of Clinical Teachers: A Systematic Review of Content and Quality of Questionnaires for Assessing Clinical Teachers
BACKGROUND: Learning in a clinical environment differs from formal educational settings and poses specific challenges for clinicians who teach. Instruments that reflect these challenges are needed to identify the strengths and weaknesses of clinical teachers. OBJECTIVE: To systematically review the content, validity, and aims of questionnaires used to assess clinical teachers. DATA SOURCES: MEDLINE, EMBASE, PsycINFO and ERIC, from 1976 up to March 2010. REVIEW METHODS: The searches revealed 54 papers on 32 instruments. Data from these papers were documented by independent researchers, using a structured format that included the content of the instrument, validation methods, aims of the instrument, and its setting. RESULTS: Aspects covered by the instruments predominantly concerned the use of teaching strategies (included in 30 instruments), the supporter role (29), role modeling (27), and feedback (26). Providing opportunities for clinical learning activities was included in 13 instruments. Most studies referred to the literature on good clinical teaching, although they failed to provide a clear description of what constitutes a good clinical teacher. Instrument length varied from 1 to 58 items. Except for two instruments, all had to be completed by clerks/residents. Instruments served to provide formative feedback but were also used for resource allocation, promotion, and annual performance review (14 instruments). All but two studies reported on internal consistency and/or reliability; other aspects of validity were examined less frequently. CONCLUSIONS: No instrument covered all relevant aspects of clinical teaching comprehensively. Validation of the instruments was often limited to assessment of internal consistency and reliability. Available instruments for assessing clinical teachers should be used carefully, especially for consequential decisions. There is a need for more valid, comprehensive instruments.
Beyond the Evidence of the New Hypertension Guidelines. Blood pressure measurement – is it good enough for accurate diagnosis of hypertension? Time might be in, for a paradigm shift (I)
Despite the widespread availability of a large body of evidence in the area of hypertension, translating that evidence into viable recommendations aimed at improving the quality of health care is very difficult, sometimes to the point of questionable acceptability and overall credibility of the guidelines advocating those recommendations. The scientific community world-wide, and especially professionals interested in the topic of hypertension, are currently witnessing an unprecedented debate over the appropriateness of using different drugs/drug classes for the treatment of hypertension. An endless supply of recent and less recent "drug news", some in support of and others against the current guidelines, justifying the use of selected types of drug treatment or criticising others, appears in the scientific literature on an almost weekly basis. The latest such debate (at the time of writing this paper) pertains to the safety profile of ARBs vs ACE inhibitors. To a great extent, this situation has been fuelled by the new hypertension guidelines (different for the USA, Europe, New Zealand and the UK), whose apparently small inconsistencies and conflicting messages may have generated substantial and perpetuating confusion among both prescribing physicians and their patients, regardless of their country of origin. The overwhelming message conveyed by most guidelines and opinion leaders is the widespread use of diuretics as first-line agents in all patients with blood pressure above a certain cut-off level, and an increasingly aggressive approach towards the diagnosis and treatment of hypertension. This apparently well-justified, logical and easily comprehensible message is unfortunately not followed by most physicians, on both sides of the Atlantic. Remarkably, the message assumes a universal simplicity of both the diagnosis and the treatment of hypertension, while ignoring several hypertension-specific variables commonly known to be highly complex, such as:
- the accuracy of recorded blood pressure and the great inter-observer variability,
- diversity in the competency and training of the diagnosing physician,
- the individual patient/disease profile, with highly subjective preferences,
- the difficulty in reaching consensus among opinion leaders,
- the pharmaceutical industry's influence, and, not least,
- the large variability in the efficacy and safety of antihypertensive drugs.
This two-part article attempts to identify and review possible causes that might have, at least in part, generated the current healthcare anachronism (I), and to highlight the current trend to account for the uncertainties related to the fixed blood pressure cut-off point, together with possible solutions to improve the accuracy of diagnosis and treatment of hypertension (II).
- β¦