Nonsteroidal Anti-Inflammatory Drugs: A survey of practices and concerns of pediatric medical and surgical specialists and a summary of available safety data
Objectives: To examine the prescribing habits of NSAIDs among pediatric medical and surgical practitioners, and to examine concerns and barriers to their use. Methods: A sample of 1289 pediatricians, pediatric rheumatologists, sports medicine physicians, pediatric surgeons and pediatric orthopedic surgeons in the United States and Canada was sent an email link to a 22-question web-based survey. Results: 338 surveys (28%) were completed; 84 were undeliverable. Of all respondents, 164 (50%) had never prescribed a selective cyclooxygenase-2 (COX-2) NSAID. The most common reasons for ever prescribing an NSAID were musculoskeletal pain, soft-tissue injury, fever, arthritis, fracture, and headache. Compared to traditional NSAIDs, selective COX-2 NSAIDs were believed to be as safe (42%) or safer (24%); to have equal (52%) or greater (20%) efficacy for pain; to have equal (59%) or greater (15%) efficacy for inflammation; and to have equal (39%) or improved (44%) tolerability. Pediatric rheumatologists reported significantly more frequent abdominal pain (81% vs. 23%), epistaxis (13% vs. 2%), easy bruising (64% vs. 8%), headaches (21% vs. 1%) and fatigue (12% vs. 1%) for traditional NSAIDs than for selective COX-2 NSAIDs. Prescribing habits of NSAIDs have changed since the voluntary withdrawal of rofecoxib and valdecoxib: 3% of pediatric rheumatologists reported giving fewer traditional NSAID prescriptions, 57% reported giving fewer selective COX-2 NSAID prescriptions, and 26% reported that they no longer prescribed these medications. Conclusions: Traditional and selective COX-2 NSAIDs were perceived as safe by pediatric specialists. The data were compared to the published pediatric safety literature.
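As an aside on the analysis implied by contrasts such as "abdominal pain 81% vs. 23%", a minimal sketch of a two-sample proportion test is shown below. The respondent count and the assumption that the two sets of reports are independent samples are illustrative only; the study may well have used a paired analysis.

```python
from statsmodels.stats.proportion import proportions_ztest

n_respondents = 100        # hypothetical number of pediatric rheumatologists per comparison
reports = [81, 23]         # reporting abdominal pain: traditional vs. selective COX-2 NSAIDs
totals = [n_respondents, n_respondents]

stat, p = proportions_ztest(count=reports, nobs=totals)
print(f"z = {stat:.2f}, p = {p:.3g}")
```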
Changes in Beliefs Identify Unblinding in Randomized Controlled Trials: A Method to Meet CONSORT Guidelines
Double-blinded trials are often considered the gold standard for research, but significant bias may result from unblinding of participants and investigators. Although the CONSORT guidelines discuss the importance of reporting evidence that blinding was successful, it is unclear what constitutes appropriate evidence. Among studies reporting methods to evaluate blinding effectiveness, many have compared groups with respect to the proportions correctly identifying their intervention at the end of the trial. Instead, we reasoned that participants' beliefs, and not their correctness, are more directly associated with potential bias, especially in relation to self-reported health outcomes. During the Water Evaluation Trial performed in northern California in 1999, we investigated blinding effectiveness by sequential interrogation of participants about their blinded intervention assignment (active or placebo). Irrespective of group, participants showed a strong tendency to believe they had been assigned to the active intervention; this translated into a statistically significant intergroup difference in the correctness of participants' beliefs, even at the start of the trial before unblinding had a chance to occur. In addition, many participants (31%) changed their belief during the trial, suggesting that assessment of belief at a single time does not capture unblinding. Sequential measures based on either two or all eight questionnaires identified significant group-related differences in belief patterns that were not identified by the single, cross-sectional measure. In view of the relative insensitivity of cross-sectional measures, the minimal additional information in more than two assessments of beliefs, and the risk of modifying participants' beliefs by repeated questioning, we conclude that the optimal means of assessing unblinding is an intergroup comparison of the change in beliefs (and not their correctness) between the start and end of a randomized controlled trial.
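A minimal sketch of the proposed approach, assuming hypothetical data and column names: tabulate each participant's belief trajectory between the start and end of the trial, then compare the distribution of trajectories across randomized groups. The chi-square test here is one plausible choice, not necessarily the analysis used in the trial.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Each row: randomized group plus stated belief at baseline and at trial end (toy data).
df = pd.DataFrame({
    "group":        ["active", "active", "active", "sham", "sham", "sham"],
    "belief_start": ["active", "placebo", "active", "active", "active", "placebo"],
    "belief_end":   ["active", "active",  "active", "placebo", "active", "active"],
})

# Summarize each participant's belief trajectory, e.g. "placebo->active".
df["belief_change"] = df["belief_start"] + "->" + df["belief_end"]

# Cross-tabulate trajectories by randomized group; a significant association
# between group and trajectory suggests unblinding during the trial.
table = pd.crosstab(df["group"], df["belief_change"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```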
Predictive Models of Assistance Dog Training Outcomes Using the Canine Behavioral Assessment and Research Questionnaire and a Standardized Temperament Evaluation
Assistance dogs can greatly improve the lives of people with disabilities. However, a large proportion of dogs bred and trained for this purpose are deemed unable to successfully fulfill the behavioral demands of this role. Often, this determination is not finalized until weeks or even months into training, when the dog is close to 2 years old. Thus, there is an urgent need to develop objective selection protocols that can identify dogs most and least likely to succeed, from early in the training process. We assessed the predictive validity of two candidate measures employed by Canine Companions for Independence (CCI), a national assistance dog organization headquartered in Santa Rosa, CA. For more than a decade, CCI has collected data on their population using the Canine Behavioral Assessment and Research Questionnaire (C-BARQ) and a standardized temperament assessment known internally as the In-For-Training (IFT) test, which is conducted at the beginning of professional training. Data from both measures were divided into independent training and test datasets, with the training data used for variable selection and cross-validation. We developed three predictive models in which we predicted success or release from the training program using C-BARQ scores (N = 3,569), IFT scores (N = 5,967), and a combination of scores from both instruments (N = 2,990). All three final models performed significantly better than the null expectation when applied to the test data, with overall accuracies ranging from 64 to 68%. Model predictions were most accurate for dogs predicted to have the lowest probability of success (ranging from 85 to 92% accurate for dogs in the lowest 10% of predicted probabilities), and moderately accurate for identifying the dogs most likely to succeed (ranging from 62 to 72% for dogs in the top 10% of predicted probabilities). Combining C-BARQ and IFT predictors into a single model did not improve overall accuracy, although it did improve accuracy for dogs in the lowest 20% of predicted probabilities. Our results suggest that both types of assessments have the potential to be used as powerful screening tools, thereby allowing more efficient allocation of resources in assistance dog selection and training. Funding: Ministry of Food, Agriculture and Fisheries of Denmark (DFFE) [3304-FVFP-09-F-011].
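The workflow described above (train/test split, a probabilistic classifier, and accuracy examined within the tails of the predicted-probability distribution) can be sketched as follows. The feature names, the simulated data, and the use of logistic regression are assumptions for illustration, not CCI's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical stand-ins for C-BARQ and IFT scores (simulated).
X = pd.DataFrame({
    "cbarq_trainability": rng.normal(size=n),
    "cbarq_stranger_fear": rng.normal(size=n),
    "ift_composite": rng.normal(size=n),
})
# Simulated success (1) / release (0) label, loosely related to the features.
logits = (0.8 * X["cbarq_trainability"]
          - 0.9 * X["cbarq_stranger_fear"]
          + 0.5 * X["ift_composite"])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int).to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

p_test = model.predict_proba(X_test)[:, 1]
overall_acc = (model.predict(X_test) == y_test).mean()

# Accuracy among dogs in the lowest / highest 10% of predicted probabilities:
# low scores should correspond to release (0), high scores to success (1).
lo, hi = np.quantile(p_test, [0.10, 0.90])
acc_lowest = (y_test[p_test <= lo] == 0).mean()
acc_highest = (y_test[p_test >= hi] == 1).mean()
print(f"overall={overall_acc:.2f}, "
      f"lowest decile={acc_lowest:.2f}, highest decile={acc_highest:.2f}")
```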
Did a Severe Flood in the Midwest Cause an Increase in the Incidence of Gastrointestinal Symptoms?
Severe flooding occurred in the midwestern United States in 2001. Since November 2000, coincidentally, data on gastrointestinal symptoms had been collected for a drinking water intervention study in a community along the Mississippi River that was affected by the flood. After the flood had subsided, the authors asked these subjects (n = 1,110) about their contact with floodwater. The objectives of this investigation were to determine whether rates of gastrointestinal illness were elevated during the flood and whether contact with floodwater was associated with increased risk of gastrointestinal illness. An increase in the incidence of gastrointestinal symptoms during the flood was observed (incidence rate ratio = 1.29, 95% confidence interval: 1.06, 1.58), and this effect was pronounced among persons with potential sensitivity to infectious gastrointestinal illness. Tap water consumption was not related to gastrointestinal symptoms before, during, or after the flood. An association between gastrointestinal symptoms and contact with floodwater was also observed, and this effect was pronounced in children. This appears to be the first report of an increase in endemic gastrointestinal symptoms in a longitudinal cohort prospectively observed during a flood. These findings suggest that severe climatic events can result in an increase in the endemic incidence of gastrointestinal symptoms in the United States.
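A minimal sketch of how an incidence rate ratio of this kind might be estimated, using simulated data and Poisson regression with a follow-up-time offset; the variable names and parameter values are illustrative assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1110

# One row per person per period: flood indicator, follow-up time, GI episode count.
df = pd.DataFrame({
    "flood_period": np.repeat([0, 1], n),
    "person_days": rng.integers(20, 40, size=2 * n),
})
rate = np.where(df["flood_period"] == 1, 0.010, 0.008)  # assumed episodes per person-day
df["gi_episodes"] = rng.poisson(rate * df["person_days"])

# Poisson regression with a log person-time offset yields the incidence rate ratio.
fit = smf.glm(
    "gi_episodes ~ flood_period",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_days"]),
).fit()

irr = np.exp(fit.params["flood_period"])
ci_low, ci_high = np.exp(fit.conf_int().loc["flood_period"])
print(f"IRR = {irr:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```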
Inferences Drawn from a Risk Assessment Compared Directly with a Randomized Trial of a Home Drinking Water Intervention
Risk assessments and intervention trials have been used by the U.S. Environmental Protection Agency to estimate drinking water health risks. Seldom are both methods used concurrently. Between 2001 and 2003, illness data from a trial were collected simultaneously with exposure data, providing a unique opportunity to compare direct risk estimates of waterborne disease from the intervention trial with indirect estimates from a risk assessment. Comparing the group with water treatment (active) with that without water treatment (sham), the estimated annual attributable disease rate (cases per 10,000 persons per year) from the trial provided no evidence of a significantly elevated drinking water risk [attributable risk = −365 cases/year, sham minus active; 95% confidence interval (CI), −2,555 to 1,825]. The predicted mean rate of disease per 10,000 person-years from the risk assessment was 13.9 (2.5, 97.5 percentiles: 1.6, 37.7) assuming 4 log removal due to viral disinfection and 5.5 (2.5, 97.5 percentiles: 1.4, 19.2) assuming 6 log removal. Risk assessments are important under conditions of low risk, when estimates are difficult to attain from trials. In particular, this assessment pointed toward the importance of attaining site-specific treatment data and the clear need for a better understanding of viral removal by disinfection. Trials provide direct risk estimates, and the upper confidence limit estimates, even if not statistically significant, are informative about plausible upper bounds on the risk. These differences suggest that conclusions about waterborne disease risk may be strengthened by the joint use of these two approaches.
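For intuition on the risk-assessment side of this comparison, below is a minimal Monte Carlo sketch of a screening-level drinking water risk assessment with a log-removal treatment credit. All distributions, the exponential dose-response parameter, and the morbidity fraction are illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 100_000

# Illustrative inputs: virus concentration in source water, treatment credit,
# and daily consumption of unboiled tap water.
source_conc = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n_sim)    # viruses per liter
consumption_l = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n_sim)  # liters per day
log_removal = 4.0  # try 4 vs. 6 to see the effect of the disinfection assumption

daily_dose = source_conc * 10.0 ** (-log_removal) * consumption_l
r = 0.5                                    # illustrative exponential dose-response parameter
p_inf_day = 1.0 - np.exp(-r * daily_dose)  # probability of infection per day
p_inf_year = 1.0 - (1.0 - p_inf_day) ** 365
p_ill_year = 0.5 * p_inf_year              # assumed probability of illness given infection

cases_per_10000 = 10_000 * p_ill_year
print(f"mean = {cases_per_10000.mean():.1f} cases per 10,000 person-years; "
      f"2.5th-97.5th percentiles = {np.percentile(cases_per_10000, 2.5):.1f}-"
      f"{np.percentile(cases_per_10000, 97.5):.1f}")
```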
Timing of primary tooth emergence among U.S. racial and ethnic groups
Objectives: To compare timing of tooth emergence among groups of American Indian (AI), Black and White children in the United States at 12 months of age. Methods: Data were from two sources – a longitudinal study of a Northern Plains tribal community and a study with sites in Indiana, Iowa and North Carolina. For the Northern Plains study, all children (n = 223) were American Indian, while for the multisite study, children (n = 320) were from diverse racial groups. Analyses were limited to data from examinations conducted within 30 days of the child’s first birthday. Results: AI children had significantly more teeth present (Mean: 7.8, Median: 8.0) than did Whites (Mean: 4.4, Median: 4.0; P < 0.001) or Blacks (Mean: 4.5, Median: 4.0; P < 0.001). No significant differences were detected between Black and White children (P = 0.58). There was no significant sex difference overall or within any of the racial groups. Conclusions: Tooth emergence occurs at a younger age for AI children than it does for contemporary White or Black children in the United States.
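A minimal sketch of this kind of group comparison on simulated tooth counts; the rank-sum test shown is one reasonable choice and is not necessarily the test used in the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
# Simulated counts of erupted teeth at roughly 12 months of age.
teeth_ai = rng.poisson(7.8, size=223)     # American Indian cohort (n = 223)
teeth_white = rng.poisson(4.4, size=200)  # White children (illustrative group size)

stat, p = mannwhitneyu(teeth_ai, teeth_white, alternative="two-sided")
print(f"median AI = {np.median(teeth_ai):.1f}, "
      f"median White = {np.median(teeth_white):.1f}, p = {p:.3g}")
```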
Recent Diarrhea is Associated with Elevated Salivary IgG Responses to Cryptosporidium in Residents of an Eastern Massachusetts Community
BACKGROUND: Serological data suggest that Cryptosporidium infections are common but underreported. The invasiveness of blood sampling limits the application of serology in epidemiological surveillance. We pilot-tested a non-invasive salivary anti-Cryptosporidium antibody assay in a community survey involving children and adults.
MATERIALS AND METHODS: Families with children were recruited in a Massachusetts community in July; symptoms data were collected at 3 monthly follow-up mail surveys. One saliva sample per person (n = 349) was collected via mail, with the last survey in October. Samples were analyzed for IgG and IgA responses to a recombinant C. hominis gp15 sporozoite protein using a time-resolved fluorometric immunoassay. Log-transformed assay results were regressed on age using penalized B-splines to account for the strong age-dependence of antibody reactions. Positive responses were defined as fluorescence values above the upper 99% prediction limit.
RESULTS: Forty-seven (13.5%) individuals had diarrhea without concurrent respiratory symptoms during the 3-month-long follow-up; eight of them had these symptoms during the month prior to saliva sampling. Two individuals had positive IgG responses: an adult who had diarrhea during the prior month and a child who had episodes of diarrhea during each survey month (Fisher's exact test for an association between diarrhea and IgG response: p = 0.0005 for symptoms during the prior month and p = 0.02 for symptoms during the entire follow-up period). The child also had a positive IgA response, along with two asymptomatic individuals (an association between diarrhea and IgA was not significant).
CONCLUSION: These results suggest that salivary IgG responses specific to Cryptosporidium antigens warrant further evaluation as a potential indicator of recent infections.
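A minimal sketch of the age-adjusted cutoff described in the Methods: regress the log antibody response on age with a spline basis and flag observations above the upper 99% prediction limit. It uses unpenalized B-splines in an ordinary least squares fit as a simplified stand-in for the penalized B-splines used in the study, and the data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 349
age = rng.uniform(1, 70, size=n)
log_response = 0.02 * age + rng.normal(scale=0.3, size=n)  # simulated log fluorescence

df = pd.DataFrame({"age": age, "log_response": log_response})

# B-spline regression of the log response on age (patsy's bs() basis).
fit = smf.ols("log_response ~ bs(age, df=5)", data=df).fit()

# A two-sided 98% prediction interval makes the upper bound a one-sided
# 99% prediction limit; values above it are flagged as positive responses.
pred = fit.get_prediction(df).summary_frame(alpha=0.02)
df["positive"] = df["log_response"] > pred["obs_ci_upper"]
print(f"{int(df['positive'].sum())} of {n} samples flagged as positive")
```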