
    Nonsteroidal Anti-Inflammatory Drugs: A survey of practices and concerns of pediatric medical and surgical specialists and a summary of available safety data

    Objectives: To examine the prescribing habits of NSAIDs among pediatric medical and surgical practitioners, and to examine concerns and barriers to their use. Methods: A sample of 1,289 pediatricians, pediatric rheumatologists, sports medicine physicians, pediatric surgeons, and pediatric orthopedic surgeons in the United States and Canada were sent an email link to a 22-question web-based survey. Results: 338 surveys (28%) were completed; 84 were undeliverable. Of all respondents, 164 (50%) had never prescribed a selective cyclooxygenase-2 (COX-2) NSAID. The most common reasons for ever prescribing an NSAID were musculoskeletal pain, soft-tissue injury, fever, arthritis, fracture, and headache. Compared with traditional NSAIDs, selective COX-2 NSAIDs were believed to be as safe (42%) or safer (24%); to have equal (52%) or greater (20%) efficacy for pain; to have equal (59%) or greater (15%) efficacy for inflammation; and to have equal (39%) or improved (44%) tolerability. Pediatric rheumatologists reported significantly more frequent abdominal pain (81% vs. 23%), epistaxis (13% vs. 2%), easy bruising (64% vs. 8%), headaches (21% vs. 1%), and fatigue (12% vs. 1%) with traditional NSAIDs than with selective COX-2 NSAIDs. Prescribing habits have changed since the voluntary withdrawal of rofecoxib and valdecoxib: 3% of pediatric rheumatologists reported writing fewer traditional NSAID prescriptions, 57% reported giving fewer selective COX-2 NSAIDs, and 26% reported that they no longer prescribed these medications. Conclusions: Traditional and selective COX-2 NSAIDs were perceived as safe by pediatric specialists. The survey data were compared to the published pediatric safety literature.

    Changes in Beliefs Identify Unblinding in Randomized Controlled Trials: A Method to Meet CONSORT Guidelines

    Double-blinded trials are often considered the gold standard for research, but significant bias may result from unblinding of participants and investigators. Although the CONSORT guidelines discuss the importance of reporting evidence that blinding was successful, it is unclear what constitutes appropriate evidence. Among studies reporting methods to evaluate blinding effectiveness, many have compared groups with respect to the proportions correctly identifying their intervention at the end of the trial. Instead, we reasoned that participants' beliefs, and not their correctness, are more directly associated with potential bias, especially in relation to self-reported health outcomes. During the Water Evaluation Trial performed in northern California in 1999, we investigated blinding effectiveness by sequential interrogation of participants about their blinded intervention assignment (active or placebo). Irrespective of group, participants showed a strong tendency to believe they had been assigned to the active intervention; this translated into a statistically significant intergroup difference in the correctness of participants' beliefs, even at the start of the trial before unblinding had a chance to occur. In addition, many participants (31%) changed their belief during the trial, suggesting that assessment of belief at a single time does not capture unblinding. Sequential measures based on either two or all eight questionnaires identified significant group-related differences in belief patterns that were not identified by the single, cross-sectional measure. In view of the relative insensitivity of cross-sectional measures, the minimal additional information in more than two assessments of beliefs, and the risk of modifying participants' beliefs by repeated questioning, we conclude that the optimal means of assessing unblinding is an intergroup comparison of the change in beliefs (and not their correctness) between the start and end of a randomized controlled trial.
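    The comparison the authors propose reduces to a contingency-table test of belief-change patterns by arm. A minimal sketch of that idea in Python, using made-up participant data; the column names and the chi-square test are assumptions for illustration, not the trial's actual variables or analysis:

```python
# Compare the *change* in participants' beliefs (not their correctness)
# between arms, per the unblinding check described above.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical participants: trial arm plus stated belief at start and end.
df = pd.DataFrame({
    "arm":          ["active"] * 4 + ["placebo"] * 4,
    "belief_start": ["active", "active", "placebo", "active",
                     "active", "placebo", "active", "placebo"],
    "belief_end":   ["active", "placebo", "placebo", "active",
                     "placebo", "placebo", "active", "active"],
})

# Classify each participant's belief trajectory over the trial.
df["pattern"] = df["belief_start"] + "->" + df["belief_end"]

# Intergroup comparison of belief-change patterns: a significant
# difference suggests unblinding occurred during the trial.
table = pd.crosstab(df["arm"], df["pattern"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```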

    Inferences Drawn from a Risk Assessment Compared Directly with a Randomized Trial of a Home Drinking Water Intervention

    Risk assessments and intervention trials have been used by the U.S. Environmental Protection Agency to estimate drinking water health risks. Seldom are both methods used concurrently. Between 2001 and 2003, illness data from a trial were collected simultaneously with exposure data, providing a unique opportunity to compare direct risk estimates of waterborne disease from the intervention trial with indirect estimates from a risk assessment. Comparing the group with water treatment (active) with that without water treatment (sham), the estimated annual attributable disease rate (cases per 10,000 persons per year) from the trial provided no evidence of a significantly elevated drinking water risk [attributable risk = −365 cases/year, sham minus active; 95% confidence interval (CI), −2,555 to 1,825]. The predicted mean rate of disease from the risk assessment was 13.9 cases per 10,000 person-years (2.5, 97.5 percentiles: 1.6, 37.7) assuming 4 log removal of viruses by disinfection, and 5.5 (2.5, 97.5 percentiles: 1.4, 19.2) assuming 6 log removal. Risk assessments are important under conditions of low risk, when estimates are difficult to attain from trials. In particular, this assessment pointed toward the importance of attaining site-specific treatment data and the clear need for a better understanding of viral removal by disinfection. Trials provide direct risk estimates, and the upper confidence limit estimates, even if not statistically significant, are informative about possible upper estimates of likely risk. These differences suggest that conclusions about waterborne disease risk may be strengthened by the joint use of these two approaches.
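    The indirect arm of this comparison is a standard quantitative microbial risk assessment calculation: attenuate the source-water concentration by the assumed log removal, convert the resulting dose to a daily infection probability with a dose-response model, and annualize. A minimal sketch with entirely hypothetical parameter values; the exponential dose-response model is a common QMRA choice, not necessarily the one this study used:

```python
# Annualized infection risk from daily tap-water ingestion under an
# assumed treatment log removal. All inputs are illustrative.
import math

source_conc = 0.1      # organisms per liter in source water (assumed)
log_removal = 4.0      # viral removal credit (the study tested 4 vs. 6)
ingestion_l = 1.0      # liters of unboiled tap water per person-day (assumed)
r = 0.01               # exponential dose-response parameter (assumed)

dose = source_conc * 10 ** (-log_removal) * ingestion_l  # organisms ingested/day
p_day = 1 - math.exp(-r * dose)                          # daily infection risk
p_year = 1 - (1 - p_day) ** 365                          # annualized risk
print(f"predicted cases per 10,000 person-years: {p_year * 10_000:.2f}")
```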

    Timing of primary tooth emergence among U.S. racial and ethnic groups

    Objectives: To compare the timing of tooth emergence among groups of American Indian (AI), Black, and White children in the United States at 12 months of age. Methods: Data were from two sources: a longitudinal study of a Northern Plains tribal community and a study with sites in Indiana, Iowa, and North Carolina. For the Northern Plains study, all children (n = 223) were American Indian, while for the multisite study, children (n = 320) were from diverse racial groups. Analyses were limited to data from examinations conducted within 30 days of the child's first birthday. Results: AI children had significantly more teeth present (mean: 7.8, median: 8.0) than did White (4.4, 4.0; P < 0.001) or Black children (4.5, 4.0; P < 0.001). No significant differences were detected between Black and White children (P = 0.58). There was no significant sex difference overall or within any of the racial groups. Conclusions: Tooth emergence occurs at a younger age for AI children than it does for contemporary White or Black children in the United States.
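    The abstract reports group comparisons of tooth counts but does not name the test used; a rank-based comparison such as the Wilcoxon rank-sum (Mann-Whitney) test is one plausible stand-in for count data like this. A short sketch with made-up counts:

```python
# Compare tooth counts at ~12 months between two groups of children.
# The counts below are hypothetical, chosen only to mirror the reported
# difference in central tendency (medians of ~8 vs. ~4).
from scipy.stats import mannwhitneyu

ai_teeth    = [8, 7, 9, 8, 6, 10, 8, 7]   # hypothetical AI children
white_teeth = [4, 5, 3, 4, 6, 4, 5, 4]    # hypothetical White children

stat, p = mannwhitneyu(ai_teeth, white_teeth, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```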

    Recent Diarrhea is Associated with Elevated Salivary IgG Responses to Cryptosporidium in Residents of an Eastern Massachusetts Community

    BACKGROUND: Serological data suggest that Cryptosporidium infections are common but underreported. The invasiveness of blood sampling limits the application of serology in epidemiological surveillance. We pilot-tested a non-invasive salivary anti-Cryptosporidium antibody assay in a community survey involving children and adults. MATERIALS AND METHODS: Families with children were recruited in a Massachusetts community in July; symptoms data were collected at 3 monthly follow-up mail surveys. One saliva sample per person (n = 349) was collected via mail, with the last survey in October. Samples were analyzed for IgG and IgA responses to a recombinant C. hominis gp15 sporozoite protein using a time-resolved fluorometric immunoassay. Log-transformed assay results were regressed on age using penalized B-splines to account for the strong age-dependence of antibody reactions. Positive responses were defined as fluorescence values above the upper 99% prediction limit. RESULTS: Forty-seven (13.5%) individuals had diarrhea without concurrent respiratory symptoms during the 3-month-long follow-up; eight of them had these symptoms during the month prior to saliva sampling. Two individuals had positive IgG responses: an adult who had diarrhea during the prior month and a child who had episodes of diarrhea during each survey month (Fisher's exact test for an association between diarrhea and IgG response: p = 0.0005 for symptoms during the prior month and p = 0.02 for symptoms during the entire follow-up period). The child also had a positive IgA response, along with two asymptomatic individuals (an association between diarrhea and IgA was not significant). CONCLUSION: These results suggest that the salivary IgG specific to Cryptosporidium antigens warrants further evaluation as a potential indicator of recent infections.
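    The positivity rule described here (regress log-transformed assay readings on age, flag values above the upper 99% prediction limit) can be sketched as follows. This simplified version uses a smoothing spline as a stand-in for the paper's penalized B-splines and a residual-based normal prediction limit; the data and parameters are synthetic:

```python
# Age-adjusted seropositivity calls from log-transformed assay values.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

rng = np.random.default_rng(0)
age = np.sort(rng.uniform(1, 70, 200))                 # ages in years (synthetic)
log_igg = 2.0 + 0.02 * age + rng.normal(0, 0.3, 200)   # synthetic log responses

spline = UnivariateSpline(age, log_igg, s=len(age))    # smoothing-spline fit on age
resid = log_igg - spline(age)
upper_99 = spline(age) + norm.ppf(0.99) * resid.std()  # upper 99% prediction limit

positive = log_igg > upper_99                          # seropositivity calls
print(f"{positive.sum()} of {len(age)} samples flagged positive")
```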

    Corticosteroid-induced spinal epidural lipomatosis in the pediatric age group: report of a new case and updated analysis of the literature

    Spinal epidural lipomatosis is a rare complication of chronic corticosteroid treatment. We report a new pediatric case and an analysis of this and 19 pediatric cases identified in the international literature. The youngest of the combined 20 patients was 5 years old when lipomatosis was diagnosed. Lipomatosis manifested after a mean of 1.3 ± 1.5 (SD) years (median, 0.8 years; range, 3 weeks to 6.5 years) of corticosteroid treatment. The corticosteroid dose at the time of presentation of the lipomatosis ranged widely, between 5 and 80 mg of prednisone per day. Back pain was the most common presenting symptom. Imaging revealed that lipomatosis almost always involved the thoracic spine, extending into the lumbosacral region in a subset of patients. Predominantly lumbosacral involvement was documented in only two cases. Although a neurological deficit at presentation was documented in about half of the cases, surgical decompression was not performed in the cases reported after 1996. Instead, reducing the corticosteroid dose (sometimes combined with dietary restriction to mobilize fat) sufficed to induce remission. In summary, pediatric spinal epidural lipomatosis remains a potentially serious untoward effect of corticosteroid treatment which, if recognized in a timely manner, can have a good outcome with conservative treatment.

    A Randomized, Controlled Trial of In-Home Drinking Water Intervention to Reduce Gastrointestinal Illness

    Trials have provided conflicting estimates of the risk of gastrointestinal illness attributable to tap water. To estimate this risk in an Iowa community with a well-run water utility with microbiologically challenged source water, the authors of this 2000-2002 study randomly assigned blinded volunteers to use externally identical devices (active device: 227 households with 646 persons; sham device: 229 households with 650 persons) for 6 months (cycle A). Each group then switched to the opposite device for 6 months (cycle B). The active device contained a 1-μm absolute ceramic filter and used ultraviolet light. Episodes of highly credible gastrointestinal illness, a published measure of diarrhea, nausea, vomiting, and abdominal cramps, were recorded. Water usage was recorded with personal diaries and an electronic totalizer. The numbers of episodes in cycle A among the active and sham device groups were 707 and 672, respectively; in cycle B, the numbers of episodes were 516 and 476, respectively. In a log-linear generalized estimating equations model using intention-to-treat analysis, the relative rate of highly credible gastrointestinal illness (sham vs. active) for the entire trial was 0.98 (95% confidence interval: 0.86, 1.10). No reduction in gastrointestinal illness was detected after in-home use of a device designed to be highly effective in removing microorganisms from water.
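    The analysis described is a log-linear generalized estimating equations (GEE) model of illness episodes, clustered by household, with the relative rate obtained by exponentiating the group coefficient. A minimal sketch with synthetic data; the variable names, crossover layout, and person-time offset are assumptions for illustration, not the trial's dataset:

```python
# Log-linear GEE estimate of the relative rate of illness (sham vs. active),
# with households as clusters, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "household": np.repeat(np.arange(200), 2),  # two 6-month cycles per household
    "sham": np.tile([0, 1], 200),               # device switched between cycles
    "episodes": rng.poisson(1.1, 400),          # HCGI episodes (synthetic)
    "persondays": np.full(400, 180.0),          # ~6 months of follow-up per cycle
})

model = sm.GEE.from_formula(
    "episodes ~ sham",
    groups="household",
    data=df,
    family=sm.families.Poisson(),               # log-linear count model
    cov_struct=sm.cov_struct.Exchangeable(),    # within-household correlation
    offset=np.log(df["persondays"]),            # person-time at risk
)
result = model.fit()
print(f"relative rate (sham vs. active): {np.exp(result.params['sham']):.2f}")
```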

    CLUE: a randomized comparative effectiveness trial of IV nicardipine versus labetalol use in the emergency department

    Introduction: Our purpose was to compare the safety and efficacy of Food and Drug Administration (FDA)-recommended dosing of IV nicardipine versus IV labetalol for the management of acute hypertension. Methods: This was a multicenter randomized clinical trial. Eligible patients had two systolic blood pressure (SBP) measures ≥180 mmHg and no contraindications to nicardipine or labetalol. Before randomization, the physician specified a target SBP ± 20 mmHg (the target range, TR). The primary endpoint was the percentage of subjects meeting TR during the initial 30 minutes of treatment. Results: Of 226 randomized patients, 110 received nicardipine and 116 labetalol. End organ damage preceded treatment in 143 (63.3%): 71 nicardipine and 72 labetalol patients. Median initial SBP was 212.5 mmHg (IQR 197, 230) for nicardipine and 212 mmHg (IQR 200, 225) for labetalol patients (P = 0.68). Within 30 minutes, nicardipine patients more often reached TR than labetalol patients (91.7% vs. 82.5%, P = 0.039). Of the six BP measures (taken every 5 minutes) during the study period, nicardipine patients more often had five or six readings within TR than labetalol patients (47.3% vs. 32.8%, P = 0.026). Need for rescue medication did not differ between nicardipine and labetalol (15.5% vs. 22.4%, P = 0.183). Labetalol patients had slower heart rates at all time points (P < 0.01). Multivariable modeling showed that nicardipine patients were more likely to be in TR than labetalol patients at 30 minutes (OR 2.73, P = 0.028; C statistic for the model = 0.72). Conclusions: Patients treated with nicardipine are more likely to reach the physician-specified SBP target range within 30 minutes than those treated with labetalol. Trial registration: ClinicalTrials.gov NCT0076564.
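    One plausible reading of the primary endpoint logic: the physician sets a target SBP, TR is that value ± 20 mmHg, and a patient meets TR if at least one of the six 5-minute SBP readings in the first 30 minutes falls within it. A small illustrative sketch; the function name and values are hypothetical, not the trial's protocol code:

```python
# Check whether a patient met the physician-specified target range (TR).
def meets_target_range(sbp_readings, target_sbp, margin=20):
    """Return True if any reading falls within target_sbp +/- margin mmHg."""
    low, high = target_sbp - margin, target_sbp + margin
    return any(low <= sbp <= high for sbp in sbp_readings)

# Example: initial SBP ~212 mmHg, physician target 160 mmHg (TR = 140-180).
readings = [205, 196, 188, 179, 172, 168]   # mmHg, every 5 minutes (hypothetical)
print(meets_target_range(readings, target_sbp=160))   # True: 179, 172, 168 in TR
```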