
    Prescription Drug Abuse

    Background. In the United States, the leading cause of injury death is prescription drug overdose. The most commonly abused prescription medications are (a) pain relievers (opioids), (b) CNS depressants (tranquilizers, sedatives, hypnotics), and (c) stimulants. Opioids are a class of drugs that includes both heroin and prescription pain relievers. CNS depressants are used to manage anxiety, and stimulants are used to treat ADHD. A consequence of abuse is drug overdose death, with opioids being the leading cause. Opioids are safe for short-term use but have a strong potential to be abused, resulting in addiction. In order to understand this crisis, it is critical to examine (a) demographics, (b) reasons for abuse, and (c) the provider of drugs for targeted prevention. Methodology. Information was gathered using the search engines (a) Journal of the American Medical Association, (b) EBSCOhost, (c) Google Scholar, and (d) the LIU library database. Search terms included (a) prescription drug abuse, (b) prescription drug overdose, (c) United States, and (d) demographics. All publications were from 2010 to 2018 and in English. Results. Mental health disorders put an individual at greater risk for abuse, especially when an opioid is prescribed in conjunction with (a) an antidepressant, (b) an antipsychotic, or (c) a benzodiazepine. In examining specific demographics, (a) non-Hispanic males are more likely to abuse prescription stimulants and tranquilizers, (b) Hispanic males are more likely to abuse prescription painkillers, and (c) non-Hispanic females are more at risk of abusing prescription sedatives. Young adults aged 18 to 25 years were found to be the largest population that abuses (a) opioid pain relievers, (b) ADHD stimulants, and (c) anti-anxiety drugs. From 2004 to 2011, emergency department visits related to prescription drug abuse rose 114%. Prescription drug abuse (28%) outpaces illicit drug use (25%) in emergency department visits. Among prescription medicines, pain relievers have proven the most problematic, with 75.2% of all pharmaceutical overdose deaths being from opioids. Prescription pain relievers are frequently abused to (a) alleviate pain inappropriately (62.3%), (b) feel good or get high (12.9%), (c) relax or relieve tension (10.8%), (d) serve as a coping mechanism (3.9%), and (e) aid sleep (3.3%). Prescription pain relievers are typically (a) received from a friend or relative (53%), (b) obtained from a healthcare provider (37.5%), or (c) bought from a stranger (6%). Conclusions. Over 80% of Americans will see a healthcare provider within the year, which provides an opportunity to screen for prescription drug abuse at the bedside. Patients must be counseled on the use and storage of their prescriptions to prevent redistribution to unintended audiences. Health practitioners should consult the electronic prescription drug monitoring program before prescribing scheduled drugs and evaluate the patient's medication history to prevent dangerous drug interactions. If prescription pain relievers are indicated, they should be prescribed for short-term use at a low dose without refills. Prescribers should offer close follow-up and consider alternative methods for chronic pain relief such as acupuncture and physical therapy.

    How many dimensions are required to find an adversarial example?

    Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of model input. On the other hand, a range of recent works consider the case where either (i) an adversary can perturb a limited number of input parameters or (ii) a subset of modalities in a multimodal problem. In both of these cases, adversarial examples are effectively constrained to a subspace $V$ in the ambient input space $\mathcal{X}$. Motivated by this, in this work we investigate how adversarial vulnerability depends on $\dim(V)$. In particular, we show that the adversarial success of standard PGD attacks with $\ell^p$ norm constraints behaves like a monotonically increasing function of $\epsilon \left( \frac{\dim(V)}{\dim \mathcal{X}} \right)^{\frac{1}{q}}$, where $\epsilon$ is the perturbation budget and $\frac{1}{p} + \frac{1}{q} = 1$, provided $p > 1$ (the case $p = 1$ presents additional subtleties which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results lend further credence to arguments that adversarial examples are endemic to locally linear models on high-dimensional spaces.
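    To make the scaling quantity above concrete, the sketch below (an illustration, not code from the paper; the input dimension, budget, and subspace sizes are assumed for the example) computes $\epsilon (\dim(V)/\dim\mathcal{X})^{1/q}$ with $q$ taken as the Hölder conjugate of $p$.

```python
def adversarial_scale(eps: float, dim_v: int, dim_x: int, p: float) -> float:
    """Return eps * (dim_v / dim_x) ** (1/q), where 1/p + 1/q = 1 (requires p > 1)."""
    if p <= 1:
        raise ValueError("p must be > 1; the p = 1 case needs separate treatment")
    q = p / (p - 1)  # Hoelder conjugate exponent of p
    return eps * (dim_v / dim_x) ** (1.0 / q)

# Illustrative numbers (assumptions, not from the paper): an l2-constrained attack
# on a 3x224x224 image input, restricted to subspaces of increasing dimension.
DIM_X = 3 * 224 * 224
for dim_v in (10, 1_000, 100_000, DIM_X):
    print(dim_v, adversarial_scale(eps=1.0, dim_v=dim_v, dim_x=DIM_X, p=2))
```

    For $p = 2$ (so $q = 2$) the quantity grows like the square root of the fraction of dimensions the attacker controls, so enlarging the accessible subspace plays the same role in the abstract's claim as enlarging the budget $\epsilon$.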

    Early Holocene human presence in Madagascar evidenced by exploitation of avian megafauna

    Previous research suggests that people first arrived on Madagascar by ~2500 years before present (years B.P.). This hypothesis is consistent with butchery marks on extinct lemur bones from ~2400 years B.P. and perhaps with archaeological evidence of human presence from ~4000 years B.P. We report >10,500-year-old human-modified bones of the extinct elephant birds Aepyornis and Mullerornis, which show perimortem chop marks, cut marks, and depression fractures consistent with immobilization and dismemberment. Our evidence for anthropogenic perimortem modification of directly dated bones represents the earliest indication of humans in Madagascar, predating all other archaeological and genetic evidence by >6000 years and changing our understanding of the history of human colonization of Madagascar. This revision of Madagascar’s prehistory suggests prolonged human-faunal coexistence with limited biodiversity loss.

    Health care workers’ experiences of calling-for-help when taking care of critically ill patients in hospitals in Tanzania and Kenya

    Background: When caring for critically ill patients, health workers often need to ‘call-for-help’ to get assistance from colleagues in the hospital. Systems are required to facilitate calling-for-help and enable the timely provision of care for critically ill patients. Evidence around calling-for-help systems is mostly from high-income countries, and the state of calling-for-help in hospitals in Tanzania and Kenya has not been formally studied. This study aims to describe health workers’ experiences of calling-for-help when taking care of critically ill patients in hospitals in Tanzania and Kenya. Methods: Ten hospitals across Kenya and Tanzania were visited, and in-depth interviews were conducted with 30 health workers who had experience of caring for critically ill patients. The interviews were transcribed and translated, and the data were thematically analyzed. Results: The study identified three thematic areas concerning the systems for calling-for-help when taking care of critically ill patients: 1) Calling-for-help structures: there is a lack of functioning structures for calling-for-help; 2) Calling-for-help processes: the calling-for-help processes are innovative and improvised; and 3) Calling-for-help outcomes: the help that is provided is not as requested. Conclusion: Calling-for-help when taking care of a critically ill patient is a necessary, life-saving part of care, but health workers in Tanzanian and Kenyan hospitals experience a range of significant challenges. Hospitals lack functioning structures, processes for calling-for-help are improvised, and the help that is provided is not as requested. These challenges likely cause delays and decrease the quality of care, potentially resulting in unnecessary mortality and morbidity.

    Final Protocol and Statistical Analysis Plan for the SNAP Trial - a randomised, double-blind, placebo-controlled trial of nicotine replacement therapy in pregnancy

    This NIHR HTA-funded smoking, nicotine and pregnancy (SNAP) trial investigated whether or not nicotine replacement therapy (NRT) is effective, cost-effective and safe when used for smoking cessation by pregnant women. We randomised 1050 women who were between 12 and 24 weeks pregnant as they attended hospital for antenatal ultrasound scans. Women received either nicotine or placebo transdermal patches with behavioural support. The primary outcome measure was biochemically validated, self-reported, prolonged and total abstinence from smoking between a quit date (defined before randomisation and set within 2 weeks of this) and delivery. At 6 months after childbirth, self-reported maternal smoking status was ascertained, and 2 years after childbirth, self-reported maternal smoking status and the behaviour, cognitive development and respiratory symptoms of children born in the trial were compared between the groups. This repository contains the final approved version of the protocol plus the statistical analysis plan (SAP) for both the outcomes at delivery and those following the 2-year follow-up period after birth.

    Outcomes of the Botswana national HIV/AIDS treatment programme from 2002 to 2010: a longitudinal analysis

    Background Short-term mortality rates among patients with HIV receiving antiretroviral therapy (ART) in sub-Saharan Africa are higher than those recorded in high-income countries, but systematic long-term comparisons have not been made because of the scarcity of available data. We analysed the effect of the implementation of Botswana’s national ART programme, known as Masa, from 2002 to 2010. Methods The Masa programme started on Jan 21, 2002. Patients who were eligible for ART according to national guidelines had their data collected prospectively through a clinical information system developed by the Botswana Ministry of Health. A dataset of all available electronic records for adults (≥18 years) who had enrolled by April 30, 2010, was extracted and sent to the study team. All data were anonymised before analysis. The primary outcome was mortality. To assess the effect of loss to follow-up, we did a series of sensitivity analyses assuming varying proportions of the population lost to follow-up to be dead. Findings We analysed the records of 126,263 patients, of whom 102,713 had documented initiation of ART. Median follow-up time was 35 months (IQR 14–56), with a median of eight follow-up visits (4–14). 15,270 patients were deemed lost to follow-up by the end of the study. 63% (78,866) of the study population were women; median age at baseline was 34 years for women (IQR 29–41) and 38 years for men (33–45). 10,230 (8%) deaths were documented during the 9 years of the study. Mortality was highest during the first 3 months after treatment initiation at 12.8 deaths per 100 person-years (95% CI 12.4–13.2), but decreased to 1.16 deaths per 100 person-years (1.12–1.2) in the second year of treatment, and to 0.15 deaths per 100 person-years (0.09–0.25) over the next 7 years of follow-up. In each calendar year after the start of the Masa programme in 2002, average CD4 cell counts at enrolment increased (from 101 cells/μL [IQR 44–156] in 2002, to 191 cells/μL [115–239] in 2010). In each year, the proportion of the total enrolled population who died in that year decreased, from 63% (88 of 140) in 2002, to 0.8% (13 of 1599) in 2010. A sensitivity analysis assuming that 60% of the population lost to follow-up had died gave 3000 additional deaths, increasing overall mortality from 8% to 11–13%. Interpretation The Botswana national HIV/AIDS treatment programme reduced mortality among adults with HIV to levels much the same as in other low-income or middle-income countries.

    Risk algorithm using serial biomarker measurements doubles the number of screen-detected cancers compared with a single-threshold rule in the United Kingdom collaborative trial of ovarian cancer screening

    PURPOSE: Cancer screening strategies have commonly adopted single-biomarker thresholds to identify abnormality. We investigated the impact of serial biomarker change interpreted through a risk algorithm on cancer detection rates. PATIENTS AND METHODS: In the United Kingdom Collaborative Trial of Ovarian Cancer Screening, 46,237 women age 50 years or older underwent incidence screening using the multimodal strategy (MMS), in which annual serum cancer antigen 125 (CA-125) was interpreted with the risk of ovarian cancer algorithm (ROCA). Women were triaged by the ROCA: normal risk, returned to annual screening; intermediate risk, repeat CA-125; and elevated risk, repeat CA-125 and transvaginal ultrasound. Women with persistently increased risk were clinically evaluated. All participants were followed through national cancer and/or death registries. Performance characteristics of a single-threshold rule and the ROCA were compared by using receiver operating characteristic curves. RESULTS: After 296,911 women-years of annual incidence screening, 640 women underwent surgery. Of those, 133 had primary invasive epithelial ovarian or tubal cancers (iEOCs). In all, 22 interval iEOCs occurred within 1 year of screening, of which one was detected by ROCA but was managed conservatively after clinical assessment. The sensitivity and specificity of MMS for detection of iEOCs were 85.8% (95% CI, 79.3% to 90.9%) and 99.8% (95% CI, 99.8% to 99.8%), respectively, with 4.8 surgeries per iEOC. ROCA alone detected 87.1% (135 of 155) of the iEOCs. Fixed CA-125 cutoffs at the last annual screen of more than 35, more than 30, and more than 22 U/mL would have identified 41.3% (64 of 155), 48.4% (75 of 155), and 66.5% (103 of 155) of the iEOCs, respectively. The area under the curve for the ROCA (0.915) was significantly (P = .0027) higher than that for a single-threshold rule (0.869). CONCLUSION: Screening using the ROCA doubled the number of screen-detected iEOCs compared with a fixed cutoff. In the context of cancer screening, reliance on predefined single-threshold rules may result in biomarkers of value being discarded.
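    The headline detection figures in this abstract follow directly from the counts it reports; the short sketch below (an illustration using only numbers quoted above, not trial code) reproduces the sensitivity, the surgeries-per-cancer ratio, and the detection rates of the fixed CA-125 cutoffs.

```python
# All counts are quoted from the abstract above.
surgeries = 640                    # women who underwent surgery after screening
screen_detected = 133              # invasive epithelial ovarian/tubal cancers (iEOCs) found at surgery
interval_cancers = 22              # iEOCs diagnosed within 1 year of a screen
total_ieocs = screen_detected + interval_cancers   # 155

sensitivity = screen_detected / total_ieocs         # 0.858 -> 85.8%
surgeries_per_ieoc = surgeries / screen_detected    # ~4.8 operations per cancer detected

print(f"MMS sensitivity: {sensitivity:.1%}")
print(f"Surgeries per iEOC: {surgeries_per_ieoc:.1f}")

# Detection rates had a fixed CA-125 cutoff been applied at the last annual screen.
for cutoff, detected in [(35, 64), (30, 75), (22, 103)]:
    print(f">{cutoff} U/mL cutoff: {detected}/{total_ieocs} = {detected / total_ieocs:.1%}")
```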

    New Vectors for Urea-Inducible Recombinant Protein Production

    We have developed a novel urea-inducible recombinant protein production system by exploiting the Proteus mirabilis urease ureR-ureD promoter region and the ureR AraC-family transcriptional regulator. Experiments using the expression of β-galactosidase and green fluorescent protein (GFP) showed that promoter activity is tightly regulated and that varying the concentration of urea can give up to 100-fold induction. Production of proteins of biopharmaceutical interest has been demonstrated, including human growth hormone (hGH), a single chain antibody fragment (scFv) against interleukin-1β and a potential Neisserial vaccine candidate (BamAENm). Expression levels can be fine-tuned by temperature and different urea concentrations, and can be induced with readily available garden fertilisers and even urine. As urea is an inexpensive, stable inducer, a urea-induced expression system has the potential to considerably reduce the costs of large-scale recombinant protein production