Dopamine, time perception, and future time perspective.
Rationale: Impairment in time perception, a critical component of decision-making, represents a risk factor for psychiatric conditions including substance abuse. A therapeutic that ameliorates this impairment could be advantageous in the treatment of impulsivity and decision-making disorders.
Objectives: Here we hypothesize that the catechol-O-methyltransferase (COMT) inhibitor tolcapone, which increases dopamine tone in frontal cortex (Ceravolo et al., Synapse 43:201-207, 2002), improves time perception, with predictive behavioral, genetic, and neurobiological components.
Methods: Subjects (n = 66) completed a duration estimation task and other behavioral testing in each of two sessions after receiving a single oral dose of tolcapone (200 mg) or placebo in randomized, double-blind, counterbalanced, crossover fashion. Resting-state fMRI data were obtained in a subset of subjects (n = 40). Subjects were also genotyped for the COMT (rs4680) polymorphism.
Results: Time perception was significantly improved across four proximal time points ranging from 5 to 60 s (T(524) = 2.04, p = 0.042). The degree of this improvement positively correlated with subjective measures of stress, depression, and alcohol consumption and was most robust in carriers of the COMT Val158 allele. Using seed regions defined by a previous meta-analysis (Wiener et al., Neuroimage 49:1728-1740, 2010), we found not only that a connection from right inferior frontal gyrus (RIFG) to right putamen decreases in strength on tolcapone versus placebo (p < 0.05, corrected), but also that the strength of this decrease correlates inversely with the increase in duration estimation on tolcapone versus placebo (r = -0.37, p = 0.02).
Conclusions: Compressed time perception can be ameliorated by administration of tolcapone. Additional studies should be conducted to determine whether COMT inhibitors may be effective in treating decision-making disorders and addictive behaviors.
The Impact of Water, Sanitation and Hygiene Interventions to Control Cholera: A Systematic Review.
BACKGROUND AND METHODS: Cholera remains a significant threat to global public health, with an estimated 100,000 deaths per year. Water, sanitation and hygiene (WASH) interventions are frequently employed to control outbreaks, though evidence regarding their effectiveness is often missing. This paper presents a systematic literature review investigating the function, use and impact of WASH interventions implemented to control cholera. RESULTS: The review yielded eighteen studies. Of the five studies reporting on health impact, four reported outcomes associated with water treatment at the point of use, and one with the provision of improved water and sanitation infrastructure. Furthermore, whilst the reporting of function and use of interventions has become more common in recent publications, the quality of studies remains low. The majority of papers (>60%) described water quality interventions, with those at the water source focussing on ineffective chlorination of wells, and the remaining being applied at the point of use. Interventions such as filtration, solar disinfection and distribution of chlorine products were implemented, but their limitations regarding the need for adherence and correct use were not fully considered. Hand washing and hygiene interventions address several transmission routes, but only 22% of the studies attempted to evaluate them, and these mainly focussed on improving knowledge and uptake of messages but not necessarily on translating this into safer practices. The use and maintenance of safe water storage containers was only evaluated once, under-estimating the considerable potential for contamination between collection and use. This problem was confirmed in another study evaluating methods of container disinfection. One study investigated uptake of household disinfection kits, which were accepted by the target population. A single study in an endemic setting compared a combination of interventions to improve water and sanitation infrastructure, and the resulting reductions in cholera incidence. DISCUSSION AND RECOMMENDATIONS: This review highlights a focus on particular routes of transmission, and the limited number of interventions tested during outbreaks. There is a distinct gap in knowledge of which interventions are most appropriate for a given context, and as such a clear need for more robust impact studies evaluating a wider array of WASH interventions, in order to ensure effective cholera control and the best use of limited resources.
Speaking and cognitive distractions during EEG-based brain control of a virtual neuroprosthesis-arm
BACKGROUND: Brain-computer interface (BCI) systems have been developed to provide paralyzed individuals the ability to command the movements of an assistive device using only their brain activity. BCI systems are typically tested in a controlled laboratory environment where the user is focused solely on the brain-control task. However, for practical use in everyday life, people must be able to use their brain-controlled device while mentally engaged with the cognitive responsibilities of daily activities and while compensating for any inherent dynamics of the device itself. BCIs that use electroencephalography (EEG) for movement control are often assumed to require significant mental effort, thus preventing users from thinking about anything else while using their BCI. This study tested the impact of cognitive load as well as speaking on the ability to use an EEG-based BCI. FINDINGS: Six participants controlled the two-dimensional (2D) movements of a simulated neuroprosthesis-arm under three different levels of cognitive distraction. The two higher cognitive load conditions also required simultaneous speaking during BCI use. On average, movement performance declined during higher levels of cognitive distraction, but only by a limited amount. Movement completion time increased by 7.2%, the percentage of targets successfully acquired declined by 11%, and path efficiency declined by 8.6%. Only the declines in percentage of targets acquired and path efficiency were statistically significant (p < 0.05). CONCLUSION: People who have relatively good movement control of an EEG-based BCI may be able to speak and perform other cognitively engaging activities with only a minor drop in BCI-control performance.
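For readers unfamiliar with the path-efficiency metric reported above, a minimal sketch follows, assuming the common definition (straight-line distance from start to end divided by the length of the path actually traveled); the trajectory data are hypothetical, and the abstract does not specify the authors' exact formula.

```python
import numpy as np

def path_efficiency(trajectory: np.ndarray) -> float:
    """Ratio of straight-line distance to actual path length.

    trajectory: (n_samples, 2) array of 2D cursor positions.
    Returns 1.0 for a perfectly straight movement, less otherwise.
    """
    straight = np.linalg.norm(trajectory[-1] - trajectory[0])
    steps = np.diff(trajectory, axis=0)
    traveled = np.linalg.norm(steps, axis=1).sum()
    return float(straight / traveled) if traveled > 0 else 0.0

# Hypothetical trajectory: a slightly curved reach toward a target at (1, 1).
t = np.linspace(0.0, 1.0, 50)
trajectory = np.column_stack([t, t + 0.1 * np.sin(np.pi * t)])
print(f"path efficiency: {path_efficiency(trajectory):.3f}")
```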
The Low Quiescent X-Ray Luminosity of the Transient X-Ray Burster EXO 1747-214
We report on X-ray and optical observations of the X-ray burster EXO 1747-214. This source is an X-ray transient, and its only known outburst was observed in 1984-1985 by the EXOSAT satellite. We re-analyzed the EXOSAT data to derive the source position, column density, and a distance upper limit using its peak X-ray burst flux. We observed the EXO 1747-214 field in 2003 July with the Chandra X-ray Observatory to search for the quiescent counterpart. We found one possible candidate just outside the EXOSAT error circle, but we cannot rule out the possibility that the source is unrelated to EXO 1747-214. Our conclusion is that the upper limit on the unabsorbed 0.3-8 keV luminosity is L < 7×10^31 erg/s, making EXO 1747-214 one of the faintest neutron star transients in quiescence. We compare this luminosity upper limit to the quiescent luminosities of 19 neutron star and 14 black hole systems and discuss the results in the context of the differences between neutron stars and black holes. Based on the theory of deep crustal heating by Brown and coworkers, the luminosity implies an outburst recurrence time of >1300 yr unless some form of enhanced cooling occurs within the neutron star. The position of the possible X-ray counterpart is consistent with three blended optical/IR sources with R-magnitudes between 19.4 and 19.8 and J-magnitudes between 17.2 and 17.6. One of these sources could be the quiescent optical/IR counterpart of EXO 1747-214.
Comment: 7 pages, accepted by the Astrophysical Journal
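The >1300 yr recurrence-time figure follows from the deep-crustal-heating scaling of Brown and coworkers, in which the quiescent luminosity is roughly the time-averaged accretion luminosity divided by ~135. A rough arithmetic sketch is below; the outburst fluence used is an illustrative assumption, not the measured EXOSAT value the paper would use.

```python
# Rough sketch of the deep-crustal-heating recurrence-time estimate
# (Brown, Bildsten & Rutledge 1998): quiescent luminosity
#   L_q ~ <L_acc> / 135,
# so with a total outburst fluence E_out the recurrence time satisfies
#   t_rec >~ E_out / (135 * L_q).
# The outburst fluence below is an ILLUSTRATIVE assumption, not the
# paper's measured EXOSAT value.

YEAR_S = 3.15e7      # seconds per year
L_q_max = 7e31       # erg/s, quiescent upper limit from the text
E_out = 4e44         # erg, assumed 1984-1985 outburst energy (hypothetical)

t_rec = E_out / (135.0 * L_q_max) / YEAR_S
print(f"t_rec > {t_rec:.0f} yr")  # ~1300 yr for this assumed fluence
```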
Trends in Medical Aid in Dying in Oregon and Washington.
Importance: The combined 28 years of data on medical aid in dying (MAID) between Oregon (OR) and Washington (WA) are the most comprehensive in North America. No reports to date have compared MAID use in different US states.
Objective: To evaluate and compare patterns of MAID use between the states with the longest-running US death with dignity programs.
Design, setting, and participants: A retrospective observational cohort study of OR and WA patients with terminal illness who received prescriptions under their states' legislation allowing MAID. All published annual reports, from 1998 to 2017 in OR and from 2009 to 2017 in WA, were reviewed. A total of 3368 prescriptions were included.
Main outcomes and measures: Number of deaths from self-administration of lethal medication vs number of prescriptions written.
Results: A combined 3368 prescriptions were written in OR and WA, with 2558 patient deaths from lethal ingestion (76.0%). Of the 2558 patients, most were male (1311 [51.3%]), older than 65 years (1851 [72.4%]), and non-Hispanic white (2426 [94.8%]). The most common underlying illnesses were cancer (1955 [76.4%]), neurologic illness (261 [10.2%]), lung disease (144 [5.6%]), and heart disease (117 [4.6%]). Loss of autonomy (2235 [87.4%]), impaired quality of life (2203 [86.1%]), and loss of dignity (1755 [68.6%]) were the most common reasons for pursuing MAID. Time from drug intake to coma ranged from 1 to 660 minutes, and time from drug intake to death ranged from 1 to 6240 minutes. In the 1557 patients for whom rates of complications were reported, 1494 (96.0%) did not experience a complication (592 of 626 [94.6%] in OR and 902 of 931 [96.8%] in WA). Eight patients (<0.5%) regained consciousness after drug ingestion in OR. The annual percentage of patients who received a prescription and ingested the prescribed medication ranged from 48% to 87%, with no significant time trend in OR (adjusted odds ratio per year, 1.01; 95% CI, 0.99-1.02; P = .59) but with an increase over time in WA (adjusted odds ratio per year, 1.13; 95% CI, 1.08-1.19; P < .001). In both OR and WA there were increases over time in the number of patient deaths due to MAID per 1000 deaths.
Conclusions and relevance: In this study, MAID results in Oregon and Washington were similar, although MAID use, measured as the percentage of patients prescribed lethal medications who then self-administered them, increased only in WA. Most patients who acquired lethal prescriptions had cancer or other terminal illnesses that are difficult to palliate and lead to loss of autonomy, dignity, and quality of life.
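The "adjusted odds ratio per year" trend statistics quoted above come from logistic-regression-style models. A minimal sketch of such a trend test on synthetic data follows; it is illustrative only and does not reproduce the report's actual adjustment covariates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data (illustrative only): for each prescription,
# the calendar year and whether the patient ingested the medication.
year = rng.integers(2009, 2018, size=500)
logit_p = -0.5 + 0.12 * (year - 2009)  # assumed upward trend
ingested = rng.random(500) < 1 / (1 + np.exp(-logit_p))

# Logistic regression of ingestion on year; exponentiating the slope
# gives the odds ratio per year.
X = sm.add_constant(year - 2009)
fit = sm.Logit(ingested.astype(float), X).fit(disp=0)
or_per_year = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"OR per year = {or_per_year:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```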
Perforated Small Intestine: A Case of a Delayed Presentation of an Intra-Abdominal Injury in a Pediatric Patient With a Seatbelt Sign
With the use of seatbelts comes a unique injury profile that has been called "the seatbelt syndrome." The classically described "seatbelt sign" has become a recognized pattern of injury that flags potential underlying damage. For the clinician, clues to the underlying damage come from a thorough physical examination, including removal of all clothing to locate abrasions and bruises on the skin that may follow a seatbelt pattern. Delayed presentation of an intra-abdominal injury in the setting of a seatbelt sign has been well documented; the question, however, is how long to observe these patients. We present the case of a 17-year-old woman involved in a motor vehicle collision who presented to the emergency department (ED) hemodynamically stable with a lower abdominal wall seatbelt sign. Her initial imaging revealed only an abdominal wall contusion, and she was admitted for observation. Approximately 12 h later she developed abdominal pain, and by 14 h abdominal distention; repeat imaging showed free fluid and free air. She was taken to the operating room for an exploratory laparotomy and was ultimately discharged home on day 7.
A framework for applying natural language processing in digital health interventions
BACKGROUND: Digital health interventions (DHIs) are poised to reduce target symptoms in a scalable, affordable, and empirically supported way. DHIs that involve coaching or clinical support often collect text data from 2 sources: (1) open correspondence between users and the trained practitioners supporting them through a messaging system and (2) text data recorded during the intervention by users, such as diary entries. Natural language processing (NLP) offers methods for analyzing text, augmenting the understanding of intervention effects, and informing therapeutic decision making.
OBJECTIVE: This study aimed to present a technical framework that supports the automated analysis of both types of text data often present in DHIs. This framework generates text features and helps to build statistical models to predict target variables, including user engagement, symptom change, and therapeutic outcomes.
METHODS: We first discussed various NLP techniques and demonstrated how they are implemented in the presented framework. We then applied the framework in a case study of the Healthy Body Image Program, a Web-based intervention trial for eating disorders (EDs). A total of 372 participants who screened positive for an ED received a DHI aimed at reducing ED psychopathology (including binge eating and purging behaviors) and improving body image. These users generated 37,228 intervention text snippets and exchanged 4285 user-coach messages, which were analyzed using the proposed model.
RESULTS: We applied the framework to predict binge eating behavior, resulting in an area under the curve between 0.57 (when applied to new users) and 0.72 (when applied to new symptom reports of known users). In addition, initial evidence indicated that specific text features predicted the therapeutic outcome of reducing ED symptoms.
CONCLUSIONS: The case study demonstrates the usefulness of a structured approach to text data analytics. NLP techniques improve the prediction of symptom changes in DHIs. We present a technical framework that can be easily applied in other clinical trials and clinical presentations and encourage other groups to apply the framework in similar contexts.
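A minimal sketch of the kind of pipeline the framework describes is shown below: TF-IDF text features feeding a logistic-regression classifier, evaluated on held-out users to mirror the "new users" condition. The corpus, labels, and model choices are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: one text snippet per symptom report, with the
# user it came from and a binary binge-eating label.
texts = ["felt urges after dinner", "calm day, ate regular meals",
         "skipped lunch then binged", "went for a walk, felt fine"] * 25
users = np.repeat(np.arange(20), 5)
labels = np.array([1, 0, 1, 0] * 25)

# "New users" evaluation: hold out entire users, mirroring the paper's
# distinction between new users and new reports of known users.
split = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(split.split(texts, labels, groups=users))

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit([texts[i] for i in train_idx], labels[train_idx])
scores = model.predict_proba([texts[i] for i in test_idx])[:, 1]
print(f"held-out-user AUC: {roc_auc_score(labels[test_idx], scores):.2f}")
```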
Prosthetic thigh arteriovenous access: outcome with SVS/AAVS reporting standards
Purpose: Differences in the reporting methods of results for arteriovenous (AV) access can dramatically affect apparent outcome. To enable meaningful comparisons in the literature, the Society for Vascular Surgery and the American Association for Vascular Surgery (SVS/AAVS) recently published reporting standards for dialysis access. The purpose of the present study was to determine infection rates, patency rates, and possible predictive factors for prosthetic thigh AV access outcomes with the reporting standards of the SVS/AAVS.
Methods: A retrospective analysis was performed of all patients who underwent placement of thigh AV access by the Surgical Teaching Service at Greenville Memorial Hospital between 1989 and 2001. Outcomes were determined based on the SVS/AAVS Standards for Reports Dealing with AV Accesses. The rate of revision per year of access patency was also determined; this end point more accurately reflects the true cost and morbidity associated with AV access than do patency or infection rates alone.
Results: One hundred twenty-five polytetrafluoroethylene thigh AV accesses were placed in 100 patients. Nine accesses were excluded from the study: six because there was no patient follow-up and three because of deaths unrelated to the access procedure that occurred less than 30 days after access placement. There were six (4%) late access-related deaths. There were 18 (15%) early access failures, related to infection in 14 cases (12%), thrombosis in three cases (2%), and steal in one case (1%). Early failure was more common in patients with diabetes mellitus (P = .036). The primary and secondary functional patency rates were 19% and 54%, respectively, at 2 years. Infection occurred in 48 (41%) accesses. The patency and infection rates were not influenced by patient age, gender, body mass index, or diabetes mellitus. The median number of interventions per year of access patency was 1.68, and this outcome was positively correlated with body mass index (P < .001).
Conclusions: Prosthetic AV access in the thigh is associated with higher morbidity compared with that reported for the upper extremity and should be considered only if no upper extremity AV access option is available. Early access failure and the requirement for an increased number of interventions to reestablish and maintain access patency are more common in patients with diabetes mellitus and obesity. The number of interventions per year of access patency is a valuable end point when assessing the outcome of AV access procedures.
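The study's preferred end point, interventions per year of access patency, is simple to compute per access. The sketch below uses invented per-access records and a Spearman correlation against body mass index purely for illustration; the paper's own data and test may differ.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical per-access records: total revisions, patency in years, BMI.
n = 116
patency_years = rng.exponential(1.5, n) + 0.1
bmi = rng.normal(28, 5, n)
revisions = rng.poisson(0.05 * np.clip(bmi, 15, None) * patency_years)

# End point: interventions per year of access patency, then its
# correlation with body mass index.
interventions_per_year = revisions / patency_years
rho, p = spearmanr(interventions_per_year, bmi)
print(f"median interventions/yr: {np.median(interventions_per_year):.2f}")
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```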
Methodological criteria for the assessment of moderators in systematic reviews of randomised controlled trials: a consensus study
Background: Current methodological guidelines provide advice about the assessment of subgroup analysis within RCTs, but do not specify explicit criteria for assessment. Our objective was to provide researchers with a set of criteria that will facilitate the grading of evidence for moderators in systematic reviews.
Method: We developed a set of criteria from methodological manuscripts (n = 18) identified using a snowballing technique and electronic database searches. The criteria were reviewed by an international Delphi panel (n = 21), comprising authors who have published methodological papers in this area and researchers who have been active in the study of subgroup analysis in RCTs. We used the Research ANd Development/University of California Los Angeles (RAND/UCLA) appropriateness method to assess consensus on the quantitative data. Free responses were coded for consensus and disagreement. In a subsequent round, additional criteria were extracted from the Cochrane Reviewers' Handbook, and the process was repeated.
Results: The recommendations are that meta-analysts report both confirmatory and exploratory findings for subgroup analyses. Confirmatory findings must come only from studies in which a specific theory- or evidence-based a priori statement is made. Exploratory findings may be used to inform future or subsequent trials. However, for inclusion in the meta-analysis of moderators, the following additional criteria should be applied to each study: baseline factors should be measured prior to randomisation, measurement of baseline factors should be of adequate reliability and validity, and a specific test of the interaction between baseline factors and interventions must be presented.
Conclusions: There is consensus from a group of 21 international experts that methodological criteria to assess moderators within systematic reviews of RCTs are both timely and necessary. The experts' consensus resulted in five criteria, divided into two groups when synthesising evidence: confirmatory findings to support hypotheses about moderators and exploratory findings to inform future research. These recommendations are discussed in reference to previous recommendations for evaluating and reporting moderator studies.
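The final criterion above, a specific interaction test, is worth illustrating. A minimal sketch on synthetic single-trial data follows, fitting outcome ~ treatment * baseline and reading off the interaction term rather than comparing separate subgroup p-values; all numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic single-trial data: a baseline moderator measured before
# randomisation, a randomised treatment, and a continuous outcome.
n = 200
df = pd.DataFrame({
    "baseline": rng.normal(0, 1, n),       # e.g. baseline severity
    "treatment": rng.integers(0, 2, n),    # randomised 0/1 allocation
})
df["outcome"] = (0.5 * df.treatment + 0.3 * df.baseline
                 + 0.4 * df.treatment * df.baseline  # true moderation
                 + rng.normal(0, 1, n))

# The criterion asks for a specific test of the interaction, not
# separate subgroup p-values: fit the full model and inspect the
# treatment-by-baseline coefficient.
fit = smf.ols("outcome ~ treatment * baseline", data=df).fit()
print(fit.summary().tables[1])
print(f"interaction p-value: {fit.pvalues['treatment:baseline']:.4f}")
```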
West Nile Virus–infected Mosquitoes, Louisiana, 2002
Culex quinquefasciatus was identified as a probable vector.