Large-scale discovery of enhancers from human heart tissue.
Development and function of the human heart depend on the dynamic control of tissue-specific gene expression by distant-acting transcriptional enhancers. To generate an accurate genome-wide map of human heart enhancers, we used an epigenomic enhancer discovery approach and identified ∼6,200 candidate enhancer sequences directly from fetal and adult human heart tissue. Consistent with their predicted function, these elements were markedly enriched near genes implicated in heart development, function, and disease. To further validate their in vivo enhancer activity, we tested 65 of these human sequences in a transgenic mouse enhancer assay and observed that 43 (66%) drove reproducible reporter gene expression in the heart. These results support the discovery of a genome-wide set of noncoding sequences highly enriched in human heart enhancers that is likely to facilitate downstream studies of the role of enhancers in development and pathological conditions of the heart.
Response to comment on "Human-specific gain of function in a developmental enhancer"
Duret and Galtier argue that human-specific sequence divergence and gain of function in the HACNS1 enhancer result from deleterious biased gene conversion (BGC) with no contribution from positive selection. We reinforce our previous conclusion by analyzing hypothesized BGC events genome-wide and assessing the effect of recombination rates on the ascertainment of human-accelerated conserved noncoding sequences. We also provide evidence that AT → GC substitution bias can coexist with positive selection.
Rising Annual Costs of Dizziness Presentations to U.S. Emergency Departments
Objectives: Dizziness and vertigo account for roughly 4% of chief symptoms in the emergency department (ED). Little is known about the aggregate costs of ED evaluations for these patients. The authors sought to estimate the annual national costs associated with ED visits for dizziness. Methods: This cost study of adult U.S. ED visits presenting with dizziness or vertigo combined public-use ED visit data (1995 to 2009) from the National Hospital Ambulatory Medical Care Survey (NHAMCS) and cost data (2003 to 2008) from the Medical Expenditure Panel Survey (MEPS). We calculated total visits, test utilization, and ED diagnoses from NHAMCS. Diagnosis groups were defined using the Healthcare Cost and Utilization Project's Clinical Classifications Software (HCUP-CCS). Total visits and the proportion undergoing neuroimaging in future years were extrapolated using an autoregressive forecasting model. The average ED visit cost per diagnosis group from MEPS was calculated, adjusted to 2011 dollars using the Hospital Personal Health Care Expenditures price index. An overall weighted mean across the diagnostic groups was used to estimate total national costs. Year 2011 data are reported in 2011 dollars. Results: The estimated number of 2011 U.S. ED visits for dizziness or vertigo was 3.9 million (95% confidence interval [CI] = 3.6 to 4.2 million). The proportion undergoing diagnostic imaging by computed tomography (CT), magnetic resonance imaging (MRI), or both in 2011 was estimated to be 39.9% (39.4% CT, 2.3% MRI). The mean cost per ED dizziness visit was $1,004 in 2011 dollars; extrapolated national costs totaled $3.9 billion. HCUP-CCS key diagnostic groups for those presenting with dizziness and vertigo included the following (fraction of dizziness visits; cost per ED visit; attributable annual national costs): otologic/vestibular (25.7%; $768; $757 million), cardiovascular (16.5%; $1,489; $941 million), and cerebrovascular (3.1%; $1,059; $127 million).
Neuroimaging was estimated to account for about 12% of the total costs for dizziness visits in 2011 ($360 million for CT scans, $110 million for MRI). Conclusions: Total U.S. national costs for patients presenting with dizziness to the ED are substantial and are estimated to now exceed $4 billion per year (about 4% of total ED costs). Rising costs over time appear to reflect the rising prevalence of ED visits for dizziness and increased rates of imaging use. Future economic studies should focus on the specific breakdown of total costs, emphasizing areas of high cost and use that might be safely reduced.
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/99059/1/acem12168.pdf ; http://deepblue.lib.umich.edu/bitstream/2027.42/99059/2/acem12168-sup-0001-DataSupplementS1.pd
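As a back-of-the-envelope check, the national total follows directly from the visit volume and the mean per-visit cost reported in the abstract:

```python
VISITS_2011 = 3.9e6      # estimated 2011 U.S. ED visits for dizziness/vertigo
COST_PER_VISIT = 1004    # mean cost per ED dizziness visit, in 2011 dollars

total = VISITS_2011 * COST_PER_VISIT  # approx $3.9 billion national total
```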
Collective Decision Dynamics in the Presence of External Drivers
We develop a sequence of models describing information transmission and decision dynamics for a network of individual agents subject to multiple sources of influence. Our general framework is set in the context of an impending natural disaster, where individuals, represented by nodes on the network, must decide whether or not to evacuate. Sources of influence include a one-to-many externally driven global broadcast as well as pairwise interactions, across links in the network, in which agents transmit either continuous opinions or binary actions. We consider both uniform and variable threshold rules on the individual opinion as baseline models for decision-making. Our results indicate that 1) social networks lead to clustering and cohesive action among individuals, 2) binary information introduces high temporal variability and stagnation, and 3) information transmission over the network can either facilitate or hinder action adoption, depending on the influence of the global broadcast relative to the social network. Our framework highlights the essential role of local interactions between agents in predicting the collective behavior of the population as a whole.
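The broadcast-plus-peers threshold dynamics described above can be sketched as follows. This is a minimal illustration, not the paper's specification: the ring-lattice network, parameter values, and update rule are assumptions chosen for simplicity.

```python
import random

def simulate(n=100, k=4, broadcast=0.02, social=0.1,
             threshold=0.5, steps=200, seed=0):
    """Continuous opinions in [0, 1], nudged each step by a global broadcast
    and pulled toward the average opinion of network neighbours; an agent
    irreversibly adopts the action (evacuates) once its opinion crosses
    its threshold."""
    rng = random.Random(seed)
    # ring lattice: each node linked to its k nearest neighbours
    neigh = [[(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
             for i in range(n)]
    opinion = [rng.random() * 0.3 for _ in range(n)]  # all start undecided
    adopted = [False] * n
    history = []
    for _ in range(steps):
        new = opinion[:]
        for i in range(n):
            if adopted[i]:
                continue  # decisions are irreversible
            peer = sum(opinion[j] for j in neigh[i]) / len(neigh[i])
            new[i] = min(1.0, opinion[i] + broadcast
                         + social * (peer - opinion[i]))
            if new[i] >= threshold:
                adopted[i] = True
        opinion = new
        history.append(sum(adopted))
    return history

counts = simulate()  # number of adopters after each step
```

Because adoption is irreversible, the adopter count is monotone; varying the `broadcast`-to-`social` ratio is the kind of experiment the abstract's third finding refers to.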
Who is dominant? Occupational Health and Safety management in Chinese shipping
This paper investigates the implementation of the International Safety Management (ISM) Code in the Chinese chemical shipping industry. In particular, it examines the tension between management's focus on speedy production and seafarers' participation in safety-related decision making, and analyses how this tension is managed. It shows that while on paper companies have policies stating safety commitment in compliance with the ISM Code, in practice shore management tends to prioritise efficient production. When Occupational Health and Safety (OHS) and ships' sailing schedules are in conflict, managers implicitly request shipmasters to prioritise the 'core interest' of the company. Although the ISM Code endows shipmasters with overriding authority in relation to shipboard safety management, they tend to read between the lines and tacitly follow managers' intentions. The study suggests that if ISM implementation makes a difference, it is that managers have become more subtle in giving orders to exert their dominance. The study further reveals that management practice is not only unresponsive to seafarers' safety concerns but also makes rather limited contributions to promoting OHS management.
Costs of Testing for Ocular Chlamydia trachomatis Infection Compared to Mass Drug Administration for Trachoma in The Gambia: Application of Results from the PRET Study
Background
Mass drug administration (MDA) treatment of active trachoma with antibiotic is recommended to be initiated in any district where the prevalence of trachomatous inflammation, follicular (TF) is ≥10% in children aged 1–9 years, and then continued for at least three annual rounds before resurvey. In The Gambia, the PRET study found that discontinuing MDA based on testing a sample of children for ocular Chlamydia trachomatis (Ct) infection after one MDA round had effects similar to continuing MDA for three rounds. Moreover, one round of MDA reduced disease below the 5% TF threshold. We compared the costs of examining a sample of children for TF, and of testing them for Ct, with those of MDA rounds.
Methods
The implementation unit in PRET The Gambia was a census enumeration area (EA) of 600–800 people. Personnel, fuel, equipment, consumables, data entry and supervision costs were collected for census and treatment of a sample of EAs and for the examination, sampling and testing for Ct infection of 100 individuals within them. Programme costs and resource savings from testing and treatment strategies were inferred for the 102 EAs in the study area, and compared.
Results
Census costs were US$108.79. MDA with donated azithromycin cost US$796.90 per EA, with Ct testing kits costing US$1.38 per result. However, stopping or deciding not to initiate treatment in the study area based on testing a sample of EAs for Ct infection (or examining children in a sample of EAs) creates savings relative to further unnecessary treatments.
Conclusion
Resources may be saved by using tests for chlamydial infection or clinical examination to determine that initial or subsequent rounds of MDA for trachoma are unnecessary.
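The savings argument reduces to a simple comparison using the per-EA figures quoted above. In this sketch the sample design (`sampled_eas`) is a hypothetical placeholder, not the PRET design; only the unit costs, the 102 EAs, and the 100 children tested per sampled EA come from the abstract.

```python
# Unit costs as quoted in the Results (assumed US dollars).
MDA_COST_PER_EA = 796.90     # one MDA round with donated azithromycin
TEST_COST_PER_RESULT = 1.38  # Ct testing, per child tested
N_EAS = 102                  # EAs in the study area

sampled_eas = 16             # hypothetical number of EAs sampled for testing
children_per_ea = 100        # individuals tested per sampled EA (per abstract)

testing_cost = sampled_eas * children_per_ea * TEST_COST_PER_RESULT
avoided_mda = N_EAS * MDA_COST_PER_EA  # one unnecessary round avoided
savings = avoided_mda - testing_cost
```

Under any sample size of this order, testing costs remain far below the cost of one area-wide MDA round, which is the abstract's point.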
More frequent, more costly? Health economic modelling aspects of monitoring glaucoma patients in England
BACKGROUND: Chronic open angle glaucoma (COAG) is an age-related eye disease causing irreversible loss of visual field (VF). Health service delivery for COAG is challenging given the large number of diagnosed patients requiring lifelong periodic monitoring by hospital eye services. Yet more frequent examination better determines disease worsening and the speed of VF loss under treatment. We examine the cost-effectiveness of increasing the frequency of VF examinations during follow-up using a health economic model.
METHODS: Two different VF monitoring schemes, defined as current practice (annual VF testing) and proposed practice (three VF tests per year in the first two years after diagnosis), were examined. A purpose-written health economic Markov model was used to test the hypothesis that cost-effectiveness improves by implementing proposed practice in groups of patients stratified by age and severity of COAG. Further, a new component of the model, estimating the costs of visual impairment, was added. Results were derived from a simulated cohort of 10,000 patients, with quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs) used as the main outcome measures.
RESULTS: An ICER of £21,392 per QALY was derived for proposed practice, improving to £11,382 once savings from prevented visual impairment were added to the model. Proposed practice was more cost-effective in younger patients. Proposed practice for patients with advanced disease at diagnosis generated ICERs > £60,000 per QALY; these cases would likely be on the most intensive treatment pathway, making clinical information on speed of VF loss redundant. Sensitivity analysis indicated the results to be robust in relation to the hypothetical willingness-to-pay threshold identified by national guidelines, although the greatest uncertainty was allied to estimates of implementation and visual impairment costs.
CONCLUSION: Increasing VF monitoring at the earliest stages of follow-up for COAG appears to be cost-effective under reasonable assumptions about implementation costs. Our health economic model highlights the benefits of stratifying patients to more or less monitoring based on age and stage of disease at diagnosis; a prospective study is needed to confirm these findings. Further, this work highlights gaps in knowledge about the long-term costs of visual impairment.
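The cost-effectiveness logic of a Markov cohort model of this kind can be sketched as follows. All states, transition probabilities, costs, utilities, and the discount rate below are illustrative placeholders, not the paper's calibrated inputs.

```python
def run_cohort(trans, costs, utils, cycles=20, disc=0.035):
    """Return (discounted cost, discounted QALYs) per patient for one strategy."""
    dist = [1.0, 0.0, 0.0]  # whole cohort starts in the 'mild' state
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t  # annual discount factor
        total_cost += d * sum(p * c for p, c in zip(dist, costs))
        total_qaly += d * sum(p * u for p, u in zip(dist, utils))
        # advance the cohort one cycle through the transition matrix
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

# states: mild, moderate, severe (absorbing); all numbers are placeholders
utils = [0.85, 0.70, 0.50]                # utility weight per state
c0, q0 = run_cohort(                      # current practice: annual testing
    trans=[[0.90, 0.10, 0.00],
           [0.00, 0.90, 0.10],
           [0.00, 0.00, 1.00]],
    costs=[400, 600, 1200], utils=utils)
c1, q1 = run_cohort(                      # proposed practice: more VF tests,
    trans=[[0.94, 0.06, 0.00],            # slower detected progression...
           [0.00, 0.94, 0.06],
           [0.00, 0.00, 1.00]],
    costs=[550, 750, 1200], utils=utils)  # ...at higher monitoring cost

icer = (c1 - c0) / (q1 - q0)              # incremental cost per QALY gained
```

Comparing `icer` against a willingness-to-pay threshold (the paper uses national-guideline values) is what drives the conclusions above.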
Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis
Background: Global and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies. We aimed to provide global estimates, trends, and projections of global blindness and vision impairment.
Methods: We did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12).
Findings: Globally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9–65·4) were blind (crude prevalence 0·48%; 80% UI 0·17–0·87; 56% female), 216·6 million (80% UI 98·5–359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34–4·89; 55% female), and 188·5 million (80% UI 64·5–350·2) had mild visual impairment (2·57%, 80% UI 0·88–4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1–1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9–997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9–57·3) in 1990 to 36·0 million (80% UI 12·9–65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (–36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3–270·0) in 1990 to 216·6 million (80% UI 98·5–359·1) in 2015.
Interpretation: There is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population are causing a substantial increase in the number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels.
Comparative Study of Acute Kidney Injury in Liver Transplantation: Donation after Circulatory Death versus Brain Death
BACKGROUND
Acute kidney injury (AKI) after orthotopic liver transplantation (OLT) contributes to morbidity and mortality. Donation after circulatory death (DCD) has been established to increase the pool of organs. While surgical complications are reported to be comparable in DCD and donation after brain death (DBD) OLT, there is a knowledge gap concerning adverse kidney events in these 2 groups.
MATERIAL AND METHODS
In this retrospective cohort study, 154 patients received a DBD and 68 received a DCD organ (2016-2020). The primary outcome was a major adverse kidney event within 30 days (MAKE-30). The secondary outcome was dynamics of AKI and kidney replacement therapy (KRT) during the first postoperative week and on postoperative day 30. Incidence and resolution from AKI and KRT and patient survival (PS) 30 days after OLT were compared between the DCD and DBD recipients.
RESULTS
MAKE-30 incidence after OLT was comparable in DCD (n=27, 40%) vs DBD (n=41, 27%) recipients (risk ratio 1.49 [95% CI 1.01, 2.21], P=0.073). AKI incidence was comparable in DCD (n=58, 94%) vs DBD (n=95, 82%) recipients (risk ratio 1.14 [95% CI 1.03, 1.27], P=0.057). Overall, 40% (n=88) of patients required KRT, with no difference between DCD (n=27, 40%) and DBD (n=61, 40%) recipients (risk ratio 1.00 [95% CI 0.71, 1.43], P>0.999). Resolution of AKI by day 30 was lower in DCD (n=29, 50%) than in DBD (n=66, 69%) recipients (risk ratio 0.71 [95% CI 0.53, 0.95], P=0.032). Survival after 30 days (DCD: n=64, 94% vs DBD: n=146, 95%; risk ratio 0.99 [95% CI 0.93, 1.06], P>0.999) was also comparable.
CONCLUSIONS
MAKE-30 incidence, short-term renal outcomes, and survival did not differ significantly between DBD- and DCD-OLT recipients. Resolution of AKI by day 30 was lower in DCD than in DBD recipients.
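The risk ratios quoted in the Results follow the standard two-group form; a short sketch of the usual Wald interval on the log scale reproduces the reported MAKE-30 estimate from the raw counts:

```python
import math

def risk_ratio(a, n_a, b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald 95% CI on the log scale."""
    rr = (a / n_a) / (b / n_b)
    se = math.sqrt(1 / a - 1 / n_a + 1 / b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# MAKE-30 counts from the abstract: DCD 27 of 68 vs DBD 41 of 154
rr, lo, hi = risk_ratio(27, 68, 41, 154)
# reproduces the reported risk ratio 1.49 [95% CI 1.01, 2.21]
```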
