191 research outputs found

    Dramatic outcomes in epilepsy: depression, suicide, injuries, and mortality

    In this narrative review, we discuss some of the significant risks and dramatic consequences associated with epilepsy: depression, suicide, seizure-related injuries, and mortality, both in adults and in children. Considering the high prevalence of depression among people with epilepsy (PWE), routine and periodic screening of all PWE for early detection and appropriate management of depression is recommended. PWE should be screened for suicidal ideation regularly and, when needed, referred for psychiatric evaluation and treatment. When starting an antiepileptic drug (AED) or switching from one AED to another, patients should be advised to report to their treating physician any change in their mood and any suicidal ideation. The risk of injuries for the general epilepsy population is only moderately increased; the risk is higher in selected populations attending epilepsy clinics and referral centers. That said, some PWE suffer frequent, severe, and sometimes even life-threatening seizure-related injuries. The most obvious way to reduce risk is to strive for improved seizure control. Finally, PWE have a 2–3 times higher mortality rate than the general population. Deaths in PWE may relate to the underlying cause of epilepsy, to seizures (including sudden unexpected death in epilepsy [SUDEP] and seizure-related injuries) and to status epilepticus, as well as to other conditions that do not appear directly related to epilepsy. Improving seizure control and patient education may be the most important measures to reduce epilepsy-related mortality in general and SUDEP in particular.

    Effect of Common Medications on the Expression of SARS-CoV-2 Entry Receptors in Kidney Tissue

    Besides the respiratory system, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection has been shown to affect other essential organs such as the kidneys. Early kidney involvement during the course of infection was associated with worse outcomes, which could be attributed to direct SARS-CoV-2 infection of kidney cells. In this study, the effect of commonly used medications on the expression of the SARS-CoV-2 receptor angiotensin-converting enzyme 2 (ACE2) and of TMPRSS2 protein in kidney tissues was evaluated. This was done by in silico analyses of publicly available transcriptomic databases of kidney tissues from rats treated with multiple doses of commonly used medications. Of 59 tested medications, 56% modified ACE2 expression, whereas 24% modified TMPRSS2 expression. ACE2 was increased by only a few of the tested medication groups, namely renin-angiotensin inhibitors such as enalapril, antibacterial agents such as nitrofurantoin, and the proton pump inhibitor omeprazole. The majority of the other medications decreased ACE2 expression to variable degrees, with allopurinol and cisplatin causing the most noticeable downregulation. The expression level of TMPRSS2 was increased by a number of medications, such as diclofenac, furosemide, and dexamethasone, whereas other medications, such as allopurinol, suppressed the expression of this gene. Prolonged exposure to combinations of these medications could regulate the expression of ACE2 and TMPRSS2 in a way that may affect kidney susceptibility to SARS-CoV-2 infection. The data presented here suggest that we should be vigilant about the potential effects of commonly used medications on kidney tissue expression of ACE2 and TMPRSS2.
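
    The screen described above ultimately compares ACE2 and TMPRSS2 expression between drug-treated and control kidney samples. The following is a minimal sketch of that comparison in Python, using hypothetical normalized expression values; it is not the paper's actual pipeline, which relied on curated rat transcriptomic databases and its own statistical analysis.

```python
import numpy as np

def log2_fold_change(treated, control, pseudocount=1.0):
    """Mean log2 fold change of one gene between treated and control samples."""
    treated = np.asarray(treated, dtype=float)
    control = np.asarray(control, dtype=float)
    return np.log2(treated.mean() + pseudocount) - np.log2(control.mean() + pseudocount)

# Hypothetical normalized expression values for one gene (e.g. ACE2)
# in kidney tissue from drug-treated vs. vehicle-treated animals.
ace2_treated = [12.0, 10.5, 11.8]
ace2_control = [25.0, 27.3, 24.1]

lfc = log2_fold_change(ace2_treated, ace2_control)
direction = "down" if lfc < 0 else "up"
print(f"ACE2 log2 fold change: {lfc:.2f} ({direction}-regulated)")
```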

    Enabling Technologies for Web 3.0: A Comprehensive Survey

    Web 3.0 represents the next stage of Internet evolution, aiming to empower users with increased autonomy, efficiency, quality, security, and privacy. This evolution can potentially democratize content access by utilizing the latest developments in enabling technologies. In this paper, we conduct an in-depth survey of enabling technologies in the context of Web 3.0, such as blockchain, the semantic web, the 3D interactive web, the Metaverse, virtual and augmented reality, and Internet of Things (IoT) technology, and their roles in shaping Web 3.0. We commence by providing a comprehensive background on Web 3.0, including its concept, basic architecture, potential applications, and industry adoption. Subsequently, we examine recent breakthroughs in IoT, 5G, and blockchain technologies that are pivotal to Web 3.0 development. Following that, other enabling technologies, including AI, the semantic web, and the 3D interactive web, are discussed. Utilizing these technologies can effectively address critical challenges in realizing Web 3.0, such as decentralized identity, platform interoperability, data transparency, latency reduction, and system scalability. Finally, we highlight significant challenges associated with Web 3.0 implementation, emphasizing potential solutions and providing insights into future research directions in this field.
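
    One of the enabling technologies surveyed above is blockchain. The toy sketch below is for illustration only: it shows the core mechanism of hash-chained, tamper-evident blocks, while real Web 3.0 stacks add consensus protocols, peer-to-peer networking, and smart contracts on top of this idea.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash commits to its content and to its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block({"user": "alice", "action": "publish"}, genesis["hash"])

# Any change to the genesis block would change its hash and break the link.
print(second["prev_hash"] == genesis["hash"])  # True
```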

    Multiple early introductions of SARS-CoV-2 into a global travel hub in the Middle East

    International travel played a significant role in the early global spread of SARS-CoV-2. Understanding transmission patterns from different regions of the world will further inform the global dynamics of the pandemic. Using data from Dubai in the United Arab Emirates (UAE), a major international travel hub in the Middle East, we establish full SARS-CoV-2 genome sequences from the index and early COVID-19 patients in the UAE. The genome sequences are analysed in the context of virus introductions, chains of transmission, and possible links to earlier strains from other regions of the world. Phylogenetic analysis showed multiple spatiotemporal introductions of SARS-CoV-2 into the UAE from Asia, Europe, and the Middle East during the early phase of the pandemic. We also provide evidence for early community-based transmission and catalogue new mutations in SARS-CoV-2 strains in the UAE. Our findings contribute to the understanding of the global transmission network of SARS-CoV-2.
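
    As an illustration of the kind of phylogenetic reasoning described above, the sketch below builds a simple distance-based tree from an alignment of genome sequences using Biopython. The alignment file name is hypothetical, and the study's actual analysis pipeline is not reproduced here.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a multiple sequence alignment of SARS-CoV-2 genomes (hypothetical file name).
alignment = AlignIO.read("uae_sars2_aln.fasta", "fasta")

# Pairwise identity-based distances, then a neighbour-joining tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

# Inspecting which local genomes cluster with which reference genomes hints at
# separate introductions versus a single local transmission chain.
Phylo.draw_ascii(tree)
```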

    The Dawn of Open Access to Phylogenetic Data

    The scientific enterprise depends critically on the preservation of, and open access to, published data. This basic tenet applies acutely to phylogenies (estimates of evolutionary relationships among species). Increasingly, phylogenies are estimated from increasingly large, genome-scale datasets using increasingly complex statistical methods that require increasing levels of expertise and computational investment. Moreover, the resulting phylogenetic data provide an explicit historical perspective that critically informs research in a vast and growing number of scientific disciplines. One such use is the study of changes in rates of lineage diversification (speciation minus extinction) through time. As part of a meta-analysis in this area, we sought to collect phylogenetic data (comprising nucleotide sequence alignment and tree files) from 217 studies published in 46 journals over a 13-year period. We document our attempts to procure those data (from online archives and by direct request to corresponding authors), and report results of analyses (using Bayesian logistic regression) to assess the impact of various factors on the success of our efforts. Overall, complete phylogenetic data for ~60% of these studies are effectively lost to science. Our study indicates that phylogenetic data are more likely to be deposited in online archives and/or shared upon request when: (1) the publishing journal has a strong data-sharing policy; (2) the publishing journal has a higher impact factor; and (3) the data are requested from faculty rather than students. Although the situation appears dire, our analyses suggest that it is far from hopeless: recent initiatives by the scientific community, including policy changes by journals and funding agencies, are improving the state of affairs.
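
    The abstract mentions Bayesian logistic regression relating data availability to study-level factors. Below is a hedged sketch of such a model in PyMC, with made-up predictors and outcomes; the paper's actual data, covariates, and priors are not reproduced here.

```python
import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 217                                  # number of surveyed studies
policy = rng.integers(0, 2, n)           # journal has a strong data-sharing policy?
impact = rng.normal(0, 1, n)             # standardized journal impact factor
faculty = rng.integers(0, 2, n)          # data requested from faculty (vs. student)?
obtained = rng.integers(0, 2, n)         # outcome: data obtained (placeholder values)

with pm.Model():
    intercept = pm.Normal("intercept", 0.0, 2.5)
    beta = pm.Normal("beta", 0.0, 2.5, shape=3)
    logits = intercept + beta[0] * policy + beta[1] * impact + beta[2] * faculty
    pm.Bernoulli("data_obtained", logit_p=logits, observed=obtained)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(az.summary(idata, var_names=["intercept", "beta"]))
```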

    Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials

    Background: The efficacy of antidepressant medication has been shown empirically to be overestimated due to publication bias, but for psychological treatment of depression this has only been inferred statistically. We assessed directly the extent of study publication bias in trials examining the efficacy of psychological treatment for depression. Methods and Findings: We identified US National Institutes of Health grants awarded to fund randomized clinical trials comparing psychological treatment to control conditions or other treatments in patients diagnosed with major depressive disorder for the period 1972–2008, and we determined whether those grants led to publications. For studies that were not published, data were requested from investigators and included in the meta-analyses. Thirteen (23.6%) of the 55 funded grants that began trials did not result in publications, and two others never started. Among comparisons to control conditions, adding unpublished studies (Hedges' g = 0.20; 95% CI -0.11 to 0.51; k = 6) to published studies (g = 0.52; 95% CI 0.37 to 0.68; k = 20) reduced the psychotherapy effect size point estimate (g = 0.39; 95% CI 0.08 to 0.70) by 25%. Moreover, these findings may overestimate the "true" effect of psychological treatment for depression, as outcome reporting bias could not be examined quantitatively. Conclusion: The efficacy of psychological interventions for depression has been overestimated in the published literature, just as it has been for pharmacotherapy. Both are efficacious, but not to the extent that the published literature would suggest. Funding agencies and journals should archive both original protocols and raw data from treatment trials to allow the detection and correction of outcome reporting bias. Clinicians, guideline developers, and decision makers should be aware that the published literature overestimates the effects of the predominant treatments for depression.
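
    To make the effect size arithmetic concrete, the sketch below shows generic inverse-variance pooling of Hedges' g values (with placeholder per-study numbers, not the trial data from the paper) and reproduces the reported 25% reduction in the point estimate.

```python
import numpy as np

def pooled_effect(g, var):
    """Fixed-effect inverse-variance pooled effect size and its variance."""
    g, var = np.asarray(g, dtype=float), np.asarray(var, dtype=float)
    w = 1.0 / var
    pooled = np.sum(w * g) / np.sum(w)
    return pooled, 1.0 / np.sum(w)

# Placeholder per-study effect sizes and variances, to show the mechanics only.
g_studies = [0.60, 0.50, 0.45]
v_studies = [0.04, 0.05, 0.06]
print(pooled_effect(g_studies, v_studies))

# The reported change in the point estimate after adding unpublished trials:
g_published_only, g_all = 0.52, 0.39
print(f"Reduction: {(g_published_only - g_all) / g_published_only:.0%}")  # -> 25%
```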

    Public Availability of Published Research Data in High-Impact Journals

    BACKGROUND: There is increasing interest in making primary data from published research publicly available. We aimed to assess the current status of making research data available in highly cited journals across the scientific literature. METHODS AND RESULTS: We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factors. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, there was wide variation in journal requirements, ranging from requiring the sharing of all primary data related to the research to merely including a statement in the published manuscript that data can be made available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers that were covered by some data availability policy, 208 (59%) did not fully adhere to the data availability instructions of the journals in which they were published, most commonly (73%) by not publicly depositing microarray data. The other 143 papers adhered to the data availability instructions by publicly depositing only the specific data type required, making a statement of willingness to share, or actually sharing all of the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available. CONCLUSION: A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policy or do not adhere to the data availability instructions of their respective journals. This empirical evaluation highlights opportunities for improvement.
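
    The proportions quoted above follow directly from the reported counts. A quick sketch of that arithmetic:

```python
# Arithmetic check of the proportions reported above, using only the published
# counts (not the underlying dataset).
papers_total = 500
no_policy = 149                        # not subject to any data availability policy
covered = papers_total - no_policy     # 351 papers covered by some policy
non_adherent = 208                     # covered papers not fully adhering
adherent = covered - non_adherent      # 143 papers adhering in some form
full_deposit = 47                      # papers depositing full primary raw data online

print(f"No policy: {no_policy / papers_total:.0%}")                  # ~30%
print(f"Non-adherent (of covered): {non_adherent / covered:.0%}")    # ~59%
print(f"Full raw data deposited: {full_deposit / papers_total:.0%}") # ~9%
```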

    Theoretical and technological building blocks for an innovation accelerator

    The scientific system that we use today was devised centuries ago and is inadequate for our current ICT-based society: the peer review system encourages conservatism, journal publications are monolithic and slow, data are often not available to other scientists, and the independent validation of results is limited. Building on the Innovation Accelerator paper by Helbing and Balietti (2011), this paper takes that initial global vision and reviews the theoretical and technological building blocks that can be used to implement an innovation accelerator platform (first and foremost, a science accelerator) driven by re-imagining the science system. The envisioned platform would rest on four pillars: (i) redesign the incentive scheme to reduce behavior such as conservatism, herding, and hyping; (ii) advance scientific publications by breaking up the monolithic paper unit and introducing other building blocks such as data, tools, experiment workflows, and resources; (iii) use machine-readable semantics for publications, debate structures, provenance, etc., in order to include the computer as a partner in the scientific process; and (iv) build an online platform for collaboration, including a network of trust and reputation among the different types of stakeholders in the scientific system: scientists, educators, funding agencies, policy makers, students, and industrial innovators, among others. Any such improvements to the scientific system must support the entire scientific process (unlike current tools, which chop the scientific process into disconnected pieces), must facilitate and encourage collaboration and interdisciplinarity (again unlike current tools), must facilitate the inclusion of intelligent computing in the scientific process, and must accommodate not only the core scientific process but also other stakeholders such as science policy makers, industrial innovators, and the general public.
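
    Pillar (iii) above calls for machine-readable semantics covering publications and their provenance. The sketch below illustrates one possible encoding using rdflib with the Dublin Core and PROV vocabularies; the identifiers and the dataset link are made up for the example and are not taken from the paper.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, PROV, RDF

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("prov", PROV)

# Hypothetical identifiers for a publication and one of its building blocks.
paper = URIRef("http://example.org/paper/42")
dataset = URIRef("http://example.org/dataset/7")

g.add((paper, RDF.type, PROV.Entity))
g.add((paper, DCTERMS.title, Literal("An example publication record")))
g.add((paper, PROV.wasDerivedFrom, dataset))   # provenance link: paper derived from a dataset
g.add((dataset, RDF.type, PROV.Entity))
g.add((dataset, DCTERMS.description, Literal("Experiment workflow outputs (placeholder)")))

print(g.serialize(format="turtle"))
```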

    Data sharing: not as simple as it seems

    In recent years there has been a major change on the part of funders, particularly in North America, so that data sharing is now considered to be the norm rather than the exception. We believe that data sharing is a good idea. However, we also believe that it is inappropriate to prescribe exactly when or how researchers should preserve and share data, since these issues are highly specific to each study, the nature of the data collected, who is requesting them, and what they intend to do with them. The level of ethical concern will vary according to the nature of the information and the way in which it is collected: analyses of anonymised hospital admission records may carry quite a different ethical burden from analyses of potentially identifiable health information collected directly from the study participants. It is striking that most discussions about data sharing focus almost exclusively on issues of ownership (by the researchers or the funders) and efficiency (on the part of the funders). There is usually little discussion of the ethical issues involved in data sharing and of its implications for the study participants. Obtaining prior informed consent from the participants does not solve this problem, unless the informed consent process makes it completely clear what is being proposed, in which case most study participants would not agree. Thus, the undoubted benefits of data sharing do not remove the obligations and responsibilities that the original investigators hold for the people they invited to participate in the study.