
    Q-SEA - a tool for quality assessment of ethics analyses conducted as part of health technology assessments

    Introduction: Assessment of ethics issues is an important part of health technology assessment (HTA). However, in terms of the existence of quality assessment tools, ethics for HTA is methodologically under-developed in comparison with other areas of HTA, such as clinical or cost effectiveness.

    Objective: To methodologically advance ethics for HTA by: (1) proposing and elaborating Q-SEA, the first instrument for quality assessment of ethics analyses, and (2) applying Q-SEA to a sample systematic review of ethics for HTA, in order to illustrate and facilitate its use.

    Methods: To develop a list of items for the Q-SEA instrument, we systematically reviewed the literature on methodology in ethics for HTA, reviewed HTA organizations' websites, and solicited views from 32 experts in the field of ethics for HTA at two 2-day workshops. We subsequently refined Q-SEA through its application to an ethics analysis conducted for HTA.

    Results: The Q-SEA instrument consists of two domains: the process domain and the output domain. The process domain consists of five elements: research question, literature search, inclusion/exclusion criteria, perspective, and ethics framework. The output domain consists of five elements: completeness, bias, implications, conceptual clarification, and conflicting values.

    Conclusion: Q-SEA is the first instrument for quality assessment of ethics analyses in HTA. Further refinements to the instrument to enhance its usability continue.
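The two-domain, ten-element structure reported in the abstract lends itself to a simple checklist representation. The sketch below is illustrative only: the element names come from the abstract, but the yes/partial/no scoring scheme is an assumption, not part of Q-SEA itself.

```python
# Q-SEA's two domains and their five elements each, as listed in the abstract.
Q_SEA = {
    "process": ["research question", "literature search",
                "inclusion/exclusion criteria", "perspective",
                "ethics framework"],
    "output": ["completeness", "bias", "implications",
               "conceptual clarification", "conflicting values"],
}

# Hypothetical scoring scheme (not specified by Q-SEA).
SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def appraise(ratings):
    """Summarise per-domain ratings; `ratings` maps element -> yes/partial/no."""
    summary = {}
    for domain, elements in Q_SEA.items():
        values = [SCORES[ratings.get(element, "no")] for element in elements]
        summary[domain] = sum(values) / len(values)
    return summary
```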

    Sentiment analysis of clinical narratives: A scoping review

    A clinical sentiment is a judgment, thought or attitude prompted by an observation with respect to the health of an individual. Sentiment analysis has drawn attention in the healthcare domain for secondary use of data from clinical narratives, with a variety of applications including predicting the likelihood of emerging mental illnesses or clinical outcomes. The current state of research has not yet been summarized. This study presents results from a scoping review aiming to provide an overview of sentiment analysis of clinical narratives, in order to summarize existing research and identify open research gaps. The scoping review was carried out in line with the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) guideline. Studies were identified by searching 4 electronic databases (e.g., PubMed, IEEE Xplore) in addition to conducting backward and forward reference list checking of the included studies. We extracted information on use cases, methods and tools applied, datasets used, and performance of the sentiment analysis approach. Of 1,200 citations retrieved, 29 unique studies were included in the review, covering a period of 8 years. Most studies apply general domain tools (e.g., TextBlob) and sentiment lexicons (e.g., SentiWordNet) for realizing use cases such as prediction of clinical outcomes; others proposed new domain-specific sentiment analysis approaches based on machine learning. Accuracy values between 71.5% and 88.2% are reported. Data used for evaluation and testing are often retrieved from MIMIC databases or i2b2 challenges. The latest developments related to artificial neural networks are not yet fully considered in this domain. We conclude that future research should focus on developing a gold standard sentiment lexicon, adapted to the specific characteristics of clinical narratives. Efforts have to be made to either augment existing or create new high-quality labeled data sets of clinical narratives. Lastly, the suitability of state-of-the-art machine learning methods for natural language processing, and in particular transformer-based models, should be investigated for their application to sentiment analysis of clinical narratives.
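The lexicon-based approaches the review surveys (e.g., SentiWordNet) can be sketched minimally as below. The mini-lexicon and the one-token negation rule are illustrative assumptions, not a validated clinical resource.

```python
import re

# Toy polarity lexicon for clinical wording (hypothetical values).
LEXICON = {"improved": 1.0, "stable": 0.5, "alert": 0.5,
           "deteriorating": -1.0, "distress": -1.0, "pain": -0.5}
# Negation cues common in clinical notes; flipping polarity after one
# of these is a simplistic stand-in for real negation detection.
NEGATIONS = {"no", "not", "denies", "without"}

def score(note):
    """Average lexicon polarity over matched tokens, flipping sign after a negation."""
    tokens = re.findall(r"[a-z]+", note.lower())
    total, hits = 0.0, 0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            polarity = LEXICON[tok]
            if i > 0 and tokens[i - 1] in NEGATIONS:
                polarity = -polarity
            total += polarity
            hits += 1
    return total / hits if hits else 0.0
```

A note like "Patient improved, no distress" then scores positively, which hints at why the review calls for a lexicon adapted to clinical language: generic lexicons miss domain terms and negation patterns entirely.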

    Cross-docking: A systematic literature review

    This paper identifies the major research concepts, techniques, and models covered in the cross-docking literature. A systematic literature review is conducted using the BibExcel bibliometric analysis and Gephi network analysis tools. A research focus parallelship network (RFPN) analysis and keyword co-occurrence network (KCON) analysis are used to identify the primary research themes. The RFPN results suggest that vehicle routing, inventory control, scheduling, warehousing, and distribution are the most studied. Of the optimization and simulation techniques applied in cross-docking, linear and integer programming have received the most attention. The paper informs researchers interested in investigating cross-docking through an integrated perspective of the research gaps in this domain. This paper systematically reviews the literature on cross-docking, identifies the major research areas, and provides a survey of the techniques and models adopted by researchers in the areas related to cross-docking.

    How do systematic reviews incorporate risk of bias assessments into the synthesis of evidence? A methodological study

    Background: Systematic reviews (SRs) are expected to critically appraise included studies and privilege those at lowest risk of bias (RoB) in the synthesis. This study examines if and how critical appraisals inform the synthesis and interpretation of evidence in SRs.

    Methods: All SRs published in March–May 2012 in 14 high-ranked medical journals, and a sample from the Cochrane library, were systematically assessed by two reviewers to determine if and how: critical appraisal was conducted; RoB was summarised at study, domain and review levels; and RoB appraisals informed the synthesis process.

    Results: Of the 59 SRs studied, all except six (90%) conducted a critical appraisal of the included studies, with most using or adapting existing tools. Almost half of the SRs reported critical appraisal in a manner that did not allow readers to determine which studies included in a review were most robust. RoB assessments were not incorporated into synthesis in one-third (20) of the SRs, with their consideration more likely when reviews focused on randomised controlled trials. Common methods for incorporating critical appraisals into the synthesis process were sensitivity analysis, narrative discussion and exclusion of studies at high RoB. Nearly half of the reviews which investigated multiple outcomes and carried out study-level RoB summaries did not consider the potential for RoB to vary across outcomes.

    Conclusions: The conclusions of the SRs, published in major journals, are frequently uninformed by the critical appraisal process, even when conducted. This may be particularly problematic for SRs of public health topics, which often draw on diverse study designs.
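One of the incorporation methods the study found in practice, sensitivity analysis, amounts to re-pooling effect estimates after excluding high-RoB studies. The sketch below uses fixed-effect inverse-variance pooling with invented study data, purely to illustrate the mechanics.

```python
def pool(studies):
    """Fixed-effect inverse-variance pooled estimate from (effect, se) pairs."""
    weights = [1.0 / se**2 for _, se in studies]
    effects = [e for e, _ in studies]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# (effect size, standard error, risk-of-bias rating) -- hypothetical data.
studies = [
    (0.30, 0.10, "low"),
    (0.25, 0.15, "low"),
    (0.80, 0.20, "high"),   # outlying high-RoB study
]

all_pooled = pool([(e, se) for e, se, _ in studies])
low_only   = pool([(e, se) for e, se, rob in studies if rob == "low"])
```

If excluding the high-RoB study moves the pooled estimate noticeably, as it does here, the review's conclusions should acknowledge that sensitivity, which is exactly the step the study finds is often skipped.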

    Visualizing a Field of Research: A Methodology of Systematic Scientometric Reviews

    Systematic scientometric reviews, empowered by scientometric and visual analytic techniques, offer opportunities to improve the timeliness, accessibility, and reproducibility of conventional systematic reviews. While increasingly accessible science mapping tools enable end users to visualize the structure and dynamics of a research field, a common bottleneck in current practice is the construction of a collection of scholarly publications as the input to the subsequent scientometric analysis and visualization. End users often face a dilemma in the preparation process: the more they know about a knowledge domain, the easier it is for them to find the relevant data to meet their needs adequately; the less they know, the harder the problem becomes. What can we do to avoid missing something valuable but beyond our initial description? In this article, we introduce a flexible and generic methodology, cascading citation expansion, to increase the quality of constructing a bibliographic dataset for systematic reviews. Furthermore, the methodology simplifies the conceptualization of globalism and localism in science mapping and unifies them on a consistent and continuous spectrum. We demonstrate an application of the methodology to the research of literature-based discovery and compare five datasets constructed based on three use scenarios, namely a conventional keyword-based search (one dataset), an expansion process starting with a groundbreaking article of the knowledge domain (two datasets), and an expansion process starting with a recently published review article by a prominent expert in the domain (two datasets). The unique coverage of each of the datasets is inspected through network visualization overlays with reference to other datasets in a broad and integrated context.
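The expansion idea can be sketched as a breadth-first cascade over a citation graph: start from seed article(s) and repeatedly add everything that cites, or is cited by, the current set. The toy graph and the fixed round count below are assumptions for illustration; in practice the links would come from a bibliographic database and the stopping rule would be more nuanced.

```python
# cites[a] = set of papers that a cites (toy data).
cites = {
    "seed": {"A", "B"},
    "A": {"C"},
    "B": set(),
    "C": set(),
    "D": {"seed"},   # D cites the seed
    "E": {"D"},
}

def expand(seeds, cites, rounds=2):
    """Breadth-first cascade over both citing and cited-by links."""
    # Invert the graph once so cited-by lookups are cheap.
    cited_by = {}
    for src, targets in cites.items():
        for tgt in targets:
            cited_by.setdefault(tgt, set()).add(src)
    collected = set(seeds)
    frontier = set(seeds)
    for _ in range(rounds):
        nxt = set()
        for paper in frontier:
            nxt |= cites.get(paper, set()) | cited_by.get(paper, set())
        frontier = nxt - collected
        collected |= frontier
    return collected
```

Each extra round widens the collection from the seed's local neighbourhood toward the global citation structure, which is the "consistent and continuous spectrum" between localism and globalism the abstract describes.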

    “Hybrid war(fare)” in the digital media under the national domain of the Republic of Croatia: a systematic review

    Systematic reviews synthesize data from primary sources and offer a different form of insight into the problem. In this regard, the review presented here provides the results of a detailed analysis of digital media content, available in the national domain of the Republic of Croatia, that used the terms "hybrid war(fare)", often a label for contemporary wars. It was undertaken in early 2020 and considers a total of 360 individual items of content identified using Google tools. The results provide information regarding the sources, frequencies, and typical cases and types of information in which the terms appeared. Initially used for their proper purpose, use of these terms peaked by the end of 2017, when they became a sort of buzzword that began to be abused. The authors therefore propose possible measures to counteract this problem efficiently, hoping that this evidence-based analysis contributes to the body of knowledge in the field.

    Effectiveness of Generative Artificial Intelligence for Scientific Content Analysis

    Generative artificial intelligence (GenAI) in general, and large language models (LLMs) in particular, are highly fashionable. As they have the ability to generate coherent output based on prompts in natural language, they are promoted as tools to free knowledge workers from tedious tasks such as content writing, customer support and routine computer code generation. Unsurprisingly, their application is also attractive to professionals in the research domain, where mundane and laborious tasks, such as literature screening, are commonplace. We evaluate Vertex AI 'text-bison', a foundational LLM model, in a real-world academic scenario by replicating parts of a popular systematic review in the information management domain. By comparing the results of a zero-shot LLM-based approach with those of the original study, we gather evidence on the suitability of state-of-the-art general-purpose LLMs for the analysis of scientific content. We show that the LLM-based approach delivers good scientific content analysis performance for a general classification problem (ACC = 0.9), acceptable performance for a domain-specific classification problem (ACC = 0.8) and borderline performance for a text comprehension problem (ACC ≈ 0.69). We conclude that some content analysis tasks with moderate accuracy requirements may be supported by current LLMs. As the technology will evolve rapidly in the foreseeable future, studies on large corpora, where some inaccuracies are tolerable, or workflows that prepare large data sets for human processing, may increasingly benefit from the capabilities of GenAI.
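The evaluation step described above reduces to comparing zero-shot LLM labels against the original study's human labels and reporting accuracy. The label values below are invented for illustration; a real run would obtain `llm_labels` by prompting the model once per item.

```python
def accuracy(human, llm):
    """Fraction of items where the LLM label matches the human label."""
    assert len(human) == len(llm)
    return sum(h == m for h, m in zip(human, llm)) / len(human)

# Hypothetical classification labels for five papers.
human_labels = ["empirical", "empirical", "conceptual", "review", "empirical"]
llm_labels   = ["empirical", "conceptual", "conceptual", "review", "empirical"]

acc = accuracy(human_labels, llm_labels)   # 4 of 5 match -> 0.8
```

An ACC of 0.8 on a run like this would sit at the study's "acceptable for domain-specific classification" tier, illustrating how the reported thresholds map onto raw agreement counts.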