A Patient-Centered Framework for Evaluating Digital Maturity of Health Services: A Systematic Review
© Kelsey Flott, Ryan Callahan, Ara Darzi, Erik Mayer. Background: Digital maturity is the extent to which digital technologies are used as enablers to deliver a high-quality health service. Extensive literature exists about how to assess the components of digital maturity, but it has not been used to design a comprehensive framework for evaluation. Consequently, the measurement systems that do exist are limited to evaluating digital programs within one service or care setting, meaning that digital maturity evaluation is not accounting for the needs of patients across their care pathways. Objective: The objective of our study was to identify the best methods and metrics for evaluating digital maturity and to create a novel, evidence-based tool for evaluating digital maturity across patient care pathways. Methods: We systematically reviewed the literature to find the best methods and metrics for evaluating digital maturity. We searched the PubMed database for all papers relevant to digital maturity evaluation. Papers were selected if they provided insight into how to appraise digital systems within the health service and if they indicated the factors that constitute or facilitate digital maturity. Papers were analyzed to identify methodology for evaluating digital maturity and indicators of digitally mature systems. We then used the resulting information about methodology to design an evaluation framework. Following that, the indicators of digital maturity were extracted and grouped into increasing levels of maturity and operationalized as metrics within the evaluation framework. Results: We identified 28 papers as relevant to evaluating digital maturity, from which we derived 5 themes. The first theme concerned general evaluation methodology for constructing the framework (7 papers). The following 4 themes were the increasing levels of digital maturity: resources and ability (6 papers), usage (7 papers), interoperability (3 papers), and impact (5 papers). The framework includes metrics for each of these levels at each stage of the typical patient care pathway. Conclusions: The framework uses a patient-centric model that departs from traditional service-specific measurements and allows for novel insights into how digital programs benefit patients across the health system.
Large-scale event extraction from literature with multi-level gene normalization
Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins to broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/).
Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/ under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
Evaluating telemedicine: A focus on patient pathways
Evaluations of telemedicine have sought to assess various measures of effectiveness (e.g., diagnostic accuracy), efficiency (e.g., cost), and engagement (e.g., patient satisfaction) to determine its success. Few studies, however, have looked at evaluating the organizational impact of telemedicine, which involves technology and process changes that affect the way that it is used and accepted by patients and clinicians alike. This study reviews and discusses the conceptual issues in telemedicine research and proposes a fresh approach for evaluating telemedicine. First, we advance a patient pathway perspective, as most of the existing studies view telemedicine as a support to a singular rather than multiple aspects of a health care process. Second, to conceptualize patient pathways and understand how telemedicine impacts upon them, we propose simulation as a tool to enhance understanding of the traditional and telemedicine patient pathways.
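The simulation idea proposed above can be illustrated with a minimal Monte Carlo sketch. The stages, durations, and the assumption that a remote consultation replaces an in-person referral visit are all hypothetical choices for illustration, not figures from the study.

```python
# Hypothetical sketch: model a patient pathway as a sequence of stages
# with random durations (in days), then compare a traditional pathway
# against a telemedicine variant where a remote consultation replaces
# the in-person referral visit. All numbers are illustrative.
import random

def simulate_pathway(stages, n_patients=10_000, seed=42):
    """Mean end-to-end pathway time; each stage is (mean_days, spread_days)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_patients):
        total += sum(rng.uniform(m - s, m + s) for m, s in stages)
    return total / n_patients

traditional = [(14, 7), (10, 5), (21, 7)]   # GP wait, referral visit, specialist wait
telemedicine = [(14, 7), (2, 1), (21, 7)]   # remote consult replaces referral visit

print(round(simulate_pathway(traditional), 1))   # about 45 days on average
print(round(simulate_pathway(telemedicine), 1))  # about 37 days on average
```

Even a toy model like this makes the pathway-level effect visible, which a single-service evaluation would miss.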
Stable Feature Selection for Biomarker Discovery
Feature selection techniques have long been the workhorse in biomarker discovery applications. Surprisingly, the stability of feature selection with respect to sampling variations has long been under-considered; only recently has this issue received growing attention. In this article, we review existing stable feature selection methods for biomarker discovery using a generic hierarchical framework. We have two objectives: (1) to provide an overview of this new yet fast-growing topic for convenient reference; (2) to categorize existing methods under an expandable framework for future research and development.
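The stability notion discussed above can be made concrete with a small sketch: resample the data, rerun a selector, and measure how much the selected feature sets agree. The correlation-based selector and the Jaccard-index stability score are illustrative choices, not the methods surveyed in the article.

```python
# Hypothetical sketch: quantify feature-selection stability under
# sampling variation via bootstrap resampling and the mean pairwise
# Jaccard similarity of the selected feature sets.
import numpy as np

def select_top_k(X, y, k):
    """Rank features by absolute Pearson correlation with y; keep the top k."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    denom = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum()) + 1e-12
    scores = np.abs(X.T @ y) / denom
    return set(np.argsort(scores)[-k:])

def stability(X, y, k=10, n_boot=20, seed=0):
    """Mean pairwise Jaccard similarity of feature sets across bootstraps."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sets = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        sets.append(select_top_k(X[idx], y[idx], k))
    sims = [len(a & b) / len(a | b)
            for i, a in enumerate(sets) for b in sets[i + 1:]]
    return float(np.mean(sims))

# Toy data: 5 informative features out of 50.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200)
print(round(stability(X, y, k=5), 2))  # close to 1.0 when selection is stable
```

A score near 1 means the same biomarkers are chosen regardless of the sample drawn; scores near 0 indicate the instability the article highlights.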
Fast Identification of Biological Pathways Associated with a Quantitative Trait Using Group Lasso with Overlaps
Where causal SNPs (single nucleotide polymorphisms) tend to accumulate within biological pathways, the incorporation of prior pathway information into a statistical model is expected to increase the power to detect true associations in a genetic association study. Most existing pathway-based methods rely on marginal SNP statistics and do not fully exploit the dependence patterns among SNPs within pathways. We use a sparse regression model, with SNPs grouped into pathways, to identify causal pathways associated with a quantitative trait. Notable features of our "pathways group lasso with adaptive weights" (P-GLAW) algorithm include the incorporation of all pathways in a single regression model, an adaptive pathway weighting procedure that accounts for factors biasing pathway selection, and the use of a bootstrap sampling procedure for the ranking of important pathways. P-GLAW takes account of the presence of overlapping pathways and uses a novel combination of techniques to optimise model estimation, making it fast to run, even on whole-genome datasets. In a comparison study with an alternative pathways method based on univariate SNP statistics, our method demonstrates high sensitivity and specificity for the detection of important pathways, showing the greatest relative gains in performance where marginal SNP effect sizes are small.
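A minimal sketch of the core idea (not the authors' P-GLAW implementation) follows: a group lasso where overlapping groups are handled by the common trick of duplicating each SNP column once per pathway it belongs to, solved by proximal gradient descent with group-wise soft-thresholding. The data, group structure, and penalty weight are all illustrative assumptions.

```python
# Illustrative group lasso with overlapping groups. Overlap is handled by
# duplicating shared columns (the latent/duplication trick); the model is
# fit by proximal gradient descent with a group soft-threshold step.
import numpy as np

def group_lasso(X, y, groups, lam=0.1, lr=None, n_iter=500):
    """groups: list of column-index lists (may overlap); returns per-group coefs."""
    cols = [j for g in groups for j in g]            # duplicate shared columns
    Xd = X[:, cols]
    if lr is None:
        lr = len(y) / np.linalg.norm(Xd, 2) ** 2     # 1 / Lipschitz constant
    bounds, start = [], 0                            # slices per group in Xd
    for g in groups:
        bounds.append((start, start + len(g)))
        start += len(g)
    w = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        grad = Xd.T @ (Xd @ w - y) / len(y)          # gradient of 1/(2n)||Xw-y||^2
        w = w - lr * grad
        for (a, b), g in zip(bounds, groups):        # prox: group soft-threshold
            norm = np.linalg.norm(w[a:b])
            scale = max(0.0, 1 - lr * lam * np.sqrt(len(g)) / (norm + 1e-12))
            w[a:b] *= scale
    return [w[a:b] for a, b in bounds]

# Toy example: two overlapping "pathways"; only the first contains causal SNPs.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = X[:, [0, 1]] @ np.array([2.0, -1.5]) + 0.1 * rng.normal(size=300)
coefs = group_lasso(X, y, groups=[[0, 1, 2], [2, 3, 4, 5]], lam=0.5)
print([round(float(np.linalg.norm(c)), 2) for c in coefs])  # first group large, second near 0
```

The group penalty zeroes out whole pathways at once, which is what gives the method its power when individual SNP effects are too weak to detect marginally.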
Speaking for Themselves: Advocates' Perspectives on Evaluation
"Speaking for Themselves: Advocates' Perspectives on Evaluation" will give you a better understanding of advocates' views on evaluation, the advocacy strategies and capacities they find effective, and current evaluation practices. Based on Innovation Network's research, the report includes recommendations for advocates, funders, and evaluators. Both the research and publication were made possible by the Annie E. Casey Foundation and The Atlantic Philanthropies.
N-terminal modification of proteins with o-aminophenols.
The synthetic modification of proteins plays an important role in chemical biology and biomaterials science. These fields provide a constant need for chemical tools that can introduce new functionality in specific locations on protein surfaces. In this work, an oxidative strategy is demonstrated for the efficient modification of N-terminal residues on peptides and N-terminal proline residues on proteins. The strategy uses o-aminophenols or o-catechols that are oxidized to active coupling species in situ using potassium ferricyanide. Peptide screening results have revealed that many N-terminal amino acids can participate in this reaction, and that proline residues are particularly reactive. When applied to protein substrates, the reaction shows a stronger requirement for the proline group. Key advantages of the reaction include its fast second-order kinetics and ability to achieve site-selective modification in a single step using low concentrations of reagent. Although free cysteines are also modified by the coupling reaction, they can be protected through disulfide formation and then liberated after N-terminal coupling is complete. This provides access to doubly functionalized bioconjugates that can be difficult to prepare using other methods.
External Evaluation of Event Extraction Classifiers for Automatic Pathway Curation: An extended study of the mTOR pathway
This paper evaluates the impact of various event extraction systems on automatic pathway curation using the popular mTOR pathway. We quantify the impact of training data sets as well as different machine learning classifiers and show that some improve the quality of automatically extracted pathways.