    Wikiglass: a learning analytic tool for visualizing collaborative wikis of secondary school students

    This demo presents Wikiglass, a learning analytic tool for visualizing the statistics and timelines of collaborative wikis built by secondary school students during group projects in inquiry-based learning. The tool adopts a modular structure so that it can be reused with different data sources. The client side is built with the Model-View-Controller pattern and the AngularJS framework, whereas the server side manages the database and data sources. The tool is currently used by secondary school teachers in Hong Kong and is undergoing evaluation and improvement.
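
    A minimal Python sketch of the modular data-source idea described above; all class, method and field names are hypothetical illustrations, not taken from Wikiglass itself:

        # Hypothetical sketch: a pluggable data-source layer so the server
        # side can be reused with different wiki back ends.
        from abc import ABC, abstractmethod


        class WikiDataSource(ABC):
            """Common interface the server side programs against."""

            @abstractmethod
            def fetch_revisions(self, group_id: str) -> list[dict]:
                """Return revision records: author and bytes changed."""


        class MediaWikiSource(WikiDataSource):
            def fetch_revisions(self, group_id: str) -> list[dict]:
                # A real implementation would query the wiki's API or database.
                return [{"author": "student1", "bytes_changed": 412}]


        def contribution_stats(source: WikiDataSource, group_id: str) -> dict:
            """Aggregate per-author totals for the visualization client."""
            totals: dict[str, int] = {}
            for rev in source.fetch_revisions(group_id):
                totals[rev["author"]] = totals.get(rev["author"], 0) + rev["bytes_changed"]
            return totals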

    Quality of care as an individual concept: Proposition of a three-level concept for clinical practice.

    BACKGROUND Quality in health care is a complex framework with many components. The word "quality" is used in different official settings and contexts (public health, certification, patient safety). At the individual and team levels, the perception of quality is heterogeneous, and the term is often used beyond its theoretical framework. It therefore remains a challenge to describe the perceived quality of care in the clinical setting. The aim of this paper is to present a simple concept that can be used to visually define the perceived quality of care for the individual health care professional. METHODS/CONCEPT We present an experience-based concept that uses different levels of "quality of care" to guide the supervision of health care professionals (residents) and quality goal setting in teams, under the assumption that the ambition of any health care professional is to provide excellence in care. Three perceived levels of quality of care are defined, described, and visualized: a) security, b) comfort, and c) perfection. The "comfort level" denotes a sustainable level of care at which the optimal balance between good patient care and resource use is achieved. Excellence of care lies between the comfort and perfection levels. The practical application of this concept is described in three settings: 1) the threshold for asking advice from the supervisor (resident physicians), 2) supervision/coaching discussions between residents and supervisors, and 3) the analysis of perceived quality of care and goal setting within the team. CONCLUSION A simplified, purpose-built, and well-defined concept for visually depicting clinicians' perception of quality of care can be useful in clinical practice, for the supervision of residents, and for team dynamics.

    Scater: pre-processing, quality control, normalization and visualization of single-cell RNA-seq data in R.

    MOTIVATION: Single-cell RNA sequencing (scRNA-seq) is increasingly used to study gene expression at the level of individual cells. However, preparing raw sequence data for further analysis is not a straightforward process. Biases, artifacts and other sources of unwanted variation are present in the data, requiring substantial time and effort to be spent on pre-processing, quality control (QC) and normalization. RESULTS: We have developed the R/Bioconductor package scater to facilitate rigorous pre-processing, quality control, normalization and visualization of scRNA-seq data. The package provides a convenient, flexible workflow to process raw sequencing reads into a high-quality expression dataset ready for downstream analysis. scater provides a rich suite of plotting tools for single-cell data and a flexible data structure that is compatible with existing tools and can be used as infrastructure for future software development. AVAILABILITY AND IMPLEMENTATION: The open-source code, along with installation instructions, vignettes and case studies, is available through Bioconductor at http://bioconductor.org/packages/scater. CONTACT: [email protected]. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
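
    scater itself is an R/Bioconductor package; as a language-neutral illustration of the kind of QC filtering and normalization it automates, here is a minimal Python sketch. The function name, thresholds and normalization choice are assumptions for illustration, not scater's API:

        # Illustrative sketch of per-cell QC filtering followed by
        # library-size normalization and log transformation.
        import numpy as np

        def qc_filter_and_normalize(counts: np.ndarray,
                                    min_counts: int = 1000,
                                    min_genes: int = 500) -> np.ndarray:
            """counts: genes x cells matrix of raw read counts."""
            total_counts = counts.sum(axis=0)          # library size per cell
            genes_detected = (counts > 0).sum(axis=0)  # detected genes per cell
            keep = (total_counts >= min_counts) & (genes_detected >= min_genes)
            filtered = counts[:, keep]
            # Scale each cell by its relative library size, then log-transform;
            # a common baseline analogous to scater's normalization step.
            size_factors = filtered.sum(axis=0) / filtered.sum(axis=0).mean()
            return np.log1p(filtered / size_factors)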

    Weighted Gene Co-expression Network Analysis of Glioblastoma Gene Expression Microarray Data

    Glioblastoma is a highly aggressive and lethal form of brain cancer characterized by a complex molecular landscape. Understanding the underlying gene expression patterns and their relationships is essential for unraveling the mechanisms driving this disease. In this study, we conducted a weighted gene co-expression network analysis (WGCNA) on glioblastoma gene expression microarray data to identify co-expressed gene modules and potential key regulatory genes associated with the disease. Using a comprehensive dataset of glioblastoma samples, we performed quality control and preprocessing to ensure the reliability of the data. WGCNA was then employed to construct a weighted gene co-expression network, enabling the identification of modules of co-expressed genes. The correlation between these modules and clinical characteristics such as patient survival and tumor grade was assessed. Additionally, we conducted functional enrichment analysis to gain insight into the biological processes and pathways associated with the identified gene modules. Our findings revealed distinct gene modules associated with glioblastoma progression and patient outcomes. Notably, we identified key hub genes within these modules, which may serve as potential biomarkers or therapeutic targets. In conclusion, our weighted gene co-expression network analysis of glioblastoma gene expression microarray data sheds light on the complex gene interactions and regulatory networks underlying this aggressive brain cancer. This knowledge may ultimately contribute to the development of novel diagnostic and therapeutic strategies, improving the prognosis for glioblastoma patients.
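
    The core WGCNA computation, a soft-thresholded correlation network followed by hierarchical module detection, can be sketched briefly in Python; the soft power beta = 6 below is a conventional default, not a value reported in this study:

        # Minimal sketch of WGCNA-style module detection.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def wgcna_modules(expr: np.ndarray, beta: int = 6, n_modules: int = 8):
            """expr: samples x genes expression matrix."""
            corr = np.corrcoef(expr.T)        # gene-gene correlation matrix
            adjacency = np.abs(corr) ** beta  # soft thresholding
            dissimilarity = 1.0 - adjacency   # network dissimilarity
            # Average-linkage clustering on the dissimilarity, then cut the
            # tree into modules of co-expressed genes.
            condensed = dissimilarity[np.triu_indices_from(dissimilarity, k=1)]
            tree = linkage(condensed, method="average")
            return fcluster(tree, t=n_modules, criterion="maxclust")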

    Behavior Change Techniques for Reducing Interviewer Contributions to Total Survey Error

    In the total survey error (TSE) paradigm, nonsampling errors can be difficult to quantify, especially errors that occur in the data collection phase of face-to-face surveys. Field interviewers play "… dual roles as recruiters and data collectors…" (West et al., 2018), and are therefore potential contributors to both nonresponse error and measurement error. Recent advances in technology, paradata, and performance dashboards offer an opportunity to observe interviewer effects almost at the source in real time, and to intervene quickly to curtail them. Edwards, Maitland, and Connor (2017) report on an experimental program using rapid feedback of CARI coding results (within 72 hours of the field interview) to improve interviewers' question-asking behavior. Mohadjer and Edwards (2018) describe a system for visualizing quality metrics and displaying alerts that inform field supervisors of anomalies (such as very short interviews) detected in data transmitted in the previous 24 hours. These features allow supervisors to quickly investigate, intervene, and correct interviewer behavior that departs from the data collection protocol. From the interviewer's perspective, these interactions can be viewed as a form of learning "on the job," consistent with the literature on best practices in adult learning; from the survey manager's perspective, they can be an important feature of a continuous quality improvement program for repeated cross-sectional and longitudinal surveys.

    We build on these initiatives to focus on specific areas where interviewer error can be a major contributor to TSE. We plan an experiment to investigate how rapid feedback based on CARI coding can affect a survey's key statistics. The experiment will be embedded in a continuous national face-to-face household survey that has an ongoing protocol of weekly CARI coding and interviewer feedback. The treatment will be rapid feedback on question-asking behavior for several critical items in the CAPI instrument, items that are known to be problematic for interviewers and respondents and that produce data that do not benchmark well to other sources. The survey's field staff is organized into a number of reporting regions. The treatment group will be a subset of interviewers who work in several of these regions; the control group will be the remaining regions, following the existing protocol.

    We also plan a descriptive study of contact attempt records, based on other surveys that equip interviewers with smartphones. Interviewers can enter records on the phone or on the laptop computer used to conduct CAPI interviews. Entry on the smartphone was designed to increase the proportion of records entered shortly after the event occurred and to increase recording accuracy. The records are available for review by supervisors, who monitor smartphone usage and advise interviewers on contact strategies, based in part on the contact record history. We will investigate whether interviewers who enter records primarily on the phones generate more records, more paradata per record, and more accurate paradata. Other studies have shown that the quality of paradata on contact attempts can be quite poor, yet it is a primary input for propensity modeling (an element of many responsive survey designs). Thus the quality of the contact records can have an indirect role in nonresponse error.
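
    The kind of 24-hour anomaly alert cited above (e.g., flagging very short interviews) can be illustrated with a small Python sketch; the field names and the 10-minute threshold are hypothetical, not taken from the systems described:

        # Illustrative sketch: flag interviews transmitted in the last 24
        # hours whose duration falls below an assumed threshold.
        from datetime import datetime, timedelta

        def short_interview_alerts(records: list[dict],
                                   min_minutes: float = 10.0) -> list[dict]:
            """records: paradata rows with 'interviewer', 'duration_min',
            and 'transmitted' (a datetime)."""
            cutoff = datetime.now() - timedelta(hours=24)
            return [r for r in records
                    if r["transmitted"] >= cutoff and r["duration_min"] < min_minutes]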

    PROTEOFORMER 2.0: further developments in the ribosome profiling-assisted proteogenomic hunt for new proteoforms

    PROTEOFORMER is a pipeline that enables the automated processing of data derived from ribosome profiling (RIBO-seq, i.e. the sequencing of ribosome-protected mRNA fragments). As such, genome-wide ribosome occupancies lead to the delineation of data-specific translation product candidates, and these can improve mass spectrometry-based identification. Since its first publication, different upgrades, new features and extensions have been added to the PROTEOFORMER pipeline. Some of the most important upgrades include P-site offset calculation during mapping, comprehensive data pre-exploration, the introduction of two alternative proteoform calling strategies and extended pipeline output features. These novelties are illustrated by analyzing ribosome profiling data of human HCT116 and Jurkat cells. The different proteoform calling strategies are used alongside one another and are ultimately combined with reference sequences from UniProt. Matching mass spectrometry data are searched against this extended search space with MaxQuant. Overall, besides annotated proteoforms, the pipeline leads to the identification and validation of different categories of new proteoforms, including translation products of up- and downstream open reading frames, 5′ and 3′ extended and truncated proteoforms, single amino acid variants, splice variants and translation products of so-called noncoding regions. Further, proof of concept is reported for the improvement of spectrum matching by including Prosit, a deep neural network strategy that adds extra fragmentation spectrum intensity features to the analysis. In the light of ribosome profiling-driven proteogenomics, it is shown that this allows validating the spectrum matches of newly identified proteoforms with elevated stringency. These updates and novel conclusions provide new insights and lessons for the ribosome profiling-based proteogenomic research field. More practical information on the pipeline, raw code, the user manual (README) and explanations on the different modes of availability can be found at the GitHub repository of PROTEOFORMER: https://github.com/Biobix/proteoformer.
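
    P-site offset calculation, one of the highlighted upgrades, is commonly done by finding, per read length, the modal distance between read 5′ ends and annotated start codons. The Python sketch below illustrates that general idea under assumed data structures; it is not PROTEOFORMER's actual implementation:

        # Conceptual sketch of P-site offset estimation from reads
        # overlapping annotated start codons (same chromosome/strand).
        from collections import Counter, defaultdict

        def estimate_psite_offsets(reads: list[dict],
                                   start_codons: set[int]) -> dict[int, int]:
            """reads: dicts with 'five_prime_pos' and 'length'."""
            offsets_by_length: dict[int, Counter] = defaultdict(Counter)
            for read in reads:
                for start in start_codons:
                    offset = start - read["five_prime_pos"]
                    if 0 <= offset < read["length"]:   # start codon inside read
                        offsets_by_length[read["length"]][offset] += 1
            # The modal offset per read length is taken as the P-site offset.
            return {length: counts.most_common(1)[0][0]
                    for length, counts in offsets_by_length.items()}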

    Computational design and optimization of electro-physiological sensors

    Electro-physiological sensing devices are becoming increasingly common in diverse applications. However, designing such sensors in compact form factors and for high-quality signal acquisition is a challenging task even for experts; it is typically done using heuristics and requires extensive training. Our work proposes a computational approach for designing multi-modal electro-physiological sensors. By employing an optimization-based approach alongside an integrated predictive model for multiple modalities, compact sensors can be created that offer an optimal trade-off between high signal quality and small device size. The task is assisted by a graphical tool that allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. They demonstrate that generated designs can achieve an optimal balance between the size of the sensor and its signal acquisition capability, outperforming expert-generated solutions.
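
    The optimization idea, scoring candidate designs by a trade-off between predicted signal quality and device size, can be sketched in Python as follows; the quality predictor and the weight are stand-ins for the paper's learned model and designer preferences, not its actual code:

        # Illustrative sketch of a quality-vs-size trade-off search.
        from dataclasses import dataclass

        @dataclass
        class SensorDesign:
            area_cm2: float
            electrode_positions: list

        def predicted_quality(design: SensorDesign) -> float:
            # Stand-in for a learned predictive model of signal quality.
            return len(design.electrode_positions) / (1.0 + design.area_cm2)

        def best_design(candidates: list,
                        size_weight: float = 0.5) -> SensorDesign:
            """Pick the candidate maximizing quality minus a size penalty;
            size_weight encodes the designer's preference."""
            return max(candidates,
                       key=lambda d: predicted_quality(d) - size_weight * d.area_cm2)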

    The Impact of Social Media Sentiment on Market Share for Higher Education Institutions

    In recent years, university enrollment and market share have been much discussed among administrators. With declining student populations and more educational pathways available to students, capturing the attention of prospective students is of increasing interest. At the same time, social media has become a significant factor in the lives of current and future generations, influencing not only trends but also decision-making. As a result, higher education institutions must maintain a requisite social media presence and manage their social media reputation to influence potential students' intent to enroll. This study explores these components and how one influences the other. A quantitative exploratory study utilizing social media data was deployed for this research, allowing examination of the level of influence social media posts have on a student's decision to apply to an institution of higher education. Social media sentiment for various institutions was used to develop a net sentiment score, which was then compared to the number of applications received yearly; it was posited that the two would be positively correlated. Regression, correlation, and time series analyses were used to explore the relationship between the variables. This study contributes to practice and theory by identifying tools to assist institutions in monitoring social media sentiment, forecasting applicant pool size, and highlighting social media reputation as a statistically significant element in students' college choices. The inclusion of social media sentiment as a factor in the information component of choice models adds to the current literature on college choice. This study therefore provides a valuable contribution to understanding social media and its impact on higher education institutions' reputation and applicant pool size.
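
    A net sentiment score of the kind described is often defined as (positive - negative) / total mentions; the Python sketch below uses that common definition as an assumption, with clearly labeled toy values, to show the correlation step:

        # Illustrative sketch: net sentiment score and its Pearson
        # correlation with yearly application counts.
        import numpy as np

        def net_sentiment(positive: int, negative: int, total: int) -> float:
            """Assumed definition: (positive - negative) / total mentions."""
            return (positive - negative) / total if total else 0.0

        # Toy values for illustration only, not data from the study.
        scores = np.array([0.12, 0.05, -0.03, 0.20, 0.15])     # yearly net sentiment
        applications = np.array([4800, 4500, 4200, 5400, 5100])  # applications received
        r = np.corrcoef(scores, applications)[0, 1]              # Pearson correlation
        print(f"Pearson r = {r:.2f}")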