5,099 research outputs found

    Investigating the meaning of 'good' or 'very good' patient evaluations of care in English general practice: A mixed methods study

    This is the final version, available from the publisher via the DOI in this record. The data set is available on request from the authors: please email Jenni Burt ([email protected]) for details. Objective: To examine concordance between responses to patient experience survey items evaluating doctors' interpersonal skills, and subsequent patient interview accounts of their experiences of care. Design: Mixed methods study integrating data from patient questionnaires completed immediately after a video-recorded face-to-face consultation with a general practitioner (GP), and subsequent interviews with the same patients which included playback of the recording. Setting: 12 general practices in rural, urban and inner-city locations in six areas of England. Participants: 50 patients (66% female, aged 19-96 years) consulting face-to-face with 32 participating GPs. Main outcome measures: Positive responses to interpersonal skills items in a post-consultation questionnaire ('good' and 'very good') were compared with experiences reported during a subsequent video elicitation interview (categorised as positive, negative or neutral by independent clinical raters) when reviewing that aspect of care. Results: We extracted 230 textual statements from 50 interview transcripts relating to the evaluation of GPs' interpersonal skills. Raters classified 70.9% (n=163) of these statements as positive, 19.6% (n=45) as neutral and 9.6% (n=22) as negative. Comments made by individual patients during interviews did not always express the same sentiment as their responses to the questionnaire. Where questionnaire responses indicated that interpersonal skills were 'very good', 84.6% of interview statements concerning that item were classified as positive. However, where patients rated interpersonal skills as 'good', only 41.9% of interview statements were classified as positive, and 18.9% as negative.
Conclusions: Positive responses on patient experience questionnaires can mask important negative experiences which patients describe in subsequent interviews. Absolute patient experience scores in feedback and public reporting should be interpreted with caution, and clinicians should not be complacent following receipt of 'good' feedback. Relative scores are more easily interpretable when used to compare the performance of providers. NHS Cambridgeshire and Peterborough CCG; National Institute for Health Research.

    Heart rate Encapsulation and Response Tool using Sentiment Analysis

    Users of every system expect it to improve. Providing feedback to owners or management used to be difficult, but with the advent of technology it has become easy: users can now post their comments through online blogs, Android apps and websites. However, the enormous volume of data piling up every second makes it difficult to analyze. In this paper, sentiment analysis for analyzing the comments and reviews of a hospital management system is demonstrated with real-time data. The tools, algorithms and methodology that yield accurate results are described. Experimental results indicate 90% accuracy for the proposed system. The generated review report helps hospital management identify positive and negative feedback, which in turn assists them in improving their facilities, leading not only to greater customer satisfaction but also to enhanced business processes.
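The abstract above does not publish its method; as a rough illustration of the simplest form of the idea, a lexicon-based scorer can label hospital feedback as positive, negative or neutral. The word lists and threshold below are invented for illustration and are not the authors' system (which reports about 90% accuracy):

```python
# Minimal lexicon-based sentiment scorer for patient feedback.
# Illustrative sketch only: the word lists are assumptions, not the
# tooling described in the paper above.

POSITIVE = {"good", "great", "excellent", "helpful", "clean", "friendly", "quick"}
NEGATIVE = {"bad", "poor", "rude", "dirty", "slow", "unhelpful", "worst"}

def sentiment(comment: str) -> str:
    """Classify a comment by counting lexicon hits."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The nurses were friendly and helpful"))   # positive
print(sentiment("Very slow service and rude reception"))   # negative
```

Real systems replace the fixed lexicon with a trained classifier, but the input/output contract (free text in, polarity label out) is the same.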

    Patients’ online descriptions of their experiences as a measure of healthcare quality

    Introduction Patients are describing their healthcare experiences online using rating websites. There has been substantial professional opposition to this, but the government in England has promoted the idea as a mechanism to improve healthcare quality. Little is known about the content and effect of healthcare rating and review sites. This thesis aims to examine comments left online and assess whether they might be a useful measure of healthcare quality. Method I used a variety of approaches to examine patients' comments and ratings about care online. I examined the comments left on the NHS Choices website, and analysed whether there was a relationship between the comments and traditional patient surveys or other measures of clinical quality. I used discrete choice experiments to look at the value patients place on online care reviews when making decisions about which hospital to attend. I used natural language processing techniques to explore the comments left in free-text reviews. I analysed the tweets sent to NHS hospitals in England over a year to see if they contained useful information for understanding care quality. Results The analysis of ratings on NHS Choices demonstrates that reviews left online are largely positive. There are associations between online ratings and both traditional survey measures of patient experience and outcome measures. There is evidence of a selection bias in those who both read and contribute ratings online, with younger age groups and those with higher educational attainment more likely to use them. Discrete choice experiments suggest that people will use online ratings in their decisions about where to seek care, and the effect is similar to that of a recommendation by friends and family. I found that sentiment analysis techniques can be used to classify free-text comments left online into meaningful information that relates to data in the national patient surveys.
However, the analysis of comments on Twitter found that only 11% of tweets were related to care quality. Conclusions Patients rating their care online may have a useful role as a measure of care quality. It has some drawbacks, not least the non-random group of people who leave their comments. However, it provides information that is complementary to current approaches to measuring quality and patient experiences, may be used by patients in their decision-making, and provides timely information for quality improvement. I hypothesise that it is possible to measure a 'cloud of patient experience' from all of the sources where patients describe their care online, including social media, and use this to make inferences about care quality. I find this idea has potential, but there are many technical and practical limitations that need to be overcome before it is useful.
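The Twitter finding above (only 11% of tweets related to care quality) implies a relevance-filtering step before any quality analysis. A toy version of such a filter, with cue words and example tweets invented for illustration:

```python
# Toy relevance filter: keep only tweets that mention a care-quality cue
# before running further analysis. The cue list is an assumption, not the
# method used in the thesis above.

CARE_CUES = {"staff", "ward", "nurse", "doctor", "waiting", "treatment", "care"}

def is_care_related(tweet: str) -> bool:
    """True if any care-quality cue word appears in the tweet."""
    return bool(CARE_CUES & set(tweet.lower().split()))

tweets = [
    "Great talk at the hospital open day!",
    "Waited four hours in A&E, staff were lovely though",
    "Parking machines broken again",
]
related = [t for t in tweets if is_care_related(t)]
print(len(related))  # 1
```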

    Development of machine learning sentiment analyzer and quality classifier (MLSAQC) and its application in analysing hospital patient satisfaction from Facebook reviews in Malaysia

    Background: Patient online reviews (POR) on social media platforms have been proposed as novel strategies for assessing patient satisfaction and monitoring healthcare quality. Social media data, however, are unstructured and huge in volume. Furthermore, no empirical study has been undertaken in Malaysia on the use of social media data and the perceived quality of care in hospitals based on POR, or on the relationship between these variables and hospital accreditation. The objectives of this study were to (1) develop a machine learning system for automatically classifying Facebook (FB) reviews of public hospitals in Malaysia using service quality (SERVQUAL) dimensions and sentiment analysis, (2) determine the validity of FB reviews as a supplement to a standard patient satisfaction survey, (3) investigate associations between SERVQUAL dimensions, sentiment and patient satisfaction and (4) determine the associations between hospital accreditation status, patient satisfaction and sentiment. Method: Between 2017 and 2019, we collected comments from 48 official public hospital FB pages. By manually annotating several batches of randomly chosen reviews, we constructed a machine learning quality classifier (MLQC) based on the SERVQUAL model and a machine learning sentiment analyzer (MLSA). The classifiers were trained using logistic regression (LR), naïve Bayes (NB), support vector machines (SVM) and other approaches. Each classifier's performance was evaluated using 5-fold cross-validation. We used logistic regression analysis to determine the associations. Results: The average F1-score for topic classification was between 0.687 and 0.757 across all models. SVM consistently outperformed the other approaches in 5-fold cross-validation of each SERVQUAL dimension and in sentiment analysis. We analysed 1852 reviews in total and found that 72.1% of positive reviews and 27.9% of negative reviews were accurately recognised by the MLSA.
Also, 73.5% of respondents reported being satisfied with public hospital services, while 26.5% reported being dissatisfied. Using the MLQC, 240 reviews were classified as tangible, 1257 as reliability, 125 as responsiveness, 356 as assurance, and 1174 as empathy. After adjusting for hospital covariates, all SERVQUAL dimensions except tangible were associated with positive sentiment. Furthermore, after correcting for hospital variables, all SERVQUAL dimensions except tangible and assurance were significantly linked with patient dissatisfaction. However, no statistically significant association was identified between hospital accreditation and either online sentiment or patient satisfaction. Conclusion: Using data acquired from FB reviews and machine learning algorithms, we have created a pragmatic and practical strategy for eliciting patient perceptions of service quality and supplementing standard patient satisfaction surveys. Additionally, online patient reviews provide a hitherto untapped measure of quality, which may benefit all healthcare stakeholders. Our findings complement earlier studies on the use of FB reviews, in addition to other approaches, for assessing the quality of hospital care in Malaysia. The findings also give critical data that will assist hospital administrators in capitalising on POR through real-time monitoring and evaluation of service quality.
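The evaluation protocol described in this abstract (5-fold cross-validation with per-fold F1 scoring) can be sketched in a few lines. The toy reviews, labels and keyword "model" below are assumptions standing in for the study's SVM/NB/LR classifiers:

```python
# Sketch of 5-fold cross-validation with F1 scoring. A trivial keyword
# classifier stands in for trained models; all data here is illustrative.

def kfold_indices(n, k=5):
    """Yield (train, test) index lists; fold i takes every k-th item."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        yield [j for f in folds if f is not folds[i] for j in f], folds[i]

def f1_score(y_true, y_pred):
    """Binary F1 from true-positive / false-positive / false-negative counts."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Toy data: label 1 = positive review.
texts = ["great staff", "terrible wait", "kind nurses", "rude doctor",
         "clean ward", "long delay", "fast team", "poor service",
         "great care", "bad experience"]
labels = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

def predict(text):  # stand-in "model": positive iff a positive cue appears
    return int(any(w in text for w in ("great", "kind", "clean", "helpful")))

scores = [f1_score([labels[i] for i in test], [predict(texts[i]) for i in test])
          for _, test in kfold_indices(len(texts))]
print(round(sum(scores) / len(scores), 2))  # 0.8
```

In practice the train indices would be used to fit the model before scoring each held-out fold; the keyword rule above needs no fitting, so only the test folds are used.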

    Stakeholders in safety: patient reports on unsafe clinical behaviors distinguish hospital mortality rates

    Patient safety research has adapted concepts and methods from the workplace safety literature (safety climate, incident reporting) to explain why patients experience unintentional harm during clinical treatment in hospital (adverse events). Consequently, patient safety has primarily been studied through data generated by health care staff. However, because adverse events relate to patient injuries, patients and their families may also have valuable insights for investigating patient safety in hospitals. We conceptualized this idea by proposing that patients are stakeholders in hospital safety who, through their experiences of treatment and independence from institutional culture, can provide valid and supplementary data on unsafe clinical care. In 59 United Kingdom hospitals we investigated whether patient evaluations of care (N = 23,287 surveys) and the safety information contained in health care complaints (N = 2,017, containing 2.5 million words) explained variance in excess patient deaths (hospital mortality) beyond staff evaluations of care (N = 49,302 surveys) and incident reports (N = 242,859). The severity of reports on unsafe clinical behaviors (error and neglect) communicated in patients' health care complaints explained additional variance in hospital-level mortality rates beyond that of staff-generated data. The results indicate that patients provide valid and supplementary data on unsafe care in hospitals. Generalized to other organizational domains, the findings suggest that nonemployee stakeholders should be included in assessments of safety performance if they experience or observe unsafe behaviors. Theoretically, it is necessary to examine further how concepts such as safety climate can incorporate the observations and outcomes of stakeholders in safety.

    KLOSURE: Closing in on open–ended patient questionnaires with text mining

    Background: Knee injury and Osteoarthritis Outcome Score (KOOS) is an instrument used to quantify patients' perceptions of their knee condition and associated problems. It is administered as a 42-item closed-ended questionnaire in which patients self-assess five outcomes: pain, other symptoms, activities of daily living, sport and recreation activities, and quality of life. We developed KLOG as a 10-item open-ended version of the KOOS questionnaire in an attempt to obtain deeper insight into patients' opinions, including their unmet needs. However, the open-ended nature of the questionnaire incurs analytical overhead associated with the interpretation of responses. The goal of this study was to automate such analysis. We implemented KLOSURE as a system for mining free-text responses to the KLOG questionnaire. It consists of two subsystems, one concerned with feature extraction and the other with classification of feature vectors. Feature extraction is performed by a set of four modules whose main functionalities are linguistic pre-processing, sentiment analysis, named entity recognition and lexicon lookup, respectively. Outputs produced by each module are combined into feature vectors, whose structure varies across the KLOG questions. Finally, Weka, a machine learning workbench, was used for classification of the feature vectors. Results: The precision of the system varied between 62.8% and 95.3%, whereas the recall varied from 58.3% to 87.6% across the 10 questions. The overall performance in terms of F-measure varied between 59.0% and 91.3%, with an average of 74.4% and a standard deviation of 8.8. Conclusions: We demonstrated the feasibility of mining open-ended patient questionnaires. By automatically mapping free-text answers onto a Likert scale, we can effectively measure the progress of rehabilitation over time. In comparison to traditional closed-ended questionnaires, our approach offers much richer information that can be utilised to support clinical decision making. In conclusion, we demonstrated how text mining can be used to combine the benefits of qualitative and quantitative analysis of patient experiences.
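The pipeline shape described above (per-answer feature extraction, then classification onto a Likert-like scale) can be sketched with hand-written rules in place of the trained Weka classifiers. The word lists, features and 3-point scale below are illustrative assumptions, not the published KLOSURE system:

```python
# Sketch of a feature-extraction + classification pipeline that maps a
# free-text questionnaire answer onto a small ordinal scale. Rules and
# lexicons are invented for illustration.

PAIN_WORDS = {"pain", "ache", "sore", "hurts"}
NEGATION = {"no", "not", "never", "without"}

def features(answer: str):
    """Return (pain_mentions, negation_count, word_count) for one answer."""
    words = answer.lower().split()
    return (sum(w.strip(".,") in PAIN_WORDS for w in words),
            sum(w in NEGATION for w in words),
            len(words))

def likert(answer: str) -> int:
    """Map an answer to 1 (severe) .. 3 (none); hand-written rules stand in
    for the trained classifier."""
    pain, neg, _ = features(answer)
    if pain and not neg:
        return 1
    if pain and neg:
        return 3
    return 2

print(likert("My knee hurts every morning"))  # 1
print(likert("No pain when walking"))         # 3
```

In the real system each question would have its own feature vector layout and its own classifier, as the abstract notes.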

    Understanding Customer Insights Through Big Data: Innovations in Brand Evaluation in the Automotive Industry

    Abstract. Insights gained from social media platforms are pivotal for businesses seeking to understand their products’ present position. While it is possible to use consulting services focusing on surveys about a product or brand, such methods may yield limited insights. By contrast, on social media, people frequently express their individual and unique feelings about products openly and informally. With this in mind, we aim to provide rigorous methodologies that enable businesses to gain significant insights into how their brands and products are represented on social media. This study employs conjoint analysis to lay the analytical groundwork for developing positive and negative sentiment frameworks to evaluate the brands of three prominent emerging automotive companies in Indonesia, anonymized as “HMI,” “YMI,” and “SMI.” We conducted a survey with a sample size of n=67 to analyze which phrasings were important for constructing our wording dictionary. A series of data processing operations were carried out, including the collection, capture, formatting, cleansing, and transformation of data. Our findings indicate a distinct ranking of the most positively and negatively perceived companies among social media users. As a direct management-related implication, our proposed data analysis methods could assist the industry in applying the same rigor to evaluating companies’ products and brands directly from social media users’ perspective. Keywords: Brand image, social media, data analytics, sentiment analysis, conjoint analysis
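The dictionary-based ranking the study builds up to can be illustrated with a toy version; the posts, word lists and resulting order below are fabricated for illustration and bear no relation to the study's survey-derived dictionary or actual findings:

```python
# Toy dictionary-based brand sentiment ranking. Brand names reuse the
# anonymized labels from the abstract; everything else is invented.

POS = {"reliable", "comfortable", "love", "efficient"}
NEG = {"breakdown", "noisy", "expensive", "disappointed"}

posts = {
    "HMI": ["love my hmi, so reliable", "hmi service was expensive"],
    "YMI": ["ymi breakdown again", "disappointed with ymi"],
    "SMI": ["smi is comfortable and efficient"],
}

def brand_score(texts):
    """Net sentiment: positive word hits minus negative word hits."""
    words = [w for t in texts for w in t.lower().split()]
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

ranking = sorted(posts, key=lambda b: brand_score(posts[b]), reverse=True)
print(ranking)  # ['SMI', 'HMI', 'YMI'] - most to least positive
```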

    Understanding patient experience from online medium

    Improving patient experience at hospitals leads to better health outcomes. To improve it, we must first understand and interpret patients' written feedback. Patient-generated texts, such as patient reviews found on RateMD or posts in online health forums such as WebMD, are venues where patients describe their experiences. Given the massive amount of patient-generated text that exists online, an automated approach to identifying the topics of a patient experience taxonomy is the only realistic option for analyzing it. However, not only is there a lack of taxonomy-annotated data for these media, but word usage is also colloquial, making it challenging to apply standardized NLP techniques to identify the topics present in patient-generated texts. Furthermore, patients may describe multiple topics in a single text, which drastically increases the complexity of the task. In this thesis, we address the challenges of comprehensively and automatically understanding patient experience from patient-generated texts. We first built a set of rich semantic features to represent the corpus, which helps capture meanings that may not typically be captured by the bag-of-words (BOW) model. Unlike the BOW model, semantic feature representation captures the context and in-depth meaning behind each word in the corpus. To the best of our knowledge, no existing work on understanding patient experience from patient-generated texts examines which semantic features help capture the characteristics of the corpus. Furthermore, patients generally discuss multiple topics when they write, and these topics are frequently interdependent. There are two types of topic interdependency: between topics that are semantically similar, and between topics that are not. We built a constraint-based deep neural network classifier to capture the two types of topic interdependency and empirically show the classification performance improvement over baseline approaches. Past research has also indicated that patient experiences differ across patient segments [1-4]. Segments can be based on demographics, for instance race, gender or geographical location. Similarly, they can be based on health status, for example whether the patient is taking medication, has a particular disease, or has been readmitted to hospital. To better understand patient experiences, we built an automated approach to identifying patient segments, focusing on whether the person has stopped taking their medication. The technique used to identify the patient segment is general enough that we envision it being applicable to other types of patient segments. With a comprehensive understanding of patient experiences, we envision an application system through which clinicians can directly read the most relevant patient-generated texts pertaining to their interests. The system can capture topics from the patient experience taxonomy that are of interest to each clinician or designated expert, and we believe it is one of many approaches that can ultimately help improve the patient experience.
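The core task above is multi-label topic tagging with interdependencies between topics. A toy sketch of that idea, with keyword matching standing in for the thesis's constraint-based deep neural network; the topics, keywords and the single constraint are illustrative assumptions:

```python
# Sketch of multi-label topic tagging with one interdependency constraint.
# Keyword lookup stands in for a trained classifier; all lists are invented.

TOPIC_KEYWORDS = {
    "staff_attitude": {"rude", "friendly", "kind", "dismissive"},
    "wait_time": {"wait", "waiting", "hours", "delay"},
    "medication": {"medication", "prescription", "dose", "pills"},
    "side_effects": {"nausea", "dizzy", "headache", "side"},
}

def tag_topics(text: str) -> set:
    """Return every topic whose keywords appear, then apply constraints."""
    words = set(text.lower().split())
    topics = {t for t, kw in TOPIC_KEYWORDS.items() if words & kw}
    # Interdependency constraint: side effects imply the medication topic.
    if "side_effects" in topics:
        topics.add("medication")
    return topics

print(sorted(tag_topics("The staff were rude and I felt dizzy after my pills")))
```

The constraint step is the interesting part: per-topic decisions are made first, then adjusted so semantically linked labels stay consistent, which is the role the thesis assigns to its constraint-based classifier.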

    A review of opinion mining and sentiment classification framework in social networks

    The Web has dramatically changed the way we express opinions about products we have purchased and used, or services we have received, across various industries. Opinions and reviews can easily be posted on the Web, such as on merchant sites, review portals, blogs and Internet forums. These data are commonly referred to as user-generated content or user-generated media. Both product manufacturers and potential customers are very interested in this online 'word-of-mouth': it gives manufacturers information on their customers' likes and dislikes, along with positive and negative comments on their products where available, providing better knowledge of their products' limitations and advantages over competitors; and it gives potential customers useful, first-hand information on products and/or services to aid their purchase decision-making. This paper discusses existing work on opinion mining and sentiment classification of online customer feedback and reviews, and evaluates the different techniques used. It identifies the areas covered by the evaluated papers, pointing out areas that are well covered by many researchers and areas that are neglected in opinion mining and sentiment classification and thus open for future research.