451 research outputs found

    Learning Therapy Strategies from Demonstration Using Latent Dirichlet Allocation

    The use of robots in stroke rehabilitation has become a popular trend in rehabilitation robotics. However, despite the acknowledged value of customized service for individual patients, research on programming adaptive therapy for individual patients has received little attention. The goal of the current study is to model teletherapy sessions in the form of a generative process for autonomous therapy that approximates the demonstrations of the therapist. The resulting autonomous therapy programs may imitate the strategy that the therapist might have employed and reinforce therapeutic exercises between teletherapy sessions. We propose to encode the therapist’s decision criteria in terms of the patient’s motor performance features. Specifically, in this work, we apply Latent Dirichlet Allocation to the batch data collected during teletherapy sessions between a single stroke patient and a single therapist. Using the resulting models, therapeutic exercise targets are generated and verified with the same therapist who generated the data.
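A minimal sketch of the core modelling step described above: fitting an LDA topic model to session "documents". Here each document is a bag of motor-performance feature tokens; the feature names and data are invented for illustration and are not from the study.

```python
# Sketch: fit LDA to toy "session documents" made of performance-feature
# tokens (grip_weak, reach_slow, ... are illustrative placeholders).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sessions = [
    "grip_weak reach_slow grip_weak tremor_high",
    "reach_slow tremor_high reach_slow grip_weak",
    "range_full speed_high range_full accuracy_high",
    "speed_high accuracy_high range_full speed_high",
]

# Bag-of-words counts over the feature vocabulary
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(sessions)

# Two latent "strategies"; each session gets a distribution over them
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each row is a probability distribution over the 2 topics
print(doc_topics.shape)  # (4, 2)
```

The per-session topic distributions (rows of `doc_topics`) are the kind of generative representation from which new exercise targets could be sampled.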

    Cooperative Semantic Information Processing for Literature-Based Biomedical Knowledge Discovery

    Given that data is increasing exponentially every day, extracting and understanding the information, themes, and relationships in large collections of documents is increasingly important to researchers in many areas. In this paper, we present a cooperative semantic information processing system to help biomedical researchers understand and discover knowledge in large numbers of titles and abstracts from PubMed query results. Our system is based on a prevalent technique, topic modeling, which is an unsupervised machine learning approach for discovering the set of semantic themes in a large set of documents. In addition, we apply a natural language processing technique to transform the “bag-of-words” assumption of topic models into a “bag-of-important-phrases” assumption, and we build an interactive visualization tool using a modified, open-source Topic Browser. Finally, we conduct two experiments to evaluate the approach. The first evaluates whether the “bag-of-important-phrases” approach identifies semantic themes better than the standard “bag-of-words” approach. This is an empirical study in which human subjects evaluate the quality of the resulting topics using a standard “word intrusion test” to determine whether subjects can identify a word (or phrase) that does not belong in the topic. The second is a qualitative empirical study evaluating how well the system helps biomedical researchers explore a set of documents to discover previously hidden semantic themes and connections. The methodology for this study has been successfully used to evaluate other knowledge-discovery tools in biomedicine.
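The "bag-of-important-phrases" idea can be illustrated in a few lines: merge known multiword phrases into single tokens before counting, so a topic model treats each phrase as one unit. The phrase list below is a stand-in; the paper extracts important phrases automatically with an NLP pipeline.

```python
# Sketch: rewrite known phrases as underscore-joined tokens so downstream
# bag-of-words counting sees them as single vocabulary items.
import re

# Illustrative phrase list (in the paper this comes from an NLP step)
PHRASES = ["gene expression", "breast cancer", "topic model"]

def merge_phrases(text, phrases=PHRASES):
    """Rewrite each known phrase as a single underscore-joined token."""
    for p in phrases:
        text = re.sub(re.escape(p), p.replace(" ", "_"), text, flags=re.IGNORECASE)
    return text

doc = "A topic model of gene expression studies in breast cancer."
print(merge_phrases(doc))
# "A topic_model of gene_expression studies in breast_cancer."
```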

    Harvesting Wisdom on Social Media for Business Decision Making

    The proliferation of social media provides significant opportunities for organizations to obtain wisdom-of-the-crowds (WOC)-type data for decision making. However, critical challenges associated with collecting such data exist. For example, the openness of social media tends to increase the possibility of social influence, which may diminish group diversity, one of the conditions of WOC. In this research-in-progress paper, a new social media data analytics framework is proposed. It is equipped with well-designed mechanisms (e.g., using different discussion processes to overcome social influence issues and boost social learning) to generate data, and it employs state-of-the-art big data technologies, e.g., Amazon EMR, for data processing and storage. Design science research methodology is used to develop the framework. This paper contributes to the WOC and social media adoption literature by providing a practical approach for organizations to effectively generate WOC-type data from social media to support their decision making.

    What Does Twitter Say About Self-Regulated Learning? Mapping Tweets From 2011 to 2021

    Social network services such as Twitter are important venues that can be used as rich data sources to mine public opinions about various topics. In this study, we used Twitter to collect data on one of the fastest-growing theories in education, Self-Regulated Learning (SRL), and carried out further analysis to investigate what Twitter says about SRL. This work uses three main analysis methods: descriptive analysis, topic modeling, and geocoding analysis. The collected dataset consists of 54,070 relevant SRL tweets posted between 2011 and 2021. The descriptive analysis uncovers a growing discussion of SRL on Twitter from 2011 until 2018, followed by a marked decrease until the day of collection. For topic modeling, the text mining technique of Latent Dirichlet Allocation (LDA) was applied and revealed insights on computationally processed topics. Finally, the geocoding analysis uncovers a diverse community from all over the world, yet a higher-density representation of users from the Global North was identified. Further implications are discussed in the paper.
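The descriptive step (tracing tweet volume over time) amounts to a simple per-year aggregation. A minimal stdlib-only sketch, with fabricated timestamps standing in for the real dataset:

```python
# Sketch: count tweets per year to trace interest over time.
# The records below are invented examples, not data from the study.
from collections import Counter
from datetime import datetime

tweets = [
    {"created_at": "2017-03-01", "text": "self-regulated learning tips"},
    {"created_at": "2018-06-12", "text": "SRL strategies in MOOCs"},
    {"created_at": "2018-11-30", "text": "metacognition and SRL"},
    {"created_at": "2020-01-05", "text": "SRL during remote learning"},
]

# Per-year tweet counts, the raw material for a volume-over-time plot
per_year = Counter(datetime.fromisoformat(t["created_at"]).year for t in tweets)
print(sorted(per_year.items()))  # [(2017, 1), (2018, 2), (2020, 1)]
```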

    Integrating Natural Language Processing and Interpretive Thematic Analyses to Gain Human-Centered Design Insights on HIV Mobile Health: Proof-of-Concept Analysis

    Background: HIV mobile health (mHealth) interventions often incorporate interactive peer-to-peer features. The user-generated content (UGC) created by these features can offer valuable design insights by revealing what topics and life events are most salient for participants, which can serve as targets for subsequent interventions. However, unstructured, textual UGC can be difficult to analyze. Interpretive thematic analyses can preserve rich narratives and latent themes but are labor-intensive and therefore scale poorly. Natural language processing (NLP) methods scale more readily but often produce only coarse descriptive results. Recent calls to advance the field have emphasized the untapped potential of combined NLP and qualitative analyses toward advancing user attunement in next-generation mHealth. Objective: In this proof-of-concept analysis, we gain human-centered design insights by applying hybrid consecutive NLP-qualitative methods to UGC from an HIV mHealth forum. Methods: UGC was extracted from Thrive With Me, a web app intervention for men living with HIV that includes an unstructured peer-to-peer support forum. In Python, topics were modeled by latent Dirichlet allocation. Rule-based sentiment analysis scored interactions by emotional valence. Using a novel ranking standard, the experientially richest and most emotionally polarized segments of UGC were condensed and then analyzed thematically in Dedoose. Design insights were then distilled from these themes. Results: The refined topic model detected K=3 topics: A: disease coping; B: social adversities; C: salutations and check-ins. Strong intratopic themes included HIV medication adherence, survivorship, and relationship challenges. Negative UGC often involved strong negative reactions to external media events. Positive UGC often focused on gratitude for survival, well-being, and fellow users’ support.
Conclusions: With routinization, hybrid NLP-qualitative methods may be viable to rapidly characterize UGC in mHealth environments. Design principles point toward opportunities to align mHealth intervention features with the organically occurring uses captured in these analyses, for example, by foregrounding inspiring personal narratives and expressions of gratitude, or de-emphasizing anger-inducing media.
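Rule-based sentiment scoring of the kind used in the Methods can be illustrated with a toy lexicon scorer. The study used an off-the-shelf rule-based analyzer; this simplified stand-in (with an invented lexicon) only shows the principle of summing valence weights per token.

```python
# Toy rule-based sentiment: sum signed lexicon weights over tokens.
# The lexicon entries below are illustrative placeholders.
POS = {"grateful": 2.0, "support": 1.5, "thankful": 2.0, "well": 1.0}
NEG = {"angry": -2.0, "afraid": -1.5, "alone": -1.0}

def valence(text):
    """Sum lexicon weights over lowercased tokens; >0 positive, <0 negative."""
    score = 0.0
    for tok in text.lower().split():
        score += POS.get(tok, 0.0) + NEG.get(tok, 0.0)
    return score

print(valence("so grateful for this support"))  # 3.5
print(valence("angry and afraid"))              # -3.5
```

Ranking posts by the magnitude of such scores is one way to surface the "most emotionally polarized" segments for closer qualitative reading.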

    Chatbots: History, uses, classification and response pool generation techniques

    Rapidly evolving technology has brought chatbots (or computer conversational agents) back into the limelight. Although chatbots have existed for many decades, tracing back to the Turing Test and the ELIZA chatbot, modern Machine Learning techniques have allowed more complex and more accurate chatbot implementations, resulting in chatbots dominating the customer support field in many businesses and appearing in virtual assistants, with high potential to benefit education, health, and other important facets of modern life. This thesis first delves into chatbot applications, their history, their popular uses, and the fields in which they can be used. Subsequently, emphasis is placed on their classification based on implementation and on the range of topics in which they specialize. Then, methods of evaluating a chatbot are elaborated upon and a closed-domain retrieval chatbot model is analyzed. Finally, as the focus of the thesis, we analyze and present our own techniques for generating a response pool for a more efficient retrieval model, and we evaluate the results.
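A retrieval chatbot of the closed-domain kind mentioned above can be sketched in a few lines: pick the canned response whose paired prompt is most similar to the user's query. This TF-IDF cosine-similarity version is a generic illustration, not the thesis's model, and the prompt/response pairs are invented placeholders.

```python
# Sketch of closed-domain retrieval: match the query against stored
# prompts (TF-IDF cosine) and return the paired canned response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented (prompt, response) pairs standing in for a real response pool
pairs = [
    ("how do i reset my password", "Use the 'Forgot password' link on the login page."),
    ("what are your opening hours", "We are open 9am to 5pm, Monday to Friday."),
    ("how can i contact support", "Email us or call the support line."),
]
prompts = [p for p, _ in pairs]

vectorizer = TfidfVectorizer()
prompt_vecs = vectorizer.fit_transform(prompts)

def respond(query):
    """Return the response whose prompt best matches the query."""
    q = vectorizer.transform([query])
    best = cosine_similarity(q, prompt_vecs).argmax()
    return pairs[best][1]

print(respond("i forgot my password, how to reset it?"))
```

The quality of such a model depends heavily on the response pool, which is exactly what the thesis's generation techniques target.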

    Bibliometric Studies and Worldwide Research Trends on Global Health

    Global health, conceived as a discipline, aims to train, research, and respond to problems of a transboundary nature in order to improve health and health equity at the global level. The current worldwide situation is shaped by globalization, and therefore the concept of global health involves not only health-related issues but also those related to the environment and climate change. Accordingly, in this Special Issue, the problems related to global health have been addressed from a bibliometric approach in four main areas: environmental issues, diseases, health, education and society.

    Service design from staffing to outsourcing

    The term outsourcing has become a conventional means of describing anything associated with the transaction of services that enables client organisations to blur core activities and thereby reduce their internal workforce and costs. The main objective of this study is to confirm a gap in detailed and specific reviews of formats and economic transactions through non-standard forms of employment, namely in a service design model from Staffing to Outsourcing. The literature review was performed using text mining and topic modelling techniques to group relevant topics, decrease the likelihood of human bias, and bring robustness to the analysis. The results are reflected in a conceptual state-of-the-art diagram that will serve as a basis for new discussions.

    Challenges in designing an online healthcare platform for personalised patient analytics

    The growing number and size of clinical medical records (CMRs) represent new opportunities for finding meaningful patterns and patient treatment pathways, while at the same time presenting a huge challenge for clinicians. Indeed, CMR repositories share many characteristics of the classical ‘big data’ problem, requiring specialised expertise for data management, extraction, and modelling. In order to help clinicians make better use of their time to process data, they will need more adequate data processing and analytical tools, beyond the capabilities offered by existing general-purpose database management systems or database servers. One modelling technique that can readily benefit from the availability of big data, yet remains relatively unexplored, is personalised analytics, where a model is built for each patient. In this paper, we present a strategy for designing a secure healthcare platform for personalised analytics by focusing on three aspects: (1) data representation, (2) data privacy and security, and (3) personalised analytics enabled by machine learning algorithms.
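The "one model per patient" idea behind personalised analytics can be sketched directly: fit an independent model to each patient's own records instead of one pooled model. The data, feature, and patient names below are invented for illustration.

```python
# Sketch of personalised analytics: one regression model per patient.
# Records map an invented patient id to (features, outcomes) arrays,
# e.g. a dose measurement vs. a measured response.
import numpy as np
from sklearn.linear_model import LinearRegression

records = {
    "patient_a": (np.array([[1.0], [2.0], [3.0]]), np.array([2.1, 4.0, 6.2])),
    "patient_b": (np.array([[1.0], [2.0], [3.0]]), np.array([0.9, 1.1, 1.0])),
}

# An independent model fitted to each patient's own data
models = {pid: LinearRegression().fit(X, y) for pid, (X, y) in records.items()}

# The fitted slopes differ per patient, unlike a single pooled model
for pid, model in models.items():
    print(pid, round(float(model.coef_[0]), 2))
```

At platform scale, the privacy and security aspects the paper discusses become central, since each model is tied to an identifiable patient's records.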