
    Predicting Paid Certification in Massive Open Online Courses

    Get PDF
    Massive open online courses (MOOCs) have been proliferating because of the free or low-cost offering of content for learners, attracting the attention of many stakeholders across the entire educational landscape. Since 2012, coined as “the Year of the MOOCs”, several platforms have gathered millions of learners in just a decade. Nevertheless, the certification rate of both free and paid courses has been low, and only about 4.5–13% and 1–3%, respectively, of the total number of enrolled learners obtain a certificate at the end of their courses. Still, most research concentrates on completion, ignoring the certification problem, and especially its financial aspects. Thus, the research described in the present thesis aimed to investigate paid certification in MOOCs, for the first time, in a comprehensive way, and as early as the first week of the course, by exploring its various levels. First, the latent correlation between learner activities and their paid certification decisions was examined by (1) statistically comparing the activities of non-paying learners with course purchasers and (2) predicting paid certification using different machine learning (ML) techniques. Our temporal (weekly) analysis showed statistical significance at various levels when comparing the activities of non-paying learners with those of the certificate purchasers across the five courses analysed. Furthermore, we used the learner’s activities (number of step accesses, attempts, correct and wrong answers, and time spent on learning steps) to build our paid certification predictor, which achieved promising balanced accuracies (BAs), ranging from 0.77 to 0.95. Having employed simple predictions based on a few clickstream variables, we then analysed more in-depth what other information can be extracted from MOOC interaction (namely discussion forums) for paid certification prediction. However, to better explore the learners’ discussion forums, we built, as an original contribution, MOOCSent, a cross- platform review-based sentiment classifier, using over 1.2 million MOOC sentiment-labelled reviews. MOOCSent addresses various limitations of the current sentiment classifiers including (1) using one single source of data (previous literature on sentiment classification in MOOCs was based on single platforms only, and hence less generalisable, with relatively low number of instances compared to our obtained dataset;) (2) lower model outputs, where most of the current models are based on 2-polar iii iv classifier (positive or negative only); (3) disregarding important sentiment indicators, such as emojis and emoticons, during text embedding; and (4) reporting average performance metrics only, preventing the evaluation of model performance at the level of class (sentiment). Finally, and with the help of MOOCSent, we used the learners’ discussion forums to predict paid certification after annotating learners’ comments and replies with the sentiment using MOOCSent. This multi-input model contains raw data (learner textual inputs), sentiment classification generated by MOOCSent, computed features (number of likes received for each textual input), and several features extracted from the texts (character counts, word counts, and part of speech (POS) tags for each textual instance). 
    This experiment adopted various deep predictive approaches, specifically those that allow multi-input architectures, to investigate early (i.e., weekly) whether data obtained from MOOC learners’ interaction in discussion forums can predict learners’ purchase decisions (certification). Considering the staggeringly low rate of paid certification in MOOCs, the present thesis contributes to the knowledge and field of MOOC learner analytics by predicting paid certification, for the first time, at a scale that is comprehensive (with data from over 200 thousand learners across five courses from different disciplines), actionable (analysing learners’ decisions from the first week of the course) and longitudinal (with 23 runs from 2013 to 2017). Its contributions are: (1) investigating various conventional and deep ML approaches for predicting paid certification in MOOCs using learner clickstreams (Chapter 5) and course discussion forums (Chapter 7); (2) building the largest MOOC sentiment classifier (MOOCSent), based on learners’ reviews of courses from the leading MOOC platforms, namely Coursera, FutureLearn and Udemy, which handles emojis and emoticons using dedicated lexicons containing over three thousand corresponding explanatory words/phrases; and (3) proposing and developing, for the first time, a multi-input model for predicting certification from discussion-forum data, which synchronously processes the textual (comments and replies) and numerical (number of likes posted and received, sentiments) data from the forums, adopting a suitable classifier for each type of data, as explained in detail in Chapter 7.
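
    To make the multi-input architecture concrete, here is a minimal Keras sketch; layer sizes, feature names, and dimensions are illustrative assumptions, not the thesis’s actual configuration:

        # Hedged sketch of a multi-input certification predictor: a textual
        # branch for forum comments plus a numerical branch for computed
        # features (likes, sentiment, word/character counts).
        import tensorflow as tf

        VOCAB_SIZE = 20_000   # assumed vocabulary size for forum text
        SEQ_LEN = 200         # assumed padded length of a comment/reply
        NUM_FEATS = 4         # e.g. likes, sentiment, word count, char count

        # Textual branch: token ids -> embedding -> recurrent encoder.
        text_in = tf.keras.Input(shape=(SEQ_LEN,), name="tokens")
        t = tf.keras.layers.Embedding(VOCAB_SIZE, 128)(text_in)
        t = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(t)

        # Numerical branch: computed and extracted features.
        num_in = tf.keras.Input(shape=(NUM_FEATS,), name="numeric")
        n = tf.keras.layers.Dense(32, activation="relu")(num_in)

        # Fuse both branches and predict the purchase (certification) decision.
        z = tf.keras.layers.Concatenate()([t, n])
        z = tf.keras.layers.Dense(64, activation="relu")(z)
        out = tf.keras.layers.Dense(1, activation="sigmoid", name="will_certify")(z)

        model = tf.keras.Model(inputs=[text_in, num_in], outputs=out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(name="auc")])

    The point of the two branches is that each data type gets an encoder suited to it (recurrent for sequences, dense for tabular features) before fusion, which is the “suitable classifier for each type of data” idea described above.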

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Full text link
    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate the latest developments and applications of deep learning in these disciplines. However, the literature lacks an exploration of the applications of deep learning across all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL exhibits accuracy in prediction and analysis, making it a powerful computational tool, and it can adapt and optimize itself, making it effective in processing data with no prior training. Despite this independence from prior knowledge, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary.
    Comment: 64 pages, 3 figures, 3 tables
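
    As a hedged illustration of “shared neurons for all activities and specialized neurons for particular tasks”, a minimal hard-parameter-sharing sketch follows; layer sizes and task shapes are assumptions for illustration:

        # Multi-task sketch: a shared trunk serves every task, while small
        # task-specific heads specialize.
        import tensorflow as tf

        inputs = tf.keras.Input(shape=(64,), name="features")

        # Shared trunk: parameters used by every task.
        h = tf.keras.layers.Dense(128, activation="relu", name="shared_1")(inputs)
        h = tf.keras.layers.Dense(64, activation="relu", name="shared_2")(h)

        # Specialized heads: parameters private to each task.
        out_a = tf.keras.layers.Dense(1, activation="sigmoid", name="task_a")(h)
        out_b = tf.keras.layers.Dense(10, activation="softmax", name="task_b")(h)

        model = tf.keras.Model(inputs, [out_a, out_b])
        model.compile(optimizer="adam",
                      loss={"task_a": "binary_crossentropy",
                            "task_b": "sparse_categorical_crossentropy"})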

    A Review of Deep Learning Models for Twitter Sentiment Analysis: Challenges and Opportunities

    Get PDF
    The microblogging site Twitter (re-branded to X in July 2023) is one of the most influential online social media websites; it offers a platform for the masses to communicate, express their opinions, and share information on a wide range of subjects and products, resulting in the creation of a large amount of unstructured data. This has attracted significant attention from researchers who seek to understand and analyze the sentiments contained within this massive user-generated text. The task of sentiment analysis (SA) entails extracting and identifying user opinions from text, and various lexicon- and machine learning-based methods have been developed over the years to accomplish this. However, deep learning (DL)-based approaches have recently become dominant due to their superior performance. This study briefly reviews standard preprocessing techniques and various word embeddings for data preparation. It then delves into a taxonomy to provide a comprehensive summary of DL-based approaches. In addition, the work compiles popular benchmark datasets, highlights the evaluation metrics employed for performance measurement, and lists resources available in the public domain to aid SA tasks. Furthermore, the survey discusses domain-specific practical applications of SA. Finally, the study concludes with various research challenges and outlines directions for further investigation.
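
    As an illustration of the preprocessing step such pipelines begin with, a minimal sketch follows; the cleaning rules and the example tweet are assumptions, not taken from the survey:

        # Typical tweet normalization before embedding lookup: lowercase,
        # strip URLs, anonymize mentions, keep hashtag words.
        import re

        def preprocess_tweet(text: str) -> list[str]:
            text = text.lower()
            text = re.sub(r"https?://\S+", "", text)   # drop URLs
            text = re.sub(r"@\w+", "@user", text)      # anonymize mentions
            text = re.sub(r"#", "", text)              # keep hashtag words
            return text.split()

        print(preprocess_tweet("Loving the new phone! @acme https://t.co/x #happy"))
        # ['loving', 'the', 'new', 'phone!', '@user', 'happy']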

    Workshop Proceedings of the 12th edition of the KONVENS conference

    Get PDF
    The 2014 edition of KONVENS is, even more than its predecessors, a forum for exchange: its main topic is the interaction between Computational Linguistics and Information Science, and the synergies that such interaction, cooperation, and integrated views can produce. This topic lies at the crossroads of different research traditions that deal with natural language as a container of knowledge and with methods to extract and manage knowledge that is linguistically represented. It is close to the heart of many researchers at the Institut für Informationswissenschaft und Sprachtechnologie of Universität Hildesheim: it has long been one of the institute’s research topics and has received even more attention over the last few years.

    Formalizing Multimedia Recommendation through Multimodal Deep Learning

    Full text link
    Recommender systems (RSs) offer personalized navigation experiences on online platforms, but recommendation remains a challenging task, particularly in specific scenarios and domains. Multimodality can help tap into richer information sources and construct more refined user/item profiles for recommendation. However, the existing literature lacks a shared and universal schema for modeling and solving the recommendation problem through the lens of multimodality. This work aims to formalize a general multimodal schema for multimedia recommendation. It provides a comprehensive literature review of multimodal approaches for multimedia recommendation from the last eight years, outlines the theoretical foundations of a multimodal pipeline, and demonstrates its rationale by applying it to selected state-of-the-art approaches. The work also conducts a benchmarking analysis of recent algorithms for multimedia recommendation within Elliot, a rigorous framework for evaluating recommender systems. The overall aim is to provide guidelines for designing and implementing the next generation of multimodal approaches in multimedia recommendation.
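
    To illustrate the kind of multimodal fusion such a pipeline formalizes, here is a hedged late-fusion sketch in NumPy; the dimensions, projections, and random embeddings are illustrative assumptions rather than any surveyed model:

        # Late fusion for item scoring: project pre-extracted visual and
        # textual item embeddings, concatenate, then score against a user
        # embedding by dot product.
        import numpy as np

        rng = np.random.default_rng(0)
        n_items, d_vis, d_txt, d = 100, 512, 768, 64

        W_vis = rng.normal(size=(d_vis, d)) * 0.01    # visual projection
        W_txt = rng.normal(size=(d_txt, d)) * 0.01    # textual projection

        item_vis = rng.normal(size=(n_items, d_vis))  # e.g. CNN features
        item_txt = rng.normal(size=(n_items, d_txt))  # e.g. text embeddings
        item_repr = np.concatenate([item_vis @ W_vis, item_txt @ W_txt], axis=1)

        user = rng.normal(size=(2 * d,))              # learned user embedding
        scores = item_repr @ user                     # one score per item
        top_k = np.argsort(-scores)[:10]              # recommend ten items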

    Geographic information extraction from texts

    Get PDF
    A large volume of unstructured texts containing valuable geographic information is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although great progress has been made in geographic information extraction from texts, unsolved challenges and issues remain, ranging from methods, systems, and data to applications and privacy. This workshop therefore provides a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.
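
    As a minimal illustration of the extraction task itself, the following sketch recognizes place names with spaCy (assuming the en_core_web_sm model is installed); production geoparsers additionally disambiguate each toponym against a gazetteer to resolve it to coordinates:

        # Toponym recognition: keep entities spaCy labels as geopolitical
        # entities (GPE) or locations (LOC).
        import spacy

        nlp = spacy.load("en_core_web_sm")
        doc = nlp("Flooding was reported along the Elbe near Dresden and Prague.")
        places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
        print(places)  # e.g. ['Dresden', 'Prague'] (model-dependent)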

    Information Refinement Technologies for Crisis Informatics: User Expectations and Design Implications for Social Media and Mobile Apps in Crises

    Get PDF
    In the past 20 years, mobile technologies and social media have become established not only in everyday life but also in crises, disasters, and emergencies. Large-scale events in particular, such as Hurricane Sandy in 2012 or the European floods of 2013, showed that citizens are not passive victims but active participants who use mobile and social information and communication technologies (ICT) for crisis response (Reuter, Hughes, et al., 2018). Accordingly, the research field of crisis informatics emerged as a multidisciplinary field that combines computing and social science knowledge of disasters and is rooted in disciplines such as human-computer interaction (HCI), computer science (CS), computer-supported cooperative work (CSCW), and information systems (IS). While citizens use personal ICT to respond to a disaster and cope with uncertainty, emergency services such as fire and police departments have started using available online data to increase situational awareness and improve decision making for a better crisis response (Palen & Anderson, 2016). When looking at even larger crises, such as the ongoing COVID-19 pandemic, it becomes apparent that the challenges of crisis informatics are amplified (Xie et al., 2020). Notably, information is often not available in perfect shape to assist crisis response: the dissemination of high-volume, heterogeneous, and highly semantic data by citizens, often referred to as big social data (Olshannikova et al., 2017), poses challenges for emergency services in terms of access, quality, and quantity of information. To achieve situational awareness or even actionable information, meaning the right information for the right person at the right time (Zade et al., 2018), information must be refined according to event-based factors, organizational requirements, societal boundary conditions, and technical feasibility. To research the topic of information refinement, this dissertation combines the methodological framework of design case studies (Wulf et al., 2011) with principles of design science research (Hevner et al., 2004). These extended design case studies consist of four phases, each contributing to research with distinct results. The thesis first reviews existing research on use, role, and perception patterns in crisis informatics, emphasizing the increasing potential of public participation in crisis response using social media. Empirical studies conducted with the German population then reveal positive attitudes towards, and increasing use of, mobile and social technologies during crises, but also highlight barriers to use and expectations that emergency services monitor and interact in these media. The findings led to the design of innovative ICT artefacts, including visual guidelines for citizens’ use of social media in emergencies (SMG), an emergency service web interface for aggregating mobile and social data (ESI), an efficient algorithm for detecting relevant information in social media (SMO), and a mobile app for bidirectional communication between emergency services and citizens (112.social). The evaluation of these artefacts involved end-users in the application field of crisis management, pointing out potentials for future improvements and further research. The thesis concludes with a framework on information refinement for crisis informatics that integrates event-based, organizational, societal, and technological perspectives.
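
    As a purely illustrative sketch of the kind of relevance filtering such systems build on (not the dissertation’s SMO algorithm, and with made-up keywords), one might score incoming posts against crisis terms:

        # Keyword-overlap relevance scoring for social media posts.
        CRISIS_TERMS = {"flood", "evacuation", "shelter", "injured", "fire"}

        def relevance_score(post: str) -> float:
            """Fraction of crisis terms appearing in the post (0.0 to 1.0)."""
            tokens = set(post.lower().split())
            return len(tokens & CRISIS_TERMS) / len(CRISIS_TERMS)

        posts = [
            "Flood water rising fast, evacuation ordered for the east district",
            "Great concert last night!",
        ]
        relevant = [p for p in posts if relevance_score(p) > 0.0]
        print(relevant)  # keeps only the first post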

    Scalable Bayesian sparse learning in high-dimensional model

    Full text link
    Nowadays, high-dimensional models, where the number of parameters or features can even exceed the number of observations, are encountered on a fairly regular basis due to advancements in modern computation. For example, gene expression datasets often contain at most a few hundred observations but predictors from thousands of genes; one goal is to identify the genes relevant to the expression. Another example is model compression, which aims to alleviate the costs of large model sizes. The former is a variable (feature) selection problem, while the latter is a model selection problem. In the Bayesian framework, we often specify shrinkage priors that induce sparsity in the model. A sparsity-inducing prior has high concentration around zero, to identify the zero coefficients, and heavy tails, to capture the non-zero elements. In this thesis, we first provide an overview of the most well-known sparsity-inducing priors. Then we propose to use an L_{1/2} prior with a partially collapsed Gibbs (PCG) sampler to explore the high-dimensional parameter space in linear regression models, with variable selection achieved through credible intervals. We also develop a coordinate-wise optimization for posterior mode search with theoretical guarantees. We then extend the PCG sampler to develop a scalable ordinal regression model, with a real application to the study of student evaluation surveys. Next, we move to modern deep learning. A constrained variational Adam (CVA) algorithm is introduced to optimize Bayesian neural networks, and its connection to stochastic gradient Hamiltonian Monte Carlo is discussed. We then generalize the algorithm to constrained variational Adam with expectation maximization (CVA-EM), which incorporates a spike-and-slab prior to capture the sparsity of the neural network. Both nonlinear high-dimensional variable selection and network pruning can be achieved by this algorithm. We further show that the CVA-EM algorithm extends to graph neural networks to produce both sparse graphs and sparse weights. Finally, we discuss a sparse VAE with the L_{1/2} prior as potential future work.
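
    For concreteness, a sketch of a bridge-type L_{1/2} shrinkage prior and the penalized regression its posterior mode corresponds to; the notation is assumed for illustration rather than taken from the thesis:

        % L_{1/2} prior on each regression coefficient (bridge prior, q = 1/2)
        \pi(\beta_j \mid \lambda) \propto \exp\left(-\lambda\,\lvert\beta_j\rvert^{1/2}\right),
        \qquad j = 1, \dots, p

        % The posterior mode is then an L_{1/2}-penalized least-squares fit
        \hat{\beta}_{\mathrm{MAP}} = \arg\min_{\beta}\;
        \frac{1}{2}\,\lVert y - X\beta \rVert_2^2
        + \lambda \sum_{j=1}^{p} \lvert\beta_j\rvert^{1/2}

    Because the |beta_j|^{1/2} penalty has an infinite derivative at zero, the mode thresholds small coefficients exactly to zero while shrinking large coefficients less than the lasso would, matching the “high concentration around zero, heavy tails” behaviour described above.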

    A semi-supervised method for detecting, classifying, and annotating texts extracted from digital environments in a suicide corpus

    Get PDF
    This doctoral thesis, which takes a mixed qualitative-quantitative approach, falls within the line of sentiment analysis in social networks and forms part of the Life project, which seeks to create a comprehensive platform to detect, and provide specialized support to, social media users who publish texts with suicidal content. To this end, the Corpus Life was developed for experiments with machine learning algorithms; it originally consisted of 102 suicidal messages (71 texts in English and 31 in Spanish), 70 of these samples labelled No Risk and 32 labelled Risk. Owing to the small number of samples and the imbalance between them, the results generated were not reliable. The general objective of this research was therefore to develop a semi-supervised method to detect, classify, and annotate in the Corpus Life texts extracted from digital environments, in order to increase the number of annotations through a process of automatic quality assessment prior to their inclusion or exclusion. The annotations were also evaluated manually, using Cohen’s Kappa as the agreement measure, with the participation of specialized annotators who assessed the texts, reaching an inter-annotator agreement of 0.86, close to the 0.78–0.81 achieved automatically by the semi-supervised method as measured by the macro F1 index. This led to experiments with a higher degree of reliability, by means of a structured method with well-defined and interlinked activities, roles, and processes.
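
    A minimal sketch of the two evaluation measures mentioned above, using scikit-learn; the label sequences are illustrative, not the Corpus Life data:

        # Inter-annotator agreement (Cohen's kappa) and macro F1 of automatic
        # annotations against adjudicated gold labels.
        from sklearn.metrics import cohen_kappa_score, f1_score

        annotator_a = ["risk", "no_risk", "risk", "no_risk", "no_risk", "risk"]
        annotator_b = ["risk", "no_risk", "risk", "risk", "no_risk", "risk"]
        print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")

        gold = ["risk", "no_risk", "risk", "no_risk", "no_risk", "risk"]
        auto = ["risk", "no_risk", "no_risk", "no_risk", "no_risk", "risk"]
        print(f"macro F1: {f1_score(gold, auto, average='macro'):.2f}")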