
    Evaluation of an early intervention programme in a case of prematurity

    This paper presents the results obtained after an individual early intervention programme for a premature girl aged 2 years and 8 months (32 months). An initial assessment was first performed with the revised Brunet-Lézine developmental scale to evaluate her needs from a psychological point of view. The intervention was carried out for one year, after which the child was re-evaluated. The results demonstrate the effectiveness of the programme, showing an improvement in development across the areas worked on: psychomotor, cognitive, language, and social.

    Toward data-driven research: preliminary study to predict surface roughness in material extrusion using previously published data with machine learning

    Purpose. Material extrusion is one of the most commonly used approaches among the additive manufacturing processes available. Despite its popularity and related technical advancements, process reliability and quality assurance remain only partially solved. In particular, the surface roughness caused by this process is a key concern. To address this constraint, experimental plans have been exploited to optimize surface roughness in recent years. However, this empirical trial-and-error process is extremely time- and resource-consuming. Thus, this study aims to avoid using large experimental programs to optimize surface roughness in material extrusion. Design/methodology/approach. This research provides an in-depth analysis of the effect of several printing parameters: layer height, printing temperature, printing speed and wall thickness. The proposed data-driven predictive modeling approach takes advantage of Machine Learning (ML) models to automatically predict surface roughness based on data gathered from the literature and the experimental data generated for testing. Findings. Using ten-fold cross-validation of data gathered from the literature, the proposed ML solution attains a 0.93 correlation with a mean absolute percentage error of 13%. When testing with our own data, the correlation diminishes to 0.79 and the mean absolute percentage error reduces to 8%. Thus, the solution for predicting surface roughness in extrusion-based printing offers competitive results regarding the variability of the analyzed factors. Research limitations/implications. There are limitations in obtaining large volumes of reliable data, and the variability of the material extrusion process is relatively high. Originality/value. Although ML is not a novel methodology in additive manufacturing, the use of published data from multiple sources has barely been exploited to train predictive models. As available manufacturing data continue to increase on a daily basis, the ability to learn from these large volumes of data is critical in future manufacturing and science. Specifically, the power of ML helps model surface roughness with limited experimental tests.
    Xunta de Galicia | Ref. ED481B-2021-118
    Xunta de Galicia | Ref. ED481B-2022-09
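A minimal sketch of the ten-fold cross-validation protocol described above, using synthetic data and a simple k-nearest-neighbour regressor standing in for the paper's ML models. The parameter ranges and the roughness relation below are invented for illustration only.

```python
import numpy as np

# Synthetic samples: [layer_height (mm), temperature (C), speed (mm/s), wall_thickness (mm)]
rng = np.random.default_rng(0)
n = 100
X = rng.uniform([0.1, 190.0, 20.0, 0.4], [0.3, 230.0, 80.0, 1.6], size=(n, 4))
# Assumed toy relation: roughness grows with layer height and printing speed.
y = 40.0 * X[:, 0] + 0.05 * X[:, 2] + rng.normal(0.0, 0.5, n)

# Standardise features so no single parameter dominates the distances.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

def knn_predict(X_train, y_train, X_test, k=5):
    """Predict each test point as the mean of its k nearest training targets."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        preds.append(y_train[np.argsort(d)[:k]].mean())
    return np.array(preds)

def ten_fold_mape(X, y, folds=10):
    """Mean absolute percentage error averaged over k cross-validation folds."""
    idx = np.arange(len(y))
    errors = []
    for f in range(folds):
        test = idx[f::folds]                 # every 10th sample forms one fold
        train = np.setdiff1d(idx, test)
        pred = knn_predict(X[train], y[train], X[test])
        errors.append(np.mean(np.abs((y[test] - pred) / y[test])))
    return float(np.mean(errors))

mape = ten_fold_mape(Xn, y)
print(f"10-fold MAPE: {mape:.1%}")
```

Standardising the features before computing distances matters here: temperature spans tens of degrees while layer height spans tenths of a millimetre, so unscaled distances would be dominated by a single parameter.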

    Interpretable Classification of Wiki-Review Streams

    Wiki articles are created and maintained by a crowd of editors, producing a continuous stream of reviews. Reviews can take the form of additions, reverts, or both. This crowdsourcing model is exposed to manipulation since neither reviews nor editors are automatically screened and purged. To protect articles against vandalism or damage, the stream of reviews can be mined to classify reviews and profile editors in real-time. The goal of this work is to anticipate and explain which reviews to revert. This way, editors are informed why their edits will be reverted. The proposed method employs stream-based processing, updating the profiling and classification models on each incoming event. The profiling uses side and content-based features employing Natural Language Processing, and editor profiles are incrementally updated based on their reviews. Since the proposed method relies on self-explainable classification algorithms, it is possible to understand why a review has been classified as a revert or a non-revert. In addition, this work contributes an algorithm for generating synthetic data for class balancing, making the final classification fairer. The proposed online method was tested with a real data set from Wikivoyage, which was balanced through the aforementioned synthetic data generation. The results attained near-90% values for all evaluation metrics (accuracy, precision, recall, and F-measure).
    Fundação para a Ciência e a Tecnologia | Ref. UIDB/50014/2020
    Xunta de Galicia | Ref. ED481B-2021-11
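The incremental profile-and-classify loop described above can be illustrated with a toy sketch. The revert-rate rule, the editor names, and the event stream below are invented; the actual profiling also uses NLP-based content features.

```python
from collections import defaultdict

# Editor profiles updated incrementally as review events arrive.
profiles = defaultdict(lambda: {"reviews": 0, "reverts": 0})

def classify(editor):
    """Self-explainable rule: predict 'revert' when the editor's observed
    revert rate exceeds 0.5, returning the evidence as the explanation."""
    p = profiles[editor]
    rate = p["reverts"] / p["reviews"] if p["reviews"] else 0.0
    label = "revert" if rate > 0.5 else "non-revert"
    reason = f"editor {editor}: {p['reverts']}/{p['reviews']} past reverts"
    return label, reason

def update(editor, was_reverted):
    """Incrementally update the editor profile once the true outcome is known."""
    profiles[editor]["reviews"] += 1
    profiles[editor]["reverts"] += int(was_reverted)

# Simulated stream of (editor, was_reverted) events.
stream = [("alice", False), ("bob", True), ("bob", True), ("alice", False), ("bob", True)]
for editor, outcome in stream:
    prediction = classify(editor)   # predict before the outcome is revealed
    update(editor, outcome)         # then learn from the event

print(classify("bob"))   # -> ('revert', 'editor bob: 3/3 past reverts')
```

Because the prediction is a readable rule over the profile, the explanation ("3/3 past reverts") falls out of the classifier itself rather than requiring a post-hoc explainer.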


    “A piece of cake”: sustainable education practices with transformer models

    New advances in Artificial Intelligence (AI), particularly transformer language models, have disruptively changed current education and training practices. These advancements have forced a shift towards a more practical and sustainable learning process with the ultimate objective of creating active, high-quality, and continuous educational services. However, human intervention is still essential to keep control over the outcomes of AI-based solutions. Thus, this work contributes a fully functional and well-designed Conversational Assistant for Knowledge Enhancement (CAKE), which integrates a transformer model in a chatbot platform with empathetic capabilities. The proposed environment is fully immersive and incorporates gamification. Natural Language Processing techniques are exploited through prompts. The most relevant features of CAKE are its flexibility regarding language, areas and topics covered, and the education level of the end users. Results obtained in two evaluation scenarios endorse the performance of the proposed solution and motivate us to continue this research line toward real-time monitoring of smart learning.

    Explanation plug-in for stream-based collaborative filtering

    Collaborative filtering is a widely used recommendation technique, which often relies on rating information shared by users, i.e., crowdsourced data. These filters rely on predictive algorithms, such as memory- or model-based predictors, to build direct or latent user and item profiles from crowdsourced data. To predict unknown ratings, memory-based approaches rely on the similarity between users or items, whereas model-based mechanisms explore user and item latent profiles. However, many of these filters are opaque by design, leaving users with unexplained recommendations. To overcome this drawback, this paper introduces Explug, a local model-agnostic plug-in that works alongside stream-based collaborative filters to reorder and explain recommendations. The explanations are based on incremental user Trust & Reputation profiling and co-rater relationships. Experiments performed with crowdsourced data from TripAdvisor show that Explug explains and improves the quality of stream-based collaborative filter recommendations.
    Xunta de Galicia | Ref. ED481B-2021-118
    Fundação para a Ciência e a Tecnologia | Ref. UIDB/50014/202
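As a rough illustration of the memory-based prediction and co-rater explanation ideas mentioned above (not Explug's actual algorithm), a similarity-weighted predictor can expose the neighbours it used as its explanation. The users, items, and ratings below are invented.

```python
import math

ratings = {  # user -> {item: rating}
    "ana":  {"hotel_a": 5, "hotel_b": 3, "hotel_c": 4},
    "bea":  {"hotel_a": 5, "hotel_b": 3},
    "carl": {"hotel_a": 1, "hotel_b": 5, "hotel_c": 2},
}

def cosine(u, v):
    """Cosine similarity restricted to the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def predict_with_explanation(user, item):
    """Similarity-weighted average over co-raters, returned together with
    the co-raters that contributed, which serve as the explanation."""
    neighbours = [(other, cosine(ratings[user], r))
                  for other, r in ratings.items()
                  if other != user and item in r]
    den = sum(sim for _, sim in neighbours)
    if not den:
        return None, []
    pred = sum(sim * ratings[other][item] for other, sim in neighbours) / den
    explanation = [other for other, sim in neighbours if sim > 0]
    return pred, explanation

pred, why = predict_with_explanation("bea", "hotel_c")
print(f"predicted {pred:.2f} for hotel_c based on co-raters {why}")
```

The returned co-rater list makes the recommendation transparent: the user can see exactly whose past ratings produced the predicted score.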

    A review on the use of large language models as virtual tutors

    Transformer architectures contribute to managing long-term dependencies for natural language processing, representing one of the most recent changes in the field. These architectures are the basis of the innovative, cutting-edge large language models (LLMs) that have produced a huge buzz in several fields and industrial sectors, among which education stands out. Accordingly, these generative artificial intelligence-based solutions have directed the change in techniques and the evolution of educational methods and contents, along with network infrastructure, towards high-quality learning. Given the popularity of LLMs, this review seeks to provide a comprehensive overview of those solutions designed specifically to generate and evaluate educational materials and which involve students and teachers in their design or experimental plan. To the best of our knowledge, this is the first review of educational applications (e.g., student assessment) of LLMs. As expected, the most common role of these systems is as virtual tutors for automatic question generation. Moreover, the most popular models are GPT-3 and BERT. However, due to the continuous launch of new generative models, new works are expected to be published shortly.
    Xunta de Galicia | Ref. ED481B-2021-118
    Xunta de Galicia | Ref. ED481B-2022-093
    Universidade de Vigo/CISU

    Unsupervised explainable activity prediction in competitive Nordic walking from experimental data

    Artificial Intelligence (AI) has found application in Human Activity Recognition (HAR) in competitive sports. To date, most Machine Learning (ML) approaches for HAR have relied on offline (batch) training, imposing higher computational and tagging burdens compared to online processing unsupervised approaches. Additionally, the decisions behind traditional ML predictors are opaque and require human interpretation. In this work, we apply an online processing unsupervised clustering approach based on low-cost wearable Inertial Measurement Units (IMUs). The outcomes generated by the system allow for the automatic expansion of the limited tagging available (e.g., by referees) within those clusters, producing pertinent information for the explainable classification stage. Specifically, our work focuses on achieving automatic explainability for predictions related to athletes' activities, distinguishing between correct, incorrect, and cheating practices in Nordic Walking. The proposed solution achieved performance metrics of close to 100% on average.
    Xunta de Galicia | Ref. ED481B-2021-118
    Xunta de Galicia | Ref. ED481B-2022-093
    Xunta de Galicia | Ref. ED431C 2022/0
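The cluster-then-expand-tags idea can be sketched as follows. Plain k-means on synthetic two-dimensional features stands in for the online clustering of IMU signals; the referee tags and data are invented.

```python
import numpy as np

# Synthetic, well-separated feature vectors for two activity patterns.
rng = np.random.default_rng(1)
correct_moves  = rng.normal([0.0, 0.0], 0.3, size=(20, 2))  # e.g. correct technique
cheating_moves = rng.normal([5.0, 5.0], 0.3, size=(20, 2))  # e.g. cheating pattern
X = np.vstack([correct_moves, cheating_moves])

def kmeans(X, iters=20):
    """Two-cluster k-means with a deterministic initialisation for the sketch."""
    centers = np.array([X[0], X[-1]])
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

clusters = kmeans(X)

# Only two referee tags exist; expand each one to its whole cluster.
referee_tags = {0: "correct", 20: "cheating"}   # sample index -> referee tag
cluster_tag = {int(clusters[i]): tag for i, tag in referee_tags.items()}
expanded = [cluster_tag[int(c)] for c in clusters]
print(expanded[0], expanded[-1])
```

Two referee tags are enough to label all 40 samples here because each tag propagates to every member of its cluster, which is the tagging-burden reduction the abstract refers to.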

    Targeted aspect-based emotion analysis to detect opportunities and precaution in financial Twitter messages

    Microblogging platforms, of which Twitter is a representative example, are valuable information sources for market screening and financial models. In them, users voluntarily provide relevant information, including educated knowledge on investments, reacting to the state of the stock markets in real-time and, often, influencing this state. We are interested in the user forecasts in financial, social media messages expressing opportunities and precautions about assets. We propose a novel Targeted Aspect-Based Emotion Analysis (TABEA) system that can individually discern the financial emotions (positive and negative forecasts) on the different stock market assets in the same tweet (instead of making an overall guess about that whole tweet). It is based on Natural Language Processing (NLP) techniques and Machine Learning streaming algorithms. The system comprises a constituency parsing module for parsing the tweets and splitting them into simpler declarative clauses; an offline data processing module to engineer textual, numerical and categorical features and analyse and select them based on their relevance; and a stream classification module to continuously process tweets on-the-fly. Experimental results on a labelled data set endorse our solution. It achieves over 90% precision for the target emotions, financial opportunity, and precaution on Twitter. To the best of our knowledge, no prior work in the literature has addressed this problem despite its practical interest in decision-making, and we are not aware of any previous NLP nor online Machine Learning approaches to TABEA.
    Xunta de Galicia | Ref. ED481B-2021-118
    Xunta de Galicia | Ref. ED481B-2022-093
    Funded for open access publication: Universidade de Vigo/CISU
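A toy sketch of the targeted, clause-level idea (not TABEA's constituency parsing or streaming classifiers): split a tweet into clauses and score each clause against a tiny keyword lexicon, assigning an emotion to each asset mentioned in that clause. The lexicon, splitting rule, and tweet are invented.

```python
# Invented mini-lexicon for the two target emotions.
OPPORTUNITY = {"buy", "bullish", "soar", "rally"}
PRECAUTION = {"sell", "bearish", "crash", "risk"}

def targeted_emotions(tweet, assets):
    """Assign an emotion to each asset based on the clause it appears in,
    so different assets in one tweet can receive different emotions."""
    result = {}
    for clause in tweet.lower().split(","):   # naive clause splitting
        tokens = clause.split()
        emotion = None
        if any(w in OPPORTUNITY for w in tokens):
            emotion = "opportunity"
        elif any(w in PRECAUTION for w in tokens):
            emotion = "precaution"
        for asset in assets:
            if emotion and asset.lower() in tokens:
                result[asset] = emotion
    return result

demo = targeted_emotions("$TSLA looks bullish, but $MEME could crash",
                         ["$TSLA", "$MEME"])
print(demo)   # -> {'$TSLA': 'opportunity', '$MEME': 'precaution'}
```

The point of the clause split is visible in the example: a single overall guess for the whole tweet could not assign opposite emotions to the two assets.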