
    The role of information technology in STEM education

    The ubiquity of information technology (IT) in teaching at large is an observable reality, and this extends to STEM education, the field of study of this research. In view of this situation, this work sets out to determine the role of IT in STEM (Science, Technology, Engineering, Mathematics) education. A systematic review was conducted following the PRISMA model, supplemented with information obtained from an analysis of grey (fugitive) literature. The review covered a total of 16 articles. The main inclusion criteria were publication between 2015 and March 2023 and the presence of the terms IT and STEM in the title, abstract, or keywords. The main results show a growing interest in this topic, especially in English-language research. The most relevant conclusions of the systematic review evidence a positive relationship between IT and STEM education, although some negative aspects are also highlighted: a persistent lack of resources and of teacher training still leads to ineffective application of IT in STEM classes. The results have important practical implications: they motivate teachers to research, propose, and implement measures that enhance the role of IT in STEM education while minimizing the limitations identified.
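
    As a rough illustration of the screening step described above, the sketch below applies the stated inclusion criteria (publication window and required terms) to candidate records. This is not the authors' procedure, and the record fields are assumptions.

    from datetime import date

    def include(record):
        """Apply the review's stated criteria: published between 2015 and
        March 2023, with 'IT' and 'STEM' in title, abstract, or keywords."""
        fields = [record["title"], record["abstract"], *record["keywords"]]
        has_it = any("IT" in f for f in fields)      # case-sensitive acronym match
        has_stem = any("STEM" in f for f in fields)
        in_window = date(2015, 1, 1) <= record["date"] <= date(2023, 3, 31)
        return has_it and has_stem and in_window

    articles = [
        {"title": "IT tools in STEM classrooms", "abstract": "...",
         "keywords": ["education"], "date": date(2021, 6, 1)},
        {"title": "Robotics in primary school", "abstract": "...",
         "keywords": ["STEM"], "date": date(2014, 2, 1)},
    ]
    print([a["title"] for a in articles if include(a)])   # keeps the first article only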

    Analysing the Impact of Machine Learning to Model Subjective Mental Workload: A Case Study in Third-Level Education

    Mental workload measurement is a complex multidisciplinary research area that includes both the theoretical and practical development of models. These models aim to aggregate the factors believed to shape mental workload, and their interactions, for the purpose of human performance prediction. In the literature, models are mainly theory-driven: their development has been influenced by the beliefs and intuitions of individual scholars in the disciplines of Psychology and Human Factors. This work presents novel research that aims to reverse this tendency. Specifically, it employs a selection of learning techniques, borrowed from machine learning, to induce models of mental workload from data, with no theoretical assumption or hypothesis. These models are subsequently compared against two well-known subjective measures of mental workload, namely the NASA Task Load Index and the Workload Profile. Findings show that these data-driven models are convergently valid and can explain overall perception of mental workload with lower error.
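
    A minimal sketch of the data-driven idea follows, assuming synthetic feature data rather than the paper's dataset: a regression model is induced from workload-shaping factors, and its cross-validated predictions are correlated with a subjective measure (here, NASA-TLX-style scores) to gauge convergent validity.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 6))                                  # assumed workload-shaping factors
    tlx = X @ rng.uniform(0.5, 1.5, 6) + rng.normal(0, 0.5, 120)   # synthetic NASA-TLX-style scores

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    pred = cross_val_predict(model, X, tlx, cv=5)                  # induced, data-driven estimates

    r, _ = pearsonr(pred, tlx)
    print(f"convergent validity vs NASA-TLX: r = {r:.2f}")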

    Computer Supported Education: 10th International Conference, CSEDU 2018, Funchal, Madeira, Portugal, March 15–17, 2018, Revised Selected Papers

    For students, a capstone project represents the culmination of their studies and is typically one of the last milestones before graduation. Participating in a capstone project can be an inspiring learning opportunity or, for various reasons, a struggle; either way, it is a highly educative learning experience. During an IT capstone project, students practice and develop their professional skills by designing and implementing, as a team, a solution to a complex, ill-defined, real-life problem. This paper reflects on organizing IT capstone projects in computer science and software engineering Master's programmes in a Sino-Finnish setup, where the projects are executed within the framework of a capstone project course. We describe the course framework and discuss the challenges of finding and providing ill-defined challenges with a meaningful real-life connection as project topics. Based on our observations, complemented with students' feedback, we also propose areas for future development.

    Adaptive and Re-adaptive Pedagogies in Higher Education: A Comparative, Longitudinal Study of Their Impact on Professional Competence Development across Diverse Curricula

    This study addresses concerns that traditional, lecture-based teaching methods may not sufficiently develop the integrated competencies demanded by modern professional practice. A disconnect exists between conventional pedagogy and desired learning outcomes, prompting increased interest in innovative, student-centered instructional models tailored to competence growth. Despite this, nuanced differences in competence development across diverse university curricula remain underexplored, with research relying predominantly on students' self-assessments. To address these gaps, this study employs a longitudinal mixed-methods approach, incorporating theory triangulation and investigator triangulation, to better understand how professional knowledge, skills, and dispositions evolve across varied curricula and contexts. The research emphasizes adaptive and re-adaptive teaching approaches incorporating technology, individualization, and experiential learning, which may uniquely integrate skill development with contextual conceptual learning. Specific attention is paid to professional education paths such as design, media, and communications degrees, where contemporary competence models stress capabilities beyond core conceptual knowledge. Results from this study aim to guide reform efforts to optimize professional competence development across diverse academic areas.

    A Comparison of Instructional Efficiency Models in Third Level Education

    This study investigates the validity and sensitivity of a novel model of instructional efficiency: the parabolic model. The novel model is compared against state-of-the-art models in instructional design today: the likelihood, deviational, and multidimensional models. The parabolic model is based on the assumption that optimal mental workload together with high performance leads to high efficiency, while the other models assume that low mental workload together with high performance leads to high efficiency. The investigation makes use of two instructional design conditions: a direct-instruction approach to learning, and its extension with a collaborative activity. A control group received the former instructional design while an experimental group received the latter. A performance score was extracted for evaluation. The efficiency models compared were based upon both a unidimensional and a multidimensional measure of mental workload, acquired through participants' self-reports. These mental workload measures, in conjunction with the performance score, contribute to the calculation of efficiency scores for each model. The aim of this study is to determine whether the novel model can better differentiate between the control and experimental groups, based on the resulting efficiency, than the other models. The models were analysed and compared using various statistical tests and techniques. Empirical evidence partially supports the proposed hypothesis: the parabolic model demonstrates validity, but there is insufficient statistical evidence to suggest that it has better sensitivity or a greater capacity to differentiate between the two groups.
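
    For context, the sketch below computes efficiency scores on synthetic data. The deviational form, E = (z_P - z_W)/sqrt(2), and the likelihood form (performance per unit of workload) are standard in the literature; the parabolic form shown is only an assumed illustration of "optimal workload plus high performance", since the abstract does not give its exact formula.

    import numpy as np

    performance = np.array([55., 70., 80., 90., 65.])   # task performance scores
    workload    = np.array([30., 45., 60., 75., 90.])   # self-reported mental workload

    zP = (performance - performance.mean()) / performance.std()
    zW = (workload - workload.mean()) / workload.std()

    deviational = (zP - zW) / np.sqrt(2)   # low workload + high performance = efficient
    likelihood  = performance / workload   # performance achieved per unit of workload
    parabolic   = zP - zW ** 2             # assumed form: penalise distance from an
                                           # optimal (here, mean) workload level
    print(deviational, likelihood, parabolic, sep="\n")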

    An Empirical Evaluation of the Inferential Capacity of Defeasible Argumentation, Non-monotonic Fuzzy Reasoning and Expert Systems

    Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and they also employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists comparing distinct techniques, and in particular examining their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning, and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. An experimental study was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models employing the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common evaluation metrics in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of the expert-system and fuzzy-reasoning models was higher, highlighting poor stability. In contrast, that of the argument-based models was lower, showing superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches to non-monotonic reasoning under uncertainty through a novel empirical comparison.
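
    To make the contrast concrete, here is a toy sketch (invented thresholds and memberships, not the paper's knowledge bases) of how one rule about mental workload might be encoded under each of the three approaches.

    effort, frustration = 0.8, 0.1   # normalised inputs in [0, 1]

    # 1) Expert system: a crisp IF-THEN rule with a hard threshold.
    expert_mwl = 0.9 if effort > 0.7 else 0.4

    # 2) Fuzzy reasoning: a graded membership replaces the hard threshold.
    high_effort = min(1.0, max(0.0, (effort - 0.5) / 0.4))   # ramp membership
    fuzzy_mwl = 0.4 + 0.5 * high_effort                      # weighted consequent

    # 3) Defeasible argumentation: the conclusion "high workload" stands
    #    unless an undefeated counter-argument (very low frustration) attacks it.
    defeater_active = frustration < 0.2
    defeasible_mwl = 0.4 if defeater_active else 0.9

    print(expert_mwl, round(fuzzy_mwl, 2), defeasible_mwl)   # 0.9 0.78 0.4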

    An Evaluation Of Learning Employing Natural Language Processing And Cognitive Load Assessment

    One of the key goals of pedagogy is to assess learning. Various paradigms exist, and one of these is Cognitivism. It essentially sees a human learner as an information processor and the mind as a black box with limited capacity that should be understood and studied. In this respect, one approach is to employ the construct of cognitive load to assess a learner's experience and, in turn, design instructions better aligned to the human mind. However, cognitive load assessment is not an easy activity, especially in a traditional classroom setting. This research proposes a novel method for evaluating learning that employs both subjective cognitive load assessment and natural language processing. It makes use of primary, empirical, and deductive methods. In detail, on one hand, cognitive load assessment is performed using well-known self-reporting instruments borrowed from Human Factors, namely the NASA Task Load Index and the Workload Profile. On the other hand, Natural Language Processing techniques, borrowed from Artificial Intelligence, are employed to calculate the semantic similarity between textual information provided by learners after attending a typical third-level class and the content of the class itself. Subsequently, the relationship between cognitive load assessment and textual similarity is investigated in order to assess learning.
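
    A minimal sketch of the NLP side, assuming a simple TF-IDF representation rather than whatever pipeline the authors used: the semantic overlap between a learner's free-text recall and the class content is scored with cosine similarity. The texts are placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    class_content = ("Cognitive load theory treats working memory as a "
                     "limited-capacity system for processing information.")
    learner_text = "The lecture said working memory has a limited capacity."

    tfidf = TfidfVectorizer().fit_transform([class_content, learner_text])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    print(f"semantic similarity: {similarity:.2f}")   # higher suggests closer recall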

    Copyright Policies of Scientific Publications in Institutional Repositories: The Case of INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, moving gradually towards an opening of the research cycle. In the long term, this opening can resolve an adversity long faced by researchers: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is dominated largely by big commercial publishers and is subject to the rules they impose, the Open Access Movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. The Movement has gained importance in Portugal since 2003, with the creation of the first institutional repository at the national level. Institutional repositories emerged as a tool for disseminating an institution's scientific production, with the aim of opening up research results both before publication and peer review (preprint) and after (postprint), and consequently increasing the visibility of the work of a researcher and of his or her institution. The present study, an analysis of the copyright policies of INESC TEC's most relevant scientific publications, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that a great deal of awareness-raising remains to be done, not only among researchers but also within the institution and society at large. A set of recommendations is produced, including the implementation of an institutional policy that encourages the self-archiving in the repository of publications developed in the institutional context, as a starting point for a greater appreciation of INESC TEC's scientific production.

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution for analyzing large volumes of data visually, to identify patterns and relations and make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is itself a result, proving the meta-model's versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
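
    A hypothetical sketch of the model-driven idea (invented names, not the thesis's actual meta-model): dashboard structure is abstracted into composable components, and a toy model-to-code transformation turns an instance into markup.

    from dataclasses import dataclass, field

    @dataclass
    class Visualization:
        kind: str           # e.g. "bar", "line"
        data_source: str
        title: str

    @dataclass
    class Dashboard:
        name: str
        components: list = field(default_factory=list)

    def generate_html(dash):
        """Toy model-to-code step: a meta-model instance becomes markup."""
        body = "\n".join(
            f'  <div class="chart" data-kind="{v.kind}" '
            f'data-src="{v.data_source}">{v.title}</div>'
            for v in dash.components)
        return f'<section id="{dash.name}">\n{body}\n</section>'

    dash = Dashboard("phd-programme",
                     [Visualization("bar", "theses.csv", "Theses per year")])
    print(generate_html(dash))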

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This resembles human reasoning, which draws conclusions in the absence of information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three AI approaches for implementing non-monotonic models of inference, namely expert systems, fuzzy reasoning, and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications is their presumptively non-monotonic nature: they present incomplete, ambiguous, and retractable pieces of evidence, so the reasoning applied to them is likely suitable for modelling with non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common evaluation metrics for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by the expert-system and fuzzy-reasoning models was higher, highlighting poor stability. In contrast, the variance of the argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design, and it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
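
    As an illustration of the conflict-resolution behaviour mentioned above, the sketch below computes the grounded extension of a tiny, invented attack graph: a defeasible conclusion is retracted when its attacker survives, and reinstated when the attacker is itself defeated. This is a generic textbook construction, not a model from the thesis.

    def grounded_extension(arguments, attacks):
        """Iteratively accept arguments whose attackers are all defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                attackers = {x for (x, y) in attacks if y == a}
                if a not in accepted and attackers <= defeated:
                    accepted.add(a)
                    changed = True
            newly = {y for (x, y) in attacks if x in accepted}
            if not newly <= defeated:
                defeated |= newly
                changed = True
        return accepted

    args = {"high_workload", "low_frustration", "sensor_fault"}
    atts = {("low_frustration", "high_workload"),   # a counter-argument...
            ("sensor_fault", "low_frustration")}    # ...that is itself attacked
    print(grounded_extension(args, atts))
    # accepted: sensor_fault and high_workload (set order may vary); the
    # defeated defeater reinstates the original conclusion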