2,784 research outputs found
Framework to Enhance Teaching and Learning in System Analysis and Unified Modelling Language
Cowling, MA ORCiD: 0000-0003-1444-1563; Munoz Carpio, JC ORCiD: 0000-0003-0251-5510
Systems Analysis modelling is considered foundational for Information and Communication Technology (ICT) students, with introductory and advanced units included in nearly all ICT and computer science degrees. Yet despite this, novice systems analysts (learners) find modelling and systems thinking quite difficult to learn and master, which makes teaching the fundamentals frustrating and time intensive. This paper discusses the foundational problems that learners face when learning Systems Analysis modelling. Through a systematic literature review, a framework is proposed based on the key problems that novice learners experience. In this proposed framework, a sequence of activities has been developed to facilitate understanding of requirements, solutions and incremental modelling. An example illustrates how the framework could be used to incorporate visualization and gaming elements into a Systems Analysis classroom, thereby improving motivation and learning. Through this work, a greater understanding of the approach to teaching modelling within the computer science classroom is provided, as well as a framework to guide future teaching activities.
Educational practices and strategies with immersive learning environments: mapping of reviews for using the metaverse
The educational metaverse promises to fulfil the ambitions of immersive learning, leveraging technology-based presence alongside narrative and/or challenge-based deep mental absorption. Most reviews of immersive learning research have been outcomes-focused; few have considered educational practices and strategies. These are necessary to provide theoretical and pedagogical frameworks that situate outcomes within a context where technology works in concert with educational approaches. We sought a broader perspective on the practices and strategies used in immersive learning environments, and conducted a mapping survey of reviews, identifying 47 studies. Extracted accounts of educational practices and strategies, under thematic analysis, yielded 45 strategies and 21 practices, visualized as a network clustered by conceptual proximity. The resulting clusters "Active context", "Collaboration", "Engagement and Scaffolding", "Presence", and "Real and virtual multimedia learning" expose the richness of practices and strategies within the field. The visualization maps the field, supports decision-making when combining practices and strategies for using the metaverse in education, and highlights which practices and strategies are supported by the literature, as well as the presence and absence of diversity within clusters.
Towards a Cognitive Architecture for Socially Adaptive Human-Robot Interaction
People have a natural predisposition to interact in an adaptive manner with others, instinctively changing their actions, tone and speech according to the perceived needs of their peers. Moreover, we are not only capable of registering the affective and cognitive state of our partners, but over a prolonged period of interaction we also learn which behaviours are the most appropriate and well suited for each one of them individually. This universal trait, which we share regardless of our different personalities, is referred to as social adaptation (adaptability). Humans are always capable of adapting to others, although our personalities may influence the speed and efficacy of the adaptation. This means that in our everyday lives we are accustomed to partaking in complex and personalized interactions with our peers.
Carrying this ability to personalize to human-robot interaction (HRI) is highly desirable since it would provide user-personalized interaction, a crucial element in many HRI scenarios - interactions with older adults, assistive or rehabilitative robotics, child-robot interaction (CRI), and many others. For a social robot to be able to recreate this same kind of rich, human-like interaction, it should be aware of our needs and affective states and be capable of continuously adapting its behaviour to them.
Equipping a robot with these functionalities, however, is not a straightforward task. A robust approach is to implement a framework for the robot that supports social awareness and adaptation. In other words, the robot needs to be equipped with basic cognitive functionalities that would allow it to learn how to select the behaviours that maximize the pleasantness of the interaction for its peers, while being guided by an internal motivation system that provides autonomy to its decision-making process.
The goal of this research was threefold: to design a cognitive architecture supporting social HRI and implement it on a robotic platform; to study how an adaptive framework of this kind functions when tested in HRI studies with users; and to explore how including the element of adaptability and personalization in a cognitive framework actually affects users - would it bring an additional richness to the human-robot interaction, as hypothesized, or would it instead only add uncertainty and unpredictability that would not be accepted by the robot's human peers?
This thesis covers the work done on developing a cognitive framework for human-robot interaction; analyzes the various challenges of implementing the cognitive functionalities, porting the framework to several robotic platforms and testing potential validation scenarios; and finally presents the user studies performed with the iCub and MiRo robotic platforms, focused on understanding how a cognitive framework behaves in a free-form HRI context and whether humans can perceive and appreciate the adaptivity of the robot.
In summary, this thesis approaches the complex field of cognitive HRI and attempts to shed some light on how cognition and adaptation develop on both the human and the robot side in an HRI scenario.
Agent AI: Surveying the Horizons of Multimodal Interaction
Multi-modal AI systems will likely become a ubiquitous presence in our everyday lives. A promising approach to making these systems more interactive is to embody them as agents within physical and virtual environments. At present, systems leverage existing foundation models as the basic building blocks for the creation of embodied agents. Embedding agents within such environments facilitates the ability of models to process and interpret visual and contextual data, which is critical for the creation of more sophisticated and context-aware AI systems. For example, a system that can perceive user actions, human behavior, environmental objects, audio expressions, and the collective sentiment of a scene can be used to inform and direct agent responses within the given environment. To accelerate research on agent-based multimodal intelligence, we define "Agent AI" as a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data, and can produce meaningful embodied actions. In particular, we explore systems that aim to improve agents based on next-embodied action prediction by incorporating external knowledge, multi-sensory inputs, and human feedback. We argue that by developing agentic AI systems in grounded environments, one can also mitigate the hallucinations of large foundation models and their tendency to generate environmentally incorrect outputs. The emerging field of Agent AI subsumes the broader embodied and agentic aspects of multimodal interactions. Beyond agents acting and interacting in the physical world, we envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
Study of Augmented Reality based manufacturing for further integration of quality control 4.0: a systematic literature review
Augmented Reality (AR) has gradually become a mainstream technology enabling Industry 4.0, and its maturity has grown over time. AR has been applied to support different processes on the shop-floor level, such as assembly, maintenance, etc. As various processes in manufacturing require high quality and near-zero error rates to ensure the demands and safety of end-users, AR can also equip operators with immersive interfaces to enhance productivity, accuracy and autonomy in the quality sector. However, there is currently no systematic review paper about AR technology enhancing the quality sector. The purpose of this paper is to conduct a systematic literature review (SLR) to draw conclusions about the emerging interest in using AR as an assisting technology for the quality sector in an Industry 4.0 context. Five research questions (RQs), with a set of selection criteria, are predefined to support the objectives of this SLR. In addition, different research databases are used in the paper identification phase, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, to find answers to the predefined RQs. It is found that, despite lagging behind the assembly and maintenance sectors in terms of AR-based solutions, there is a growing interest in developing and implementing AR-assisted quality applications. Current AR-based solutions for the quality sector fall into three main categories: AR-based apps as a virtual Lean tool, AR-assisted metrology, and AR-based solutions for in-line quality control. In this SLR, an AR architecture layer framework has been refined to classify articles into different layers, which are finally integrated into a systematic design and development methodology for the development of long-term AR-based solutions for the quality sector in the future.
CUBICA: An Example of Mixed Reality
Nowadays, one of the hot issues on the agenda is, undoubtedly, the concept of Sustainable Computing. There are several technologies at the intersection of Sustainable Computing and Ambient Intelligence. Among them we may mention "Human-Centric Interfaces for Ambient Intelligence" and "Collaborative Smart Objects" technologies. In this paper we present our efforts in developing these technologies for "Mixed Reality", a paradigm where Virtual Reality and Ambient Intelligence meet. Cubica is a mixed reality educational application that integrates virtual worlds with tangible interfaces. The application is focused on teaching computer science, in particular "sorting algorithms". The tangible interface is used to simplify the abstract concept of the array, while the virtual world is used for delivering explanations. This educational application has been tested with students at different levels of secondary education, obtaining promising results in terms of increased motivation for learning and better understanding of abstract concepts.
We want to thank the faculty of the I.E.S. "Joan Coromines", at Benicarló (Valencia, Spain), for their support while evaluating the system. In particular, the contribution of Lorenzo Otero was essential during that process. The work described in this paper was partially funded by the Spanish National Plan of I+D+i (TIN2010-17344), and by the Spanish Ministry of Industry (TSI-020100-2010-743).
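The stepwise visualization of sorting over an array that the abstract above describes can be pictured as a sequence of array states. The following is a minimal sketch in Python (not the CUBICA implementation; the function name and choice of bubble sort are illustrative assumptions) of how such intermediate states might be produced for display:

```python
# Illustrative sketch only: a bubble sort that records the array after
# every swap, i.e. the kind of step-by-step state sequence a tangible or
# virtual interface could present to learners.
def bubble_sort_steps(values):
    arr = list(values)
    steps = [list(arr)]               # initial state of the array
    for end in range(len(arr) - 1, 0, -1):
        for i in range(end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                steps.append(list(arr))   # snapshot after each swap
    return steps

if __name__ == "__main__":
    for state in bubble_sort_steps([5, 2, 4, 1]):
        print(state)
```

Each printed state corresponds to one visible step a learner could follow, which is the pedagogical idea the abstract attributes to combining a tangible array with virtual explanations.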