
    Initial Development of 3D Character Facial Expressions Using the Facial Action Coding System to Serve as a Pedagogical Agent

    Technology has introduced significant innovations in education, particularly as a medium of instruction that helps students comprehend learning materials while creating a more engaging learning environment. Online learning has given students access and flexibility to engage with learning materials across a variety of situations. Case studies indicate that appropriate use of online learning can have a positive impact on students' academic achievement. However, the lack of motivation in online learning is a challenge that must be addressed with suitable teaching strategies. Previous research suggests that pedagogical agents can enhance learning motivation, but such agents must express emotions realistically to serve effectively in that role. To address this, this study develops a 3D character-based pedagogical agent capable of realistic facial expressions using the Facial Action Coding System (FACS). FACS enables the creation of more realistic facial expressions on 3D characters, which is expected to raise learner motivation in online learning. The study aims to develop the 3D character, implement it as a pedagogical agent in an online learning platform, and test its feasibility. After development, the 3D character was validated by media experts and achieved a score of 80%, indicating that the character and its facial expressions are suitable for implementation. After integration as a pedagogical agent on the online learning website, user testing yielded a score of 71.18%, indicating that the 3D character is appropriate for use in online learning, although there is still room for further optimization.
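
    The abstract does not give implementation details, but the FACS-based approach can be pictured as mapping Action Units (AUs) to blendshape weights on the character rig. The sketch below is a minimal illustration under that assumption; the AU sets, intensities, and the set_blendshape rig interface are hypothetical, not the authors' code.

```python
# Minimal illustration of driving a 3D character's expression from FACS
# Action Units (AUs). AU combinations follow common FACS references
# (e.g., happiness ~ AU6 + AU12); the rig interface is hypothetical.

# Hypothetical mapping: emotion -> {Action Unit: intensity in [0, 1]}
EMOTION_TO_AUS = {
    "happiness": {"AU6": 0.8, "AU12": 0.9},                          # cheek raiser, lip corner puller
    "sadness":   {"AU1": 0.7, "AU4": 0.6, "AU15": 0.8},              # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {"AU1": 0.9, "AU2": 0.9, "AU5": 0.7, "AU26": 0.8},  # brows up, upper lid raiser, jaw drop
}

def apply_expression(rig, emotion, strength=1.0):
    """Set blendshape weights on `rig` for the given emotion.

    `rig` is assumed to expose set_blendshape(name, weight); in practice
    this would wrap the 3D engine's morph-target API.
    """
    for au, intensity in EMOTION_TO_AUS[emotion].items():
        rig.set_blendshape(au, min(1.0, intensity * strength))

class DummyRig:
    """Stand-in rig that simply records the weights it receives."""
    def __init__(self):
        self.weights = {}
    def set_blendshape(self, name, weight):
        self.weights[name] = weight

if __name__ == "__main__":
    rig = DummyRig()
    apply_expression(rig, "happiness", strength=0.75)
    print(rig.weights)  # {'AU6': 0.6, 'AU12': 0.675}
```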

    Classifying Smart Personal Assistants: An Empirical Cluster Analysis

    The digital age has yielded systems that increasingly reduce the complexity of our everyday lives. As such, smart personal assistants (SPAs) such as Amazon’s Alexa or Apple’s Siri combine the comfort of intuitive natural language interaction with the utility of personalized and situation-dependent information and service provision. However, research on SPAs is becoming increasingly complex and opaque. To reduce this complexity, this paper introduces a classification system for SPAs. Based on a systematic literature review, a cluster analysis reveals five SPA archetypes: Adaptive Voice (Vision) Assistants, Chatbot Assistants, Embodied Virtual Assistants, Passive Pervasive Assistants, and Natural Conversation Assistants.
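
    The clustering step behind these archetypes can be sketched roughly as follows. The feature columns, the toy coding matrix, and the choice of k-means are illustrative assumptions based on the abstract, not the paper's actual coding scheme or algorithm.

```python
# Illustrative sketch: cluster SPAs coded from the literature into archetypes.
# Feature columns (binary codings) and sample rows are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

FEATURES = ["voice_interface", "text_interface", "embodied", "proactive", "task_specific"]

# Each row codes one SPA instance from a hypothetical literature sample.
spa_matrix = np.array([
    [1, 0, 0, 1, 0],   # e.g. a voice assistant
    [0, 1, 0, 0, 1],   # e.g. a task-specific chatbot
    [1, 0, 1, 0, 0],   # e.g. an embodied virtual assistant
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],   # e.g. a passive pervasive assistant
])

# Five clusters mirror the five archetypes reported in the paper.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(spa_matrix)
print(dict(zip(range(len(spa_matrix)), kmeans.labels_)))  # cluster label per SPA
```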

    The attitude of Polish young adults to mobile chatbots in e-commerce : selected conditions

    The development of technology, including work on artificial intelligence, gives marketers new opportunities to communicate with customers with the help of chatbots, including in e-commerce. The aim of the research was to determine the attitude of Polish young adults towards (mobile) chatbots in line with the TAM model (intention, attitude, ease and convenience of use) and its links with consumer innovativeness. Statistical analyses (ANOVA and regression analysis) confirmed that innovativeness measured using the DSI scale (Goldsmith and Hofacker) is related to the attitude towards chatbots in the surveyed group. Respondents show a sceptical attitude towards this new technology, while having little experience with it.
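
    The reported regression analysis, relating DSI innovativeness scores to attitude toward chatbots, might look like the sketch below. The variable names, effect size, and simulated data are placeholders, not the study's survey data or model specification.

```python
# Sketch of the kind of regression reported: does consumer innovativeness
# (DSI score) predict attitude toward chatbots? Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
dsi_score = rng.normal(loc=3.0, scale=0.8, size=n)               # Likert-style 1-5 scale
attitude = 1.2 + 0.5 * dsi_score + rng.normal(scale=0.7, size=n)  # toy linear relationship

X = sm.add_constant(dsi_score)   # intercept + DSI predictor
model = sm.OLS(attitude, X).fit()
print(model.summary())           # coefficient on DSI ~ 0.5 in this toy data
```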

    Advanced Content and Interface Personalization through Conversational Behavior and Affective Embodied Conversational Agents

    Conversation is becoming one of the key interaction modes in human-machine interaction (HMI). As a result, conversational agents (CAs) have become an important tool in various everyday scenarios. From Apple and Microsoft to Amazon, Google, and Facebook, all have adopted their own variations of CAs. CAs range from chatbots and 2D, cartoon-like implementations of talking heads to fully articulated embodied conversational agents that perform interaction in various contexts. Recent studies in the field of face-to-face conversation show that the most natural way to implement interaction is through synchronized verbal and co-verbal signals (gestures and expressions). Namely, co-verbal behavior represents a major source of discourse cohesion: it regulates communicative relationships and may support or even replace its verbal counterparts. It effectively retains the semantics of the information and gives a certain degree of clarity to the discourse. In this chapter, we present a model for the generation and realization of more natural machine-generated output.
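
    The core idea, realizing co-verbal behavior in sync with verbal output, can be illustrated with a toy scheduler. The gesture lexicon, constant speech rate, and timing rule below are invented for illustration and are not the chapter's actual model.

```python
# Toy illustration of pairing co-verbal cues (gestures/expressions) with
# verbal output so both can be realized in sync. Lexicon and timing are invented.

# Hypothetical mapping from discourse function to a co-verbal cue.
GESTURE_LEXICON = {
    "greeting": "wave",
    "emphasis": "beat_gesture",
    "question": "raised_brows",
}

def plan_utterance(segments, words_per_second=2.5):
    """Return (start_time, word, gesture) triples for synchronized realization.

    `segments` is a list of (word, discourse_function) pairs; timing assumes a
    constant speech rate, which a real realizer would take from TTS timestamps.
    """
    plan, t = [], 0.0
    for word, function in segments:
        gesture = GESTURE_LEXICON.get(function)   # None = verbal only
        plan.append((round(t, 2), word, gesture))
        t += 1.0 / words_per_second
    return plan

if __name__ == "__main__":
    utterance = [("Hello", "greeting"), ("there", None), ("really", "emphasis"), ("ready?", "question")]
    for start, word, gesture in plan_utterance(utterance):
        print(f"{start:>5.2f}s  {word:<8} {gesture or '-'}")
```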

    A user-perception based approach to create smiling embodied conversational agents

    In order to improve the social capabilities of embodied conversational agents, we propose a computational model that enables agents to automatically select and display appropriate smiling behavior during human-machine interaction. A smile may convey different communicative intentions depending on subtle characteristics of the facial expression and on contextual cues. To construct such a model, as a first step, we explore the morphological and dynamic characteristics of the different types of smile (polite, amused and embarrassed smiles) that an embodied conversational agent may display. The resulting lexicon of smiles is based on a corpus of virtual agent smiles created directly by users and analyzed through a machine learning technique. Moreover, during an interaction, the expression of a smile influences the observer's perception of the interpersonal stance of the speaker. As a second step, we propose a probabilistic model to automatically compute the user's likely perception of the embodied conversational agent's social stance depending on its smiling behavior and its physical appearance. This model, based on a corpus of users' perceptions of smiling and non-smiling virtual agents, enables a virtual agent to determine the appropriate smiling behavior to adopt given the interpersonal stance it wants to express. An experiment using real human-virtual agent interaction provided some validation of the proposed model.
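
    The probabilistic perception model can be pictured as a conditional-probability table: given a smile type, which stance is a user likely to perceive, and, inverted, which smile best conveys a target stance. The probability values and stance labels below are invented placeholders, not the corpus-derived estimates from the paper.

```python
# Minimal sketch of a probabilistic model mapping the agent's smiling behavior
# to the stance a user is likely to perceive. Probability tables are invented
# placeholders; the paper derives them from a user-perception corpus.

# P(perceived stance | smile type) -- hypothetical values.
PERCEPTION_MODEL = {
    "polite":      {"friendly": 0.55, "amused": 0.10, "embarrassed": 0.05, "neutral": 0.30},
    "amused":      {"friendly": 0.35, "amused": 0.55, "embarrassed": 0.05, "neutral": 0.05},
    "embarrassed": {"friendly": 0.10, "amused": 0.10, "embarrassed": 0.70, "neutral": 0.10},
    "none":        {"friendly": 0.10, "amused": 0.05, "embarrassed": 0.05, "neutral": 0.80},
}

def best_smile_for_stance(target_stance):
    """Pick the smile type that maximizes the probability of the target stance
    being perceived -- the inverse use of the perception model."""
    return max(PERCEPTION_MODEL, key=lambda smile: PERCEPTION_MODEL[smile].get(target_stance, 0.0))

if __name__ == "__main__":
    print(best_smile_for_stance("friendly"))   # -> 'polite' with these toy numbers
```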

    Value Co-Creation in Smart Services: A Functional Affordances Perspective on Smart Personal Assistants

    In the realm of smart services, smart personal assistants (SPAs) have become a popular medium for value co-creation between service providers and users. The market success of SPAs is largely based on their innovative material properties, such as natural language user interfaces, machine learning-powered request handling and service provision, and anthropomorphism. In different combinations, these properties offer users entirely new ways to intuitively and interactively achieve their goals and thus co-create value with service providers. But how does the nature of the SPA shape value co-creation processes? In this paper, we look through a functional affordances lens to theorize about the effects of different types of SPAs (i.e., with different combinations of material properties) on users’ value co-creation processes. Specifically, we collected SPAs from research and practice by reviewing scientific literature and web resources, developed a taxonomy of SPAs’ material properties, and performed a cluster analysis to group SPAs of a similar nature. We then derived 2 general and 11 cluster-specific propositions on how different material properties of SPAs can yield different affordances for value co-creation. With our work, we point out that smart services require researchers and practitioners to fundamentally rethink value co-creation as well as revise affordances theory to address the dynamic nature of smart technology as a service counterpart.
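
    Grouping SPAs by their taxonomy codings could, for example, be done with hierarchical clustering, as sketched below. The property columns, the toy taxonomy matrix, the Ward linkage, and the three-cluster cut are assumptions for illustration; the paper does not specify them here.

```python
# Sketch of clustering SPAs by taxonomy codings (material properties) using
# hierarchical clustering. Property columns and rows are invented stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

PROPERTIES = ["natural_language_ui", "ml_request_handling", "anthropomorphic", "service_integration"]

# Binary taxonomy matrix: one row per SPA, one column per material property.
taxonomy = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
])

# Ward linkage over the binary codings; cutting the dendrogram groups SPAs
# with similar combinations of properties.
Z = linkage(taxonomy, method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")
print(clusters)   # cluster label per SPA row
```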

    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    The field of human-computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents that aid in everyday life. This is coupled with a move toward people using ever more realistic avatars to represent themselves in their digital lives. Because the ability to produce emotionally engaging digital human representations is only now becoming technically possible, there is little research into how to approach such tasks, owing to both technical complexity and operational implementation cost. This is now changing: we are at a nexus point where new approaches, faster graphics processing, and enabling technologies in machine learning and computer vision are becoming available. I articulate what is required for such digital humans to be considered successfully located on the far side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceived and contextual aspects affects how people make sense of digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. I directly explore what is required to build a visually realistic digital human as a primary research question, and I examine whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approach and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explore a framework for examining the ethical implications and signpost future research areas.