
    Emotional engineering of artificial representations of sign languages

    The fascination and challenge of making an appropriate digital representation of sign language for a highly specialised and culturally rich community such as the Deaf has brought about the development and production of several digital representations of sign language (DRSLs). These range from pictorial depictions of sign language and filmed video recordings to animated avatars (virtual humans). However, issues relating to translating and representing sign language in the digital domain, and the effectiveness of the various approaches, have divided the opinion of the target audience. As a result there is still no universally accepted digital representation of sign language. For systems to reach their full potential, researchers have postulated that further investigation is needed into the interaction and representational issues associated with mapping sign language into the digital domain. This dissertation contributes a novel approach that investigates the comparative effectiveness of digital representations of sign language within different information delivery contexts. The empirical studies presented support the characterisation of the properties that make a DRSL an effective communication system, properties which, when described by the Deaf community, were often referred to as "emotion". This has led to, and supported, the development of the proposed design methodology for the "Emotional Engineering of Artificial Sign Languages", which forms the main contribution of this thesis.

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might have with regard to understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken language, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth, et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with emotional facial expressions (EFEs). However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us an insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
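
    The morph-based augmentation described above can be pictured with a short sketch in Python. It is a minimal illustration, not the eSIGN pipeline itself: the emotion labels follow Ekman's widely cited set, while the morph-target names, weights, and blending rule are hypothetical assumptions made for the example.

    # Minimal sketch of overlaying an underlying emotional facial expression on a
    # signing avatar via morph targets (blend shapes). The emotion labels follow
    # Ekman's widely cited set; the morph-target names, weights and blending rule
    # are illustrative assumptions, not the eSIGN implementation.

    # Hypothetical morph-target weights (0.0-1.0) per underlying emotion.
    EMOTION_MORPHS = {
        "joy":      {"mouth_corner_up": 0.7, "cheek_raise": 0.5},
        "sadness":  {"brow_inner_up": 0.6, "mouth_corner_down": 0.5},
        "anger":    {"brow_lower": 0.8, "lid_tighten": 0.4},
        "surprise": {"brow_raise": 0.9, "jaw_drop": 0.3},
    }

    def blend_expression(base_weights, emotion, intensity):
        """Combine a sign's own non-manual features with an underlying emotion.

        The emotional morphs are scaled by `intensity` and added on top of the
        linguistic facial configuration, clamped to [0, 1] so the grammar-bearing
        features are never overwritten outright.
        """
        blended = dict(base_weights)
        for target, weight in EMOTION_MORPHS.get(emotion, {}).items():
            blended[target] = min(1.0, blended.get(target, 0.0) + intensity * weight)
        return blended

    # Example: a sign whose grammar requires raised eyebrows, uttered with mild joy.
    print(blend_expression({"brow_raise": 0.8}, emotion="joy", intensity=0.5))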

    Application-driven visual computing towards Industry 4.0 (2018)

    245 p. This thesis gathers contributions in three fields: 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous and attractive to the user. These IVAs can interact with users in a natural way. 2. Immersive VR/AR Environments: VR in production planning, product design, process simulation, testing and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment. In the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way. 3. Interactive Management of 3D Models: online management and visualisation of multimedia CAD models, through automatic conversion of CAD models to the Web. Web3D technology allows these models to be visualised and interacted with on low-power mobile devices. In addition, these contributions have made it possible to analyse the challenges posed by Industry 4.0. The thesis has contributed a proof of concept for some of those challenges: in human factors, simulation, visualisation and model integration.
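
    The "automatic conversion of CAD models to the Web" step can be sketched briefly. The thesis does not name its toolchain, so the example below is an assumption: it takes a mesh already tessellated by a CAD package (e.g. an STL export), uses the open-source trimesh library, and writes a binary glTF (GLB) file that lightweight Web3D viewers on low-power mobile devices can load. File names are placeholders.

    # Hedged sketch of the "CAD model to the Web" conversion step: convert a mesh
    # exported from a CAD package into a compact GLB (binary glTF) file suitable
    # for lightweight Web3D viewers on low-power mobile devices. The library
    # choice (trimesh) and the file names are assumptions, not the actual
    # pipeline described in the thesis.
    import trimesh

    def cad_mesh_to_web(input_path, output_path):
        """Load a tessellated CAD export (e.g. STL) and write it as GLB."""
        mesh = trimesh.load(input_path, force="mesh")
        # A mesh-simplification step could be added here to keep payloads small
        # for mobile devices; it is omitted to keep the sketch minimal.
        mesh.export(output_path)  # output format inferred from the .glb extension

    if __name__ == "__main__":
        cad_mesh_to_web("part.stl", "part.glb")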

    THE REALISM OF ALGORITHMIC HUMAN FIGURES: A Study of Selected Examples, 1964 to 2001

    It is more than forty years since the first wireframe images of the Boeing Man revealed a stylized human pilot in a simulated pilot's cabin. Since then, it has almost become standard to include scenes in Hollywood movies which incorporate virtual human actors. A trait particularly recognizable in the games industry world-wide is the eagerness to render athletic muscular young men, and young women with hour-glass body-shapes, who traverse dangerous cyberworlds as invincible heroic figures. Tremendous efforts in algorithmic modeling, animation and rendering are spent to produce a realistic and believable appearance for these algorithmic humans. This thesis develops two main strands of research by interpreting a selection of examples. Firstly, in the computer graphics context, it documents the development over these forty years of the creation of the naturalistic appearance of images (usually called photorealism). In particular, it describes and reviews the impact of key algorithms in the course of the journey of the algorithmic human figures towards realism. Secondly, taking a historical perspective, this work provides an analysis of computer graphics in relation to the concept of realism. A comparison of realistic images of human figures throughout history with their algorithmically-generated counterparts allows us to see that computer graphics has both learned from previous and contemporary art movements such as photorealism and taken out-of-context elements, symbols and properties from these art movements with a questionable naivety. Therefore, this work also offers a critique of the justification for the use of their typical conceptualization in computer graphics. Although the astounding technical achievements in the field of algorithmically-generated human figures are paralleled by an equally astounding disregard for the history of visual culture, from the beginnings in 1964 until the breakthrough in 2001, in the period of the digital information processing machine, a new approach has emerged to meet the apparently incessant desire of humans to create artificial counterparts of themselves. Conversely, the theories of traditional realism have to be extended to include the new problems that these active algorithmic human figures present.

    Towards a Linguistically Motivated Irish Sign Language Conversational Avatar

    Avatars are life-like characters that exist in a virtual world on our computer monitors. They are synthetic actors that have, in more recent times, received a significant amount of investigation and development. This is primarily due to leverage gained from advances in computing power and 3D animation technologies. Since the release of the movie “Avatar” last year, there is also a broader awareness of and interest in avatars in the public domain. Ishizuka and Prendinger (2004) describe how researchers, while endeavouring to develop a creature that is believable and capable of intelligible communication, use a wide variety of terms to describe their work: avatars, anthropomorphic agents, creatures, synthetic actors, non-player characters, embodied conversational agents, bots, intelligent agents. While most of these terms are inspired by character-specific applications, some intend to draw attention to a particular aspect of the life-like character. To date, it seems that there is no universal agreement with regard to terminology. The term avatar can be used to refer to the visual representation of a human being within a virtual environment, whereas the term embodied conversational agent refers to a character that visually incorporates knowledge about the conversational process. For the purpose of this research, the term embodied conversational agent is deemed an appropriate descriptor for the synthetic agent undergoing development. The value that Role and Reference Grammar (RRG) contributes to this is that it is a theory of grammar concerned with the interaction of syntax, semantics and pragmatics across grammatical systems. RRG can be characterised as a descriptive framework for the analysis of languages and also an explanatory framework for the analysis of language acquisition (Van Valin, 2008). As a lexicalist theory of grammar, RRG can be described as being well motivated cross-linguistically. The grammar model links the syntactic structure of a sentence to its semantic structure by means of a linking algorithm, which is bi-directional in nature. With respect to cognitive issues, RRG adopts the criterion of psychological adequacy formulated in Dik (1991), which states that a theory should be compatible with the results of psycholinguistic research on the acquisition, processing, production, interpretation and memorisation of linguistic expressions. It also accepts the criterion put forward in Bresnan and Kaplan (1982) that theories of linguistic structure should be directly relatable to testable theories of language production and comprehension. RRG incorporates many of the viewpoints of current functional grammar theories. RRG takes language to be a system of communicative social action, and accordingly, analysing the communicative functions of grammatical structures plays a vital role in grammatical description and theory from this perspective. The view of the lexicon in RRG is such that lexical entries for verbs should contain unique information only, while as much information as possible should be derived from general lexical rules. It is envisaged that the RRG parser/generator described in this paper will later be used as a component in the development of a computational framework for the embodied conversational agent for ISL. This poses significant technical and theoretical difficulties both within RRG and for the software (Nolan and Salem 2009; Salem, Hensman and Nolan 2009).
As ISL is a visual-gestural language without any aural or written form, like all other sign languages, the challenge is to extend the RRG view of the lexicon and the layered structure of the word, indeed the model itself, to accommodate sign languages. In particular, the morphology of sign languages is concerned with manual and non-manual features: handshapes across the dominant and non-dominant hand in simultaneous signed constructions, and the head, eyebrows and mouth shape. These are the morphemes and lexemes of sign language. How can these fit into the RRG lexicon, and what difficulties does this present for RRG at the semantic-morphosyntax interface? This paper discusses this research as a work in progress to date. It is envisaged that the embodied conversational agent undergoing development in this research will later be employed for real-time sign language visualisation for Irish Sign Language (ISL).
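
    As a rough illustration of the lexicon extension discussed above, the Python sketch below models a sign language lexical entry that records both manual features (handshape, movement and location across the dominant and non-dominant hands) and non-manual features (eyebrows, mouth, head), alongside an RRG-style logical structure. The field names and the sample GIVE entry are hypothetical stand-ins, not the representation adopted in this research.

    # Illustrative data structure for a sign language lexical entry that combines
    # manual and non-manual features with an RRG-style logical structure, of the
    # kind an extended RRG lexicon for ISL would need to accommodate. Field names
    # and the sample entry are assumptions made for this sketch, not the
    # representation adopted in the research.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ManualFeatures:
        dominant_handshape: str
        non_dominant_handshape: Optional[str] = None  # None for one-handed signs
        movement: str = ""
        location: str = ""

    @dataclass
    class NonManualFeatures:
        eyebrows: str = "neutral"  # e.g. "raised" for polar questions
        mouth: str = "neutral"
        head: str = "neutral"

    @dataclass
    class SignEntry:
        gloss: str                # conventional gloss, e.g. "GIVE"
        logical_structure: str    # RRG-style semantic representation
        manual: ManualFeatures
        non_manual: NonManualFeatures = field(default_factory=NonManualFeatures)

    # Hypothetical entry for the verbal sign glossed GIVE.
    give = SignEntry(
        gloss="GIVE",
        logical_structure="[do'(x, Ø)] CAUSE [BECOME have'(y, z)]",
        manual=ManualFeatures(dominant_handshape="flat-O", movement="path: x to y"),
        non_manual=NonManualFeatures(mouth="pursed"),
    )
    print(give.gloss, give.logical_structure)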

    Web-based animation software to improve the speaking skills of preschoolers at Escuela de Educación Básica Educa, Salinas, province of Santa Elena, school year 2016-2017

    The objective of this research work was to improve the speaking skills of preschoolers at Escuela de Educación Básica Educa, Salinas, province of Santa Elena, in the 2016-2017 school year, through web-based animation software. This research work applied both qualitative and quantitative methods. The researcher worked collaboratively with the specialist, the principal, the English teachers and the preschoolers in order to address the deficiency in speaking skills that was the main issue during this research. The participants were 15 preschoolers of Escuela de Educación Básica Educa. The data collected in this study were qualitative and quantitative. The qualitative data were obtained by observing the teaching-learning process and by interviewing the specialists, the principal and the English teachers about the implementation of web-based animation software in English classes. Furthermore, the quantitative data were gained by assessing the students’ speaking skills through a pre-test and a post-test, which helped to identify how the preschoolers increased their proficiency in the English language, with a focus on the speaking ability. Moreover, the application of this strategy in education appears to be a good pedagogical tool for English teachers who seek to teach English with technology in an easy and fun way. The research results showed that the implementation of the web-based animation software was effective in improving the speaking ability and also contributed to increasing the preschoolers’ motivation to learn the English language.
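
    The quantitative pre- and post-test comparison mentioned above can be sketched as a simple paired analysis in Python. The scores below are invented placeholders rather than data from the study, and the choice of a paired t-test is an assumption made for the illustration.

    # Minimal sketch of the quantitative side of the study: comparing speaking
    # scores before and after using the web-based animation software with a
    # paired-samples t-test. The numbers are invented placeholders, not data
    # from the thesis.
    from statistics import mean
    from scipy.stats import ttest_rel

    pre_scores  = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 3, 4, 5, 4]  # 15 preschoolers
    post_scores = [6, 7, 5, 8, 5, 7, 6, 4, 7, 6, 8, 5, 6, 8, 6]

    gain = mean(post_scores) - mean(pre_scores)
    t_stat, p_value = ttest_rel(post_scores, pre_scores)
    print(f"mean gain = {gain:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")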

    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.