
    Design with Emotion: Improving Web Search Experience for Older Adults

    Research indicates that older adults search for information about 15% less overall than younger adults prior to making decisions. Prior research attributed this behavior mainly to age-related cognitive difficulties; however, recent studies indicate that emotion also influences search decision quality. This research asks why older adults search less and how their search behavior could be improved. It is motivated by the broader issues of older users' search behavior, while focusing on the emotional usability of search engine user interfaces. The research therefore pursues three objectives: a) to explore the use of low-level design elements as emotion manipulation tools, b) to seamlessly integrate these design elements into existing search engine interfaces, and c) to evaluate the impact of emotional design elements on search performance and user satisfaction. To achieve these objectives, two usability studies were conducted. The first study explored the emotion-induction capabilities of colors, shapes, and combinations of both, in order to determine whether the proposed design elements have strong mood-induction capabilities. The results demonstrated that low-level design elements such as color and shape have strong visceral effects and could serve as viable means of inducing emotional states in users without the users being aware of their presence. The second study evaluated alternative search engine user interfaces, derived from this research, for search thoroughness and user preference. In general, search-based performance variables showed that participants searched more thoroughly using interface types that integrate angular shape features. In addition, user preference variables indicated that participants enjoyed search tasks more on search engine interfaces that used color/shape combinations. Overall, the results indicate that seamless integration of low-level emotional design elements into existing search engine interfaces could improve the web search experience.

    Personality Assessment Using Biosignals and Human Computer Interaction applied to Medical Decision Making

    Clinical decision-making for patients with multiple acute or chronic diseases (i.e. multimorbidity) is complex. There is often no ’right’ or optimal treatment, due to the potentially harmful effects of multiple interactions between drugs and diseases. This makes it necessary to establish trade-offs between the benefits and risks of different treatment strategies, which in turn means that decisions may be made under high levels of risk and uncertainty. One factor that can influence how decisions are made under such conditions is the decision maker’s personality. The studies in this dissertation used biosignals, eye-tracking methods, and newly developed pointer-tracking techniques to monitor human-computer interaction and, using machine learning, assess the individual personality of decision makers. Data acquisition systems were designed and prepared to collect and synchronize: 1) physiological data - electrocardiogram, blood volume pulse, and electrodermal activity; 2) human-computer interaction data - pointer movements, eye tracking, and pupil diameter; 3) decision-making task data; and 4) personality questionnaire results. A set of processing tools was developed to ensure the correct extraction of psychophysiology-related features that could manifest personality. These features were combined by several machine learning algorithms to predict the Big-Five personality traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Each of the five personality traits was well modelled by at least one of the extracted feature sets. With a sample of 88 students, features from pointer movements in online surveys predicted four personality traits with a mean squared error (MSE) < 0.46. Blood volume pulse responses in a decision-making task, trained on a distinct sample of 79 students, predicted four personality traits with an MSE < 0.49. Applying the pointer-movement personality models to the personality questionnaire in a sample of 12 medical doctors achieved an MSE < 0.40 for three personality traits. These were the best results achieved in each context of this thesis. The outcomes of this work demonstrate the considerable potential of broader models that predict personality from human behaviour, with possible applications in a wide variety of fields, such as human resources, medical research studies, or machine learning approaches.
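The pipeline described above (behavioural features regressed onto questionnaire trait scores, evaluated by MSE on held-out participants) can be sketched as follows. The feature names, data, and least-squares model here are invented stand-ins for illustration, not the dissertation's actual methods.

```python
# Hypothetical sketch: predicting one personality-trait score from
# pointer-movement features, scored by mean squared error (MSE).
# Features and data are synthetic stand-ins, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pointer features per participant: mean speed, pause count, curvature.
X = rng.normal(size=(88, 3))
true_w = np.array([0.4, -0.2, 0.1])
# Stand-in trait score derived from the features plus noise
# (in the study, targets came from a personality questionnaire).
y = X @ true_w + rng.normal(scale=0.1, size=88)

# Fit on the first 60 participants, evaluate on the remaining 28.
X_train, y_train, X_test, y_test = X[:60], y[:60], X[60:], y[60:]
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
mse = float(np.mean((X_test @ w - y_test) ** 2))
print(f"test MSE = {mse:.3f}")
```

A held-out MSE of this kind is what the reported thresholds (MSE < 0.46, < 0.49, < 0.40) refer to; the dissertation compared several learning algorithms and feature sets rather than a single linear model.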

    Multimodal interaction: contributions to simplify the development of applications

    Doctoral thesis in Informatics Engineering (Engenharia Informática). The way we interact with the devices around us, in everyday life, is constantly changing, boosted by emerging technologies and methods providing better and more engaging ways to interact with applications. Nevertheless, the integration of these technologies, to enable their widespread use in current systems, presents a notable challenge and requires considerable know-how from developers. While the recent literature has made some advances in supporting the design and development of multimodal interactive systems, several key aspects have yet to be addressed to enable their full potential. Among these, a relevant example is the difficulty of developing and integrating multiple interaction modalities. In this work, we propose, design, and implement a framework enabling easier development of multimodal interaction. Our proposal fully decouples the interaction modalities from the application, allowing the separate development of each part. The proposed framework already includes a set of generic modalities and modules ready to be used in novel applications. Among the proposed generic modalities, the speech modality deserved particular attention, given the increasing relevance of speech interaction, for example in scenarios such as AAL, and the complexity behind its development. Additionally, our proposal also tackles support for managing multi-device applications and includes a method and corresponding module to create fusion of events. The development of the architecture and framework profited from a rich R&D context including several projects, application scenarios, and international partners. The framework successfully supported the design and development of a wide set of multimodal applications, a notable example being AALFred, the personal assistant of the PaeLife project. These applications, in turn, served the continuous improvement of the framework by supporting the iterative collection of novel requirements, enabling the proposed framework to show its versatility and potential.
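The two architectural ideas named in the abstract, modalities decoupled from the application and a module that fuses events arriving from different modalities, can be illustrated with a minimal sketch. All class, event, and command names below are invented; the actual framework's interfaces are not specified in the abstract.

```python
# Hypothetical sketch of modality decoupling and event fusion: modalities
# emit events independently of the application, and a fusion module pairs
# events from different modalities that arrive within a time window.
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str    # e.g. "speech", "touch" (names are illustrative)
    payload: str     # recognized command or gesture
    timestamp: float # seconds

class FusionModule:
    """Pairs events from different modalities within a time window."""
    def __init__(self, window: float = 1.0):
        self.window = window
        self.pending: list[ModalityEvent] = []

    def receive(self, event: ModalityEvent):
        # Look for an earlier event from another modality close enough in time.
        for other in self.pending:
            if (other.modality != event.modality
                    and abs(other.timestamp - event.timestamp) <= self.window):
                self.pending.remove(other)
                return (other.payload, event.payload)  # fused multimodal command
        self.pending.append(event)
        return None  # wait for a partner event

fusion = FusionModule()
fusion.receive(ModalityEvent("speech", "open", 0.2))
result = fusion.receive(ModalityEvent("touch", "calendar-icon", 0.7))
print(result)  # ('open', 'calendar-icon')
```

Because the application only ever sees fused, modality-agnostic commands, new modalities can be added or swapped without touching application code, which is the decoupling benefit the abstract claims.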

    Design and semantics of form and movement (DeSForM 2006)

    Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new-generation product and interface design. The products and interfaces are concerned with the context of ubiquitous computing and ambient technologies, and the need for greater empathy in the pre-programmed behaviour of the ‘machines’ that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design, and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences that enabled the presentation and debate of international work within this field:
    • 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
    • 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
    • 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
    Philips sponsorship of practice-based enquiry led to research by three teams of research students over three years and ongoing sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited onto the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles.

    Experimental Studies in Learning Technology and Child–Computer Interaction

    This book is about the ways in which experiments can be employed in the context of research on learning technologies and child–computer interaction (CCI). It is directed at researchers, supporting them in employing experimental studies while increasing their quality and rigor. The book provides a complete and comprehensive description of how to design, implement, and report experiments, with a focus on and examples from CCI and learning technology research. The topics covered include an introduction to CCI and learning technologies as interdisciplinary fields of research, how to design educational interfaces and visualizations that support experimental studies, the advantages and disadvantages of a variety of experiments, methodological decisions in designing and conducting experiments (e.g. devising hypotheses and selecting measures), and the reporting of results. In addition, a brief introduction is given on how contemporary advances in data science, artificial intelligence, and sensor data have impacted learning technology and CCI research. The book details three important issues that a learning technology and CCI researcher needs to be aware of: the importance of the context, ethical considerations, and working with children. The motivation behind and emphasis of this book is helping prospective CCI and learning technology researchers (a) to evaluate the circumstances that favor (or do not favor) the use of experiments, (b) to make the necessary methodological decisions about the type and features of the experiment, (c) to design the necessary “artifacts” (e.g., prototype systems, interfaces, materials, and procedures), (d) to operationalize and conduct experimental procedures to minimize potential bias, and (e) to report the results of their studies for successful dissemination in top-tier venues (such as journals and conferences). This book is an open access publication.
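The core analysis step the book prepares researchers for, comparing an outcome measure between experimental conditions, can be sketched in a few lines. The data and group labels below are invented for illustration; the book itself covers the full range of designs and measures, not this one test.

```python
# Minimal sketch of a two-condition experiment analysis: comparing post-test
# scores between a treatment and a control group with Welch's t statistic.
# Scores are invented for illustration.
import math
import statistics

treatment = [82, 75, 90, 68, 88, 79, 85, 73]  # scores with the new interface
control   = [70, 66, 74, 61, 78, 69, 72, 64]  # scores with the baseline

def welch_t(a, b):
    """Welch's t statistic: mean difference over its standard error,
    without assuming equal variances between groups."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

t = welch_t(treatment, control)
print(f"Welch's t = {t:.2f}")  # compare against a t distribution for a p-value
```

In practice a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` would be used, together with the effect-size and power considerations the book discusses.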

    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational games designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching with which this audience may not yet be familiar.

    Motion-Based Video Games for Older Adults in Long-Term Care

    Older adults in residential care often lead sedentary lifestyles despite physical and cognitive activities being crucial for their well-being. Care facilities face the challenge of encouraging their residents to participate in leisure activities, but as the impact of age-related changes grows, few activities remain accessible. Video games in general – and motion-based games in particular – hold the promise of providing mental, physical and social stimulation for older adults. However, the accessibility of commercially available games for older adults is not considered during the development process. Therefore, many older adults are unable to obtain any of the benefits. In my dissertation, this issue is addressed through the development of motion-based game controls that specifically address the needs of older adults. The first part of this thesis lays the foundation by providing an overview of motion-based game interaction for older adults. The second part demonstrates the general feasibility of motion-based game controls for older adults, develops full-body motion-based and wheelchair-based game controls, and provides guidelines for accessible motion-based game interaction for institutionalized older adults. The third part of this thesis builds on these results and presents two case studies. Motion-based controls are applied and further evaluated in game design projects addressing the special needs of older adults in long-term care, with the first case study focusing on long-term player engagement and the role of volunteers in care homes, and the second case study focusing on connecting older adults and caregivers through play. The results of this dissertation show that motion-based game controls can be designed to be accessible to institutionalized older adults. 
My work also shows that older adults enjoy engaging with motion-based games, and that such games have the potential to positively influence them by providing a physically and mentally stimulating leisure activity. Furthermore, results from the case studies reveal the benefits and limitations of computer games in long-term care. Fostering inclusive efforts in game design and ensuring that motion-based video games are accessible to broad audiences is an important step toward allowing all players to obtain the full benefits of games, thereby contributing to the quality of life of diverse audiences.

    Social gaze


    Getting the Upper Hand: Natural Gesture Interfaces Improve Instructional Efficiency on a Conceptual Computer Lesson

    As gesture-based interactions with computer interfaces become more technologically feasible for educational and training systems, it is important to consider what interactions are best for the learner. Computer interactions should not interfere with learning nor increase the mental effort of completing the lesson. The purpose of the current set of studies was to determine whether natural gesture-based interactions, or instruction of those gestures, help the learner in a computer lesson by increasing learning and reducing mental effort. First, two studies were conducted to determine which gestures were considered natural by participants. Then, those gestures were implemented in an experiment comparing type of gesture and type of gesture instruction on learning conceptual information from a computer lesson. The goal of these studies was to determine the instructional efficiency – that is, the extent of learning taking into account the amount of mental effort – of implementing gesture-based interactions in a conceptual computer lesson. To test whether the type of gesture interaction affects conceptual learning in a computer lesson, the gesture-based interactions were either naturally- or arbitrarily-mapped to the learning material on the fundamentals of optics. The optics lesson presented conceptual information about reflection and refraction, and participants used the gesture-based interactions during the lesson to manipulate on-screen lenses and mirrors in a beam of light. The beam of light refracted/reflected at the angle corresponding to the type of lens/mirror. The natural gesture-based interactions were those that mimicked the physical movement used to manipulate the lenses and mirrors in the optics lesson, while the arbitrary gestures were those that did not match the movement of the lens or mirror being manipulated.
The natural gestures implemented in the computer lesson were determined from Study 1, in which participants performed gestures they considered natural for a set of actions, and were rated in Study 2 as most closely resembling the physical interaction they represent. The arbitrary gestures were those rated by participants as most arbitrary for each computer action in Study 2. To test whether the effect of novel gesture-based interactions depends on how they are taught, the way the gestures were instructed was varied in the main experiment by using either video- or text-based tutorials. Results of the experiment support that natural gesture-based interactions were better for learning than arbitrary gestures, and that instruction of the gestures largely did not affect learning or the amount of mental effort felt during the task. To further investigate the factors affecting instructional efficiency in using gesture-based interactions for a computer lesson, individual differences of the learner were taken into account. Results indicated that the instructional efficiency of the gestures and their instruction depended on an individual's spatial ability, such that arbitrary gesture interactions taught with a text-based tutorial were particularly inefficient for those with lower spatial ability. These findings are explained in the context of Embodied Cognition and Cognitive Load Theory, and guidelines are provided for instructional design of computer lessons using natural user interfaces. The theoretical frameworks of Embodied Cognition and Cognitive Load Theory were used to explain why gesture-based interactions and their instructions impacted instructional efficiency in a computer lesson.
Gesture-based interactions that are natural (i.e., that mimic the physical interaction by corresponding to the learning material) were more instructionally efficient than arbitrary gestures, because natural gestures may help schema development of conceptual information through physical enactment of the learning material. Furthermore, natural gestures resulted in lower cognitive load than arbitrary gestures, because arbitrary gestures that do not match the learning material may increase working memory processing not associated with the learning material during the lesson. Additionally, the way in which the gesture-based interactions were taught was varied by instructing the gestures with either video- or text-based tutorials. It was hypothesized that video-based tutorials would be a better way to instruct gesture-based interactions, because the videos may help the learner visualize the interactions and create a more easily recalled sensorimotor representation of the gestures; however, this hypothesis was not supported, and there was no strong evidence that video-based tutorials were more instructionally efficient than text-based instructions. The results of the current set of studies can be applied to educational and training systems that incorporate a gesture-based interface. The finding that more natural gestures are better for learning efficiency, cognitive load, and a variety of usability factors should encourage instructional designers and researchers to keep the user in mind when developing gesture-based interactions.
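The instructional-efficiency measure named throughout this abstract (learning relative to invested mental effort) is commonly operationalized in the cognitive-load literature as E = (z_performance - z_effort) / sqrt(2), the Paas and van Merrienboer metric. Whether this exact formula was used in these studies is an assumption, and the condition means below are invented for illustration.

```python
# Sketch of the standard instructional-efficiency computation:
# E = (z_performance - z_effort) / sqrt(2), where z-scores are taken
# across conditions. Condition labels and values are hypothetical.
import math
import statistics

def z_scores(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical per-condition means: post-test score and subjective effort.
labels      = ["natural/video", "natural/text", "arbitrary/video", "arbitrary/text"]
performance = [78.0, 71.0, 65.0, 60.0]  # test scores
effort      = [3.1, 3.4, 4.0, 4.6]      # 1-9 mental-effort ratings

zp, ze = z_scores(performance), z_scores(effort)
efficiency = [(p - e) / math.sqrt(2) for p, e in zip(zp, ze)]

# Higher E means more learning per unit of invested mental effort.
for label, E in zip(labels, efficiency):
    print(f"{label}: E = {E:+.2f}")
```

Under this metric, a condition that yields both higher performance and lower effort (as natural gestures did relative to arbitrary ones here) necessarily scores higher on E.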