63 research outputs found

    Tell Me How You Feel: Designing Emotion-Aware Voicebots to Ease Pandemic Anxiety In Aging Citizens

    Full text link
    Feelings of anxiety and loneliness among the aging population have recently been amplified by COVID-19-related lockdowns. An emotion-aware multimodal bot application combining a voice and visual interface was developed to address this problem for older citizens. The application is novel in that it combines three main modules: information, emotion selection, and psychological intervention, with the aim of improving human well-being. A preliminary study with the target group confirmed that multimodality improves usability and that the information module is essential for participation in a psychological intervention. The solution is universal and can also be applied to areas not directly related to the COVID-19 pandemic. Comment: 16 pages
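
    As an illustration of the three-module structure described above, the sketch below shows one hypothetical way a dialogue controller could route between the information, emotion selection, and intervention steps. All function names and prompts are invented for this example; they are not taken from the paper.

```python
# Hypothetical sketch of the three-module flow (information -> emotion selection
# -> psychological intervention). Module names and prompts are invented here;
# only the overall structure follows the paper's description.

def information_module():
    # Gives pandemic-related guidance, then hands over to emotion selection.
    return ("Here is today's guidance for older adults. "
            "Would you like to tell me how you feel?", "emotion_selection")

def emotion_selection_module(reply):
    # A multimodal UI would offer spoken and on-screen emotion choices;
    # this sketch just keyword-matches the spoken reply.
    if any(word in reply.lower() for word in ("anxious", "worried", "lonely")):
        return ("Thank you for sharing. Let's try a short breathing exercise.",
                "intervention")
    return ("Glad to hear it. Ask me anytime for more information.", "information")

def intervention_module():
    # Placeholder for a scripted psychological intervention.
    return ("Breathe in for four counts, hold, and breathe out slowly.", "information")

if __name__ == "__main__":
    prompt, _ = information_module()
    print(prompt)
    prompt, _ = emotion_selection_module("I feel a bit anxious today")
    print(prompt)
    prompt, _ = intervention_module()
    print(prompt)
```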

    Enhancing portuguese public services: prototype of a mobile application with a digital assistant

    Get PDF
    Project work presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia. Artificial intelligence (AI) is transforming the way we interact with technology, including how citizens access and engage with government services. Portugal has developed a national strategy for AI adoption to improve citizen experience and engagement, with a focus on digital inclusion and the digitalization of public administration. Despite this progress, the country lags behind other European Union countries in digital transformation. To simplify and modernize public services, Portugal has introduced the ePortugal portal, featuring a chatbot named “Sigma” and a virtual assistant that is currently in a test version. The adoption of conversational AI systems, such as voice assistants and chatbots, has the potential to reduce administrative burdens, improve accessibility, and enhance citizen engagement. This project aims to design a mobile application for ePortugal, featuring a digital assistant equipped with both text and voice functionalities.

    Challenges and Opportunities for the Design of Smart Speakers

    Full text link
    Advances in voice technology and voice user interfaces (VUIs) -- such as Alexa, Siri, and Google Home -- have opened up the potential for many new types of interaction. However, despite the potential of these devices, reflected in the growing market and body of VUI research, there is a lingering sense that the technology is still underused. In this paper, we conducted a systematic literature review of 35 papers to identify and synthesize 127 VUI design guidelines into five themes. Additionally, we conducted semi-structured interviews with 15 smart speaker users to understand their use and non-use of the technology. From the interviews, we distill four design challenges that contribute the most to non-use. Based on their (non-)use, we identify four opportunity spaces for designers to explore, such as focusing on information support while multitasking (cooking, driving, childcare, etc.), incorporating users' mental models of smart speakers, and integrating calm design principles. Comment: 15 pages, 7 figures

    Exploring user experience (UX) factors For ICTD services

    Get PDF
    Consistent with global initiatives such as the United Nations' World Summit on the Information Society (WSIS), the introduction of Information and Communication Technology (ICT) for human development has led to ICT-based services aimed at facilitating the socio-economic development of marginalized communities. The use of ICTs has always drawn on the concept of Human-Computer Interaction (HCI), which concerns the methods by which humans interact with technology. The types of User Interfaces (UIs) and interaction techniques that people use to interact with ICTs affect the way they perceive technology and, eventually, their acceptance of it. Current ICT systems still have not adopted the concept of placing the user at the core of the interaction. Users are still required to adapt themselves to the interface's characteristics, which limits the number of people who can use the system. As a result, the information embedded in these technologies remains inaccessible and of little use to Marginalized Rural Area (MRA) users. Such usability challenges can be mitigated and avoided by matching UI components with users' mental models, language, preferences, needs, and other socio-cultural artefacts. In this research, literature in Human-Computer Interaction (HCI) is reviewed with emphasis on usability and User Experience (UX) during user interaction with ICTs using various modes of interaction. HCI emphasizes the need for systems to take account of users' characteristics such as their abilities, needs, socio-cultural experiences, behaviours, and interests. To meet the requirements of UX, the user, the system, and the context of use need to be evaluated, taking into consideration that changing one entity modifies the UX. This is achieved by persona profiling to determine the key characteristics of the user communities, clustered according to the key UX attributes, and subsequently by detailed usability evaluations, including the use of the System Usability Scale (SUS) to determine user satisfaction with various UI components and techniques per identified persona, thus providing a persona mapping for the usability of Information and Communication Technology for Development (ICTD) services. The results of this research reflect the importance of creating personas for usability testing. Some personas have no problem interacting with most of the interfaces, and their choice of interface is a matter of preference; for other personas, their skills and level of experience with ICTs motivate their choice of interface. The UI quality that users across the spectrum appreciate is consistency, which makes interaction easier and more natural. Common obstacles in current UIs that inhibit users from MRAs include the heavy use of text, unintuitive navigation structures, and the use of a foreign language. Differences in UIs from different application developers present an inconsistency that challenges users from rural areas; these differences include the layout, the text entry methods, and the form of output produced. A solution identified in the usability tests is the use of speech-enabled interfaces in a language that can be understood by the target audience. In addition, the literature study found that the UX of interfaces can be improved by the use of less textual or text-free interfaces.
Based on the literature, users from MRAs can benefit from handwriting-based UIs for text entry, which mimic a pen-and-paper environment for literate users who have experience with writing. Finally, the use of numbered options can assist illiterate users in tasks that require choosing options and in navigation. Therefore, consistency in UIs designed for MRA users can improve the usability of these interfaces and thus improve the overall UX.
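
    Since the study relies on the System Usability Scale (SUS) for satisfaction scores, the snippet below shows the standard SUS scoring rule (odd items contribute the response minus 1, even items contribute 5 minus the response, and the raw sum is scaled by 2.5). The example ratings are illustrative and are not data from this research.

```python
def sus_score(responses):
    """Return the 0-100 SUS score for a list of ten 1-5 ratings."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten ratings between 1 and 5")
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: response - 1
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: 5 - response
    return (odd + even) * 2.5                   # scale the 0-40 raw sum to 0-100

if __name__ == "__main__":
    # One participant's (made-up) questionnaire answers.
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```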

    Mobile phones interaction techniques for second economy people

    Get PDF
    Second economy people in developing countries are people living in communities that are underserved in terms of basic amenities and social services. Due to literacy challenges and user accessibility problems in rural communities, it is often difficult to design user interfaces that conform to the capabilities and cultural experiences of low-literacy rural community users. Rural community users are often technologically illiterate and lack knowledge of the potential of information and communication technologies. In order to embrace new technology, users need to perceive the user interface and application as useful and easy to interact with. This requires a proper understanding of the users and their socio-cultural environment, so that interfaces and interactions conform to their behaviours and motivations as well as their cultural experiences and preferences, and thus enhance usability and user experience. Mobile phones have the potential to increase access to information and provide a platform for economic development in rural communities. Rural communities have economic potential in terms of agriculture and micro-enterprises, and information technology can be used to enhance socio-economic activities and improve rural livelihoods. We conducted a study to design user interfaces for a mobile commerce application for micro-entrepreneurs in a rural community in South Africa. The aim of the study was to design mobile interfaces and interaction techniques that are easy to use and meet the cultural preferences and experiences of users who have little to no previous experience of mobile commerce technology, to explore the potential of information technologies for rural community users, and to bring mobile value-added services to rural micro-entrepreneurs. We applied a user-centred design approach in the Dwesa community and used qualitative and quantitative research methods to collect data for the design of the user interfaces (a graphical user interface and a voice user interface) and the mobile commerce application. We identified and used several interface elements to design and finally evaluate the graphical user interface. The statistical analysis of the evaluation results shows that users in the community have a positive perception of the usefulness of the application, its ease of use, and their intention to use it. Community users with no prior experience of this technology were able to learn and understand the interface, and recorded minimal errors and a high level of precision during task performance when they interacted with the shop-owner graphical user interface. The voice user interface designed in this study comes in two flavours (dual-tone multi-frequency input and voice input) for rural users. The evaluation results show that community users recorded higher task success and fewer errors with the dual-tone multi-frequency input interface than with the voice-only input interface, and a higher percentage of users prefer the dual-tone multi-frequency input interface. The t-test analysis performed on task completion times and error rates shows that there was a statistically significant difference between the dual-tone multi-frequency input interface and the voice input interface. The interfaces were easy to learn, understand, and use. Properly designed user interfaces that meet the experience and capabilities of low-literacy users in rural areas will improve usability and user experience.
Adapting interfaces to users' culture and preferences will enhance the accessibility of information services among different user groups in different regions and will promote technology acceptance in rural communities for socio-economic benefit. The user interfaces presented in this study can be adapted to different cultures to provide similar services for marginalised communities in developing countries.
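
    The comparison of the two VUI flavours rests on an independent-samples t-test over task completion times and error rates. A minimal sketch of that kind of analysis is shown below, using made-up completion times rather than the study's data.

```python
# Illustrative independent-samples t-test on hypothetical task-completion times
# (seconds) for the two interface flavours; not the study's actual data.
from scipy import stats

dtmf_times = [34.1, 29.8, 41.0, 36.5, 31.2, 38.7, 33.9, 30.4]
voice_times = [47.3, 52.1, 44.8, 58.0, 49.6, 55.2, 46.1, 51.7]

t_stat, p_value = stats.ttest_ind(dtmf_times, voice_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant difference
# between the DTMF-input and voice-input interfaces.
```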

    Constructing An Auditory Notation in Software Engineering: Understanding UML Models With Voice And Sound

    Get PDF
    Sound is crucial to how we interact with the world around us, providing feedback and contextualising information. However, when discussed in software, it is not given the same importance as vision. Neglecting this channel results in untapped possibilities that could enhance the user experience, and in the exclusion of many visually impaired people from activities related to software engineering, since the visual notations and well-accepted tools in this field, such as UML, do not support audio. Several technologies have been developed and integrated into prototypes. Still, it became evident during our research that, among other factors, their usability is greatly impacted by unsuitable choices of sound and voice symbolism, as well as poorly designed interaction dialogues that can become too cumbersome to use. Sound should be analysed in the context of software engineering, as it has unexplored potential to contribute significantly to how we construct and interact with software, while allowing blind and visually impaired people to be part of these activities. For this purpose, we are interested in building a foundational framework to substantiate decisions when designing an auditory notation, and a tool that performs diagrammatic readings of UML, intended to validate these proposals. These are supported by the semiotics of the audible field and music symbology, combined with the insights provided by Moody's Physics of Notations, the findings of other research and tools developed on these topics, and the experimental studies that we carried out, which are presented in this document. We believe that this work can be instrumental in creating a structured and intuitive auditory notation for software engineering, complemented by a tool built in the right direction for accessibility. Furthermore, it is an approach that aims to join the visual and hearing senses in a manner that benefits a large and diverse population of experienced software engineers and novices alike, heightening the visual notation in the process.
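
    To make the idea of a "diagrammatic reading" concrete, here is a small hypothetical sketch that walks a toy UML class model and pairs each element with a spoken phrase and an earcon label. The element kinds, earcon choices, and class names are assumptions made for illustration; they are not the notation or tool proposed in the thesis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UmlClass:
    name: str
    attributes: List[str] = field(default_factory=list)
    associations: List[str] = field(default_factory=list)  # names of related classes

# Hypothetical earcon vocabulary: each element kind gets a distinct sound cue.
EARCONS = {"class": "low marimba tone",
           "attribute": "short click",
           "association": "rising chime"}

def read_aloud(model: List[UmlClass]) -> List[str]:
    """Produce an ordered voice-and-sound script for a class diagram."""
    script = []
    for cls in model:
        script.append(f"[{EARCONS['class']}] Class {cls.name}")
        for attr in cls.attributes:
            script.append(f"  [{EARCONS['attribute']}] attribute {attr}")
        for other in cls.associations:
            script.append(f"  [{EARCONS['association']}] associated with {other}")
    return script

if __name__ == "__main__":
    model = [UmlClass("Order", ["id", "total"], ["Customer"]),
             UmlClass("Customer", ["name"])]
    print("\n".join(read_aloud(model)))
```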

    Embedding Intelligence. Designerly reflections on AI-infused products

    Get PDF
    Artificial intelligence is more or less covertly entering our lives and houses, embedded into products and services that are acquiring novel roles and agency over users. Products such as virtual assistants represent the first wave of materialization of artificial intelligence in the domestic realm and beyond. They are new interlocutors in an emerging, redefined relationship between humans and computers. They are agents, with miscommunicated or unclear properties, performing actions to reach human-set goals. They embed capabilities that industrial products never had. They can learn users' preferences and adapt their responses accordingly, but they are also powerful means to shape people's behavior and build new practices and habits. Nevertheless, the way these products are used does not fully exploit their potential, and they frequently entail poor user experiences, relegating their role to gadgets or toys. Furthermore, AI-infused products need vast amounts of personal data to work accurately, and the gathering and processing of this data are often obscure to end-users. Likewise, how, whether, and when it is preferable to implement AI in products and services is still an open debate. This condition raises critical ethical issues about their usage and may dramatically impact users' trust and, ultimately, the quality of the user experience. The design discipline and the Human-Computer Interaction (HCI) field are just beginning to explore the wicked relationship between Design and AI, looking for a definition of its borders, still blurred and ever-changing. The book approaches this issue from a human-centered standpoint, proposing designerly reflections on AI-infused products. It addresses one main guiding question: what are the design implications of embedding intelligence into everyday objects?

    Wonder Vision-A Hybrid Way-finding System to assist people with Visual Impairment

    Get PDF
    We use multi-sensory information to find our way around environments. Among the senses, vision plays a crucial part in way-finding tasks such as perceiving landmarks and layouts. People with impaired vision may find it difficult to move around in unfamiliar environments because they are unable to use their eyesight to capture critical information. Limited vision can affect how people interact with their environment, especially for navigation, and individuals with varying degrees of vision require different levels of way-finding aids. Blind people rely heavily on white canes, whereas people with low vision can choose magnifiers for amplifying signs, or GPS mobile applications to acquire knowledge before their arrival. The purpose of this study is to investigate the in-situ challenges of way-finding for persons with visual impairments. Using the methodologies of Research through Design (RTD) and User-centered Design (UCD), I conducted online user research and created a series of iterative prototypes leading to a final one: Wonder Vision. It is a hybrid way-finding system that combines Augmented Reality (AR) and a Voice User Interface (VUI) to assist people with visual impairments. The descriptive evaluation suggests Wonder Vision is a possible solution for helping people with visual impairments find their way toward their goals.
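
    As a rough illustration of the hybrid AR-plus-VUI idea, the sketch below turns a list of landmarks detected by an assumed AR scene-understanding layer into a spoken guidance utterance. The data structures, thresholds, and wording are invented for this example and do not reflect the actual Wonder Vision implementation.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    label: str          # e.g. "elevator", as produced by an AR detection layer
    bearing_deg: float  # direction relative to the user's heading (+ right, - left)
    distance_m: float

def spoken_guidance(landmarks, goal_label):
    """Return a short utterance guiding the user toward the requested landmark."""
    matches = [lm for lm in landmarks if lm.label == goal_label]
    if not matches:
        return f"I can't see a {goal_label} nearby. Try turning slowly so I can rescan."
    nearest = min(matches, key=lambda lm: lm.distance_m)
    if abs(nearest.bearing_deg) < 20:
        side = "ahead"
    elif nearest.bearing_deg > 0:
        side = "to your right"
    else:
        side = "to your left"
    return f"The {goal_label} is about {nearest.distance_m:.0f} metres {side}."

if __name__ == "__main__":
    scene = [Landmark("elevator", 35.0, 12.0), Landmark("exit", -5.0, 30.0)]
    print(spoken_guidance(scene, "elevator"))
```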

    Software Usability

    Get PDF
    This volume delivers a collection of high-quality contributions to help broaden developers' and non-developers' minds alike when it comes to considering software usability. It presents novel research and experiences and disseminates new ideas accessible to people who might not be software makers but who are undoubtedly software users.