
    A framework for the design, prototyping and evaluation of mobile interfaces for domestic environments

    The idea of the smart home has been discussed for over three decades, but it has yet to achieve mass-market adoption. This thesis asks the question: why is my home not smart? It highlights four main barriers to adoption and concentrates on a single one of them: usability. It presents an investigation into the design, prototyping and evaluation of mobile interfaces for domestic environments, resulting in the development of a novel framework. A smart home is the physical realisation of a ubiquitous computing system for domestic living. The research area offers numerous benefits to end-users, such as convenience, assistive living, energy saving, and improved security and safety. However, these benefits have yet to become accessible due to a lack of usable smart home control interfaces; this issue is considered a key reason for the lack of adoption and is the focus of this thesis. The framework introduced here comprises three components. First, the Reconfigurable Multimedia Environment (RME), a physical evaluation and observation space for conducting user-centred research. Second, Simulated Interactive Devices (SID), a video-based development and control tool for simulating the interactive devices commonly found within a smart home. Third, iProto, a tool that facilitates the production and rapid deployment of high-fidelity prototypes for mobile touch-screen devices. The framework is evaluated as a round-tripping toolchain for prototyping smart home control and is found to be an efficient process for facilitating the design and evaluation of such interfaces.
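
    As a purely hypothetical illustration of how the three components might fit together in one round trip, the following Python sketch models an iProto-style prototype driving SID-style simulated devices inside an RME-style evaluation session; every class, field, and file name here is invented and is not the thesis's actual API.

    ```python
    # Hypothetical data model for one design/evaluate round trip.
    from dataclasses import dataclass, field

    @dataclass
    class SimulatedDevice:
        """Video-backed stand-in for a real smart-home device (SID-style)."""
        name: str
        video_clip: str                    # clip played when the device changes state
        state: str = "off"

        def actuate(self, new_state: str) -> str:
            self.state = new_state
            return self.video_clip         # the evaluation space plays this clip

    @dataclass
    class Prototype:
        """High-fidelity mobile UI prototype (iProto-style)."""
        screens: list

    @dataclass
    class EvaluationSession:
        """One round trip: a prototype controls simulated devices (RME-style)."""
        prototype: Prototype
        devices: list = field(default_factory=list)

        def run_task(self, device_name: str, new_state: str) -> str:
            for device in self.devices:
                if device.name == device_name:
                    return device.actuate(new_state)
            raise KeyError(device_name)

    session = EvaluationSession(Prototype(screens=["lights", "thermostat"]),
                                [SimulatedDevice("lamp", "lamp_on.mp4")])
    print(session.run_task("lamp", "on"))  # -> lamp_on.mp4
    ```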

    The State of Speech in HCI: Trends, Themes and Challenges


    How language of interaction affects the user perception of a robot

    Spoken language is the most natural way for a human to communicate with a robot, and it may seem intuitive that a robot should communicate with users in their native language. However, it is not clear whether a user's perception of a robot is affected by the language of interaction. We investigated this question in a study with twenty-three native Czech participants who were also fluent in English. The participants were tasked with instructing the Pepper robot on where to place objects on a shelf; the robot was controlled remotely using the Wizard-of-Oz technique. We collected data through questionnaires, video recordings, and a post-experiment feedback session. The results of our experiment show that people perceive an English-speaking robot as more intelligent than a Czech-speaking robot (z = 18.00, p = 0.02). This finding highlights the influence of language on human-robot interaction. Furthermore, we discuss the feedback obtained from the participants in the post-experiment sessions and its implications for HRI design.
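
    The abstract does not say which statistical test produced the reported z value; for a within-subjects comparison of questionnaire ratings such as this, a Wilcoxon signed-rank test is one common choice. The sketch below illustrates that kind of analysis on invented ratings and does not reproduce the study's data or its exact method.

    ```python
    # Illustrative paired comparison of perceived-intelligence ratings.
    # All numbers are made up; they are not the study's data.
    from scipy.stats import wilcoxon

    english_ratings = [5, 4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5]  # hypothetical 1-5 scores
    czech_ratings   = [4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 3, 4]

    stat, p_value = wilcoxon(english_ratings, czech_ratings)
    print(f"W = {stat:.1f}, p = {p_value:.3f}")
    ```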

    Design and Architecture of an Ontology-driven Dialogue System for HPV Vaccine Counseling

    Speech and conversational technologies are increasingly being used by consumers, and one day they will inevitably be integrated into health care. One area where this technology could be of service is patient-provider communication, specifically communicating the risks and benefits of vaccines. The human papillomavirus (HPV) vaccine, in particular, inoculates individuals against certain HPV types responsible for cancers in adulthood, such as cervical and head and neck cancers. My research focuses on the architecture and development of a speech-enabled conversational agent that relies on a series of consumer-centric health ontologies and the technology that utilizes them. Ontologies are computable artifacts that encode and structure domain knowledge and that machines can utilize to provide high-level capabilities, such as reasoning and sharing information. I focus the agent on the HPV vaccine domain to observe whether users respond favorably towards conversational agents and to assess the agent's possible impact on their beliefs about the HPV vaccine. The approach of this study involves a multi-tier structure: the first tier is the domain knowledge base, the second is the application interaction design tier, and the third is the feasibility assessment with participants. The study proposes the following questions: Can ontologies support the system architecture of a spoken conversational agent for HPV vaccine counseling? How is prospective users' perception of the agent and of the HPV vaccine affected after using a conversational agent for HPV vaccine education? The outcome of this study is a comprehensive assessment of a system architecture for a conversational agent for patient-centric HPV vaccine counseling. Each layer of the agent architecture is regulated through domain and application ontologies and supported by the various ontology-driven software components that I developed to compose the architecture. I also present preliminary evidence of high usability of the agent and of improvement in users' health beliefs toward the HPV vaccine. All in all, I introduce a comprehensive and feasible model for the design and development of an open-source, ontology-driven conversational agent for any consumer health domain, and corroborate the viability of a conversational agent as a health intervention tool.
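
    As a rough illustration of what an ontology-driven response layer can look like, the sketch below looks up a concept's annotation in an RDF/OWL file using rdflib; the file name, the reliance on rdfs:label and rdfs:comment, and the lookup strategy are all assumptions for the example, not the study's actual design.

    ```python
    # Minimal sketch: answer a user's question from ontology annotations.
    # Assumes a local file "hpv_vaccine.owl" whose concepts carry
    # rdfs:label and rdfs:comment annotations (an assumption).
    import rdflib
    from rdflib.namespace import RDFS

    g = rdflib.Graph()
    g.parse("hpv_vaccine.owl")

    def answer(topic_label: str) -> str:
        """Return the rdfs:comment of the concept matching the label."""
        for concept in g.subjects(RDFS.label, rdflib.Literal(topic_label)):
            comment = g.value(concept, RDFS.comment)
            if comment is not None:
                return str(comment)
        return "I don't have information on that topic yet."

    # A dialogue manager could route a recognized intent to this lookup:
    print(answer("HPV vaccine side effects"))
    ```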

    Application of Machine Learning within Visual Content Production

    We are living in an era in which digital content is produced at a dazzling pace. The heterogeneity of contents and contexts is so varied that numerous applications have been created to respond to people's and the market's demands. The visual content production pipeline is a generalisation of the process that allows a content editor to create and evaluate a product, such as a video, an image, or a 3D model. Such data is then displayed on one or more devices, such as TVs, PC monitors, virtual reality head-mounted displays, tablets, mobiles, or even smartwatches. Content creation can be as simple as clicking a button to film a video and share it on a social network, or as complex as managing a dense user interface full of parameters with keyboard and mouse to generate a realistic 3D model for a VR game. In the second example, such sophistication results in a steep learning curve for beginner-level users, while expert users regularly need to refine their skills via expensive lessons, time-consuming tutorials, or experience. Thus, user interaction plays an essential role in the diffusion of content creation software, primarily when it is targeted at untrained people. In particular, the fast spread of virtual reality devices into the consumer market has created new opportunities for designing reliable and intuitive interfaces. Such new interactions need to take a step beyond the point-and-click interaction typical of the 2D desktop environment: they need to be smart, intuitive, and reliable, and to interpret 3D gestures, and therefore more accurate algorithms are needed to recognise patterns. In recent years, machine learning, and in particular deep learning, has achieved outstanding results in many branches of computer science, such as computer graphics and human-computer interaction, outperforming algorithms that were considered state of the art; however, there have been only fleeting efforts to translate these advances into virtual reality. In this thesis, we apply deep learning models to two areas of the content production pipeline: advanced methods for user interaction and visual quality assessment. First, we focus on 3D sketching to retrieve models from an extensive database of complex geometries and textures while the user is immersed in a virtual environment. We explore both 2D and 3D strokes as tools for model retrieval in VR, and we implement a novel system for improving the accuracy of searching for a 3D model. We contribute an efficient method for describing models through 3D sketches via iterative descriptor generation, focusing on both accuracy and user experience; to evaluate it, we design a user study comparing different interactions for sketch generation. Second, we explore the combination of sketch input and vocal description to correct and fine-tune the search for 3D models in a database containing fine-grained variation. We analyse sketch and speech queries, identifying a way to incorporate both into our system's interaction loop. Third, in the context of the visual content production pipeline, we present a detailed study of visual metrics and propose a novel method for detecting rendering-based artefacts in images, which exploits deep learning algorithms analogous to those used when extracting features from sketches.
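
    One plausible reading of the retrieval step is that each sketch is encoded into a fixed-length descriptor and matched against precomputed descriptors of the database models (the thesis refines the descriptor iteratively as the user draws). The sketch below shows only a final cosine-similarity lookup over stand-in random vectors; the encoder and the data are placeholders, not the thesis's method.

    ```python
    # Nearest-neighbour retrieval over placeholder shape descriptors.
    import numpy as np

    rng = np.random.default_rng(0)
    database = rng.normal(size=(1000, 128))    # 1000 models, 128-d descriptors
    database /= np.linalg.norm(database, axis=1, keepdims=True)

    def retrieve(sketch_descriptor, k=5):
        """Return indices of the k models most similar to the query sketch."""
        q = sketch_descriptor / np.linalg.norm(sketch_descriptor)
        scores = database @ q                  # cosine similarity (unit vectors)
        return np.argsort(-scores)[:k]

    query = rng.normal(size=128)               # stands in for encoder(sketch)
    print(retrieve(query))
    ```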

    The WOZ Recognizer: A Tool For Understanding User Perceptions of Sketch-Based Interfaces

    Sketch recognition has the potential to be an important input method for computers in the coming years; however, designing and building an accurate and sophisticated sketch recognition system is a time-consuming and daunting task. Since sketch recognition is still at a level where mistakes are common, it is important to understand how users perceive and tolerate recognition errors and other user interface elements of these imperfect systems. A problem in performing this type of research is that we cannot easily control aspects of recognition in order to study the systems rigorously. We performed a study examining user perceptions of three pen-based systems for creating logic gate diagrams: a sketch-based interface, a WIMP-based interface, and a hybrid interface that combined elements of sketching and WIMP. We found that users preferred the sketch-based interface, and we identified important criteria for pen-based application design. This work exposed the difficulty of studying recognition systems without fine-grained control over accuracy, recognition mode, and other recognizer properties. To solve this problem, we developed a Wizard of Oz sketch recognition tool, the WOZ Recognizer, that supports controlled symbol and position accuracy and batch and streaming recognition modes for a variety of sketching domains. We present the design of the WOZ Recognizer, which models recognition domains using graphs, symbol alphabets, and grammars, and discuss the types of recognition errors we included in its design. Further, we discuss how the WOZ Recognizer simulates sketch recognition, how it is controlled, and how users interact with it. In addition, we present an evaluative user study of the WOZ Recognizer and the lessons we learned. We have used the WOZ Recognizer to perform two user studies examining user perceptions of sketch recognition; both focused on mathematical sketching. In the first study, we examined whether users prefer recognition feedback now (real-time recognition) or later (batch recognition) in relation to different recognition accuracies and sketch complexities. We found that participants preferred real-time recognition in some situations (multiple expressions, low accuracy) but showed no statistical preference in others. In our second study, we examined whether users displayed a greater tolerance for recognition errors when using mathematical sketching applications they found interesting or useful compared to applications they found less interesting. Participants felt they had a greater tolerance for the applications they preferred, although our statistical analysis did not positively support this. In addition to the research already performed, we propose several avenues for future research into user perceptions of sketch recognition that we believe will be of value to sketch recognizer researchers and application designers.
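
    To make the idea of controlled accuracy concrete, here is a minimal, hypothetical simulation of Wizard-of-Oz recognition over a logic-gate alphabet: a configurable probability decides whether each symbol is recognized correctly, and results can be delivered per stroke (streaming) or all at once (batch). Names and structure are invented for illustration and do not reflect the WOZ Recognizer's actual interface.

    ```python
    # Toy Wizard-of-Oz recognizer with controllable symbol accuracy.
    import random

    class SimulatedRecognizer:
        def __init__(self, alphabet, accuracy, seed=None):
            self.alphabet = list(alphabet)
            self.accuracy = accuracy           # P(symbol recognized correctly)
            self.rng = random.Random(seed)

        def _recognize_one(self, truth):
            if self.rng.random() < self.accuracy:
                return truth
            # Inject a controlled substitution error.
            return self.rng.choice([s for s in self.alphabet if s != truth])

        def stream(self, ground_truth):
            """Streaming mode: emit one result per drawn symbol."""
            for symbol in ground_truth:
                yield self._recognize_one(symbol)

        def batch(self, ground_truth):
            """Batch mode: return all results at once when drawing ends."""
            return list(self.stream(ground_truth))

    rec = SimulatedRecognizer(["AND", "OR", "NOT", "XOR"], accuracy=0.8, seed=1)
    print(rec.batch(["AND", "OR", "NOT"]))
    ```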

    Understanding user interactions in stereoscopic head-mounted displays

    Interacting in stereoscopic head-mounted displays can be difficult, and there are not yet clear standards for how interactions in these environments should be performed. Virtual reality offers a number of well-designed interaction techniques; however, augmented reality interaction techniques still need to be improved before they can be easily used. This dissertation covers work done towards understanding how users navigate and interact with virtual environments displayed in stereoscopic head-mounted displays. With this understanding, existing techniques can be transferred from virtual reality devices to augmented reality where appropriate, and, where that is not the case, new interaction techniques can be developed. This work begins by observing how participants interact with virtual content using gesture alone, speech alone, and the combination of gesture and speech during a basic object manipulation task in augmented reality. Later, a complex 3-dimensional data-exploration environment is developed and refined; it can be used in both augmented reality (AR) and virtual reality (VR), either asynchronously or simultaneously. The process of iteratively designing that system and the design choices made during its implementation are documented for future researchers working on complex systems. The dissertation concludes with a comparison of user interactions and navigation in that complex environment when using either an augmented or a virtual reality display, contributing new knowledge on how people perform object manipulations on the two devices. When viewing 3D visualizations, users need to feel able to navigate the environment. Without careful attention to proper interaction technique design, people may struggle to use the developed system; these struggles may range from a system that is uncomfortable and unfit for long-term use to one that new users cannot interact with at all. Getting the interactions right for AR and VR environments is a step towards facilitating their widespread acceptance. This dissertation provides the groundwork needed to start designing interaction techniques around how people utilize their personal space, virtual space, body, tools, and feedback systems.
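
    One simple way to picture the gesture-plus-speech condition is late fusion: pair a pointing gesture with a spoken command when the two arrive close together in time. The sketch below is a toy under that assumption, not the dissertation's implementation; the event structures and the time window are invented.

    ```python
    # Toy late fusion of a pointing gesture and a spoken command.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GestureEvent:
        target_id: str        # object the user is pointing at
        timestamp: float

    @dataclass
    class SpeechEvent:
        verb: str             # e.g. "move", "rotate", "scale"
        argument: str         # e.g. "left", "bigger"
        timestamp: float

    def fuse(gesture: GestureEvent, speech: SpeechEvent,
             window: float = 1.5) -> Optional[dict]:
        """Pair gesture and speech if they occur within a short time window."""
        if abs(gesture.timestamp - speech.timestamp) <= window:
            return {"target": gesture.target_id,
                    "action": speech.verb,
                    "argument": speech.argument}
        return None           # unpaired input: fall back to unimodal handling

    print(fuse(GestureEvent("cube_3", 10.2), SpeechEvent("move", "left", 10.9)))
    ```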

    May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability

    Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off, static explanations, which cannot cater to users' diverse backgrounds and levels of understanding. In this paper, we investigate whether free-form conversations can enhance users' comprehension of static explanations, improve acceptance of and trust in the explanation methods, and facilitate human-AI collaboration. Participants are presented with static explanations, followed by a conversation with a human expert about those explanations. We measure the effect of the conversation on participants' ability to choose, from three machine learning models, the most accurate one based on its explanations, as well as on their self-reported comprehension, acceptance, and trust. Empirical results show that conversations significantly improve comprehension, acceptance, trust, and collaboration. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.
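
    A pre/post comparison like the one described could, for example, be analysed with a paired test on comprehension ratings gathered before and after the conversation. The numbers and the choice of a paired t-test below are purely illustrative; the paper's actual measures and analysis may differ.

    ```python
    # Illustrative paired pre/post comparison with invented ratings.
    from scipy.stats import ttest_rel

    before = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]   # hypothetical 1-5 comprehension scores
    after  = [4, 3, 5, 4, 4, 3, 4, 4, 3, 4]   # same participants, after conversation

    t_stat, p_value = ttest_rel(after, before)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    ```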