    Semi-aural Interfaces: Investigating Voice-controlled Aural Flows

    To support mobile, eyes-free web browsing, users can listen to ‘playlists’ of web content, known as aural flows. Interacting with aural flows, however, requires users to select interface buttons, tethering visual attention to the mobile device even when it is unsafe (e.g. while walking). This research extends interaction with aural flows through simulated voice commands as a way to reduce visual interaction. This paper presents the findings of a study with 20 participants who browsed aural flows either through a visual interface only or by augmenting it with voice commands. Results suggest that using voice commands reduced the time spent looking at the device by half but yielded system usability and cognitive-effort ratings similar to those for buttons. Overall, the low cognitive effort engendered by aural flows, regardless of interaction modality, allowed participants to engage in more non-instructed activities (e.g. looking at the surrounding environment) than instructed ones (e.g. focusing on the user interface).
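    To make the interaction model concrete, here is a minimal Python sketch of how recognized voice commands might drive aural-flow playback without touching the screen; the class and command names are hypothetical illustrations, not the study's actual system.

    class AuralFlow:
        # A 'playlist' of web content items read aloud in sequence.
        def __init__(self, items):
            self.items = items
            self.index = 0

        def current(self):
            return self.items[self.index]

        def next(self):
            self.index = min(self.index + 1, len(self.items) - 1)

        def previous(self):
            self.index = max(self.index - 1, 0)

    # Hypothetical mapping from recognized utterances to playback actions.
    COMMANDS = {
        "next": AuralFlow.next,
        "back": AuralFlow.previous,
    }

    def handle_voice_command(flow, utterance):
        # Dispatch a recognized utterance, keeping the eyes free for the
        # surrounding environment; unknown commands are simply ignored.
        action = COMMANDS.get(utterance.strip().lower())
        if action is not None:
            action(flow)
        return flow.current()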

    A natural user interface architecture using gestures to facilitate the detection of fundamental movement skills

    Fundamental movement skills (FMSs) are considered one of the essential phases of motor skill development. The proper development of FMSs allows children to participate in more advanced forms of movement and sport. To be able to perform an FMS correctly, children need to learn the right way of performing it. By making use of technology, a system can be developed to help facilitate the learning of FMSs. The objective of this research was to propose an effective natural user interface (NUI) architecture for detecting FMSs using the Kinect. To achieve this objective, an investigation into FMSs and the challenges faced when teaching them was presented. An investigation into NUIs was also presented, including the merits of the Kinect as the most appropriate device to facilitate the detection of an FMS. An NUI architecture was proposed that uses the Kinect to facilitate the detection of an FMS, and a framework was implemented from the design of the architecture. The successful implementation of the framework provides evidence that the design of the proposed architecture is feasible. An instance of the framework incorporating the jump FMS was used as a case study in the development of a prototype that detects the correct and incorrect performance of a jump. The evaluation of the prototype demonstrated the following:
    - The developed prototype was effective in detecting the correct and incorrect performance of the jump FMS; and
    - The implemented framework was robust for the incorporation of an FMS.
    The successful implementation of the prototype shows that an effective NUI architecture using the Kinect can be used to facilitate the detection of FMSs. The proposed architecture provides a structured way of developing a system that uses the Kinect to detect FMSs, allowing developers to add future FMSs to the system. This dissertation therefore makes the following contributions:
    - An experimental design to evaluate the effectiveness of a prototype that detects FMSs;
    - A robust framework that incorporates FMSs; and
    - An effective NUI architecture to facilitate the detection of fundamental movement skills using the Kinect.
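    As a rough illustration of how a detector built on such an architecture might classify a jump from Kinect skeleton data, here is a minimal rule-based Python sketch; the joint format and thresholds are illustrative assumptions, not the dissertation's actual rules.

    def detect_jump(frames, min_rise=0.15, min_knee_bend=0.10):
        # frames: a list of dicts mapping joint names to (x, y, z) positions
        # captured per Kinect frame, with y as the vertical axis in metres.
        hip_heights = [frame["hip_center"][1] for frame in frames]
        baseline = hip_heights[0]

        # Rule 1 (assumed): the hips must rise far enough above the
        # standing baseline for the performance to count as airborne.
        airborne = max(hip_heights) - baseline >= min_rise

        # Rule 2 (assumed): a correct jump includes a preparatory crouch,
        # visible as the hips dipping below the standing baseline.
        crouch = baseline - min(hip_heights) >= min_knee_bend

        return airborne and crouch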

    Wearable and mobile devices

    Information and Communication Technologies, known as ICT, have undergone dramatic changes in the last 25 years. The 1980s was the decade of the Personal Computer (PC), which brought computing into the home and, in an educational setting, into the classroom. The 1990s gave us the World Wide Web (the Web), built on the infrastructure of the Internet, which has revolutionized the availability and delivery of information. In the midst of this information revolution, we are now confronted with a third wave of novel technologies (i.e., mobile and wearable computing), in which computing devices are already small enough to be carried at all times and can, in addition, interact with devices embedded in the environment. The development of wearable technology is perhaps a logical product of the convergence between the miniaturization of microchips (nanotechnology) and an increasing interest in pervasive computing, where mobility is the main objective. The miniaturization of computers is largely due to the decreasing size of semiconductors and switches; molecular manufacturing will allow for “not only molecular-scale switches but also nanoscale motors, pumps, pipes, machinery that could mimic skin” (Page, 2003, p. 2). This shift in the size of computers has obvious implications for human-computer interaction, introducing the next generation of interfaces. Neil Gershenfeld, the director of the Media Lab’s Physics and Media Group, argues, “The world is becoming the interface. Computers as distinguishable devices will disappear as the objects themselves become the means we use to interact with both the physical and the virtual worlds” (Page, 2003, p. 3). Ultimately, this will lead to a move away from desktop user interfaces and toward mobile interfaces and pervasive computing.

    Queuing Network Modeling of Human Multitask Performance and its Application to Usability Testing of In-Vehicle Infotainment Systems.

    Simultaneously performing a primary continuous task (e.g., steering a vehicle) and a secondary discrete task (e.g., tuning radio stations) is a common scenario in many domains. A good understanding of the mechanisms of human multitasking behavior is of great importance for designing task environments and user interfaces (UIs) that facilitate human performance and minimize potential safety hazards. In this dissertation I investigated and modeled human multitask performance with a vehicle-steering task and several typical in-vehicle secondary tasks. Two experiments were conducted to investigate how various display designs and control modules affect the driver's eye-glance behavior and performance. A computational model based on the cognitive architecture of the Queuing Network-Model Human Processor (QN-MHP) was built to account for the experimental findings. In contrast to most existing studies, which focus on visual search in single-task situations, this dissertation employed experimental work that investigates visual search in multitask situations. A modeling mechanism for flexible task activation (rather than strict serial activation) was developed to allow the activation of a task component to be based on the completion status of other task components. A task-switching scheme was built to model the time-sharing nature of multitasking. These extensions offer new theoretical insights into visual search in multitask situations and enable the model to simulate parallel processing both within one task and among multiple tasks. The validation results show that the model could account for the observed performance differences in the empirical data. Based on this model, a computer-aided engineering toolkit was developed that allows UI designers to make quantitative predictions of the usability of design concepts and prototypes. Scientifically, the results of this dissertation offer additional insights into the mechanisms of human multitask performance. From an engineering and practical perspective, the new modeling mechanism and toolkit have advantages over traditional usability testing with human subjects, enabling UI designers to explore a larger design space and address usability issues at the early design stages at lower cost in both time and manpower.
    PhD, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/113590/1/fredfeng_1.pd
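    A minimal Python sketch of the flexible task-activation idea, assuming a simple dependency representation: a task component becomes eligible once the components it depends on are complete, rather than following a strict serial order. The data structures are illustrative, not the QN-MHP implementation itself.

    def eligible_components(dependencies, completed):
        # dependencies: {component: set of prerequisite components};
        # completed: set of finished components. Returns the components
        # that may be activated now, possibly several in parallel.
        return {
            component
            for component, prereqs in dependencies.items()
            if component not in completed and prereqs <= completed
        }

    # Hypothetical radio-tuning task performed while steering: the visual
    # search and the reaching movement can run in parallel once the driver
    # has decided which station to select.
    deps = {
        "decide_station": set(),
        "locate_button": {"decide_station"},
        "reach_panel": {"decide_station"},
        "press_preset": {"locate_button", "reach_panel"},
    }
    print(eligible_components(deps, {"decide_station"}))
    # -> {'locate_button', 'reach_panel'}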

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots, and their roles within society, is gradually expanding. Robots endowed with social skills are intended for a range of applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help older people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this research aims to add a new layer to that earlier development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying distinctive personality and character, and learning social competencies.
    In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work was developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways that humans use to communicate with robots in a natural manner; in the second, we investigated how social robots should act with the user.
    Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding.
    In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. An architecture (framework) for socially interactive robots was developed that allows robots to express different types of emotions and display natural, human-like body language according to the task at hand and the environmental conditions.
    The different development stages of our social robots have been validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability.
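    As a loose illustration of what such a framework might encode, the Python sketch below maps a task and an environmental condition to an expressed emotion and a body-language style; the categories and rules are invented for illustration and are not the thesis's actual architecture.

    # Hypothetical rules pairing (task, environment) with an expressed
    # emotion and a body-language style.
    EXPRESSION_RULES = {
        ("guide_tour", "crowded"): ("calm", "slow sweeping gestures"),
        ("guide_tour", "quiet"): ("cheerful", "open arms, frequent gaze"),
        ("entertain", "stage"): ("excited", "large expressive movements"),
    }

    def select_behaviour(task, environment):
        # Fall back to a neutral idle behaviour for unknown situations.
        emotion, body_language = EXPRESSION_RULES.get(
            (task, environment), ("neutral", "idle posture with gaze tracking")
        )
        return {"emotion": emotion, "body_language": body_language}

    print(select_behaviour("guide_tour", "crowded"))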

    Designing for Mixed Reality Urban Exploration

    This paper introduces a design framework for mixed reality urban exploration (MRUE), based on a concrete implementation in a historical city. The framework integrates different modalities, such as virtual reality (VR), augmented reality (AR), and haptics-audio interfaces, as well as advanced features such as personalized recommendations, social exploration, and itinerary management. It makes it possible to address a number of concerns regarding information overload, safety, and quality of the experience that are not sufficiently tackled in traditional non-integrated approaches. This study presents an integrated mobile platform built on top of this framework and reflects on the lessons learned.
    Peer reviewed
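    One way to read the integration claim is that every modality implements a common presentation interface, so recommendation and itinerary logic stay modality-independent. The Python sketch below illustrates that reading; the class and method names are assumptions, not the paper's API.

    from abc import ABC, abstractmethod

    class Modality(ABC):
        @abstractmethod
        def present(self, poi):
            # Render one point of interest (poi) in this modality.
            ...

    class ARModality(Modality):
        def present(self, poi):
            return f"AR overlay anchored at {poi['name']}"

    class HapticsAudioModality(Modality):
        def present(self, poi):
            return f"Audio narration with haptic cue for {poi['name']}"

    def explore(itinerary, modality):
        # Walk an itinerary with whichever modality the user selected,
        # keeping itinerary management independent of the rendering.
        return [modality.present(poi) for poi in itinerary]

    route = [{"name": "Old Town Square"}, {"name": "City Cathedral"}]
    print(explore(route, HapticsAudioModality()))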

    Affective interactions between expressive characters

    Get PDF
    When people meet in virtual worlds they are represented by computer-animated characters that lack variety of expression and can seem stiff and robotic. By comparison, human bodies are highly expressive; a casual observation of a group of people will reveal a large diversity of behaviour: different postures, gestures and complex patterns of eye gaze. In order to make computer-mediated communication between people more like real face-to-face communication, it is necessary to add an affective dimension. This paper presents Demeanour, an affective semi-autonomous system for the generation of realistic body language in avatars. Users control their avatars, which in turn interact autonomously with other avatars to produce expressive behaviour. This allows people to have affectively rich interactions via their avatars.
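    To make the semi-autonomous split concrete: the user sets high-level affective parameters, and the avatar derives moment-to-moment body language on its own. The Python sketch below assumes a toy two-parameter affect model; the parameters and the mapping are illustrative, not Demeanour's actual model.

    def generate_body_language(friendliness, dominance):
        # Map user-set affect (both parameters in [0, 1]) to concrete
        # posture, gesture rate and gaze behaviour toward the partner.
        posture = "open" if friendliness > 0.5 else "closed"
        gesture_rate = round(friendliness * (0.5 + dominance), 2)
        gaze_share = round(0.3 + 0.5 * dominance, 2)  # time spent looking
        return {
            "posture": posture,
            "gestures_per_second": gesture_rate,
            "gaze_share": gaze_share,
        }

    # The avatar regenerates its body language as the interaction unfolds,
    # without the user scripting individual gestures.
    print(generate_body_language(friendliness=0.8, dominance=0.4))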