
    Usability in Multiple Monitor Displays


    Diverse Contributions to Implicit Human-Computer Interaction

    When people interact with computers, a great deal of information is conveyed without being provided on purpose. By studying these implicit interactions it is possible to understand which user interface features are beneficial (or not), and thus to derive implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its usefulness. Such data also remove the cost of interrupting users to make them explicitly submit information about something that, in principle, need not be related to their intention to use the system. On the other hand, implicit interactions do not always yield clear and concrete data, so special attention must be paid to how this source of information is handled. The purpose of this research is twofold: 1) to bring a new perspective to both the design and the development of applications that can react appropriately to users' implicit interactions, and 2) to provide a set of methodologies for evaluating such interactive systems. Five scenarios illustrate the feasibility and suitability of the thesis framework. Empirical results with real users show that leveraging implicit interaction is both a suitable and a convenient means of improving interactive systems in multiple ways.
    Leiva Torres, LA. (2012). Diverse Contributions to Implicit Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17803

    Queuing Network Modeling of Human Multitask Performance and its Application to Usability Testing of In-Vehicle Infotainment Systems.

    Performing a primary continuous task (e.g., steering a vehicle) and a secondary discrete task (e.g., tuning radio stations) simultaneously is a common scenario in many domains. A good understanding of the mechanisms of human multitasking behavior is essential for designing task environments and user interfaces (UIs) that facilitate human performance and minimize potential safety hazards. In this dissertation I investigated and modeled human multitask performance with a vehicle-steering task and several typical in-vehicle secondary tasks. Two experiments were conducted to investigate how various display designs and control modules affect the driver's eye glance behavior and performance. A computational model based on the Queuing Network-Model Human Processor (QN-MHP) cognitive architecture was built to account for the experimental findings. In contrast to most existing studies, which focus on visual search in single-task situations, this dissertation employed experimental work that investigates visual search in multitask situations. A modeling mechanism for flexible task activation (rather than strict serial activation) was developed that allows the activation of a task component to depend on the completion status of other task components. A task-switching scheme was built to model the time-sharing nature of multitasking. These extensions offer new theoretical insights into visual search in multitask situations and enable the model to simulate parallel processing both within one task and among multiple tasks. The validation results show that the model can account for the observed performance differences in the empirical data. Based on this model, a computer-aided engineering toolkit was developed that allows UI designers to make quantitative predictions of the usability of design concepts and prototypes. Scientifically, the results of this dissertation offer additional insights into the mechanisms of human multitask performance. From an engineering and practical perspective, the new modeling mechanism and toolkit have advantages over traditional usability testing with human subjects: they enable UI designers to explore a larger design space and address usability issues at early design stages at lower cost in both time and manpower.
    PhD, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113590/1/fredfeng_1.pd
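    The time-sharing and flexible-activation ideas described in this abstract can be pictured with a small, self-contained simulation. The sketch below is only a toy illustration: a simulated operator alternates attention between a continuous steering task and a discrete secondary task whose steps activate only once their prerequisites are complete. The step names, durations, drift rates, and glance limit are invented for illustration and are not QN-MHP parameters or results from the dissertation.

    import random

    SECONDARY_STEPS = [            # (step, prerequisite step, duration in ticks)
        ("reach_to_radio", None, 3),
        ("find_station", "reach_to_radio", 5),
        ("confirm", "find_station", 2),
    ]

    def simulate(max_ticks=200, glance_limit=4):
        lane_error = 0.0                  # abstract measure of lateral drift
        done, progress = set(), {}
        eyes_on_road, glance = True, 0
        for t in range(max_ticks):
            if eyes_on_road:
                lane_error = max(0.0, lane_error - 0.5)   # steering corrects drift
                if len(done) < len(SECONDARY_STEPS):      # work remains: glance away
                    eyes_on_road, glance = False, 0
                else:
                    return t, lane_error                  # secondary task finished
            else:
                lane_error += random.uniform(0.1, 0.4)    # drift while eyes are off road
                glance += 1
                # flexible activation: run the first step whose prerequisite is satisfied
                for name, prereq, duration in SECONDARY_STEPS:
                    if name not in done and (prereq is None or prereq in done):
                        progress[name] = progress.get(name, 0) + 1
                        if progress[name] >= duration:
                            done.add(name)
                        break
                if glance >= glance_limit:                # time-sharing: return eyes to road
                    eyes_on_road = True
        return max_ticks, lane_error

    ticks, drift = simulate()
    print(f"completed in {ticks} ticks, final lane drift {drift:.2f}")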

    Progress towards Automated Human Factors Evaluation

    Cao, S. (2015). Progress towards Automated Human Factors Evaluation. 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, 3, 4266–4272. https://doi.org/10.1016/j.promfg.2015.07.414 This work is made available through a CC-BY-NC-ND 4.0 license. The licensor is not represented as endorsing the use made of this work. https://creativecommons.org/licenses/by-nc-nd/4.0/
    Human factors tests are important components of systems design. Designers need to evaluate users’ performance and workload while using a system and compare different design options to determine the optimal design choice. Currently, human factors evaluation and tests rely mainly on empirical user studies, which add a heavy cost to the design process. In addition, it is difficult to conduct comprehensive user tests at early design stages, when no physical interfaces have been implemented. To address these issues, I develop computational human performance modeling techniques that can simulate users’ interaction with machine systems. This method uses a general cognitive architecture to computationally represent human cognitive capabilities and constraints. Task-specific models can be built with specifications of user knowledge, user strategies, and user group differences. The simulation results include performance measures such as task completion time and error rate as well as workload measures. Completed studies have modeled multitasking scenarios in a wide range of domains, including transportation, healthcare, and human-computer interaction. The success of these studies demonstrates the modeling capabilities of this method. Cognitive-architecture-based models are useful, but building a cognitive model can itself be difficult to learn and master: it usually requires at least medium-level programming skills to understand and use the language and syntax that specify the task. For example, to build a model that simulates a driving task, a modeler needs to build a driving simulation environment so that the model can interact with the simulated vehicle. To simplify this process, I have conducted preliminary programming work that directly connects the mental model to existing task environment simulation programs. The model will be able to obtain perceptual information directly from the task program and send control commands back to it. With cognitive model-based tools, designers will be able to watch the model performing the tasks in real time and obtain a report of the evaluation. Automated human factors evaluation methods have tremendous value for supporting systems design and evaluation.
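    The coupling between a cognitive model and an existing task environment described in this abstract can be sketched as a perceive-decide-act loop. The class and method names below (TaskEnvironment, observe, apply_control, CognitiveModel) are hypothetical placeholders for illustration, not the author's toolkit or the API of any real cognitive architecture.

    from dataclasses import dataclass

    @dataclass
    class Percept:
        lane_offset: float       # metres from lane centre
        lead_distance: float     # metres to the lead vehicle

    class TaskEnvironment:
        """Stand-in for an existing driving simulation program."""
        def __init__(self):
            self.lane_offset, self.lead_distance = 0.3, 30.0
        def observe(self) -> Percept:
            return Percept(self.lane_offset, self.lead_distance)
        def apply_control(self, steer: float, brake: float) -> None:
            self.lane_offset -= steer            # extremely simplified dynamics
            self.lead_distance += brake * 0.5

    class CognitiveModel:
        """Placeholder for an architecture-based driver model."""
        def decide(self, p: Percept) -> tuple[float, float]:
            steer = 0.5 * p.lane_offset          # proportional lane keeping
            brake = 1.0 if p.lead_distance < 10 else 0.0
            return steer, brake

    env, model = TaskEnvironment(), CognitiveModel()
    for step in range(100):                      # perceive -> decide -> act loop
        percept = env.observe()
        steer, brake = model.decide(percept)
        env.apply_control(steer, brake)
    print(f"final lane offset: {env.lane_offset:.3f} m")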

    Data-Driven Evaluation of In-Vehicle Information Systems

    Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens. In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: first, results from qualitative or small-scale empirical studies are often not valued in the decision-making process; second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools to help them visualize and analyze customer usage data, and computational methods to automatically evaluate IVIS designs. In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data collected over-the-air from customer vehicles and visualizes them at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs in terms of performance- and safety-related metrics. In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and spend more time looking at the center stack touchscreen. These results emphasize the importance of context-dependent driver distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior. Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and on statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
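    A hedged sketch of the kind of prediction-plus-explanation pipeline described above: a gradient-boosted regressor predicts a glance-based visual demand measure from interaction and context features, and permutation importance serves as a simple global explanation. The feature names and synthetic data are invented for illustration; this is not the thesis' actual model, feature set, or customer dataset.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.integers(1, 10, n),          # n_taps: taps in the touchscreen sequence
        rng.uniform(0, 130, n),          # speed_kmh: vehicle speed
        rng.integers(0, 2, n),           # automation_on: driving automation active
        rng.uniform(0, 1, n),            # target_size: normalized UI element size
    ])
    feature_names = ["n_taps", "speed_kmh", "automation_on", "target_size"]
    # synthetic ground truth: total eyes-off-road time in seconds
    y = 0.6 * X[:, 0] + 0.01 * X[:, 1] + 0.8 * X[:, 2] - 1.5 * X[:, 3] \
        + rng.normal(0, 0.3, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))

    # global explanation: which features drive predicted visual demand
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, imp.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name:>13}: {score:.3f}")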

    The Effect of Device When Using Smartphones and Computers to Answer Multiple-Choice and Open-Response Questions in Distance Education

    Traditionally in higher education, online courses have been designed for computer users. However, the advent of mobile learning (m-learning) and the proliferation of smartphones have created two challenges for online students and instructional designers. First, instruction designed for a larger computer screen often loses its effectiveness when displayed on a smaller smartphone screen. Second, requiring students to write remains a hallmark of higher education, but miniature keyboards might restrict how thoroughly smartphone users respond to open-response test questions. The present study addressed both challenges by featuring m-learning’s greatest strength (multimedia) and by investigating its greatest weakness (text input). The purpose of the current study was to extend previous research associated with m-learning. The first goal was to determine the effect of device (computer vs. smartphone) on performance when answering multiple-choice and open-response questions. The second goal was to determine whether computers and smartphones would receive significantly different usability ratings when used by participants to answer multiple-choice and open-response questions. The construct of usability was defined as a composite score based on ratings of effectiveness, efficiency, and satisfaction. This comparative study used a between-subjects, posttest-only experimental design. The study randomly assigned 70 adults to either the computer treatment group or the smartphone treatment group. Both treatment groups received the same narrated multimedia lesson on how a solar cell works. Participants accessed the lesson using either their personal computers (computer treatment group) or their personal smartphones (smartphone treatment group) at the time and location of their choice. After viewing the multimedia lesson, all participants answered the same multiple-choice and open-response posttest questions. In the current study, computer users and smartphone users had no significant difference in their scores on multiple-choice recall questions. On open-response questions, smartphone users performed better than predicted, which resulted in no significant difference between the scores of the two treatment groups. Regarding usability, participants gave computers and smartphones high usability ratings when answering multiple-choice items. However, for answering open-response items, smartphones received significantly lower usability ratings than computers.
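    As a rough illustration of the composite usability construct and the between-subjects comparison described above, the sketch below averages effectiveness, efficiency, and satisfaction ratings into one score per participant and compares two independent device groups with Welch's t-test. The synthetic data and the particular compositing rule (a simple mean of the three subscales) are assumptions, not the study's actual instrument or results.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def composite(effectiveness, efficiency, satisfaction):
        # simple compositing rule: mean of the three subscale ratings per participant
        return (np.asarray(effectiveness) + np.asarray(efficiency)
                + np.asarray(satisfaction)) / 3.0

    # 35 participants per group, synthetic ratings on a 1-7 scale
    computer = composite(rng.normal(6.0, 0.6, 35),
                         rng.normal(5.8, 0.7, 35),
                         rng.normal(6.1, 0.5, 35))
    smartphone = composite(rng.normal(5.1, 0.8, 35),
                           rng.normal(4.9, 0.9, 35),
                           rng.normal(5.3, 0.7, 35))

    t, p = stats.ttest_ind(computer, smartphone, equal_var=False)  # Welch's t-test
    print(f"t = {t:.2f}, p = {p:.4f}")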

    To take or not to take the laptop or tablet to classes, that is the question

    In recent decades, so-called mobile learning or m-learning has become a new paradigm in education as a consequence of technological advances and the widespread use of mobile devices to access information and communicate. In this context, this paper analyzes different profiles depending on students’ preferences for taking mobile devices (specifically tablets and/or laptops) to economics classes at the University of Seville (Spain). A survey-based field study of a sample of 412 students and the application of bivariate probit models show a low level of mobile device integration in teaching (devices are taken to class by only 29.8% of respondents), with a slight predominance of laptops. The results also show differences between users of the two types of devices. Students who take their laptops to class usually live at home with their family, have already used them at pre-university levels, and are concerned about recharging their devices in class. Users who take their tablets to class also live with their parents, but they are much more active on social network sites and more concerned about the quality of the internet connection. These findings enable the design of strategies to encourage students to attend class with their own mobile devices.
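    The bivariate probit analysis mentioned above can be roughly sketched as follows. statsmodels ships no built-in bivariate probit, so this sketch fits two separate univariate probit equations (one per device type), which ignores the cross-equation error correlation that a true bivariate probit would estimate. The predictor names and the synthetic data are invented for illustration, not the paper's survey items.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 412
    df = pd.DataFrame({
        "lives_with_family": rng.integers(0, 2, n),
        "used_pre_university": rng.integers(0, 2, n),
        "active_on_social_media": rng.integers(0, 2, n),
    })
    # synthetic outcomes: does the student take a laptop / tablet to class?
    lin_lap = -1.2 + 0.6 * df.lives_with_family + 0.8 * df.used_pre_university
    lin_tab = -1.5 + 0.5 * df.lives_with_family + 0.9 * df.active_on_social_media
    df["takes_laptop"] = (lin_lap + rng.normal(size=n) > 0).astype(int)
    df["takes_tablet"] = (lin_tab + rng.normal(size=n) > 0).astype(int)

    X = sm.add_constant(df[["lives_with_family", "used_pre_university",
                            "active_on_social_media"]])
    for outcome in ("takes_laptop", "takes_tablet"):
        result = sm.Probit(df[outcome], X).fit(disp=False)   # one probit per device
        print(outcome, result.params.round(3).to_dict())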

    Brain-based target expansion
