
    Developing predictive equations to model the visual demand of in-vehicle touchscreen HMIs

    Touchscreen HMIs are commonly employed as the primary control interface and touch-point of vehicles. However, there has been very little theoretical work to model the demand associated with such devices in the automotive domain. Instead, touchscreen HMIs intended for deployment within vehicles tend to undergo time-consuming and expensive empirical testing and user trials, typically requiring fully-functioning prototypes, test rigs and extensive experimental protocols. While such testing is invaluable and must remain within the normal design/development cycle, there are clear benefits, both fiscal and practical, to the theoretical modelling of human performance. We describe the development of a preliminary model of human performance that makes a priori predictions of the visual demand (total glance time, number of glances and mean glance duration) elicited by in-vehicle touchscreen HMI designs, when used concurrently with driving. The model incorporates information theoretic components based on Hick-Hyman Law decision/search time and Fitts’ Law pointing time, and considers anticipation afforded by structuring and repeated exposure to an interface. Encouraging validation results, obtained by applying the model to a real-world prototype touchscreen HMI, suggest that it may provide an effective design and evaluation tool, capable of making valuable predictions regarding the limits of visual demand/performance associated with in-vehicle HMIs, much earlier in the design cycle than traditional design evaluation techniques. Further validation work is required to explore the behaviour associated with more complex tasks requiring multiple screen interactions, as well as other HMI design elements and interaction techniques. Results are discussed in the context of facilitating the design of in-vehicle touchscreen HMI to minimise visual demand
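As a rough illustration of how the Hick-Hyman and Fitts' Law components named in the abstract might be combined into an a priori visual-demand estimate, the Python sketch below computes total glance time, number of glances, and mean glance duration for a single touchscreen selection. The coefficients, the familiarity discount, and the 1.6 s glance cap are placeholder assumptions, not the fitted equations from the paper.

    import math

    # Illustrative sketch of a Hick-Hyman + Fitts' Law demand estimate.
    # All coefficients (a_hh, b_hh, a_fitts, b_fitts) are placeholder
    # assumptions, not the fitted values from the published model.

    def hick_hyman_time(n_items, a_hh=0.2, b_hh=0.15):
        """Decision/search time (s) for choosing among n equally likely items."""
        return a_hh + b_hh * math.log2(n_items + 1)

    def fitts_time(distance_mm, width_mm, a_fitts=0.1, b_fitts=0.1):
        """Pointing time (s) to a target of a given width at a given distance."""
        return a_fitts + b_fitts * math.log2(2 * distance_mm / width_mm)

    def predicted_glance_metrics(n_items, distance_mm, width_mm,
                                 familiarity=0.0, max_glance=1.6):
        """Return (total glance time, number of glances, mean glance duration).

        `familiarity` in [0, 1] crudely models the anticipation afforded by
        structuring and repeated exposure by discounting the search component.
        """
        task_time = (1.0 - familiarity) * hick_hyman_time(n_items) \
                    + fitts_time(distance_mm, width_mm)
        n_glances = max(1, math.ceil(task_time / max_glance))
        return task_time, n_glances, task_time / n_glances

    print(predicted_glance_metrics(n_items=8, distance_mm=120, width_mm=15))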

    Virtual Techniques for Prototype HMI Evaluation

    The aim of this project was to investigate the behavioural validity of virtual methods, namely driving simulators and computational models, as prototype HMI evaluation tools. A driving study was designed in which participants had to perform secondary tasks while driving in both a real-world and a driving simulator setting. Statistical analysis of the data, along with an in-depth review of related findings, was used to identify the levels of behavioural validity that could be achieved by different simulator settings across different metrics. A further analysis was performed to identify the behavioural strategies that drivers employ in sharing their visual attention while executing HMI tasks concurrently with driving. Finally, two existing computational models were validated and a novel model was proposed that can account for drivers' behavioural phenomena not previously accounted for

    Shared control strategies for automated vehicles

    Automated vehicles (AVs) have emerged as a technological solution to compensate for the shortcomings of manual driving. However, this technology is not yet mature enough to fully replace the driver, as doing so raises technical, social and legal issues. Meanwhile, accidents continue to occur, and new technological solutions are needed to improve road safety. In this context, the shared control approach, in which the driver remains in the control loop and, together with the automation, forms a well-coordinated team that collaborates continuously at the tactical and control levels of the driving task, is a promising solution for improving the performance of manual driving by exploiting the latest advances in automated driving technology. This strategy aims to promote the development of driver assistance systems that are more advanced and more cooperative than those available in commercial vehicles. In this sense, automated vehicles will be the supervisors that drivers need, and not the other way around. This thesis addresses the topic of shared control in automated vehicles in depth, from both a theoretical and a practical perspective. First, a comprehensive review of the state of the art is provided to give an overview of the concepts and applications that researchers have been working on over the last two decades. A practical approach is then adopted by developing a controller to assist the driver in the lateral control of the vehicle. This controller and its associated decision-making system (Arbitration Module) are integrated into the overall automated driving framework and validated on a simulation platform with real drivers. Finally, the developed controller is applied to two systems: the first assists a distracted driver, and the second implements a safety function for performing overtaking manoeuvres on two-way roads. The thesis closes with the most relevant conclusions and future research perspectives for shared control in automated driving
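A minimal sketch of how a shared lateral control scheme with an arbitration module might blend driver and automation steering commands is given below. The linear torque-blending law, the authority heuristic, and all numerical values are illustrative assumptions, not the controller developed in the thesis.

    from dataclasses import dataclass

    # Illustrative torque-blending shared controller. The blending law and the
    # authority heuristic are assumptions for illustration only.

    @dataclass
    class SharedLateralController:
        max_torque_nm: float = 3.0  # saturation of the assistance torque

        def authority(self, lane_error_m: float, driver_attentive: bool) -> float:
            """Automation authority in [0, 1]: grows with lane error and is
            raised further when the driver appears distracted."""
            level = min(1.0, abs(lane_error_m) / 0.5)
            if not driver_attentive:
                level = min(1.0, level + 0.3)
            return level

        def blend(self, driver_torque_nm: float, automation_torque_nm: float,
                  lane_error_m: float, driver_attentive: bool) -> float:
            """Arbitrated steering torque sent to the steering actuator."""
            alpha = self.authority(lane_error_m, driver_attentive)
            torque = (1.0 - alpha) * driver_torque_nm + alpha * automation_torque_nm
            return max(-self.max_torque_nm, min(self.max_torque_nm, torque))

    ctrl = SharedLateralController()
    print(ctrl.blend(driver_torque_nm=0.5, automation_torque_nm=1.8,
                     lane_error_m=0.4, driver_attentive=False))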

    Data-Driven Evaluation of In-Vehicle Information Systems

    Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods are not scalable to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens. In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: First, results from qualitative or small-scale empirical studies are often not valued in the decision-making process. Second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools to help them visualize and analyze customer usage data and computational methods to automatically evaluate IVIS designs. In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data, collected over-the-air from customer vehicles, and visualizes it at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs in terms of performance- and safety-related metrics. In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature.
During automated driving, drivers perform more interactions per touchscreen sequence and increase the time spent looking at the center stack touchscreen. These results emphasize the importance of context-dependent driver distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior. Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions
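A minimal sketch of what a machine-learning pipeline for predicting and explaining the visual demand of touchscreen interactions could look like is shown below. The feature names, the gradient-boosting model, the synthetic data, and the use of permutation importance as a global explanation are assumptions for illustration, not the thesis' actual pipeline.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Illustrative sketch: predict the visual demand (total glance time to the
    # center stack touchscreen, in seconds) of an interaction sequence from a
    # few plausible features. Features, model choice, and synthetic data are
    # assumptions for illustration only.

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.integers(1, 10, n),          # number of touch gestures in the sequence
        rng.uniform(0, 130, n),          # vehicle speed (km/h)
        rng.integers(0, 2, n),           # driving automation active (0/1)
        rng.uniform(5, 40, n),           # UI target size (mm)
    ])
    feature_names = ["n_gestures", "speed_kmh", "automation", "target_size_mm"]
    # Synthetic target loosely mimicking the reported effects.
    y = 0.8 * X[:, 0] + 0.6 * X[:, 2] - 0.03 * X[:, 3] + rng.normal(0, 0.5, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)

    # Global explanation: which features drive the predicted visual demand.
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, imp.importances_mean),
                              key=lambda p: -p[1]):
        print(f"{name:>15}: {score:.3f}")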

    Information requirements for future HMI in partially automated vehicles

    Partially automated vehicles are increasing in prevalence and enable drivers to hand over the vehicle’s longitudinal and lateral control to the automated system. However, at this partial level of automation, drivers will still be required to continuously monitor the vehicle’s operation and to take back control from the system at any time when required. The Society of Automotive Engineers (SAE) defines this as Level 2 automation, and a number of design implications arise as a consequence. To support the driver in the monitoring task, Level 2 vehicles today present a variety of information about sensor readings and operational issues to keep the driver informed so that appropriate action can be taken when required. However, existing research has shown that current Level 2 HMIs increase the cognitive workload, leading to driver cognitive disengagement and hence increasing the risk to safety. Despite this knowledge, these Level 2 systems are available on the road today, and little is known about what information should be presented to drivers inside them. Hence, this doctorate aimed to deliver design recommendations on how HMIs can more appropriately support the driver in the use of a partially automated Level 2 (or higher) vehicle system. Four studies were designed and executed for this doctorate. Study 1 aimed to understand the information preferences of drivers in a Level 2 vehicle using semi-structured interviews. Participants were exposed to a 10-minute Level 2 driving simulation. A total of 25 interviews were conducted for the first study. Using thematic analysis, two categories of drivers were developed: ‘High Information Preference’ (HIP) and ‘Low Information Preference’ (LIP). It was evident that the drivers' expectations of the partial automation capability differed, affecting their information preferences and highlighting the challenge of what information should be presented inside these vehicles. Importantly, by defining these differing preferences, HMI designers are better informed to design effective HMIs, regardless of the driver’s predisposition. Building on this, an Ideas Café public engagement event was designed for Study 2, implementing a novel methodology to understand factors of trust in automated vehicles. Qualitative data gathered from the 35 event attendees was analysed using thematic analysis. The results reaffirmed the importance of the information presented in automated vehicles. Based on these first two studies, it was evident that there was an opportunity to develop a more robust understanding of what information is required in a Level 2 vehicle. Information requirements were quantitatively investigated through two eye-tracking studies (Studies 3 and 4). Both used a novel three- or five-day longitudinal study design. A shortlist of nine types of information was developed based on the results from the first two studies, regulatory standards and collaborations with Jaguar Land Rover experts. This was the first shortlist of its kind for automated vehicles. These nine information types were presented to participants, and eye tracking was used to record their information usage during Level 2 driving. Study 3 involved 17 participants and displayed only steady-state scenarios. Study 4 involved 27 participants and introduced handover and warning events. Across both studies, information usage changed significantly, highlighting the methodological importance of longitudinal testing over multiple exposures.
Participants increased their usage of information confirming the technical competence of the vehicle’s current state. In comparison, usage of future-state information, which could help predict the future actions of the vehicle, decreased. By characterising this change in information usage, HMI designers can now ensure important information is designed appropriately. Notably, the ‘Action Explanation’ information, which described what the vehicle was doing and why, was consistently found to be the most used information. To date, this type of information has not been observed on any existing Level 2 HMI. Results from all four studies were synthesised to develop novel design recommendations for the information required inside Level 2 vehicles, and for how this should be adapted over time depending on the driver’s familiarity with the system and driving events. This doctorate has contributed novel design recommendations for Level 2 vehicles through an innovative methodological approach across four studies. These design recommendations can now be taken forward to design and test new HMIs that can create a better, safer experience for future automated vehicles
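A minimal sketch of how information usage per display element might be quantified from eye-tracking data across the longitudinal sessions is given below. The area-of-interest (AOI) labels, column names, and example fixations are assumptions for illustration, not the studies' actual data or processing.

    import pandas as pd

    # Illustrative sketch: summarise per-day usage of each information type as
    # the share of on-display dwell time falling on its area of interest (AOI).
    # Column names, AOI labels, and the example fixations are assumptions.

    fixations = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2, 2],
        "day":         [1, 1, 3, 1, 3, 3],
        "aoi":         ["Action Explanation", "Speed", "Action Explanation",
                        "Speed", "Action Explanation", "Nav"],
        "duration_ms": [420, 180, 510, 200, 620, 150],
    })

    def usage_by_day(df: pd.DataFrame) -> pd.DataFrame:
        """Proportion of dwell time per information type, per day."""
        dwell = df.groupby(["day", "aoi"])["duration_ms"].sum()
        totals = dwell.groupby(level="day").transform("sum")
        return (dwell / totals).unstack(fill_value=0.0)

    print(usage_by_day(fixations).round(2))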

    In-vehicle touchscreens : reducing attentional demands and improving driving performance.

    Touchscreens are increasingly being used in cars, motorcycles, aircraft, ships, and agricultural machinery to access a wide range of vehicle functions. The primary motivation for incorporating touchscreens in vehicles is that they offer several advantages over physical mechanical controls, including low production cost, light weight, low space requirements, design flexibility to handle multiple inputs/outputs, quick and easy interface modification, and easy replacement. Touchscreens, on the other hand, lack some features that physical controls have, such as tactile feedback, and present the same tactile sensation for all controls. The absence of these features on a touchscreen increases visual attentional demands and reduces driving performance, potentially posing a serious safety risk. To address these issues, we set a primary goal for this research: to develop new touchscreen interaction methods that improve driving performance by reducing visual attentional demands. We set three objectives to achieve this goal: (1) to examine the design and use of layout-agnostic stencil overlays for in-vehicle touchscreens; (2) to propose an in-vehicle dashboard controls interaction framework; and (3) to empirically characterise proprioceptive target acquisition accuracy for in-vehicle touchscreens while driving. Addressing objective (1): prior stencil-based studies suggested that stencil overlays can reduce the need for visual attention on the touchscreen while driving. However, those stencils were layout-specific, with cuts and holes at the locations of the underlying touchscreen controls. As a result, each stencil could only be used with a single underlying interface. Because contemporary in-vehicle touchscreens are almost always multi-functional, with different interface layouts in different parts of the interface, this restriction is unrealistic for in-vehicle touchscreens. To address the limitations of previous stencil-based studies, we aimed to design layout-agnostic stencils, meaning that one stencil should provide tactile guidance to user interface targets regardless of the underlying interface layout. We iteratively designed several versions of layout-agnostic stencils and evaluated them in a simulated driving scenario. According to the findings, our layout-agnostic stencils failed to reduce visual attentional demands and worsened driving performance. Addressing objective (2): the failure of objective (1) prompted us to take a different approach in order to continue working on the research’s main goal. Our stencils failed despite an iterative design process that was supported by prior studies showing stencils could reduce visual attentional demands. We therefore proposed an “In-vehicle dashboard controls interaction framework” to identify the root causes of the layout-agnostic stencils’ failure. The framework allows for a better understanding of how the driver interacts with the vehicle’s dashboard controls, and it can be used both to create new dashboard interaction techniques and to evaluate existing ones. Addressing objective (3):
we used the proposed framework to evaluate the results of the layout-agnostic stencils and discovered three knowledge gaps regarding human-dashboard controls interaction while driving. The first was a lack of understanding of how precisely a human can use proprioception to reach a dashboard control. In this regard, we set another objective and conducted an experimental study to assess human proprioceptive ability to reach dashboard controls in a simulated driving scenario, as a function of distance from the body. Based on the experimental results, we empirically characterised proprioceptive target acquisition accuracy for in-vehicle touchscreens while driving: we can now determine how accurately humans can reach a specific location on the touchscreen from various distances. Based on this characterisation, we proposed touchscreen control sizes (in cm). Existing touchscreen user interfaces could be modified, using our recommended control sizes, to enable eyes-free proprioceptive target acquisition while driving, which would improve touchscreen interaction safety. In conclusion, this thesis makes one minor and two major contributions to the field of in-vehicle touchscreen research. The minor contribution is: (1) a better understanding of the use of stencil overlays for in-vehicle touchscreens. The major contributions are: (2) a novel framework, to the best of our knowledge the first in the vehicle dashboard interaction research domain, which provides a better understanding of how drivers interact with dashboard controls in vehicles; and (3) a characterisation of the accuracy of proprioceptive target acquisition for in-vehicle touchscreens while driving
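A minimal sketch of how proprioceptive target-acquisition accuracy might be characterised from eyes-free touch data and turned into a recommended control size is shown below. The 95% coverage criterion and the synthetic touch errors are assumptions for illustration, not the study's measured values.

    import numpy as np

    # Illustrative sketch: estimate how large a touchscreen control must be so
    # that a stated share of eyes-free, proprioception-guided touches land
    # inside it. The 95% criterion and the synthetic errors are assumptions.

    def recommended_control_size_cm(touch_errors_cm: np.ndarray,
                                    coverage: float = 0.95) -> float:
        """Square control edge length (cm) capturing `coverage` of touches.

        `touch_errors_cm` holds (dx, dy) offsets of each touch from the
        intended target centre, in centimetres.
        """
        radial_error = np.linalg.norm(touch_errors_cm, axis=1)
        radius = np.quantile(radial_error, coverage)
        return 2.0 * radius  # edge length that contains the coverage radius

    rng = np.random.default_rng(1)
    errors = rng.normal(scale=0.8, size=(200, 2))   # synthetic eyes-free touches
    print(f"Recommended control size: {recommended_control_size_cm(errors):.1f} cm")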

    Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications

    The present book contains all of the articles that were accepted and published in the Special Issue of MDPI’s journal Mathematics titled "Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications". This Special Issue covered a wide range of topics connected to the theory and application of different computational intelligence techniques to the domain of human–computer interaction, such as automatic speech recognition, speech processing and analysis, virtual reality, emotion-aware applications, digital storytelling, natural language processing, smart cars and devices, and online learning. We hope that this book will be interesting and useful for those working in various areas of artificial intelligence, human–computer interaction, and software engineering as well as for those who are interested in how these domains are connected in real-life situations

    Subjective Evaluation of Vehicle Semi-Active Suspension for Improved Ride and Handling

    The number of passenger cars equipped with semi-active suspensions has been steadily increasing in recent decades. These suspension systems provide an improvement in ride and handling when compared to passive suspensions. Currently, the approach to evaluating and tuning semi-active suspensions has been limited to objective methods or time-consuming alterations made on physical components. To reduce the time and costs involved and improve the fidelity of such methods, a novel solution for subjectively evaluating vehicle semi-active suspensions is presented. The subjective evaluation method herein involves the use of a state-of-the-art dynamic driving simulator with drivers to subjectively evaluate and tune virtual semi-active suspensions. To ensure the results of the proposed evaluation method are accurate, high-fidelity vehicle models supplied by an OEM are studied. These vehicle models have previously been validated against objective and subjective performance data by an OEM’s expert drivers. First, offline co-simulations between VI-grade’s CarRealTime vehicle simulation software and several versions of a Simulink semi-active suspension controller are completed to objectively evaluate ride and handling. The semi-active suspension controller is based on several well-known control strategies and incorporates the vehicle’s passive suspension settings as one of the suspension modes. This feature permits a comparison between the passive and semi-active suspensions in terms of ride and handling. For the subjective evaluation, the vehicle and controller models are uploaded into a driver-in-the-loop environment. Expert drivers then execute a series of maneuvers and provide subjective feedback on the ride and handling of the different suspension modes. A questionnaire is implemented involving a list of subjective metrics tailored to the ride and handling of semi-active suspensions. Furthermore, a correlation between changes in objective and subjective metrics is made to determine where correlation exists and to suggest predictive methods for future subjective ratings. A specific evaluation procedure is presented to ensure bias among drivers is removed. The results of the subjective evaluation method prove that the method is effective at capturing relatively small changes in ride and handling in a timely manner. The subjective ratings from the drivers showed acceptable agreement and considered many ride and handling improvements as major differences according to SAE standards. The correlation study identified a list of strong correlations between objective and subjective metrics. These results can be used to predict subjective performance when implementing offline changes to suspensions
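The abstract does not name the specific control strategies used. As one plausible example of a well-known semi-active strategy of the kind such a controller might incorporate, the sketch below shows an on/off skyhook damping law; the switching rule and damping coefficients are illustrative assumptions, not the controller described in the work.

    # Illustrative on/off skyhook law for a semi-active damper, a classic
    # strategy of the kind the controller might incorporate. Damping
    # coefficients and the switching rule are assumptions for illustration.

    def skyhook_damping(body_velocity: float, relative_velocity: float,
                        c_high: float = 3000.0, c_low: float = 300.0) -> float:
        """Return the damping coefficient (N*s/m) commanded to the damper.

        body_velocity:     vertical velocity of the sprung mass (m/s)
        relative_velocity: damper velocity, sprung minus unsprung (m/s)
        """
        # Command high damping only when the damper force can oppose body motion.
        if body_velocity * relative_velocity > 0.0:
            return c_high
        return c_low

    def damper_force(body_velocity: float, relative_velocity: float) -> float:
        """Semi-active damper force (N) under the skyhook switching rule."""
        return skyhook_damping(body_velocity, relative_velocity) * relative_velocity

    print(damper_force(body_velocity=0.2, relative_velocity=0.15))   # high damping
    print(damper_force(body_velocity=0.2, relative_velocity=-0.05))  # low damping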

    NASA Tech Briefs, June 1993

    Topics include: Imaging Technology: Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences

    Advanced, Integrated Control for Building Operations to Achieve 40% Energy Saving
