16 research outputs found

    Method and engineering tools for in-vehicle information systems (IVIS), focusing on the risk of driver distraction

    Get PDF
    Thousands of deaths a year are attributed to driver distraction. The automotive industry has spent decades maturing development guidelines and complex evaluation scenarios for accepting infotainment systems as an integral part of the vehicle. The mass adoption of smart wearable devices, with a vast ecosystem of mobile applications not designed for the particularities of the automotive environment, easily bypasses the precautions of safe-driving schemes. This leaves a niche of developers who lack the formal knowledge and economic support to properly enter automotive application development. Cost-benefit methods and tools, accessible in the context of software engineering, are proposed to support the development of in-vehicle information systems in accordance with the requirements of current regulations and standards.
    Facultad de Informática

    Management system prototype for intelligent mobile cloud computing for big data

    Get PDF
    A key challenge of mobile devices is their limited storage capacity, which has led service providers to develop new value-added mobile services. To address these limitations, Mobile Cloud Computing (MCC) offers on-demand resources that augment device capabilities, allowing mobile users to store and access large datasets in the cloud. Even so, given the constraints of bandwidth, latency, and device battery life, new approaches are required to extend the use of mobile devices. This paper presents the design and implementation of an intelligent mobile cloud storage management system, called Intelligent Mobile Cloud Computing (IMCC), for Android users. IMCC helps cloud storage users manage their data effectively and efficiently, saving time, and makes it convenient to use multiple cloud storage providers through a single application and to store data in any of them. The results show that accessing data took only 8 seconds with IMCC, compared with 23.33 seconds with traditional MCC, a 65.71% reduction in latency when managing user data. The developed IMCC prototype is accessible through the Google Play Store.
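    The reported latency reduction follows directly from the two access times quoted above; a minimal Python sketch of that arithmetic, using only the figures from the abstract:

        # Reproduce the latency-reduction figure reported for the IMCC prototype.
        imcc_access_time_s = 8.0    # seconds to access data with IMCC (reported)
        mcc_access_time_s = 23.33   # seconds with traditional MCC (reported)
        reduction_pct = (mcc_access_time_s - imcc_access_time_s) / mcc_access_time_s * 100
        print(f"Latency reduction: {reduction_pct:.2f}%")  # ~65.71%, matching the abstract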

    The growing and risky industry of nomadic apps for drivers

    Get PDF
    HCI researchers have worked for decades defining methods and techniques to assess the attention demands of in-vehicle information systems (IVIS). Acceptance test methods have been proposed that must be passed for the safe use of IVIS, but most of them require expensive test environments and highly trained personnel for their implementation. This article reviews those strategies with a focus on their cost and the development-process phase in which they apply. In the realm of mobile application ecosystems (aka "apps"), guidelines and certification programs exist that apps must pass to be considered automotive-ready or to integrate with OEM infotainment devices. However, earning the label of certified application does not guarantee full compliance with the criteria established by the formal methods accepted by the automotive industry and international standards. Moreover, many studies show the high risk of using IVIS while driving, which leads us to conclude that the predominant approaches to assessing the attention demands of automotive apps and to guiding IVIS design are not enough. Efficient cost-benefit methods applicable in early phases of application development, as well as context-adaptive interfaces, have the potential to contribute to safer driving environments.
    Laboratorio de Investigación y Formación en Informática Avanzada

    Driving whilst using in-vehicle information systems (IVIS): benchmarking the impairment to alcohol

    Get PDF
    Using the lane change task (LCT), a comparison of driving performance was made between normal (baseline) driving, driving whilst using an in-vehicle information system (IVIS), and driving while intoxicated at the UK blood alcohol limit (80 mg per 100 ml). The results provided clear evidence of impaired LCT performance when performing an IVIS task, in comparison to both the baseline (LCT alone) and alcohol conditions. However, the LCT was found to be insensitive to the effects of alcohol in the absence of a secondary task. It is concluded that LCT performance can be impaired more by undertaking certain IVIS tasks than by having a blood alcohol level at the UK legal limit, but the LCT requires further development before it can be used as a convincing proxy for the driving task.
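    For readers unfamiliar with how the LCT is scored, the Python sketch below illustrates the usual approach of comparing the driven lateral position against a normative lane-change path; the traces and numbers are hypothetical, not data from this study.

        import numpy as np
        def lct_mean_deviation(driven_m, normative_m):
            """Mean absolute deviation (m) between driven and normative lateral position."""
            return float(np.mean(np.abs(np.asarray(driven_m) - np.asarray(normative_m))))
        # Hypothetical samples along one lane change (metres from the original lane centre).
        normative = [0.0, 0.0, 1.75, 3.5, 3.5]
        baseline  = lct_mean_deviation([0.1, 0.1, 1.9, 3.4, 3.5], normative)
        with_ivis = lct_mean_deviation([0.3, 0.6, 1.2, 2.8, 3.3], normative)
        print(baseline, with_ivis)  # a larger deviation indicates more impaired lane keeping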

    Driver interface/HMI standards to minimize driver distraction/overload

    Full text link
    Convergence 2008 Conference Proceedings, Detroit, Michigan.
    This paper describes (1) the telematics distraction/overload problem, (2) what distraction and overload are and how they differ, (3) the standards and guidelines that apply to the design and evaluation of driver interfaces/human-machine interfaces (HMI) for telematics (and their strengths and weaknesses), and (4) what standards and research are needed to support the development of driver interfaces. Most of the paper is a detailed discussion of evaluation standards, in particular SAE Recommended Practices J2364 (Task Time and Occlusion Tests) and J2365 (Task Time Estimation), ISO Standards 16673 (Occlusion Test) and 26022 (Lane-Change Test), and the AAM Driver Focus Guideline.
    http://deepblue.lib.umich.edu/bitstream/2027.42/65018/1/102437.pd
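    As a rough illustration of the occlusion-test bookkeeping behind J2364 and ISO 16673, the Python sketch below tallies Total Shutter Open Time (TSOT): vision is alternately blocked and unblocked while the participant performs the task, and the TSOT used to finish it is compared against a pass/fail limit. The 1.5 s viewing interval and 15 s limit are commonly cited values used here as assumptions, not figures quoted from this paper.

        # Toy occlusion-test scoring; interval length and limit are assumed values.
        SHUTTER_OPEN_S = 1.5   # assumed viewing-interval length (s)
        TSOT_LIMIT_S = 15.0    # assumed acceptance limit (s)
        def total_shutter_open_time(viewing_intervals_used: int) -> float:
            """TSOT = viewing intervals needed to complete the task x interval length."""
            return viewing_intervals_used * SHUTTER_OPEN_S
        tsot = total_shutter_open_time(viewing_intervals_used=9)
        print(f"TSOT = {tsot:.1f} s ->", "pass" if tsot <= TSOT_LIMIT_S else "fail")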

    Effects of Local Latency on Games

    Get PDF
    Video games are a major type of entertainment for millions of people and span a wide variety of genres. Many genres require quick reactions, and in these games it is critical for player performance and player experience that the game is responsive. One of the major factors that can make games less responsive is local latency: the total delay between input and a resulting change to the screen. Local latency is produced by a combination of delays from input devices, software processing, and displays. Because of latency, game companies spend considerable time and money play-testing their games to ensure they are responsive and that the in-game difficulty is reasonable. Past studies have made it clear that local latency negatively affects both player performance and experience, but there is still little knowledge about its exact effects on games. In this thesis, we address this problem by providing game designers with more knowledge about local latency’s effects. First, we performed a study to examine latency’s effects on performance and experience for popular pointing input devices used with games. Our results show significant differences between devices depending on the task and the amount of latency, and we provide design guidelines based on these findings. Second, we performed a study to understand latency’s effects on ‘atoms’ of interaction in games. The study varied both latency and game speed, and found that game speed affects a task’s sensitivity to latency. Third, we used our findings to build a model that helps designers quickly identify latency-sensitive game atoms, thus saving time during play-testing. We built and validated a model that predicts error rates in a game atom based on latency and game speed. Our work helps game designers by providing new insight into latency’s varied effects and by modelling and predicting those effects.
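    The thesis’s error-rate model is not reproduced here, but a hedged Python sketch of its general shape follows: a logistic model of error probability as a function of local latency and game speed, fitted to synthetic trial data invented purely for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        # Synthetic trials: each row is one attempt at a game atom under some latency/speed.
        rng = np.random.default_rng(0)
        latency_ms = rng.uniform(0, 300, 1000)   # local latency in milliseconds
        game_speed = rng.uniform(1, 10, 1000)    # game-atom speed (arbitrary units)
        p_err = 1 / (1 + np.exp(-(-4 + 0.012 * latency_ms + 0.3 * game_speed)))
        misses = rng.binomial(1, p_err)          # 1 = the player missed the atom
        model = LogisticRegression().fit(np.column_stack([latency_ms, game_speed]), misses)
        print(model.predict_proba([[150.0, 5.0]])[0, 1])  # predicted error rate at 150 ms, mid speed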

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    Full text link
    As humans are exposed to rapidly evolving complex systems, there is a growing need for humans and systems to use multiple communication modalities such as auditory, vocal (speech), gesture, or visual channels; it is therefore important to evaluate multimodal human-machine interaction under multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experiments with human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand underlying human mental processes so that safety can be improved and mental overload avoided. In this dissertation research, I have combined computational cognitive modeling and experimental methods to study mental processes and to identify differences in human performance and workload across various conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multi-task behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behavior in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use finger movements to input information on touchscreen devices (touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (audio/speech interaction), and (3) how humans drive vehicles (driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation make significant contributions to a better understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent task environments. Moreover, in contrast to previous models for multitasking scenarios that focus mainly on visual processes, this work develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, this research may help identify which elements of multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction.
    PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
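    To make the queueing idea concrete, here is a toy Python model of dual-task interference; it is not the QN-MHP implementation, and the stage durations are made-up placeholders. Each task is a chain of perceptual, cognitive, and motor stages: stages on different servers overlap, while stages competing for the same server serialize.

        # Toy dual-task model in the spirit of queueing-network cognitive architectures.
        STAGE_TIME_MS = {"visual": 100, "auditory": 150, "cognitive": 70, "motor": 200}
        def single_task_time(stages):
            """Completion time (ms) of one task processed stage by stage."""
            return sum(STAGE_TIME_MS[s] for s in stages)
        def dual_task_time(task_a, task_b):
            """Distinct servers overlap; shared servers add a serial penalty."""
            serial_penalty = sum(STAGE_TIME_MS[s] for s in set(task_a) & set(task_b))
            return max(single_task_time(task_a), single_task_time(task_b)) + serial_penalty
        driving = ["visual", "cognitive", "motor"]
        speech_entry = ["auditory", "cognitive", "motor"]
        print(single_task_time(driving), dual_task_time(driving, speech_entry))  # ms, illustrative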

    Decisioning 2022: Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture

    Get PDF
    Sustainable agriculture is one of the Sustainable Development Goals (SDGs) proposed by the United Nations, but little systematic work on knowledge discovery and decision making has been applied to it. Knowledge discovery and decision making have become active research areas in recent years. We are entering the era of FAIR (Findable, Accessible, Interoperable, Reusable) data science, in which linked data with a high degree of variety and different degrees of veracity can be easily correlated and put in perspective to build an empirical and scientific view of best practices in the sustainable agriculture domain. This requires combining multiple methods such as elicitation, specification, validation, semantic web technologies, information retrieval, formal concept analysis, collaborative work, semantic interoperability, ontological matching, smart contracts, and decision making. Decisioning 2022 is the first workshop on Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture. It has been organized by six research teams from France, Argentina, Colombia and Chile to explore the current frontier of knowledge and applications in the different areas related to knowledge discovery and decision making. The format of this workshop aims at discussion and knowledge exchange between academia and industry.
    Laboratorio de Investigación y Formación en Informática Avanzada

    Data-Driven Evaluation of In-Vehicle Information Systems

    Get PDF
    Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens. In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: first, results from qualitative or small-scale empirical studies are often not valued in the decision-making process; second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools that help them visualize and analyze customer usage data, and computational methods to automatically evaluate IVIS designs. In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data collected over-the-air from customer vehicles and visualizes them at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs with respect to performance- and safety-related metrics. In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and spend more time looking at the center stack touchscreen. These results emphasize the importance of context-dependent assessment of driver distraction during interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior. Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
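    The prediction-and-explanation step can be pictured with the hedged Python sketch below: a tree-ensemble regressor predicting a visual-demand metric (for example, total glance time toward the center stack touchscreen) from interaction and driving-context features, with permutation importance standing in for the global explanations. The features, data, and model choice are assumptions for illustration, not the pipeline used in the thesis.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.inspection import permutation_importance
        # Synthetic stand-in data: one row per touchscreen interaction sequence.
        rng = np.random.default_rng(1)
        n = 1000
        X = np.column_stack([
            rng.integers(1, 15, n),   # number of touch gestures in the sequence
            rng.uniform(0, 130, n),   # vehicle speed (km/h)
            rng.integers(0, 2, n),    # driving automation active (0/1)
        ])
        y = 0.8 * X[:, 0] + 0.01 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 1, n)  # glance time (s)
        model = GradientBoostingRegressor(random_state=1).fit(X, y)
        result = permutation_importance(model, X, y, n_repeats=5, random_state=1)
        print(result.importances_mean)  # relative influence of each assumed feature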