2 research outputs found

    Machine Learning, Human Factors and Security Analysis for the Remote Command of Driving: An MCity Pilot

    Conducted under the U.S. DOT Office of the Assistant Secretary for Research and Technology's (OST-R) University Transportation Centers (UTC) program.

    Both human drivers and autonomous vehicles are able to drive relatively well in frequently encountered settings, but fail in exceptional cases. These exceptional cases often arise suddenly, leaving human drivers with a few seconds at best to react—exactly the setting that people perform worst in. Autonomous systems also fail in exceptional cases, because ambiguous situations preceding crashes are not effectively captured in training datasets. This work introduces new methods for leveraging groups of people to provide on-demand assistance by coordinating responses and using collective answer distributions to generate responses to ambiguous scenarios using minimal time and effort. Unlike prior approaches, we introduce collective workflows that enable groups of people to significantly outperform any of the constituent individuals in terms of time and accuracy.

    First, we examine the latency and accuracy of crowd workers in a future state prediction task in visual driving scenes, and find that more than 50% of workers could provide accurate answers within one second. We found that using crowd predictions is a viable approach for determining critical future states to inform rapid decision making. Additionally, we characterize different estimation techniques that can be used to efficiently create collective answer distributions from crowd workers for visual tasks containing ambiguity. Surprisingly, we discovered that the most fine-grained and time-consuming methods were not the most accurate. Instead, having annotators choose all relevant responses they thought other annotators would select led to more accurate aggregate outcomes. This approach reduced the human time required by 21.4% while maintaining the same level of accuracy as the baseline approach.
    These research results can inform the development of hybrid intelligence systems that accurately and rapidly address sudden and rare critical events, even when they are ambiguous or subjective.

    United States Department of Transportation, Office of the Assistant Secretary for Research and Technology
    Center for Connected and Automated Transportation
    http://deepblue.lib.umich.edu/bitstream/2027.42/156392/4/Machine Learning Human Factors and Security Analysis for the Remote Command of Driving - An Mcity Pilot.pd
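    The aggregation idea described above—having each annotator select every response they believe other annotators would choose, then pooling those selections into a collective answer distribution—can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function names and the example annotations are hypothetical.

    ```python
    from collections import Counter

    def aggregate_multiselect(annotations):
        """Build a collective answer distribution from multi-select annotations.

        Each inner list holds the options one annotator believes *other*
        annotators would choose; tallying every selected option across
        annotators yields a distribution over candidate answers.
        """
        counts = Counter()
        for selected in annotations:
            counts.update(selected)
        total = sum(counts.values())
        return {option: n / total for option, n in counts.items()}

    def collective_answer(annotations):
        """Return the option with the highest aggregate support."""
        dist = aggregate_multiselect(annotations)
        return max(dist, key=dist.get)

    # Hypothetical multi-select responses for an ambiguous driving scene:
    annotations = [
        ["brake", "slow down"],
        ["brake"],
        ["slow down", "brake"],
        ["swerve"],
    ]
    print(collective_answer(annotations))  # "brake" receives the most support
    ```

    Because each annotator may mark several options, ambiguity is preserved in the distribution rather than forced into a single vote, which is one plausible reason this style of elicitation can match a finer-grained baseline's accuracy with less annotator time.
    
    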

    Interactional Slingshots: Providing Support Structure to User Interactions in Hybrid Intelligence Systems

    The proliferation of artificial intelligence (AI) systems has enabled us to engage more deeply and powerfully with our digital and physical environments, from chatbots to autonomous vehicles to robotic assistive technology. Unfortunately, these state-of-the-art systems often fail in contexts that require human understanding, have never been seen before, or are complex. In such cases, though AI-only approaches cannot solve the full task, their ability to solve a piece of the task can be combined with human effort to handle complexity and uncertainty more robustly. A hybrid intelligence system—one that combines human and machine skill sets—can make intelligent systems more operable in real-world settings.

    In this dissertation, we propose the idea of using interactional slingshots as a means of providing support structure to user interactions in hybrid intelligence systems. Much like how gravitational slingshots provide boosts to spacecraft en route to their final destinations, so do interactional slingshots provide boosts to user interactions en route to solving tasks. Several challenges arise: What does this support structure look like? How much freedom does the user have in their interactions? How is user expertise paired with that of the machine?

    To make this a tractable socio-technical problem, we explore this idea in the context of data annotation problems, especially in domains where AI methods fail to solve the overall task. Getting annotated (labeled) data is crucial for successful AI methods, and becomes especially difficult in domains where AI fails, since problems in such domains require human understanding to fully solve, but also present challenges related to annotator expertise, annotation freedom, and context curation from the data. To explore data annotation problems in this space, we develop techniques and workflows whose interactional slingshot support structure harnesses the user's interaction with data.
    First, we explore providing support in the form of nudging non-expert users' interactions as they annotate text data for the task of creating conversational memory. Second, we add support structure in the form of assisting non-expert users during the annotation process itself for the task of grounding natural language references to objects in 3D point clouds. Finally, we supply support in the form of guiding expert and non-expert users both before and during their annotations for the task of conversational disentanglement across multiple domains. We demonstrate that building hybrid intelligence systems with each of these interactional slingshot support mechanisms—nudging, assisting, and guiding a user's interaction with data—improves annotation outcomes such as annotation speed, accuracy, and effort level, even when annotators' expertise and skill levels vary.

    Thesis Statement: By providing support structure that nudges, assists, and guides user interactions, it is possible to create hybrid intelligence systems that enable more efficient (faster and/or more accurate) data annotation.

    PHD
    Computer Science & Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163138/1/sairohit_1.pd