
    Estimation of Driver's Gaze Region from Head Position and Orientation using Probabilistic Confidence Regions

    A smart vehicle should be able to understand human behavior and predict drivers' actions to avoid hazardous situations. Specific traits in human behavior can be automatically predicted, which can help the vehicle make decisions, increasing safety. One of the most important aspects pertaining to the driving task is the driver's visual attention. Predicting the driver's visual attention can help a vehicle understand the awareness state of the driver, providing important contextual information. While estimating the exact gaze direction is difficult in the car environment, a coarse estimation of the visual attention can be obtained by tracking the position and orientation of the head. Since the relation between head pose and gaze direction is not one-to-one, this paper proposes a formulation based on probabilistic models to create salient regions describing the visual attention of the driver. The area of the predicted region is small when the model has high confidence in the prediction, which is directly learned from the data. We use Gaussian process regression (GPR) to implement the framework, comparing the performance with different regression formulations such as linear regression and neural-network-based methods. We evaluate these frameworks by studying the tradeoff between spatial resolution and accuracy of the probability map using naturalistic recordings collected with the UTDrive platform. We observe that the GPR method produces the best result, creating accurate predictions with localized salient regions. For example, the 95% confidence region is defined by an area that covers 3.77% of a sphere surrounding the driver.
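    The core idea, mapping head pose to a gaze estimate whose predictive uncertainty defines the size of the salient region, can be sketched with an off-the-shelf GPR. This is a minimal illustration on synthetic data, not the paper's implementation; the head-pose/gaze relation and all parameter values here are assumptions.

```python
# Sketch: predict gaze yaw from head pose with Gaussian process regression,
# then turn the predictive std. dev. into a 95% confidence interval.
# A narrow interval corresponds to a small, confident salient region.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
head_pose = rng.uniform(-30, 30, size=(200, 2))          # yaw, pitch (degrees)
gaze_yaw = 1.2 * head_pose[:, 0] + rng.normal(0, 2.0, 200)  # synthetic relation

gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                               normalize_y=True)
gpr.fit(head_pose, gaze_yaw)

mean, std = gpr.predict([[5.0, -3.0]], return_std=True)
lo, hi = mean[0] - 1.96 * std[0], mean[0] + 1.96 * std[0]
```

    In the paper the region is learned jointly over a sphere around the driver; here the same uncertainty-to-region idea is shown for a single angle.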

    Exploring the Temporal Nature of Sociomateriality from a Work System Perspective

    This paper uses work system theory (WST) to explore the temporal nature of sociomateriality. It summarizes concepts related to WST and sociomateriality, and notes sociomaterial aspects of WST. It uses static and dynamic views of a work system to examine six examples that can be classified in one of three time frames: minutes-to-hours, days-to-weeks, and months-to-years. The result is a straightforward interpretation of systems and related events across all of the time frames, which exhibit different types of phenomena related to adaptations, workarounds, emergence of informal work patterns, and sequences of formal projects. After approaching sociomateriality from a perspective not usually associated with that genre, this paper concludes that ambiguity about the intended time span of assertions related to entanglement and inseparability should be remedied. At minimum, it should be clear whether these phenomena occur instantaneously or in time spans of minutes-to-hours, days-to-weeks, or months-to-years.

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for the research in this area.

    Comm-entary, Spring 2017 - Full Issue

    In this issue:
    Sacrifice who you are, for what you will become: The Growth of Religion and Culture Through Eric Thomas by Jovan Morse
    Pop Culture in News: The Rise of Infotainment Programming by Charlotte Harris
    Corporate Ethics and Authenticity: Gaining Consumer Buying One Virtue at a Time by Nicole Downing
    Gender Discrepancies in Makeup Usage and the Subsequent Impact on Appearance Expectations by Rebecca C. Bishop
    Rushing Towards a Religion by Ellen Gibbs
    Sensory Test Sequence of Actions by Zoë Parsons
    The Effects of a Trip to India on the Music and Life of George Harrison by Carter Bennett
    Disconnected From Millennials: Why Hillary Clinton’s Campaign Failed to Captivate Young Voters by Jenna Ward
    The Ethics of Physician-assisted Death by Amanda Dwyer
    A Rhetorical Analysis of Before the Flood by Hannah Lan

    Employing Emerging Technologies to Develop and Evaluate In-Vehicle Intelligent Systems for Driver Support: Infotainment AR HUD Case Study

    The plurality of current infotainment devices within the in-vehicle space produces an unprecedented volume of incoming data that overwhelms the typical driver, leading to higher collision probability. This work presents an investigation into an alternative option which aims to manage the incoming information while offering an uncluttered and timely manner of presenting and interacting with the incoming data safely. The latter is achieved through the use of an augmented reality (AR) head-up display (HUD) system, which projects the information within the driver’s field of view. An uncluttered gesture recognition interface provides the interaction with the AR visuals. For the assessment of the system’s effectiveness, we developed a full-scale virtual reality driving simulator which immerses the drivers in challenging, collision-prone scenarios. The scenarios unfold within a digital twin model of the surrounding motorways of the city of Glasgow. The proposed system was evaluated in contrast to a typical head-down display (HDD) interface system by 30 users, showing promising results that are discussed in detail.

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used for increasing safety in the automotive domain, yet current ADASs notably operate without taking into account drivers’ states, e.g., whether a driver is emotionally fit to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of Driver Complex State (DCS). DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) for uncovering the driver state and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the NextPerception project, recently funded by the EU, which is briefly introduced.

    The Use of Contextual Clues in Reducing False Positives in an Efficient Vision-Based Head Gesture Recognition System

    This thesis explores the use of head gesture recognition as an intuitive interface for computer interaction. It presents a novel vision-based head gesture recognition system that uses contextual clues to reduce false positives; the system serves as a computer interface for answering dialog boxes. This work seeks to validate similar research but focuses on more efficient techniques running on everyday hardware. A survey of image processing techniques for recognizing and tracking facial features is presented, along with a comparison of several methods for tracking and identifying gestures over time. The design describes a reusable head gesture recognition system built on lightweight algorithms to minimize resource utilization. The research consists of a comparison between the base gesture recognition system and an optimized system that uses contextual clues to reduce false positives. The results confirm that simple contextual clues can lead to a significant reduction in false positives: the system achieves an overall accuracy of 96% when contextual clues are used. In addition, results from a usability study show that head gesture recognition is considered an intuitive interface and is preferred over conventional input for answering dialog boxes. By providing the detailed design and architecture of a head gesture recognition system using efficient techniques and simple hardware, this thesis demonstrates the feasibility of implementing head gesture recognition as an intuitive form of interaction using preexisting infrastructure, and provides evidence that such a system is desirable.
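    One way to read the "contextual clue" idea is as a gate on the recognizer's output: a detected nod or shake only counts as input while a dialog box is actually awaiting an answer. The sketch below is a hypothetical illustration of that gating, not the thesis's implementation; the class and method names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureFilter:
    """Accept head gestures only while a dialog box is awaiting input."""
    dialog_open: bool = False  # the contextual clue: is an answer expected?

    def on_gesture(self, gesture: str) -> Optional[str]:
        # Outside a dialog, every detection is treated as a false
        # positive and discarded.
        if not self.dialog_open:
            return None
        if gesture == "nod":
            return "yes"
        if gesture == "shake":
            return "no"
        return None  # unrecognized gesture
```

    The same detector output is thus suppressed or accepted purely based on UI state, which is why such a simple clue can cut false positives without changing the vision pipeline.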

    Reality TV and the Entrapment of Predators

    Dateline NBC’s “To Catch a Predator” (2006-08) involved NBC staff working with police and a watchdog group called “Perverted Justice” to televise “special intensity” arrests of men who were lured into meeting adult decoys posing as young children, presumably for a sexual encounter. As reality television, “To Catch a Predator” facilitates public shaming of those caught in front of the cameras, which distinguishes it from fictional representations. In one case, a Texas District Attorney, Louis Conradt, shot himself on film, unable to bear the public humiliation of cameras airing his arrest. The show engenders conflicting responses: Did the show fulfill a public service by informing the public about real dangers and deterring potential predators, or was it an insensitive effort to garner ratings by taking advantage of human weaknesses? Is the sort of public shaming it imposes an appropriate form of punishment given the legitimate purposes of punishment? Did the show portray justice, or did it entrap victims? How did NBC’s working relationship with local police bear on the answer to that question? This paper addresses these questions and develops three objections to the show: that NBC in effect metes out unjust punishment; that it invades privacy; and that it entraps.