
    Collision warning design in automotive head-up displays

    In recent years, the automotive industry has seen substantial growth in hardware and the underlying electronics, benefiting from both Human Machine Interface (HMI) research and modern technology. Advanced Driver Assistance Systems (ADAS) have many applications and a strong positive impact on drivers; Forward Collision Warning (FCW) is one of them. Over the last decades, different approaches and tools have been used to implement FCW systems, and current Augmented Reality (AR) applications are now feasible to integrate into modern cars. In this thesis work, we introduce three FCW designs: static, animated, and 3D animated warnings, and test them in three environments: day, night, and rain. The static and animated designs achieve a minimum response time of 0.486 s, whereas the 3D animated warning achieves 1.153 s.
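    To put the reported response times in context, the sketch below estimates stopping distance as a function of driver reaction time under a constant-deceleration braking model. The vehicle speed and deceleration values are illustrative assumptions, not figures from the thesis; only the 0.486 s and 1.153 s response times come from the abstract.

```python
# Illustrative only: how driver reaction time translates into stopping
# distance. Speed and deceleration are assumed values, not parameters
# reported in the thesis; the two reaction times are taken from the abstract.

def stopping_distance_m(speed_kmh: float, reaction_time_s: float,
                        decel_ms2: float = 7.0) -> float:
    """Reaction distance plus braking distance under constant deceleration."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    reaction_distance = v * reaction_time_s  # distance covered before braking starts
    braking_distance = v * v / (2.0 * decel_ms2)
    return reaction_distance + braking_distance

if __name__ == "__main__":
    for rt in (0.486, 1.153):  # response times reported for the warning designs
        d = stopping_distance_m(speed_kmh=100.0, reaction_time_s=rt)
        print(f"reaction time {rt:.3f} s -> stopping distance ~{d:.1f} m")
```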

    Visible light communication for advanced driver assistant systems

    Visible light communication for advanced driver assistant systems (VIDAS) is an outdoor application using the visible spectrum of light emitting diodes (LEDs). A simple traffic-light setup based on LED traffic lights for transmitting traffic information is analyzed in this paper. Several important design parameters have been optimized through intensive investigation of the gain variation over a 100 m transmission range. This process is expected to simplify the front-end receiver design and enhance the performance of the receiver, which is one of the most critical elements in a visible light communication (VLC) transceiver, especially in outdoor applications. Our design results show receiver adaptability for different packet sizes and distances. FCT project VIDAS – PDTC/EEA-TEL/75217/200
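    The abstract refers to gain variation over a 100 m transmission range; a common way to reason about this is the standard Lambertian line-of-sight channel model for LED links. The sketch below evaluates that textbook model over distance. All numeric parameters (half-power angle, detector area, filter and concentrator gains) are illustrative assumptions, not values from the VIDAS design.

```python
# Sketch of a line-of-sight VLC link using the standard Lambertian LED model.
# Parameter values below are illustrative assumptions, not figures from the
# VIDAS paper.
import math

def lambertian_order(half_power_angle_deg: float) -> float:
    """Lambertian emission order m from the LED half-power semi-angle."""
    phi_half = math.radians(half_power_angle_deg)
    return -math.log(2.0) / math.log(math.cos(phi_half))

def los_channel_gain(distance_m: float, m: float, detector_area_m2: float,
                     irradiance_angle_deg: float = 0.0,
                     incidence_angle_deg: float = 0.0,
                     filter_gain: float = 1.0,
                     concentrator_gain: float = 1.0) -> float:
    """DC channel gain of a direct line-of-sight LED-to-photodiode link."""
    phi = math.radians(irradiance_angle_deg)   # angle at the transmitter
    psi = math.radians(incidence_angle_deg)    # angle at the receiver
    return ((m + 1.0) * detector_area_m2 / (2.0 * math.pi * distance_m ** 2)
            * math.cos(phi) ** m * filter_gain * concentrator_gain * math.cos(psi))

if __name__ == "__main__":
    m = lambertian_order(30.0)           # assumed LED half-power semi-angle
    for d in (10.0, 25.0, 50.0, 100.0):  # distances up to the 100 m range
        h = los_channel_gain(d, m, detector_area_m2=1e-4)
        print(f"{d:6.1f} m: channel gain {h:.3e}")
```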

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for the research in this area.
    https://deepblue.lib.umich.edu/bitstream/2027.42/153959/1/From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI.pdf

    Defining Traffic Scenarios for the Visually Impaired

    For the development of a concept for transferring camera-based object detections from Advanced Driver Assistance Systems to the assistance of the visually impaired, we define relevant traffic scenarios and vision use cases by means of problem-centered interviews with four experts and ten members of the target group. We identify six traffic scenarios – general orientation, navigating to an address, crossing a road, obstacle avoidance, boarding a bus, and at the train station – clustered into three categories: Orientation, Pedestrian, and Public Transport. Based on the data, we describe each traffic scenario and derive a summarizing table adapted from software engineering, resulting in a collection of vision use cases. The ones that are also of interest in Advanced Driver Assistance Systems – Bicycle, Crosswalk, Traffic Sign, Traffic Light (State), Driving Vehicle, Obstacle, and Lane Detection – build the foundation of our future work. Furthermore, we present social insights gained from the interviews and discuss the indications we gathered by considering the importance of the identified use cases for each interviewed member of the target group.
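    As a compact reading of these findings, the sketch below encodes the six scenarios, their categories, and the shared vision use cases as plain data structures. The scenario-to-category assignment is inferred from the order in which they are listed in the abstract and should be treated as an assumption; only the names themselves come from the text.

```python
# Scenario names, categories, and shared vision use cases as listed in the
# abstract. The assignment of scenarios to categories is inferred from the
# listing order and is an assumption, not stated explicitly in the abstract.

TRAFFIC_SCENARIOS = {
    "general orientation":      "Orientation",
    "navigating to an address": "Orientation",
    "crossing a road":          "Pedestrian",
    "obstacle avoidance":       "Pedestrian",
    "boarding a bus":           "Public Transport",
    "at the train station":     "Public Transport",
}

# Detections of interest both in ADAS and in assistance of the visually impaired.
SHARED_VISION_USE_CASES = [
    "Bicycle", "Crosswalk", "Traffic Sign", "Traffic Light (State)",
    "Driving Vehicle", "Obstacle", "Lane Detection",
]
```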

    Smart Vehicle Proxemics: A Conceptual Framework Operationalizing Proxemics in the Context of Outside-the-Vehicle Interactions

    We introduce smart vehicle proxemics, a conceptual framework for interactive vehicular applications that operationalizes proxemics to outside-the-vehicle interactions. We identify four zones around the vehicle affording different kinds of interactions and discuss the corresponding conceptual space along three dimensions (physical distance, interaction paradigm, and goal). We study the dimensions of this framework and synthesize our findings regarding drivers’ preferences for (i) information to obtain from their vehicles at a distance, (ii) system functions of their vehicles to control remotely, and (iii) devices (e.g., smartphones, smartglasses, smart key fobs) for interactions outside the vehicle. We discuss the positioning of smart vehicle proxemics in the context of proxemic interactions more generally, and expand on the dichotomy and complementarity of outside-the-vehicle and inside-the-vehicle interactions for new applications enabled by smart vehicle proxemics.
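    The framework describes four zones around the vehicle characterized by physical distance, interaction paradigm, and goal. The sketch below shows one possible way to represent such zones in software; the zone labels, distance thresholds, paradigms, and goals are placeholders chosen for illustration, since the abstract does not specify them.

```python
# Hypothetical representation of proxemic zones around a vehicle. Zone names,
# distance boundaries, paradigms, and goals are illustrative placeholders;
# the paper's actual four zones are not specified in the abstract.
from dataclasses import dataclass

@dataclass
class ProxemicZone:
    name: str                  # placeholder label
    max_distance_m: float      # assumed outer boundary of the zone
    interaction_paradigm: str  # assumed interaction device or modality
    goal: str                  # assumed purpose of interaction in this zone

ZONES = [  # ordered from nearest to farthest (all values assumed)
    ProxemicZone("adjacent",  1.0, "touch on the vehicle", "fine-grained control"),
    ProxemicZone("near",      3.0, "gesture or voice",     "control system functions"),
    ProxemicZone("approach", 10.0, "smart key fob",        "prepare the vehicle"),
    ProxemicZone("far",      50.0, "smartphone app",       "obtain status information"),
]

def classify(distance_m: float) -> ProxemicZone:
    """Return the innermost zone whose boundary contains the given distance."""
    for zone in ZONES:
        if distance_m <= zone.max_distance_m:
            return zone
    return ZONES[-1]  # beyond all boundaries: treat as the farthest zone
```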

    Advanced methods for safe visualization on automotive displays

    Camera Monitor Systems (CMSs), for example for backup cameras or mirror replacements, are becoming increasingly important and already cover safety aspects such as guaranteed latency and absence of frame freezes. Today's approaches deal only with supervision of the digital interface, LCD backlight, and power supply. This paper introduces methods for advanced safety monitoring of the panel electronics and the optical display output that aim to enable future CMS-based automotive use cases. Our methods are based on correlating physical measurements with values predicted by a corresponding display model, which was built from calibration measurements using a large set of test patterns. The correlation of the monitoring results with the predicted values corresponds to the probability that the RGB data are shown as intended. This implies that an overlying system, an Automotive Safety Integrity Level (ASIL) Prepared Video Safety System (APVSS), ensures that only safety-verified RGB data are provided to the panel electronics. In case of failures, our methods enable a safe system state, for example by deactivating the panel. An additional challenge is to allow graceful degradation: a safe but slightly degraded image may provide a better customer experience than no information at all. We successfully verified our approach with a fully functional prototype and extensive evaluation towards “light-to-light” (camera to display output) supervision.
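    A minimal sketch of the correlation idea described above: measured optical output is compared against values predicted by a calibrated display model, and a low correlation is treated as a potential fault. The threshold, the synthetic signals, and the pass/fail decision are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch only: correlate measured display output with model-predicted values
# and flag a potential fault when the correlation drops. Threshold and test
# signals are assumptions for illustration.
import numpy as np

def display_output_plausible(measured: np.ndarray, predicted: np.ndarray,
                             threshold: float = 0.95) -> bool:
    """True if measured and model-predicted values correlate strongly."""
    r = np.corrcoef(measured, predicted)[0, 1]
    return bool(r >= threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    predicted = rng.uniform(0.0, 1.0, size=256)                # model prediction per test pattern
    measured_ok = predicted + rng.normal(0.0, 0.01, size=256)  # healthy panel: small deviation
    measured_stuck = (np.full(256, predicted.mean())
                      + rng.normal(0.0, 1e-3, size=256))       # frame-freeze-like fault

    print("healthy panel plausible:", display_output_plausible(measured_ok, predicted))
    print("stuck panel plausible:  ", display_output_plausible(measured_stuck, predicted))
```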