20 research outputs found

    Saliency difference based objective evaluation method for a superimposed screen of the HUD with various background

    Full text link
    The head-up display (HUD) is an emerging device that can project information onto a transparent screen. HUDs have been used in airplanes and vehicles, usually placed in front of the operator's view. In a vehicle, the driver can see not only the information on the HUD but also the background (the driving environment) through it. Because the HUD is transparent, the projected information may interfere with the colors in the background; for example, a red message on the HUD becomes less noticeable when it overlaps the red brake lights of the vehicle ahead. As a first step toward solving this issue, a way to evaluate the mutual interference between the information on the HUD and the background is needed. This paper therefore proposes a saliency-based evaluation method: the interference is evaluated by comparing the HUD part cut from a saliency map of a measured image with the HUD image itself. Comment: 10 pages, 5 figures, 1 table, accepted by IFAC-HMS 201
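
    As a minimal sketch of this evaluation idea: the snippet below uses OpenCV's spectral-residual saliency and a mean absolute difference as the comparison metric. Both choices, along with the hud_box parameter and the function name, are illustrative assumptions; the abstract does not specify the paper's saliency model or metric.

    # Sketch: compare the HUD region of a measured image's saliency map
    # against the saliency of the HUD image alone (assumed setup).
    import cv2
    import numpy as np

    def hud_saliency_difference(measured_img, hud_img, hud_box):
        x, y, w, h = hud_box  # HUD location in the measured image (assumed known)
        sal = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok1, sal_measured = sal.computeSaliency(measured_img)
        ok2, sal_hud = sal.computeSaliency(hud_img)
        if not (ok1 and ok2):
            raise RuntimeError("saliency computation failed")
        # Cut the HUD part out of the measured image's saliency map.
        hud_part = sal_measured[y:y + h, x:x + w]
        sal_hud = cv2.resize(sal_hud, (w, h))
        # A large difference suggests the background interferes with the
        # visibility of the projected information.
        return float(np.mean(np.abs(hud_part - sal_hud)))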

    What Is the Gaze Behavior of Pedestrians in Interactions with an Automated Vehicle When They Do Not Understand Its Intentions?

    Full text link
    Interactions between pedestrians and automated vehicles (AVs) will increase significantly as AVs become more widespread. However, pedestrians often do not trust AVs enough, particularly when they are confused about an AV's intention in an interaction. This study seeks to evaluate whether pedestrians clearly understand the driving intentions of AVs in interactions and presents experimental research on the relationship between pedestrians' gaze behaviors and their understanding of the AV's intentions. The hypothesis investigated was that the less a pedestrian understands the driving intentions of the AV, the longer their gaze on it will last. A pedestrian-vehicle interaction experiment was designed to verify this hypothesis. A robotic wheelchair was used as both the manual driving vehicle (MV) and the AV for interacting with pedestrians, while the pedestrians' gaze data and their subjective evaluations of the driving intentions were recorded. The experimental results supported the hypothesis: there was a negative correlation between the pedestrians' gaze duration on the AV and their understanding of its driving intentions. Moreover, most pedestrians' gaze duration on the MV was shorter than that on the AV. We therefore conclude with two recommendations for designers of external human-machine interfaces (eHMIs): (1) when a pedestrian is engaged in an interaction with an AV, the AV's driving intentions should be provided; (2) if the pedestrian still gazes at the AV after it displays its driving intentions, the AV should provide clearer information about them. Comment: 10 pages, 10 figures
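
    The correlation analysis described above could be reproduced along these lines; the choice of Pearson's r and all data values below are assumptions for illustration, not taken from the paper.

    # Sketch: test for a negative correlation between gaze duration on the
    # AV and subjective understanding of its driving intention.
    from scipy.stats import pearsonr

    # Per-participant gaze duration (seconds) and understanding rating
    # (e.g., a Likert score). Hypothetical data, for illustration only.
    gaze_durations = [4.2, 1.8, 3.5, 0.9, 2.7]
    understanding = [2, 5, 3, 5, 4]

    r, p = pearsonr(gaze_durations, understanding)
    print(f"r = {r:.2f}, p = {p:.3f}")  # r < 0 would support the hypothesis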

    Importance of Instruction for Pedestrian-Automated Driving Vehicle Interaction with an External Human Machine Interface: Effects on Pedestrians' Situation Awareness, Trust, Perceived Risks and Decision Making

    Full text link
    Compared to a manual driving vehicle (MV), an automated driving vehicle (AV) lacks a way to communicate with pedestrians through the driver, because the driver usually does not participate in the driving task. An external human machine interface (eHMI) can therefore be viewed as a novel explicit communication method for conveying an AV's driving intentions to pedestrians when the two must negotiate in an interaction, e.g., an encountering scene. However, the eHMI alone may not guarantee that pedestrians fully recognize the AV's intention. In this paper, we propose that instruction in the eHMI's rationale can help pedestrians correctly understand the driving intentions and predict the behavior of the AV, thereby also improving their subjective feelings (i.e., dangerous feeling, trust in the AV, and feeling of relief) and their decision-making. The results of an interaction experiment in a road-crossing scene indicate that participants found it more difficult to be aware of the situation when encountering an AV without an eHMI than when encountering an MV; their subjective feelings and hesitation in decision-making also deteriorated significantly. When the eHMI was used, the participants' situational awareness, subjective feelings, and decision-making regarding the AV improved. After the instruction, it was easier for participants to understand the driving intention and predict the driving behavior of the AV with the eHMI; further, their subjective feelings and decision-related hesitation improved to the same level as for the MV. Comment: 5 figures, accepted by IEEE IV202

    Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations

    Get PDF
    Numerous applications such as autonomous driving, satellite imagery sensing, and biomedical imaging use computer vision as an important tool for perception tasks. Intelligent Transportation Systems (ITS) require scenes in sensor data to be recognized and located precisely. Semantic segmentation is one of the computer vision methods intended to perform such tasks. However, existing semantic segmentation tasks label each pixel with a single object class. Recognizing object attributes, e.g., pedestrian orientation, would be more informative and would help toward better scene understanding. We therefore propose a method that performs semantic segmentation and pedestrian attribute recognition simultaneously. We introduce an attribute-aware loss function that can be applied to an arbitrary base model. Furthermore, a re-annotation of the existing Cityscapes dataset enriches the ground-truth labels with pedestrian orientation attributes. We implement the proposed method and compare the experimental results with other approaches. The attribute-aware semantic segmentation outperforms baseline methods both in the traditional object segmentation task and in the expanded attribute detection task.
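
    A plausible sketch of what such an attribute-aware loss could look like, assuming a PyTorch base model with an added per-pixel orientation head: the paper's exact formulation is not given here, and the pedestrian class index, ignore value, and weighting are assumptions.

    # Sketch: per-pixel cross-entropy for semantic classes, plus an
    # orientation cross-entropy applied only to pedestrian pixels.
    import torch
    import torch.nn.functional as F

    PEDESTRIAN_CLASS = 11  # Cityscapes "person" index (assumed usage)
    IGNORE_INDEX = 255

    def attribute_aware_loss(class_logits, attr_logits, class_gt, attr_gt,
                             attr_weight=1.0):
        # class_logits: (N, C, H, W); attr_logits: (N, A, H, W)
        # class_gt, attr_gt: (N, H, W) integer label maps
        seg_loss = F.cross_entropy(class_logits, class_gt,
                                   ignore_index=IGNORE_INDEX)
        # Supervise orientation only where the ground truth is a pedestrian.
        ped_mask = class_gt == PEDESTRIAN_CLASS
        if ped_mask.any():
            masked_attr_gt = torch.where(
                ped_mask, attr_gt, torch.full_like(attr_gt, IGNORE_INDEX))
            attr_loss = F.cross_entropy(attr_logits, masked_attr_gt,
                                        ignore_index=IGNORE_INDEX)
        else:
            attr_loss = class_logits.new_zeros(())
        return seg_loss + attr_weight * attr_loss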

    MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results

    Full text link
    Small Object Detection (SOD) is an important machine vision topic because (i) a variety of real-world applications require object detection for distant objects and (ii) SOD is a challenging task due to the noisy, blurred, and less informative appearance of small objects. This paper presents a new SOD dataset consisting of 39,070 images and 137,121 bird instances, called the Small Object Detection for Spotting Birds (SOD4SB) dataset, and introduces the details of the challenge held with it. In total, 223 participants joined the challenge, and the paper briefly introduces the award-winning methods. The dataset, the baseline code, and the website for evaluation on the public test set are publicly available. Comment: This paper is included in the proceedings of the 18th International Conference on Machine Vision Applications (MVA2023). It will be officially published at a later date. Project page: https://www.mva-org.jp/mva2023/challeng
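
    If the SOD4SB annotations follow the common COCO format (an assumption; the challenge's actual annotation format and metric may differ), evaluating detections on the test set could look like the following, with placeholder file paths.

    # Sketch: COCO-style bounding-box evaluation with pycocotools.
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO("annotations/test.json")               # ground truth
    coco_dt = coco_gt.loadRes("results/detections.json")  # detections

    evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()  # prints AP/AR, including AP for small objects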

    Computational Models of Human Visual Attention and Their Implementations: A Survey

    Get PDF
    We humans can instantaneously detect the regions of a visual scene that are most likely to contain something of interest. Exploiting this pre-selection mechanism, called visual attention, in image and video processing systems would make them more sophisticated and therefore more useful. This paper briefly describes various computational models of human visual attention and their development, as well as related psychophysical findings. In particular, our objective is to carefully distinguish several types of studies related to human visual attention and saliency as a measure of attentiveness, and to provide a taxonomy from several viewpoints, such as the main objective, the use of additional cues, and the underlying mathematical principles. The survey concludes by discussing possible future directions for research into human visual attention and saliency computation.
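
    As one concrete example of the kind of model such surveys cover, here is a compact NumPy/OpenCV sketch of the spectral-residual saliency model of Hou and Zhang (2007); the parameter choices (working resolution, kernel sizes) follow common practice rather than this survey.

    # Sketch: spectral-residual saliency (Hou & Zhang, 2007).
    import cv2
    import numpy as np

    def spectral_residual_saliency(gray):
        """gray: 2-D float image; returns a saliency map of the same size."""
        small = cv2.resize(gray, (64, 64))  # work at a coarse scale
        spectrum = np.fft.fft2(small)
        log_amp = np.log(np.abs(spectrum) + 1e-8)
        phase = np.angle(spectrum)
        # Spectral residual = log amplitude minus its local average.
        residual = log_amp - cv2.blur(log_amp, (3, 3))
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
        sal = cv2.resize(sal, (gray.shape[1], gray.shape[0]))
        return sal / (sal.max() + 1e-8)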

    Mental Focus Analysis Using the Spatio-temporal Correlation between Visual Saliency and Eye Movements

    Get PDF
    A spatio-temporal correlation analysis between visual saliency and eye movements is presented for estimating mental focus toward videos. We extract spatio-temporal dynamics patterns of salient areas from the videos, which we refer to as saliency-dynamics patterns, and evaluate eye movements based on their correlation with the saliency-dynamics patterns in view. Experimental results using TV commercials demonstrate the effectiveness of the proposed method for mental-focus estimation.
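
    A much-simplified sketch of relating eye movements to saliency over time: the snippet below merely samples the saliency map at each frame's gaze point and averages the result as a crude attentiveness score. The paper's saliency-dynamics patterns are more elaborate; this is only illustrative, and the function name and array layout are assumptions.

    # Sketch: average saliency sampled at the gaze point of each frame.
    import numpy as np

    def gaze_saliency_score(saliency_maps, gaze_points):
        """saliency_maps: (T, H, W) array; gaze_points: (T, 2) array of (x, y)."""
        samples = []
        for sal, (x, y) in zip(saliency_maps, gaze_points):
            h, w = sal.shape
            x = int(np.clip(x, 0, w - 1))
            y = int(np.clip(y, 0, h - 1))
            samples.append(sal[y, x])
        # Higher values mean the gaze followed the salient regions of the video.
        return float(np.mean(samples))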

    Model-based Reminiscence: Guiding Mental Time Travel by Cognitive Modeling

    No full text
    HAI '16: The Fourth International Conference on Human Agent Interaction, Biopolis, Singapore, October 4-7, 2016