    Evaluating the effects of bilingual traffic signs on driver performance and safety

    Variable Message Signs (VMS) can provide immediate and relevant information to road users, and bilingual VMS offer great flexibility in countries where a significant proportion of the population speaks a language other than that of the majority. The study reported here evaluates the effect of various bilingual VMS configurations on driver behaviour and safety. The aim of the study was to determine whether the visual distraction associated with bilingual VMS of different configurations (length, complexity) impaired driving performance. A driving simulator was used to allow full control over the scenarios, road environment and sign configuration, and both longitudinal and lateral driver performance were assessed. Drivers were able to read one- and two-line monolingual signs and two-line bilingual signs without disruption to their driving behaviour. However, drivers significantly reduced their speed in order to read four-line monolingual and four-line bilingual signs, and also increased their headway to the vehicle in front. This implies that drivers may be reading the irrelevant text on the bilingual sign; various methods for reducing this effect are discussed.

    Evaluation of techniques to improve the legibility of bilingual Variable Message Signs

    This study evaluated a number of techniques that could be employed to reduce the amount of time drivers spend searching and reading bilingual signs. Using a tachistoscope, monolingual and Welsh bilingual participants were presented with various configurations of bilingual signing. The amount of information was varied (i.e. the number of lines), and a number of language-differentiation techniques were implemented. These techniques attempted to aid the perception and recognition of the relevant language and relied either on manipulating the position of the two languages or on using demarcation (colour, font, etc.). With regard to the amount of information presented, the reading response time for a single line of relevant text within a two-line bilingual sign was not significantly different from the reading response time for a one-line monolingual sign. Thus, participants were able to extract the relevant language from the bilingual sign with no decrement in performance. However, the reading response time for a message of two lines of relevant text in a four-line bilingual sign was significantly longer than that for a two-line monolingual sign; thus the amount of information presented (even if irrelevant) impaired performance. With regard to the positioning techniques, grouping the lines by language resulted in shorter reading response times than grouping the text by content. In addition, positioning the user's dominant language at the top of the sign improved reading times for both one- and two-line messages on bilingual signs. All the demarcation techniques were successful in reducing reading times on four-line bilingual signs, and it was found that, once a particular pattern of presentation had been established, an unexpected change to it significantly increased reading time.

    Not Using the Car to See the Sidewalk: Quantifying and Controlling the Effects of Context in Classification and Segmentation

    The importance of visual context in scene-understanding tasks is well recognized in the computer vision community. However, it is unclear to what extent computer vision models for image classification and semantic segmentation depend on context to make their predictions. A model that relies too heavily on context will fail when it encounters objects in context distributions different from the training data, so it is important to identify these dependencies before deploying models in the real world. We propose a method to quantify the sensitivity of black-box vision models to visual context by editing images to remove selected objects and measuring the response of the target models. We apply this methodology to two tasks, image classification and semantic segmentation, and discover undesirable dependencies between objects and context; for example, "sidewalk" segmentation relies heavily on "cars" being present in the image. We propose an object-removal-based data augmentation solution to mitigate this dependency and increase the robustness of classification and segmentation models to contextual variations. Our experiments show that the proposed data augmentation helps these models improve their performance in out-of-context scenarios while preserving their performance on regular data.
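    As a concrete illustration of the probing idea (a minimal sketch, not the authors' implementation), the PyTorch snippet below blanks out one object class in an image and compares a segmentation model's prediction for a different class before and after the edit. The model, the mean-colour fill standing in for true object removal, and the IoU-based sensitivity score are all assumptions made for this example.

        import torch

        def context_sensitivity(model, image, context_mask, target_class):
            # image: float tensor (3, H, W); context_mask: bool tensor (H, W)
            # marking the context object to remove (e.g. all "car" pixels);
            # target_class: int label id to probe (e.g. "sidewalk").
            edited = image.clone()
            fill = image.mean(dim=(1, 2))               # per-channel mean colour
            edited[:, context_mask] = fill.view(3, 1)   # crude stand-in for inpainting
            with torch.no_grad():
                pred_orig = model(image.unsqueeze(0)).argmax(dim=1)[0]
                pred_edit = model(edited.unsqueeze(0)).argmax(dim=1)[0]
            before = pred_orig == target_class          # (H, W) boolean masks
            after = pred_edit == target_class
            union = (before | after).sum().item()
            inter = (before & after).sum().item()
            return inter / union if union else 1.0      # low IoU = strong dependence

    A score near 1.0 means the target class is predicted in the same place with or without the context object; a low score for "sidewalk" when car pixels are removed would indicate exactly the dependency the abstract reports.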

    Play and Learn: Using Video Games to Train Computer Vision Models

    Video games are a compelling source of annotated data, as they can readily provide fine-grained ground truth for diverse tasks. However, it is not clear whether synthetically generated data resembles real-world images closely enough to improve the performance of computer vision models in practice. We present experiments assessing how well systems trained on synthetic RGB images extracted from a video game perform on real-world data. We collected over 60,000 synthetic samples from a modern video game under conditions similar to those of the real-world CamVid and Cityscapes datasets. We provide several experiments to demonstrate that the synthetically generated RGB images can be used to improve the performance of deep neural networks on both image segmentation and depth estimation. These results show that a convolutional network trained on synthetic data achieves a test error similar to that of a network trained on real-world data for dense image classification. Furthermore, the synthetically generated RGB images can provide similar or better results than the real-world datasets if a simple domain adaptation technique is applied. Our results suggest that collaboration with game developers to provide an accessible interface for gathering data is a potentially fruitful direction for future work in computer vision. (To appear in the British Machine Vision Conference (BMVC), September 2016.)
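    The abstract mentions "a simple domain adaptation technique" without specifying which one. A common minimal recipe consistent with this setup, sketched below in PyTorch as an assumption rather than the paper's actual procedure, is to pretrain on the plentiful game-generated frames and then briefly fine-tune on a small real-world set such as CamVid; the model, datasets, and hyperparameters are placeholders.

        import torch
        from torch.utils.data import DataLoader

        def pretrain_then_finetune(model, synthetic_ds, real_ds, device="cpu"):
            # Same loop twice: many epochs on cheap synthetic frames, then a
            # short, low-learning-rate fine-tune on scarce real frames.
            loss_fn = torch.nn.CrossEntropyLoss(ignore_index=255)  # 255 = void label
            model.to(device)
            for dataset, epochs, lr in [(synthetic_ds, 20, 1e-2), (real_ds, 5, 1e-3)]:
                opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
                loader = DataLoader(dataset, batch_size=8, shuffle=True)
                model.train()
                for _ in range(epochs):
                    for images, labels in loader:  # (B, 3, H, W) float, (B, H, W) long
                        opt.zero_grad()
                        loss = loss_fn(model(images.to(device)), labels.to(device))
                        loss.backward()
                        opt.step()
            return model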