Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges
Automatic traffic sign detection and recognition (TSDR) is an important research area in the development of advanced driver assistance systems (ADAS). Investigations into vision-based TSDR have received substantial interest in the research community, driven mainly by its three core tasks: detection, tracking and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey of traffic sign detection, tracking and classification. The details of the algorithms, methods and their specifications for detection, tracking and classification are investigated and summarized in tables along with the corresponding key references. A comparative study of each section is provided to evaluate the TSDR data, performance metrics and their availability. Current issues and challenges of the existing technologies are illustrated with brief suggestions and a discussion of the future of driver assistance system research. This review will hopefully lead to increasing efforts towards the development of future vision-based TSDR systems.
Document type: Article
An evaluation framework for stereo-based driver assistance
This is the post-print version of the article. Copyright © 2012 Springer Verlag. The accuracy of stereo algorithms or optical flow methods is commonly assessed by comparing the results against the Middlebury database. However, equivalent data for automotive or robotics applications rarely exist, as they are difficult to obtain. As our main contribution, we introduce an evaluation framework tailored for stereo-based driver assistance that delivers excellent performance measures while circumventing manual labelling effort. Within this framework one can combine several ways of ground-truthing, different comparison metrics, and large image databases. Using our framework, we show examples of several ground-truthing techniques: implicit ground truthing (e.g. a sequence recorded without a crash occurring), robotic vehicles with high-precision sensors, and, to a small extent, manual labelling. To show the effectiveness of our evaluation framework, we compare three different stereo algorithms at pixel and object level. In more detail, we evaluate an intermediate representation called the Stixel World. Besides evaluating the accuracy of the Stixels, we investigate the completeness (equivalent to the detection rate) of the Stixel World versus the number of phantom Stixels. Among many findings, using this framework enables us to reduce the number of phantom Stixels by a factor of three compared to the base parametrization, which had already been optimized by test-driving vehicles for distances exceeding 10,000 km.
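The completeness and phantom-Stixel metrics described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function name, the column-indexed ground-truth representation, and the matching threshold are all assumptions.

```python
# Hypothetical sketch of the completeness / phantom-rate evaluation:
# a detected stixel counts as a true positive if a ground-truth stixel
# exists at the same image column within a distance tolerance; any
# unmatched detection is counted as a phantom.

def evaluate_stixels(detected, ground_truth, max_dist=1.0):
    """detected: list of (column, distance_m); ground_truth: {column: distance_m}."""
    true_positives = 0
    phantoms = 0
    matched = set()
    for col, dist in detected:
        gt = ground_truth.get(col)
        if gt is not None and abs(gt - dist) <= max_dist and col not in matched:
            true_positives += 1
            matched.add(col)
        else:
            phantoms += 1
    # Completeness = detection rate over the ground-truth stixels.
    completeness = true_positives / len(ground_truth) if ground_truth else 0.0
    return completeness, phantoms

detected = [(0, 9.8), (1, 10.2), (2, 35.0)]  # (column, distance in m)
ground_truth = {0: 10.0, 1: 10.0}            # column -> distance in m
completeness, phantoms = evaluate_stixels(detected, ground_truth)
# completeness == 1.0, phantoms == 1
```

Tuning parameters such as `max_dist` trades completeness against phantom count, which is the balance the framework's parameter optimization addresses.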
The Visual Driver; promoting clarity and coherence
Drawing from a research-based case study for a vision support charity, this professional paper articulates the role of a 'visual driver' as a key tool in shaping a rebranding. The 'visual driver' is a visual rubric of nine subjects, each with an image critically selected to capture the personality and essence of an entity. The paper discusses the challenges of identifying the subtleties of a brand: how it behaves, its world outlook and its tone of voice, all of which are difficult to define. Once established, however, the designer's journey towards creating a successful brand with personality becomes clear. Furthermore, the participatory nature of the 'visual driver' rubric, as it passes between designer and client, communicates early ideation and initiates an informed dialogue between multiple parties. The flexibility, accessibility and participatory nature of this method are especially critical when working alongside clients with sensory impairments. The case study within the paper demonstrates the flexibility of the 'visual driver' to incorporate textures, which enhances the effectiveness of the tool for an organisation dealing with visual impairment. The paper articulates how the visual rubric enables designers to work collaboratively with clients, comparing their creative thinking and ensuring a better awareness and understanding of the brand challenges from client and end-user perspectives. Increasingly, developing a modern brand strategy demands a multiplicity of additional sensory feedback: aural, tactile, sonic and so on. The paper concludes by presenting and discussing how a multisensory 'visual driver' was used to facilitate a rebrand.
Harmonisation, decentralisation and local governance: Enhancing aid effectiveness
During recent decades, international development assistance was often marked by overlaps, duplication of effort and rivalry between a multitude of donor organisations. In order to translate the principles of the Paris Declaration into practice in the field of Local Governance and Decentralisation (LGD), different donor organisations have joined forces at headquarters level and formed a working group, the Development Partners Working Group for Local Governance and Decentralisation (DPWG-LGD), which has been operating since 2006. InWEnt has hosted the secretariat of the group since 2008 and assigned Wageningen International to organise two lead-donor workshops. The workshops drew a cross-section of delegates comprising development partners, consultants, academics, members of parliament and local governance practitioners. The partner countries included Rwanda, Ghana, Tanzania and Uganda, whose experiences were mutually reinforcing and beneficial.
Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances and urban interactive infrastructures, may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
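A driver-reported observation of the kind the framework shares can be sketched using the core fields of the OGC Observations & Measurements (O&M) model that underpins SWE. The field names follow O&M terminology, but the URIs, identifiers and result payload below are illustrative assumptions, not the paper's actual schema.

```python
import json
from datetime import datetime, timezone

# Hedged sketch: a human-generated observation (a voice report of heavy
# traffic captured through the vehicle's HMI) expressed with O&M-style
# fields. All identifiers and the result structure are hypothetical.
observation = {
    "procedure": "urn:example:hmi:voice-report",          # how it was produced
    "observedProperty": "http://example.org/prop/TrafficCongestion",
    "featureOfInterest": "urn:example:road:A6:km42",      # where it applies
    "phenomenonTime": datetime.now(timezone.utc).isoformat(),
    "result": {"level": "heavy", "confidence": 0.8},
}

# Serialized for publication to a Sensor Web service.
payload = json.dumps(observation)
```

Encoding the report in a standard observation model is what lets a subjective HMI input be consumed alongside conventional sensor data on the Semantic Sensor Web.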