
    Assessment of Driver's Attention to Traffic Signs through Analysis of Gaze and Driving Sequences

    A driver's behavior is one of the most significant factors in Advanced Driver Assistance Systems. One area that has received little study is how observant drivers are in seeing and recognizing traffic signs. In this contribution, we present a system that uses the location where a driver is looking (the point of gaze) to determine whether the driver has seen a sign. Our system detects and classifies traffic signs inside the driver's attentional visual field to identify whether the driver has seen them. Based on the quantitative information obtained at this stage, the system can determine how observant of traffic signs drivers are. For detection, we combine the Maximally Stable Extremal Regions algorithm with color information, together with a binary linear Support Vector Machine classifier and Histogram of Oriented Gradients features. In the classification stage, we use a multi-class Support Vector Machine classifier, again with Histogram of Oriented Gradients features. In addition to detecting and recognizing traffic signs, the system determines whether a sign lies inside the driver's attentional visual field: if it does, the driver has kept his gaze on the sign and has seen it; if it does not, the driver did not look at the sign and the sign has been missed.
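
    A minimal sketch of the detection and gaze-check stages described above, assuming OpenCV and scikit-learn; the 64x64 window, the HOG parameters, and the circular attentional-field radius are illustrative assumptions, and the linear SVM is assumed to have been trained elsewhere on sign / non-sign patches.

```python
# Sketch: MSER candidates -> HOG features -> linear SVM -> gaze-containment check.
import math
import cv2
import numpy as np
from sklearn.svm import LinearSVC

HOG = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
MSER = cv2.MSER_create()

def candidate_boxes(bgr):
    """MSER candidate regions (x, y, w, h) from a grayscale version of the frame."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, boxes = MSER.detectRegions(gray)
    return boxes

def hog_features(bgr, boxes):
    """One HOG descriptor per candidate, resized to the SVM's training window."""
    feats = []
    for x, y, w, h in boxes:
        patch = cv2.cvtColor(cv2.resize(bgr[y:y + h, x:x + w], (64, 64)),
                             cv2.COLOR_BGR2GRAY)
        feats.append(HOG.compute(patch).ravel())
    return np.array(feats)

def sign_seen(box, gaze_xy, field_radius_px=150.0):
    """True if the detected sign's centre lies inside the attentional visual field."""
    x, y, w, h = box
    return math.hypot(x + w / 2 - gaze_xy[0], y + h / 2 - gaze_xy[1]) <= field_radius_px

def detect_seen_signs(bgr, gaze_xy, svm: LinearSVC):
    """Keep SVM-positive candidates and flag whether each was inside the gaze field."""
    boxes = candidate_boxes(bgr)
    if len(boxes) == 0:
        return []
    scores = svm.decision_function(hog_features(bgr, boxes))
    return [(tuple(b), sign_seen(b, gaze_xy)) for b, s in zip(boxes, scores) if s > 0]
```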

    Detection and Recognition of Traffic Sign using FCM with SVM

    This paper focuses on detection systems for traffic signs and boards placed on roads and highways. The system aims at real-time traffic sign and traffic board recognition, i.e. localizing which type of traffic sign or board appears in which area of an input image at a fast processing time. Our detection module is based on the proposed extraction and classification of traffic signs built upon a color probability model using Haar feature extraction and a color Histogram of Oriented Gradients (HOG). The HOG technique converts the original image to grayscale and then applies RGB information for the foreground. A Support Vector Machine (SVM) then extracts the object from this result and compares it with the database. In parallel, a Fuzzy C-means (FCM) clustering technique takes the same output and compares it with the database images. Using this method, the accuracy of identifying the signs can be improved, and new signs can be added dynamically. The goal of this work is to provide optimized prediction for a given sign.
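
    A minimal sketch of the fuzzy C-means step, written directly in NumPy since the abstract names no library; the cluster count, the fuzzifier m, and the idea of clustering a patch's RGB pixels are illustrative assumptions.

```python
# Sketch: fuzzy c-means clustering of pixel colours in a candidate sign region.
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features). Returns (cluster centres, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                      # each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1)))               # standard FCM update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Example: cluster the RGB pixels of a small patch into e.g. sign, border, background.
patch = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3))
centers, U = fuzzy_cmeans(patch.reshape(-1, 3).astype(float), n_clusters=3)
print(centers.shape, U.shape)  # (3, 3) (1024, 3)
```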

    Computer Vision-Based Traffic Sign Detection and Extraction: A Hybrid Approach Using GIS And Machine Learning

    Traffic sign detection and positioning have drawn considerable attention because of the recent development of autonomous driving and intelligent transportation systems. In order to detect and pinpoint traffic signs accurately, this research proposes two methods. In the first method, geo-tagged Google Street View images and road networks were utilized to locate traffic signs. In the second method, both traffic sign categories and locations were identified and extracted from location-based GoPro video. TensorFlow is the machine learning framework used to implement both methods. Using the first method (the Google Street View image-based approach), 363 stop signs were detected and mapped accurately. Using the second method (the GoPro video-based approach), 32 traffic signs were recognized and pinpointed with better location accuracy, to within 10 meters: the average distance from the observation points to the 32 ground-truth references was 7.78 meters. The advantages of these methods are discussed: the GoPro video-based approach has higher location accuracy, while the Google Street View image-based approach is more accessible in most major cities around the world. The proposed traffic sign detection workflow can thus extract and locate traffic signs in other cities. For further development of this research, IMU (Inertial Measurement Unit) and SLAM (Simultaneous Localization and Mapping) methods could be integrated to incorporate more data and improve location prediction accuracy.
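
    A minimal sketch of how such positioning error can be measured, using the standard haversine great-circle distance; the coordinates in the example are made up for illustration, not values from the study.

```python
# Sketch: distance in metres between a predicted sign position and a ground-truth point.
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Great-circle distance in metres between two WGS84 latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# Example: a predicted stop-sign location a few metres from its surveyed reference.
print(round(haversine_m(40.00000, -83.00000, 40.00007, -83.00002), 1))
```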

    Traffic sign detection for U.S. roads: Remaining challenges and a case for tracking

    Traffic sign detection is crucial in intelligent vehicles, whether one's objective is to develop Advanced Driver Assistance Systems or autonomous cars. Recent advances in traffic sign detection, especially the great effort put into the German Traffic Sign Detection Benchmark competition, have given rise to very reliable detection systems when tested on European signs. The U.S., however, takes a rather different approach to traffic sign design. This paper evaluates whether a current state-of-the-art traffic sign detector is useful for American signs. We find that for colorful, distinctively shaped signs, Integral Channel Features work well, but they fail on the large superclass of speed limit signs and similar designs. We also introduce an extension to the largest public dataset of American signs, the LISA Traffic Sign Dataset, and present an evaluation of tracking in the context of sign detection. We show that tracking essentially suppresses all false positives in our test set, and argue that in order to be useful for higher-level analysis, any traffic sign detection system should contain tracking.
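
    A minimal sketch of the tracking idea: a detection is only reported once it persists across consecutive frames, which suppresses single-frame false positives. The IoU threshold and the three-frame persistence rule are illustrative assumptions, not the paper's tracker.

```python
# Sketch: confirm a sign only after it is re-detected in several consecutive frames.
def iou(a, b):
    """a, b: (x1, y1, x2, y2) boxes; returns intersection-over-union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def confirm_tracks(frames, iou_thr=0.5, min_len=3):
    """frames: list of per-frame box lists. Returns boxes seen in >= min_len consecutive frames."""
    tracks = []      # each track: {"box": last box, "hits": consecutive-frame count}
    confirmed = []
    for boxes in frames:
        new_tracks = []
        for box in boxes:
            match = next((t for t in tracks if iou(t["box"], box) >= iou_thr), None)
            hits = match["hits"] + 1 if match else 1
            new_tracks.append({"box": box, "hits": hits})
            if hits == min_len:
                confirmed.append(box)
        tracks = new_tracks
    return confirmed

# The persistent box is confirmed; the one-frame detection is dropped.
frames = [[(10, 10, 50, 50)], [(12, 11, 52, 51), (200, 200, 240, 240)], [(13, 12, 53, 52)]]
print(confirm_tracks(frames))
```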

    A novel Big Data analytics and intelligent technique to predict driver's intent

    The modern age offers great potential for automatically predicting the driver's intent, through the increasing miniaturization of computing technologies, rapid advancements in communication technologies, and continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need to be able to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform this task. In this paper, we investigate the various data sources available in the car and the surrounding environment that can be utilized as inputs to predict the driver's intent and behavior. As part of investigating these potential data sources, we conducted experiments on e-calendars for a large number of employees and reviewed a number of available geo-referencing systems. Through the results of a statistical analysis and by computing location recognition accuracy, we explored in detail the potential utilization of calendar location data to detect the driver's intentions. In order to exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss ethical and legal issues arising from the deployment of intelligent self-learning cars.
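
    A generic illustration of fuzzy modelling of a single input signal, not the paper's methodology; the membership breakpoints and rule weights are made up to show how a crisp measurement, such as the distance to a calendar appointment's location, could be mapped to a graded intent confidence.

```python
# Sketch: triangular membership functions and a simple rule for a fuzzy intent score.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def intent_confidence(distance_km):
    near = tri(distance_km, -1.0, 0.0, 2.0)    # "near the appointment location"
    mid = tri(distance_km, 1.0, 5.0, 15.0)     # "within typical driving range"
    far = tri(distance_km, 10.0, 30.0, 60.0)   # "far away"
    # Simple weighted rule: near and mid distances support the intent hypothesis.
    return min(1.0, 1.0 * near + 0.6 * mid + 0.1 * far)

for d in (0.5, 4.0, 25.0):
    print(d, round(intent_confidence(d), 2))
```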

    Data analytics 2016: proceedings of the fifth international conference on data analytics


    Development of a spatial data infrastructure for precision agriculture applications

    Precision agriculture (PA) is the technical answer to tackling heterogeneous conditions in a field. It works through site-specific operations on a small scale and is driven by data. The objective is an optimized agricultural field application that is adaptable to local needs. Those needs differ within a task according to spatial conditions. A field, as a homogeneously planted unit, exceeds in size the scale units of different landscape-ecological properties such as soil type, slope, moisture content, solar radiation, etc. Various PA sensors sample data on the heterogeneous conditions in a field. PA software and Farm Management Information Systems (FMIS) transform these data into status information or application instructions that are optimized for the local conditions. The starting point of the research was the observation that PA processes were only being used in individual environments, without exchange between different users or with other domains. Data have been sampled for specific operations, but the model of PA suffers from these closed data streams and software products. Originally, sensors, data processing, and implement control were constructed and sold as a monolithic application. An exchange of hardware or software, or of data, was not envisaged. The design focused on functionality in a fixed environment and was conceived as a single unit. This has been identified as a disadvantage for ongoing development and the creation of added value: innovative or even inspiring outside influences cannot be taken into account. To make this possible, the underlying infrastructure must be flexible and optimized for the exchange of data. This thesis explores the necessary data handling in terms of integrating knowledge from other domains, with a focus on geospatial data processing. As PA is largely dependent on geographical data, this work develops spatial data infrastructure (SDI) components and is based on the methods and tools of geoinformatics. An SDI provides concepts for the organization of geospatial components. It consists of spatial data and metadata in geospatial workflows. The SDI at the center of these workflows is implemented through technologies, policies, agreements, and interfaces that make the data accessible to various users. Data exchange is the major aim of the concept. As previously stated, data exchange is necessary for PA operations, and it can benefit from the defined components of an SDI. Furthermore, PA processes gain the ability to interchange with other domains. The import of additional, external data is a benefit; at the same time, an export interface for agricultural data offers new possibilities. Coordinated communication ensures understanding for every participant. From the technological point of view, standardized interfaces are best practice. This work demonstrates the benefit of standardized data exchange for PA by using the standards of the Open Geospatial Consortium (OGC). The OGC develops and publishes a wide range of relevant standards, which are widely adopted in geospatially enabled software. They are practically proven in other domains and have been partially implemented in FMIS in recent years. Depending on their focus, they can support software solutions by incorporating additional information for humans or machines into additional logic and algorithms.
The research process follows five objectives: (i) to increase the usability of PA tools in order to open the technology to a wider group of users; (ii) to include external data and services seamlessly in PA applications through standardized interfaces; (iii) to support the exchange of data and technology with other domains; (iv) to create a modern PA software architecture that allows new players as well as established brands to support PA processes and to develop new business segments; (v) to use IT technologies as a driver for agriculture and to contribute to the digitalization of agriculture.
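
    A minimal sketch of the kind of standardized, OGC-based data exchange the thesis argues for: fetching field features from a Web Feature Service (WFS) over plain HTTP. The endpoint URL, feature type name, and GeoJSON output format are placeholders, not services described in the thesis.

```python
# Sketch: a WFS 2.0 GetFeature request returning GeoJSON features for a field area.
import requests

WFS_URL = "https://example.org/geoserver/wfs"      # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "farm:soil_zones",                # hypothetical feature type
    "outputFormat": "application/json",            # GeoJSON, if the server offers it
    "bbox": "52.0,7.5,52.1,7.6,urn:ogc:def:crs:EPSG::4326",
}

resp = requests.get(WFS_URL, params=params, timeout=30)
resp.raise_for_status()
for feature in resp.json().get("features", []):
    print(feature["id"], feature["properties"])
```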