
    EYES : a novel overtaking assistance system for vehicular networks

    Developments in the ITS area are received with great expectation by both consumers and industry. Despite their huge potential benefits, however, ITS solutions suffer from a slow pace of adoption by manufacturers. In this paper we propose EYES, an ITS system that aims to help drivers overtake. The system autonomously creates a network of the devices running EYES and provides drivers with a video feed from the vehicle located just ahead, thus presenting a better view of the road ahead and of any vehicles coming from the opposite direction. This is especially useful when the driver's front view is blocked by a large vehicle, so that the decision whether to overtake can be taken based on the visuals provided by the application. We have validated EYES, the proposed overtaking assistance system, in both indoor and realistic scenarios involving vehicular networks, and preliminary results allow us to be optimistic about its effectiveness and applicability.
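
    The abstract describes relaying a live camera view from the vehicle ahead to the following driver. A minimal sketch of how such a relay could work is given below, assuming an ad hoc Wi-Fi link between the two smartphones and a simple length-prefixed JPEG-over-TCP framing; the address FRONT_VEHICLE_IP, the port, and the framing are illustrative assumptions, not details taken from the paper.

        import socket
        import struct

        import cv2                              # OpenCV: camera capture, JPEG encode/decode
        import numpy as np

        FRONT_VEHICLE_IP = "192.168.49.1"       # hypothetical address of the vehicle ahead
        PORT = 5000

        def stream_front_camera():
            """Run on the front vehicle: push camera frames as length-prefixed JPEGs."""
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.bind(("", PORT))
            server.listen(1)
            conn, _ = server.accept()
            cap = cv2.VideoCapture(0)
            try:
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    ok, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 60])
                    if not ok:
                        continue
                    data = jpg.tobytes()
                    conn.sendall(struct.pack("!I", len(data)) + data)   # 4-byte length, then JPEG
            finally:
                cap.release()
                conn.close()

        def show_overtaking_view():
            """Run on the rear vehicle: receive frames and display the road ahead."""
            sock = socket.create_connection((FRONT_VEHICLE_IP, PORT))
            buf = b""
            while True:
                while len(buf) < 4:                     # read the length prefix
                    buf += sock.recv(4096)
                size = struct.unpack("!I", buf[:4])[0]
                buf = buf[4:]
                while len(buf) < size:                  # read one full JPEG frame
                    buf += sock.recv(4096)
                frame = cv2.imdecode(np.frombuffer(buf[:size], np.uint8), cv2.IMREAD_COLOR)
                buf = buf[size:]
                cv2.imshow("EYES: view from the vehicle ahead", frame)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break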

    Performance tuning of a smartphone-based overtaking assistant

    ITS solutions suffer from the slow pace of adoption by manufacturers despite the interest shown by both consumers and industry. Our goal is to develop ITS applications using already available technologies to make them affordable, quick to deploy, and easy to adopt. In this paper we introduce EYES, an overtaking assistance solution that provides drivers with a real-time video feed from the vehicle located just in front. Our application thus provides a better view of the road ahead, and of any vehicles travelling in the opposite direction, being especially useful when the driver's front view is blocked by large vehicles. We evaluated our application using the MJPEG video encoding format and determined the most effective resolution and JPEG quality settings for our case. Experimental results from tests performed with the application in both indoor and outdoor scenarios allow us to be optimistic about the effectiveness and applicability of smartphones in providing overtaking assistance based on video streaming in vehicular networks.
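
    Since the abstract reports tuning the MJPEG resolution and JPEG quality, a small sketch of such a sweep is shown below: it encodes one sample frame at several candidate settings and estimates the resulting bitrate. The candidate resolutions, quality factors, and frame rate are assumed values for illustration, not the paper's actual test matrix.

        import cv2

        # Assumed candidate settings; the paper's actual test matrix is not reproduced here.
        RESOLUTIONS = [(320, 240), (640, 480), (1280, 720)]
        JPEG_QUALITIES = [40, 60, 80]
        FPS = 15                                        # assumed streaming frame rate

        def estimate_bitrates(frame):
            """Estimate the MJPEG bitrate of each (resolution, quality) pair from one frame."""
            for width, height in RESOLUTIONS:
                resized = cv2.resize(frame, (width, height))
                for q in JPEG_QUALITIES:
                    ok, jpg = cv2.imencode(".jpg", resized, [int(cv2.IMWRITE_JPEG_QUALITY), q])
                    if not ok:
                        continue
                    kbps = len(jpg) * 8 * FPS / 1000    # bytes -> bits, times frames per second
                    print(f"{width}x{height} @ quality {q}: ~{kbps:.0f} kbit/s")

        if __name__ == "__main__":
            cap = cv2.VideoCapture(0)                   # any representative road-scene frame will do
            ok, frame = cap.read()
            cap.release()
            if ok:
                estimate_bitrates(frame)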

    Potential of Augmented Reality for Intelligent Transportation Systems

    Rapid advances in wireless communication technologies, coupled with ongoing massive development in vehicular networking standards and innovations in computing, sensing, and analytics, have paved the way for intelligent transportation systems (ITS) to develop rapidly in the near future. ITS provides a complete solution for the efficient and intelligent management of real-time traffic, wherein sensory data are collected from within the vehicles (i.e., via their onboard units) and exchanged between vehicles, between vehicles and their supporting roadside infrastructure/network, and between vehicles and vulnerable pedestrians, paving the way for the realization of the futuristic Internet of Vehicles. The traditional intent of an ITS system is to detect, monitor, control, and subsequently reduce traffic congestion based on a real-time analysis of data pertinent to road traffic patterns, including traffic density in a geographical area of interest, precise vehicle velocities, and current and predicted travelling trajectories and times. However, merely relying on an ITS framework is not an optimal solution. In dense traffic environments, where communication broadcasts from hundreds of thousands of vehicles could choke the entire network (and so lead to fatal accidents in the case of autonomous vehicles that depend on reliable communications for their operational safety), a fallback to the traditional decentralized vehicular ad hoc network (VANET) approach becomes necessary. It is therefore of critical importance to enhance the situational awareness of vehicular drivers so as to enable them to make quick but well-founded manual decisions in such safety-critical situations. Comment: In: Lee N. (eds) Encyclopedia of Computer Graphics and Games. Springer, Cham, 201

    On the needs and requirements arising from connected and automated driving

    Future 5G systems have set the goal of supporting mission-critical Vehicle-to-Everything (V2X) communications, an important step towards connected and automated driving. To achieve this goal, the communication technologies should be designed based on a solid understanding of the new V2X applications and the related requirements and challenges. In this regard, we provide a description of the main V2X application categories and their representative use cases, selected based on an analysis of the future needs of cooperative and automated driving. We also present a methodology for deriving the network-related requirements from the automotive-specific requirements. The methodology can be used to analyse the key requirements of both existing and future V2X use cases.

    Simulation of AEB system testing

    This master's thesis describes a simulation tool created to analyse ADAS functions and vehicle dynamics. The tool was built with CarMaker and Microsoft Excel. The software can be used for SIL testing to analyse sensor output data before proving-ground tests.

    Radar target classification by micro-Doppler contributions

    This thesis studies non-cooperative automatic radar target classification. Recent developments in silicon-germanium and monolithic microwave integrated circuit technologies make it possible to build cheap and powerful continuous-wave radars, and their availability opens new applications in different areas. One of these applications is security: radars can be used to survey large areas and detect unwanted moving objects, and determining the type of the target is essential for such systems. Microwave radars use high frequencies that reflect from objects of millimetre size. The micro-Doppler signature of a target is a time-varying, frequency-modulated contribution to the radar backscattering caused by the relative movement of separate parts of the target. The micro-Doppler phenomenon makes it possible to classify non-rigid moving objects by analysing their signatures. This thesis focuses on the design of automatic target classification systems based on the analysis of micro-Doppler signatures. Such analysis is usually performed with second-order statistics, i.e. common energy-based power spectra and spectrograms. However, the information about phase coupling in the backscattering is completely lost in these energy-based statistics. This useful phase-coupling content can be extracted by higher-order spectral techniques, and we show that it is useful for radar target classification in terms of improved robustness to various corruption factors. The thesis also covers the problem of unmanned aerial vehicle (UAV) classification using continuous-wave radar, considering all processing steps required to reach a decision from the raw radar data. A novel feature extraction method is introduced, based on eigenpairs extracted from the correlation matrix of the signature. Different classes of UAVs are successfully separated in feature space by a support vector machine, and the high classification accuracy achieved in experiments on real radar data demonstrates the efficiency of the proposed solutions. Given the very rapid growth in technologies for intelligent vehicle radar systems, the thesis further covers several applications of automotive radar; such radars are already built into vehicles and ready for new applications. We consider two novel applications: first, multi-sensor fusion of a video camera and radar for more efficient vehicle-to-vehicle video transmission; second, frequency-band-invariant pedestrian classification by an automotive radar, which allows the same signal processing hardware/software to be used in different countries where regulations vary and radars with different operating frequencies are required. Overall, we consider several radar applications: ground moving target classification, aerial target classification, unmanned aerial vehicle classification, and pedestrian classification. The highest priority is given to verification of the proposed methods on real radar data collected at frequencies of 9.5, 10, 16.8, 24 and 33 GHz.
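
    As a rough illustration of the eigenpair-based feature extraction and SVM classification mentioned in the abstract, the sketch below computes a spectrogram of a radar return, forms the correlation matrix of the resulting time-frequency signature, keeps its leading eigenvalues as features, and trains a support vector machine. The window length, number of eigenvalues, and training interface are assumptions, not parameters taken from the thesis.

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.svm import SVC

        def eigen_features(radar_return, fs, n_eig=8):
            """Leading eigenvalues of the correlation matrix of the micro-Doppler signature."""
            _, _, sxx = spectrogram(radar_return, fs=fs, nperseg=256, noverlap=192,
                                    return_onesided=False)
            sig = np.abs(sxx)                           # time-frequency signature
            sig /= np.linalg.norm(sig) + 1e-12          # make features scale-invariant
            corr = sig @ sig.T                          # correlation across Doppler bins
            eigvals = np.linalg.eigvalsh(corr)[::-1]    # eigenvalues, largest first
            return eigvals[:n_eig]

        def train_classifier(signals, labels, fs):
            """Fit an SVM that separates target classes (e.g. UAV types) in feature space."""
            features = np.array([eigen_features(s, fs) for s in signals])
            clf = SVC(kernel="rbf")
            clf.fit(features, labels)
            return clf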

    An ITS solution providing real-time visual overtaking assistance using smartphones

    ITS solutions suffer from the slow pace of adoption by manufacturers despite the interest shown by both consumers and industry. Our goal is to develop ITS applications using already available technologies to make them affordable, quick to deploy, and easy to adopt. In this paper we introduce an ITS system for overtaking assistance that provides drivers with a real-time video feed from the vehicle located just in front. This provides a better view of the road ahead, and of any vehicles travelling in the opposite direction, being especially useful when the driver's front view is blocked by large vehicles. We evaluated our application using the H.264 and MJPEG video encoding formats, and determined the most effective codec choice for our case. Experimental results allow us to be optimistic about the effectiveness and applicability of smartphones in providing overtaking assistance based on video streaming in vehicular networks. This work was partially supported by the European Commission under Svagata.eu, the Erasmus Mundus Programme, Action 2 (EMA2), and the Ministerio de Economía y Competitividad, Programa Estatal de Investigación, Desarrollo e Innovación Orientada a los Retos de la Sociedad, Proyectos I+D+I 2014, Spain, under Grant TEC2014-52690-R. Patra, S.; Tavares De Araujo Cesariny Calafate, CM.; Cano Escribá, JC.; Manzoni, P. (2015). An ITS solution providing real-time visual overtaking assistance using smartphones. IEEE. https://doi.org/10.1109/LCN.2015.7366320

    Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems

    Multisensor data fusion and integration is a rapidly evolving research area that requires interdisciplinary knowledge in control theory, signal processing, artificial intelligence, probability and statistics, etc. Multisensor data fusion refers to the synergistic combination of sensory data from multiple sensors and related information to provide more reliable and accurate information than could be achieved using a single, independent sensor (Luo et al., 2007). It is in fact a multilevel, multifaceted process dealing with the automatic detection, association, correlation, estimation, and combination of data from single and multiple information sources, and the results of the fusion process help users make decisions in complicated scenarios. Integration of multiple sensor data was originally needed for military applications in ocean surveillance, air-to-air and surface-to-air defence, and battlefield intelligence. More recently, multisensor data fusion has also been applied in the non-military fields of remote environmental sensing, medical diagnosis, automated monitoring of equipment, robotics, and automotive systems (Macci et al., 2008). The potential advantages of multisensor fusion and integration are redundancy, complementarity, timeliness, and cost of the information. The integration or fusion of redundant information can reduce overall uncertainty and thus increase the accuracy with which features are perceived by the system, and multiple sensors providing redundant information can also increase reliability in the case of sensor error or failure. Complementary information from multiple sensors allows features in the environment to be perceived that would be impossible to perceive using the information from each individual sensor operating separately (Luo et al., 2007). Besides, driving, as one of our daily activities, is a complex task involving a great amount of interaction between driver and vehicle. Drivers regularly share their attention among operating the vehicle, monitoring traffic and nearby obstacles, and performing secondary tasks such as conversing or adjusting comfort settings (e.g. temperature, radio). The complexity of the task and the uncertainty of the driving environment make driving very dangerous: according to a study in the European member states, there are more than 1,200,000 traffic accidents a year with over 40,000 fatalities. This fact points to the growing demand for automotive safety systems, which aim to make a significant contribution to overall road safety (Tatschke et al., 2006). Therefore, there has recently been an increased number of research activities focusing on Driver Assistance System (DAS) development in order…
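
    As a minimal numerical illustration of the claim that fusing redundant measurements reduces overall uncertainty, the sketch below combines two independent range estimates (say, from a radar and a camera) by inverse-variance weighting; the fused variance is never larger than the smallest individual variance. The sensor values and variances are made-up numbers, not data from the chapter.

        import numpy as np

        def fuse(measurements, variances):
            """Inverse-variance (maximum-likelihood) fusion of independent redundant sensors."""
            m = np.asarray(measurements, dtype=float)
            v = np.asarray(variances, dtype=float)
            w = 1.0 / v                            # weight each sensor by its precision
            fused = np.sum(w * m) / np.sum(w)      # weighted mean of the measurements
            fused_var = 1.0 / np.sum(w)            # never larger than the best sensor's variance
            return fused, fused_var

        # Made-up example: radar and camera both estimate the range to the lead vehicle.
        radar_range, radar_var = 25.3, 0.4         # metres, metres^2 (assumed)
        camera_range, camera_var = 24.7, 1.0
        estimate, uncertainty = fuse([radar_range, camera_range], [radar_var, camera_var])
        print(f"fused range = {estimate:.2f} m, variance = {uncertainty:.2f} m^2")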