266 research outputs found

    Modelling Safe Driver Behaviour at Intersections Using Deep Learning

    Get PDF
    Roundabouts provide safe and fast circulation as well as many environmental advantages, but drivers adopting unsafe behaviours while circulating through them may cause safety issues and provoke accidents. In this paper we propose a way of training an autonomous vehicle to behave in a human-like, safe way when entering a roundabout. By placing a number of cameras in our vehicle and processing their video feeds through a series of algorithms, including Machine Learning, we can build a representation of the state of the surrounding environment. Then, we use another set of Deep Learning algorithms to analyze the data and determine the safest way of circulating through a roundabout given the current state of the environment, including nearby vehicles with their estimated positions, speeds and accelerations. By watching multiple attempts of a human entering a roundabout with both safe and unsafe behaviours, our second set of algorithms can learn to mimic the human's good attempts and act in the same way, which is key to a safe implementation of autonomous vehicles. This work details the series of steps that we took, from building the representation of our environment to acting according to it, in order to attain safe entry into single-lane roundabouts.
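    The entry above describes a two-stage pipeline: perception builds a state of the surrounding traffic, and a second learning stage imitates the safe human entries. As a rough illustration of that second stage only, below is a minimal behavioural-cloning sketch in PyTorch; the state layout, network sizes and the binary yield/enter action space are assumptions for illustration, not the paper's actual model.

```python
# Minimal behavioural-cloning sketch (hypothetical; not the paper's exact model).
# The state vector bundles, for each of the N closest vehicles on the roundabout,
# its estimated position, speed, acceleration and heading relative to the ego car.
import torch
import torch.nn as nn

N_VEHICLES = 4          # nearby vehicles considered (assumption)
FEATURES = 5            # x, y, speed, acceleration, heading per vehicle (assumption)

class EntryPolicy(nn.Module):
    """Maps the environment state to a go / yield decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VEHICLES * FEATURES, 64),
            nn.ReLU(),
            nn.Linear(64, 2),   # logits: [yield, enter]
        )

    def forward(self, state):
        return self.net(state)

policy = EntryPolicy()
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

# In the paper's setting, states/actions would come from recorded safe human
# demonstrations; here a random batch stands in as a placeholder.
states = torch.randn(32, N_VEHICLES * FEATURES)
actions = torch.randint(0, 2, (32,))      # 0 = yield, 1 = enter
optim.zero_grad()
loss = loss_fn(policy(states), actions)
loss.backward()
optim.step()
```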

    Fuzzy Free Path Detection from Disparity Maps by Using Least-Squares Fitting to a Plane

    Full text link
    A method to detect obstacle-free paths in real time, which works as part of a cognitive navigation aid system for visually impaired people, is proposed. It is based on the analysis of disparity maps obtained from a stereo vision system carried by the blind user. The presented detection method consists of a fuzzy logic system that assigns to each group of pixels a certainty of being part of a free path, depending on the parameters of a planar-model fit. We also present experimental results on different real outdoor scenarios showing that our method is the most reliable in the sense that it minimizes the false-positive rate.
    N. Ortigosa acknowledges the support of Universidad Politecnica de Valencia under grant FPI-UPV 2008 and the Spanish Ministry of Science and Innovation under grant MTM2010-15200. S. Morillas acknowledges the support of Universidad Politecnica de Valencia under grant PAID-05-12-SP20120696.
    Ortigosa Araque, N.; Morillas Gómez, S. (2014). Fuzzy Free Path Detection from Disparity Maps by Using Least-Squares Fitting to a Plane. Journal of Intelligent and Robotic Systems 75(2), 313-330. https://doi.org/10.1007/s10846-013-9997-1
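    As a rough illustration of the core idea in the entry above (fit a plane to a block of disparity values by least squares, then turn the fit quality into a fuzzy certainty of being free path), here is a minimal NumPy sketch; the block size, membership shape and thresholds are assumptions, not the paper's actual design.

```python
# Illustrative sketch: least-squares plane fit on a block of the disparity map,
# then a fuzzy certainty derived from the fit error. Thresholds are assumptions.
import numpy as np

def fit_plane(u, v, d):
    """Least-squares fit of d = a*u + b*v + c to disparity samples."""
    A = np.column_stack([u, v, np.ones_like(u)])
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    rmse = np.sqrt(np.mean((d - A @ coeffs) ** 2))
    return coeffs, rmse

def free_path_certainty(rmse, full=0.5, none=2.0):
    """Fuzzy membership: 1 below `full`, 0 above `none`, linear in between."""
    if rmse <= full:
        return 1.0
    if rmse >= none:
        return 0.0
    return (none - rmse) / (none - full)

# Example on a synthetic, nearly planar block of disparities.
u, v = np.meshgrid(np.arange(16), np.arange(16))
d = 0.05 * u + 0.3 * v + 4.0 + 0.1 * np.random.randn(16, 16)
coeffs, rmse = fit_plane(u.ravel(), v.ravel(), d.ravel())
print(coeffs, free_path_certainty(rmse))
```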

    Robust Autonomous Vehicle Pursuit without Expert Steering Labels

    Full text link
    In this work, we present a learning method for lateral and longitudinal motion control of an ego-vehicle for vehicle pursuit. The car being controlled does not have a pre-defined route; rather, it reactively adapts to follow a target vehicle while maintaining a safety distance. To train our model, we do not rely on steering labels recorded from an expert driver but effectively leverage a classical controller as an offline label generation tool. In addition, we account for errors in the predicted control values, which can lead to a loss of tracking and catastrophic crashes of the controlled vehicle. To this end, we propose an effective data augmentation approach, which allows us to train a network capable of handling different views of the target vehicle. During the pursuit, the target vehicle is first localized using a Convolutional Neural Network. The network takes a single RGB image along with the cars' velocities and estimates the target vehicle's pose with respect to the ego-vehicle. This information is then fed to a Multi-Layer Perceptron, which regresses the control commands for the ego-vehicle, namely throttle and steering angle. We extensively validate our approach using the CARLA simulator on a wide range of terrains. Our method demonstrates real-time performance and robustness to different scenarios, including unseen trajectories and high route completion. The project page containing code and multimedia can be publicly accessed here: https://changyaozhou.github.io/Autonomous-Vehicle-Pursuit/
    Comment: 9 pages, 4 figures, 3 tables
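    As a rough illustration of the control stage described above, the sketch below shows a small Multi-Layer Perceptron that regresses throttle and steering angle from the estimated relative pose of the target vehicle plus both vehicles' speeds; the input layout, layer sizes and output squashing are assumptions for illustration, not the authors' exact network.

```python
# Hypothetical sketch of an MLP controller for vehicle pursuit.
# Inputs (assumed): relative (x, y, yaw) of the target + ego speed + target speed.
import torch
import torch.nn as nn

class PursuitController(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 2),    # raw outputs for [throttle, steering]
        )

    def forward(self, x):
        out = self.mlp(x)
        throttle = torch.sigmoid(out[:, :1])   # throttle constrained to [0, 1]
        steering = torch.tanh(out[:, 1:])      # steering constrained to [-1, 1]
        return torch.cat([throttle, steering], dim=1)

controller = PursuitController()
pose_and_speeds = torch.tensor([[3.2, 0.4, 0.05, 8.0, 7.5]])  # placeholder input
print(controller(pose_and_speeds))
```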

    Naturalistic Driver Intention and Path Prediction using Machine Learning

    Get PDF
    Autonomous vehicles are not yet available to the public, because a number of challenges have yet to be overcome before they can drive safely and efficiently on public roads. Accurate prediction of other vehicles is vital for safe driving, as interacting with other vehicles is unavoidable on public streets. This thesis explores why this problem of scene understanding is still unsolved, and presents methods for driver intention and path prediction. The thesis focuses on intersections, as this is a very complex scenario in which to predict the actions of human drivers. Very limited data is available for intersection studies from the perspective of an autonomous vehicle, so this thesis presents a very large dataset of over 23,000 vehicle trajectories, used to validate the algorithms presented here. The dataset was collected using a lidar-based vehicle detection and tracking system onboard a vehicle, and an analysis of this data is presented. To determine the intent of a vehicle at an intersection, a method for manoeuvre classification using recurrent neural networks is presented. It allows accurate prediction of which destination a vehicle will take at an unsignalised intersection, based on that vehicle's approach. The final contribution of this thesis is a method for driver path prediction, also based on recurrent neural networks. It produces a multi-modal prediction for the vehicle's path, with uncertainty assigned to each mode. The output modes are not hand-labelled but learned from the data, so the number of output modes is not fixed. Whilst this method is applied here to vehicle prediction, it shows significant promise for use in other areas of robotics.
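    As a rough illustration of the manoeuvre-classification idea in the abstract above, the sketch below runs an LSTM over a vehicle's approach trajectory and predicts which destination arm it will take at an unsignalised intersection; the per-timestep features, layer sizes and number of destinations are assumptions, not the thesis' exact architecture.

```python
# Hypothetical sketch of recurrent manoeuvre classification at an intersection.
import torch
import torch.nn as nn

class ManoeuvreClassifier(nn.Module):
    def __init__(self, n_destinations=3):
        super().__init__()
        # each timestep (assumed): x, y, speed, heading of the observed vehicle
        self.lstm = nn.LSTM(input_size=4, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_destinations)

    def forward(self, trajectory):
        _, (h_n, _) = self.lstm(trajectory)   # final hidden state summarises the approach
        return self.head(h_n[-1])             # logits over destinations (e.g. left/straight/right)

model = ManoeuvreClassifier()
approach = torch.randn(8, 30, 4)   # batch of 8 tracks, 30 timesteps, 4 features
print(model(approach).shape)       # torch.Size([8, 3])
```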