
    Fusion-layer-based machine vision for intelligent transportation systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 307-317). Environment-understanding technology is vital for intelligent vehicles, which are expected to respond automatically to fast-changing environments and dangerous situations. To obtain perception abilities, a vehicle must automatically detect static and dynamic obstacles and obtain their related information, such as location, speed, collision/occlusion possibility, and other current and historical dynamics. Conventional methods detect each piece of information independently, which is normally noisy and not very reliable. Instead, we propose a fusion-based, layer-based information-retrieval methodology to systematically detect obstacles and obtain their location and timing information from visible and infrared sequences. The proposed obstacle detection methodologies exploit the connections between different sources of information and increase the accuracy of obstacle information estimation, thus improving environment-understanding abilities and driving safety. By Yajun Fang. Ph.D.
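    The abstract does not spell out the fusion rule, but the benefit of combining detections from visible and infrared sequences can be illustrated with standard inverse-variance weighting of two independent noisy estimates. The function name and example variances below are an illustrative sketch, not the thesis's actual method:

```python
import numpy as np

def fuse_estimates(x_vis, var_vis, x_ir, var_ir):
    """Inverse-variance weighted fusion of two noisy position estimates.

    Combining a visible-spectrum and an infrared estimate of the same
    obstacle position yields a fused estimate whose variance is lower
    than either input's, which is the basic payoff of sensor fusion.
    """
    w_vis = 1.0 / var_vis
    w_ir = 1.0 / var_ir
    x_fused = (w_vis * x_vis + w_ir * x_ir) / (w_vis + w_ir)
    var_fused = 1.0 / (w_vis + w_ir)
    return x_fused, var_fused

# Example: the infrared reading is noisier here, so the fused estimate
# leans toward the visible-spectrum measurement.
x, v = fuse_estimates(10.0, 0.5, 11.0, 2.0)  # -> (10.2, 0.4)
```

    Note that the fused variance (0.4) is below both input variances, so each added sensor can only tighten the estimate.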

    Intelligent Transportation Systems: Fusing Computer Vision and Sensor Networks for Traffic Management

    Intelligent Transportation Systems (ITS) represent a pivotal approach to addressing the complex challenges posed by modern-day urban mobility. By seamlessly integrating computer vision and sensor networks, ITS offer a comprehensive solution for traffic management, safety enhancement, and environmental sustainability. This paper delves into the synergistic fusion of computer vision and sensor networks within the framework of ITS, emphasizing their collective role in optimizing traffic flow, mitigating congestion, and enhancing overall road safety. Leveraging cutting-edge technologies such as machine learning, image processing, and the Internet of Things (IoT), ITS harness real-time data acquisition and analytics capabilities to facilitate informed decision-making by transportation authorities. Through a comprehensive review of recent advancements, challenges, and opportunities, this paper illuminates the transformative potential of integrating computer vision and sensor networks in ITS. Furthermore, it presents compelling case studies and exemplary applications, showcasing the tangible benefits of this fusion across diverse traffic management scenarios. Ultimately, this paper advocates for the widespread adoption of integrated ITS solutions as a means to usher in a new era of smarter, safer, and more sustainable urban transportation systems.

    Infrared Technologies for Defence Systems

    Infrared technology has seen phenomenal growth since its inception during World War II. Defence applications have been the main driver of infrared technology development worldwide. Infrared systems have been developed mainly for night vision, all-weather surveillance, search and tracking, and missile-seeker applications. Ever more demanding defence system requirements have attracted considerable investment, and research has been directed mainly towards product development. Spin-offs include medical applications such as thermography; transportation applications such as enhanced vision systems for airplanes, helicopters, sea vehicles, and automobiles; law-enforcement applications in drug prevention and criminal tracking; forest-fire management; and environmental monitoring. Infrared technology has proven to be a force multiplier in war as well as in low-intensity conflict situations. Future research will be driven by intelligent vision sensor development covering the visible-infrared spectrum for automated surveillance, change detection, 3D machine vision systems, dynamic particle metrology, missile and ballistic testing/imaging, and faster, more precise, and more manoeuvrable robotic applications.

    6G for Vehicle-to-Everything (V2X) Communications: Enabling Technologies, Challenges, and Opportunities

    We are on the cusp of a new era of connected autonomous vehicles with unprecedented user experiences, tremendously improved road safety and air quality, highly diverse transportation environments and use cases, as well as a plethora of advanced applications. Realizing this grand vision requires a significantly enhanced vehicle-to-everything (V2X) communication network, which should be extremely intelligent and capable of concurrently supporting hyper-fast, ultra-reliable, and low-latency massive information exchange. It is anticipated that the sixth-generation (6G) communication systems will fulfill these requirements of next-generation V2X. In this article, we outline a series of key enabling technologies from a range of domains, such as new materials, algorithms, and system architectures. Aiming for truly intelligent transportation systems, we envision that machine learning will play an instrumental role in advanced vehicular communication and networking. To this end, we provide an overview of recent advances in machine learning for 6G vehicular networks. To stimulate future research in this area, we discuss the strengths, open challenges, maturity, and areas for enhancement of these technologies.

    An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles

    Nowadays, automobile manufacturers strive to make cars fully safe. Monitoring the driver's actions with computer vision techniques to detect driving mistakes in real time, and then planning autonomous driving maneuvers to avoid collisions, is one of the most important problems investigated in machine vision and Intelligent Transportation Systems (ITS). The main goal of this study is to prevent accidents caused by fatigue, drowsiness, and driver distraction. To avoid such incidents, this paper proposes an integrated safety system that continuously monitors the driver's attention and the vehicle's surroundings, and then decides whether the current steering control status is safe. For this purpose, we equipped an ordinary car, called FARAZ, with a vision system consisting of four mounted cameras, along with a universal car tool for communicating with the surrounding factory-installed sensors and other car systems and for sending commands to actuators. The proposed system leverages a scene-understanding pipeline based on deep convolutional encoder-decoder networks and a driver-state detection pipeline. We have also been identifying and assessing domestic capabilities for developing these technologies for ordinary vehicles, in order to manufacture smart cars and provide an intelligent system that increases safety and assists the driver in various conditions and situations.
    Comment: 15 pages and 5 figures. Submitted to the International Conference on Contemporary Issues in Data Science (CiDaS 2019). Learn more about this project at https://iasbs.ac.ir/~ansari/fara

    Perception advances in outdoor vehicle detection for automatic cruise control

    This paper describes a vehicle detection system based on a support vector machine (SVM) and monocular vision. The final goal is to provide the vehicle-to-vehicle time gap for automatic cruise control (ACC) applications in the framework of intelligent transportation systems (ITS). The challenge is to use a single camera as input, in order to achieve a low-cost final system that meets the requirements for serial production in the automotive industry. The basic features of candidate objects are first located in the image using vision and then passed to an SVM-based classifier. An intelligent learning approach is proposed to better deal with object variability, illumination conditions, partial occlusions, and rotations. A large database containing thousands of object examples extracted from real road scenes has been created for learning purposes. The classifier is trained using an SVM so that it can classify vehicles, including trucks. In addition, the vehicle detection system described in this paper provides early detection of passing cars and assigns a lane to each target vehicle. We present and discuss the results achieved to date in real traffic conditions.
    Ministerio de Educación y Ciencia
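    The abstract names the pipeline (locate candidate features in the image, then classify with an SVM) without implementation detail. A minimal sketch of the classification stage follows; the synthetic stand-in features, cluster parameters, and RBF kernel are all assumptions, since the paper's exact features and kernel are not given in the abstract:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for the features extracted from candidate image
# regions: two separable clusters representing "vehicle" vs "non-vehicle".
X_vehicle = rng.normal(loc=1.0, scale=0.3, size=(200, 16))
X_background = rng.normal(loc=-1.0, scale=0.3, size=(200, 16))
X = np.vstack([X_vehicle, X_background])
y = np.array([1] * 200 + [0] * 200)

# RBF kernel is a common default; the paper's actual kernel is not stated.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

# Classify a new candidate region whose features fall near the
# "vehicle" cluster center.
pred = clf.predict(np.full((1, 16), 1.0))
```

    In the real system the feature extraction step (not shown) matters at least as much as the classifier; the abstract's "intelligent learning approach" addresses exactly that variability.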

    Fuzzy Free Path Detection based on Dense Disparity Maps obtained from Stereo Cameras

    In this paper we propose a fuzzy method to detect free paths in real time from digital stereo images. It is based on looking for linear variations of depth in disparity maps, which are obtained by processing a pair of rectified images from two stereo cameras. By least-squares fitting groups of disparity-map columns to a linear model, free paths are detected and assigned a certainty using a fuzzy rule. Experimental results on real outdoor images are also presented.
    Nuria Ortigosa acknowledges the support of Universidad Politécnica de Valencia under grant FPI-UPV 2008. Samuel Morillas acknowledges the support of the Spanish Ministry of Education and Science under grant MTM 2009-12872-C02-01.
    Ortigosa Araque, N.; Morillas Gómez, S.; Peris Fajarnés, G.; Dunai Dunai, L. (2012). Fuzzy Free Path Detection based on Dense Disparity Maps obtained from Stereo Cameras. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 20(2), 245-259. doi:10.1142/S0218488512500122
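    The core idea (fit each group of disparity-map columns to a linear model, then convert the fit quality into a fuzzy certainty) can be sketched as follows. The decreasing-ramp membership function and the `fit_tol` tolerance are illustrative assumptions, not the paper's exact fuzzy rule:

```python
import numpy as np

def path_certainty(column, fit_tol=1.0):
    """Certainty that one disparity-map column corresponds to free path.

    A free (flat) path produces disparity that varies roughly linearly
    with image row. We fit a line by least squares and map the RMS
    residual to a fuzzy certainty in [0, 1] with a decreasing ramp:
    residual 0 -> certainty 1; residual >= fit_tol -> certainty 0.
    """
    rows = np.arange(len(column))
    coeffs = np.polyfit(rows, column, deg=1)            # linear model d = a*row + b
    residual = np.sqrt(np.mean((np.polyval(coeffs, rows) - column) ** 2))
    return float(np.clip(1.0 - residual / fit_tol, 0.0, 1.0))

# A smooth ground-plane column is almost perfectly linear, so certainty
# is near 1; an obstacle introduces a disparity plateau, so certainty drops.
ground = np.linspace(30.0, 5.0, 100)
obstacle = ground.copy()
obstacle[40:] = 18.0   # roughly constant disparity where an object blocks the path
```

    Running `path_certainty` on many column groups yields a certainty map over the image, which is what a fuzzy rule base can then aggregate into a free/blocked decision.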

    Driver Distraction Identification with an Ensemble of Convolutional Neural Networks

    The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell-phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements on distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and operates in a real-time environment.
    Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
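    The genetic weighting step can be sketched in isolation from the CNNs themselves: a small genetic algorithm searches for ensemble weights that maximize validation accuracy of the weighted average of member outputs. The synthetic member scores, population size, and mutation scale below are assumptions standing in for the paper's actual networks and GA settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation outputs of three pre-trained members
# (columns: each classifier's P(distracted); rows: validation samples).
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
probs = np.array([
    [0.9, 0.6, 0.4], [0.8, 0.4, 0.7], [0.7, 0.1, 0.2], [0.9, 0.3, 0.6],
    [0.1, 0.7, 0.4], [0.2, 0.4, 0.6], [0.3, 0.6, 0.2], [0.1, 0.2, 0.5],
])

def fitness(w):
    """Validation accuracy of the ensemble under (normalized) weights w."""
    fused = probs @ (w / w.sum())
    return float(np.mean((fused > 0.5) == labels))

# Tiny genetic algorithm: keep the fittest half, mutate it to refill.
pop = rng.random((20, 3)) + 1e-6
pop[0] = np.ones(3) / 3          # seed the uniform-average baseline
for _ in range(40):
    order = np.argsort([fitness(w) for w in pop])
    parents = pop[order[-10:]]                       # elitist selection
    children = parents + rng.normal(0.0, 0.1, parents.shape)
    pop = np.vstack([parents, np.clip(children, 1e-6, None)])

best = max(pop, key=fitness)
```

    Because the parents survive each generation unchanged, the best weight vector can never score below the seeded uniform average, which is the guarantee that makes genetic weighting a safe refinement over plain averaging.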