139 research outputs found

    Application of Image Processing and Three-Dimensional Data Reconstruction Algorithm Based on Traffic Video in Vehicle Component Detection

    Vehicle detection is one of the important technologies in intelligent video surveillance systems. Owing to the perspective projection imaging principle of cameras, traditional two-dimensional (2D) images usually distort the size and shape of vehicles. To address this, traffic scene calibration and inverse projection construction are used to project three-dimensional (3D) information onto the 2D images. In addition, a vehicle target can be characterized by several components, so vehicle detection can be performed by combining these components. The salient characteristics of vehicle targets differ between day and night; for example, headlight brightness is more prominent at night, while the taillight and license plate colors are more prominent in the daytime. In this paper, the background subtraction method and a Gaussian mixture model are used to achieve accurate detection of vehicle lights at night. In the daytime, the license plate and taillight of a vehicle are detected using background subtraction and a Markov random field, based on the spatial geometric relation between the corresponding components. Further, Kalman filters are used to follow the vehicle tracks, which further improves detection accuracy. Finally, experimental results demonstrate the effectiveness of the proposed methods.
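
    As a rough illustration of the night-time step described above (Gaussian-mixture background subtraction followed by extraction of bright headlight regions), the sketch below uses OpenCV; the video path, thresholds, and minimum blob area are illustrative assumptions rather than values from the paper.

```python
# Sketch only: GMM background subtraction plus a brightness threshold as a
# stand-in for the night-time headlight detection described above. The video
# path, thresholds, and minimum blob area are assumptions.
import cv2

cap = cv2.VideoCapture("traffic_night.mp4")  # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)                                         # Gaussian-mixture foreground mask
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)  # bright pixels (headlights)
    lights = cv2.bitwise_and(bright, fg)                          # moving AND bright -> candidate lights
    contours, _ = cv2.findContours(lights, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 30:                               # drop tiny specks
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detected lights", frame)
    if cv2.waitKey(1) == 27:                                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```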

    Robust Reflection Detection and Removal in Rainy Conditions using LAB and HSV Color Spaces

    In the field of traffic monitoring systems, shadows are a main cause of errors in computer vision-based vehicle detection and classification. A great deal of research has been carried out to detect and remove shadows; however, these works have focused only on shadow problems in daytime traffic scenes. Up to now, far too little attention has been paid to the problems caused by vehicles' reflections in rainy conditions. Unlike daytime shadows, which are homogeneous gray shades, reflections are inhomogeneous regions of different colors, which makes them harder to detect and remove. Therefore, in this paper, we aim to develop a method for detecting and removing reflections from single images or video. Reflections are detected by combining the L and B channels of the LAB color space with the H channel of the HSV color space. Reflections are removed by determining the optimal intensity of the reflected areas so that they match neighboring regions. The advantage of our method is that all reflected areas are removed without affecting vehicles' textures or details.
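
    A minimal sketch of the color-space combination described above is given below: candidate reflection pixels are selected from the L and B channels of LAB and the H channel of HSV, then filled from their surroundings. The channel ranges, kernel size, and file names are illustrative assumptions; the paper's actual thresholds and removal step are not reproduced.

```python
# Sketch only: reflection candidates from LAB (L, B) and HSV (H) channels,
# then a crude fill from the surrounding road. All ranges are assumptions.
import cv2
import numpy as np

img = cv2.imread("rainy_road.jpg")            # hypothetical input frame
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
L, _, B = cv2.split(lab)
H, _, _ = cv2.split(hsv)

# Combine the three channel ranges into a single reflection mask.
mask = (L > 40) & (L < 140) & (B > 100) & (B < 160) & (H > 90) & (H < 140)
reflection = mask.astype(np.uint8) * 255

# "Removal": replace masked pixels with the mean color of a ring of
# neighboring road pixels so the patch blends with its surroundings.
kernel = np.ones((15, 15), np.uint8)
ring = cv2.dilate(reflection, kernel) - reflection
fill = np.array(cv2.mean(img, mask=ring)[:3], dtype=np.uint8)
out = img.copy()
out[reflection > 0] = fill
cv2.imwrite("reflection_removed.jpg", out)
```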

    Vision-based Detection, Tracking and Classification of Vehicles using Stable Features with Automatic Camera Calibration

    A method is presented for segmenting and tracking vehicles on highways using a camera that is relatively low to the ground. At such low angles, 3D perspective effects cause significant appearance changes over time, as well as severe occlusions by vehicles in neighboring lanes. Traditional approaches to occlusion reasoning assume that the vehicles initially appear well-separated in the image, but in our sequences it is not uncommon for vehicles to enter the scene partially occluded and remain so throughout. By utilizing a 3D perspective mapping from the scene to the image, along with a plumb line projection, a subset of features is identified whose 3D coordinates can be accurately estimated. These features are then grouped to yield the number and locations of the vehicles, and standard feature tracking is used to maintain the locations of the vehicles over time. Additional features are then assigned to these groups and used to classify vehicles as cars or trucks. The technique uses a single grayscale camera beside the road, processes image frames incrementally, works in real time, and produces vehicle counts with over 90% accuracy on challenging sequences. Adverse weather conditions are handled by augmenting feature tracking with a boosted cascade vehicle detector (BCVD). To overcome the need for manual camera calibration, an algorithm is presented which uses the BCVD to calibrate the camera automatically without relying on any scene-specific image features such as road lane markings.
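
    The feature-tracking step can be pictured with the short sketch below, which detects corner features and follows them with pyramidal Lucas-Kanade optical flow. The plumb-line projection, 3D grouping, and BCVD-based calibration from the paper are not reproduced, and the video path and parameters are illustrative assumptions.

```python
# Sketch only: corner features tracked frame-to-frame with pyramidal
# Lucas-Kanade optical flow. Parameters and input path are assumptions.
import cv2

cap = cv2.VideoCapture("highway_low_angle.mp4")   # hypothetical input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = nxt[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    # Draw the motion of each successfully tracked feature.
    for (x1, y1), (x0, y0) in zip(good_new.reshape(-1, 2), good_old.reshape(-1, 2)):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 1)
    cv2.imshow("feature tracks", frame)
    if cv2.waitKey(1) == 27:
        break
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```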

    Efficient Vehicle Counting and Classification using Robust Multi-Cue Consecutive Frame Subtraction

    The ability to count and classify vehicles provides valuable information to road network managers, highways agencies and traffic operators alike, enabling them to manage traffic and to plan future development of the network. The increased computational speed of processors has enabled the application of vision technology in several fields, such as industrial automation, video security, transportation, and automotive systems. The method proposed in this paper is a robust, adaptive multi-cue frame subtraction method that detects foreground pixels corresponding to moving and stopped vehicles, even in images that are noisy due to compression. First, the approach adaptively thresholds a combination of luminance and chromaticity disparity maps between the learned background and the current frame. The segmentation is then used by a two-step tracking approach that combines the simplicity of a linear 2-D Kalman filter with the complexity of 3-D volume estimation using Markov chain Monte Carlo (MCMC) methods. The experimental results show that the proposed method can count and classify vehicles in real time with a high level of performance under challenging situations, such as moving cast shadows on sunny days and headlight reflections on the road, using only a single standard camera.
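
    The luminance/chromaticity disparity idea is sketched below against a single learned background image, using Otsu's method as a stand-in for the paper's adaptive threshold. The background model, equal channel weights, and morphology size are illustrative assumptions, and the Kalman/MCMC tracking stage is omitted.

```python
# Sketch only: luminance and chromaticity disparity maps against a learned
# background, combined and adaptively thresholded (Otsu as a stand-in).
import cv2
import numpy as np

background = cv2.imread("learned_background.png")   # hypothetical background model
frame = cv2.imread("current_frame.png")             # hypothetical current frame

bg_ycc = cv2.cvtColor(background, cv2.COLOR_BGR2YCrCb).astype(np.float32)
fr_ycc = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb).astype(np.float32)

lum_diff = np.abs(fr_ycc[..., 0] - bg_ycc[..., 0])                        # luminance disparity
chroma_diff = np.linalg.norm(fr_ycc[..., 1:] - bg_ycc[..., 1:], axis=2)   # chromaticity disparity

combined = 0.5 * lum_diff + 0.5 * chroma_diff                             # assumed equal weights
combined = cv2.normalize(combined, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Adaptive global threshold via Otsu, then clean up with a small opening.
_, fg = cv2.threshold(combined, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
cv2.imwrite("foreground_mask.png", fg)
```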

    Lane Line Detection and Object Scene Segmentation Using Otsu Thresholding and the Fast Hough Transform for Intelligent Vehicles in Complex Road Conditions

    An Otsu-threshold- and Canny-edge-detection-based fast Hough transform (FHT) approach was proposed to improve the accuracy of lane detection for autonomous driving. Over the last two decades, autonomous vehicles have become very popular, and lane detection is one of the essential functions of a modern automobile system because it helps avoid traffic accidents caused by human error. This study proposes lane detection through improved (extended) Canny edge detection combined with a fast Hough transform. A Gaussian blur filter is used to smooth the image and reduce noise, which improves edge detection accuracy. The Sobel operator then computes the gradient of the image intensity with a convolutional kernel to identify edges. These techniques are applied in the initial lane detection module to enhance the characteristics of the road lanes and make them easier to detect in the image. The Hough transform is then used to identify the lanes based on the geometric relationship between the lanes and the vehicle: the edge image is mapped into a polar parameter space, and lines are sought within a specific range of contrasting points, which allows the algorithm to distinguish the lanes from other features and to separate left and right lane markings. For traditional approaches to work effectively, a region of interest (ROI) must first be extracted, and the lane is then tracked using least-squares fitting within this region. The proposed methodology was tested on several image sequences, and the experiments demonstrated a high lane detection rate, showing that the method performs well in terms of both accuracy and real-time processing and can satisfy the requirements of lane recognition for lightweight automatic driving systems.
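
    A compact sketch of this kind of pipeline is shown below: Gaussian blur, an Otsu threshold whose value seeds the Canny edge detector, a region-of-interest mask, and OpenCV's probabilistic Hough transform standing in for the fast Hough transform. The ROI polygon, Hough parameters, and the use of the Otsu value for the Canny thresholds are illustrative assumptions.

```python
# Sketch only: blur -> Otsu-seeded Canny -> ROI mask -> probabilistic Hough.
# ROI vertices and Hough parameters are assumptions, not paper values.
import cv2
import numpy as np

img = cv2.imread("road_frame.jpg")                    # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

otsu_val, _ = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
edges = cv2.Canny(blur, 0.5 * otsu_val, otsu_val)     # Otsu value seeds the Canny thresholds

# Keep only the lower trapezoid of the image, where lane markings appear.
h, w = edges.shape
roi = np.zeros_like(edges)
poly = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)), (int(0.45 * w), int(0.6 * h))]])
cv2.fillPoly(roi, poly, 255)
edges = cv2.bitwise_and(edges, roi)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=100)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    slope = (y2 - y1) / (x2 - x1 + 1e-6)
    colour = (255, 0, 0) if slope < 0 else (0, 0, 255)  # crude left vs right split by slope sign
    cv2.line(img, (x1, y1), (x2, y2), colour, 2)
cv2.imwrite("lanes.jpg", img)
```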

    Preference Modeling in Data-Driven Product Design: Application in Visual Aesthetics

    Creating a form that is attractive to the intended market audience is one of the greatest challenges in product development, given the subjective nature of preference and heterogeneous market segments with potentially different product preferences. Accordingly, product designers use a variety of qualitative and quantitative research tools to assess product preferences across market segments, such as design theme clinics, focus groups, customer surveys, and design reviews; however, these tools are still limited by their dependence on subjective judgment and by being time and resource intensive. In this dissertation, we focus on a key research question: how can we understand and predict more reliably the preference for a future product in heterogeneous markets, so that this understanding can inform designers' decision-making? We present a number of data-driven approaches to model product preference. Instead of depending on subjective human judgment, the proposed preference models investigate the mathematical patterns behind users' choices and behavior. This allows a more objective translation of customers' perception and preference into analytical relations that can inform design decision-making. Moreover, these models are scalable in that they can analyze large-scale data and model customer heterogeneity accurately across market segments. In particular, we use feature representation as an intermediate step in our preference model, so that we can not only increase the predictive accuracy of the model but also capture in-depth insight into customers' preferences. We tested our data-driven approaches with applications in visual aesthetic preference. Our results show that the proposed approaches can obtain an objective measurement of aesthetic perception and preference for a given market segment. This measurement enables designers to reliably evaluate and predict the aesthetic appeal of their designs. We also quantify the relative importance of aesthetic attributes when customers consider both aesthetic and functional attributes. This quantification has great utility in helping product designers and executives in design reviews and the selection of designs. Moreover, we visualize the possible factors affecting customers' perception of product aesthetics and how these factors differ across market segments. These visualizations are important to designers because they relate physical design details to psychological customer reactions. The main contribution of this dissertation is to present purely data-driven approaches that enable designers to quantify and interpret product preference more reliably. Methodological contributions include using modern probabilistic approaches and feature learning algorithms to quantitatively model the design process involving product aesthetics. These novel approaches can not only increase predictive accuracy but also capture insights that inform design decision-making.
    PHD, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/145987/1/yanxinp_1.pd
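
    Purely as an illustration of the pairwise-preference idea, and not the dissertation's actual models (which use richer probabilistic and feature-learning methods), the sketch below fits a simple Bradley-Terry style logistic model on synthetic feature vectors of paired designs; all data and dimensions are made up.

```python
# Sketch only: pairwise preference model. Each training example is a pair of
# design feature vectors (x_a, x_b) with y = 1 if design A was preferred.
# Data are synthetic; this is not the dissertation's model.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_features = 500, 8
x_a = rng.normal(size=(n_pairs, n_features))    # features of design A in each pair
x_b = rng.normal(size=(n_pairs, n_features))    # features of design B in each pair
true_w = rng.normal(size=n_features)            # hidden "taste" vector used to simulate choices
y = (rng.random(n_pairs) < 1 / (1 + np.exp(-(x_a - x_b) @ true_w))).astype(float)

# Fit the taste vector by gradient ascent on the logistic log-likelihood.
w = np.zeros(n_features)
lr = 0.1
diff = x_a - x_b
for _ in range(2000):
    p = 1 / (1 + np.exp(-diff @ w))
    w += lr * diff.T @ (y - p) / n_pairs

# The learned weights act as a crude "relative importance" of each attribute.
print(np.round(w, 2))
```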

    Application of improved you only look once model in road traffic monitoring system

    The present research focuses on developing an intelligent traffic management solution for tracking vehicles on roads. Our proposed work focuses on an improved You Only Look Once (YOLOv4) traffic monitoring system that uses the CSPDarknet53 architecture as its foundation. A Deep SORT learning methodology for multi-target vehicle detection from traffic video is also part of this study. We include features such as the Kalman filter, which estimates the state of moving targets and enables them to be tracked, and the Hungarian technique, which associates each object with the correct frame. We use an enhanced object detection network design and new data augmentation techniques with YOLOv4, which ultimately aids traffic monitoring. Until recently, object detection models could either run quickly or detect accurately, but rarely both; YOLOv4 is a big improvement, as it achieves very good detection performance at a very high frame rate (frames per second, FPS). The current study focuses on developing an intelligent video surveillance-based vehicle tracking system that tracks vehicles using a neural network, image-based tracking, and YOLOv4. Real video sequences of road traffic are used to test the effectiveness of the proposed method. Simulations demonstrate that the suggested technique significantly increases graphics processing unit (GPU) speed and FPS compared to baseline algorithms.
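
    To make the Hungarian association step concrete, the sketch below matches current-frame detections (e.g., from a YOLOv4 detector) to predicted track boxes by minimizing an IoU-based cost with SciPy's linear_sum_assignment. The example boxes, the gating threshold, and the omission of Deep SORT's Kalman prediction and appearance features are all simplifying assumptions.

```python
# Sketch only: Hungarian matching of detections to tracks on an IoU cost.
# Boxes and the gating threshold are made-up examples.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

tracks = [(100, 100, 180, 160), (300, 220, 380, 300)]      # predicted track boxes
detections = [(305, 225, 382, 298), (102, 98, 178, 158)]   # current-frame detections

cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
row_idx, col_idx = linear_sum_assignment(cost)             # Hungarian assignment
for t, d in zip(row_idx, col_idx):
    if cost[t, d] < 0.7:                                   # gate: require IoU > 0.3
        print(f"track {t} <- detection {d} (IoU={1 - cost[t, d]:.2f})")
```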