
    Monitoring of traffic using unmanned aerial vehicle in Malaysia landscape perspective

    A theoretical investigation has been performed on the implementation of an optical Mobius shape in an add-drop microring resonator. The modified add-drop Mobius configuration is used to investigate optical bistability and spectral transmission. An optical bright soliton pulse is used as the input source of the resonator system. Pulse propagation in the resonator system is modelled using iterative programming based on the transfer matrix analysis equations. Enhancement of the nonlinear effect of the resonator system is achieved by the add-drop Mobius resonator configuration. The system has been modelled over a range of coupling coefficients to increase the bistable signal properties. The add-drop Mobius MRR generated a bistable signal with a 6.01 mW hysteresis width and 9.47 mW output switching power for optimized radii of 5 µm (outer) and 4.5 µm (inner) ring parts, with 50 mW control power and input power. The Mobius configuration is found to be the better resonator cavity shape for optical switching applications.
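    As a rough numerical illustration, the sketch below computes the linear (non-bistable) through- and drop-port transmission of a conventional add-drop microring resonator from the standard transfer-matrix result. The modified Mobius geometry, the bright-soliton input, and the nonlinear switching behaviour described above are not modelled here, and the effective index, loss, and coupling coefficients are assumed values rather than parameters from the paper.

# Minimal sketch: linear transfer-matrix response of a plain add-drop microring.
# n_eff, loss_dB_per_cm, kappa1 and kappa2 are illustrative assumptions.
import numpy as np

radius_um = 5.0                          # outer-ring radius quoted in the abstract
L = 2 * np.pi * radius_um * 1e-6         # round-trip length (m)
n_eff = 3.47                             # assumed effective index
loss_dB_per_cm = 3.0                     # assumed propagation loss
kappa1 = kappa2 = 0.1                    # assumed power coupling coefficients

t1, t2 = np.sqrt(1 - kappa1), np.sqrt(1 - kappa2)   # through-coupling amplitudes
alpha = loss_dB_per_cm * 100 * np.log(10) / 10      # power loss coefficient (1/m)
a = np.exp(-alpha * L / 2)                           # round-trip field amplitude

wavelengths = np.linspace(1.50e-6, 1.60e-6, 20000)   # scan around 1550 nm
phi = 2 * np.pi * n_eff * L / wavelengths            # round-trip phase

denom = 1 - 2 * t1 * t2 * a * np.cos(phi) + (t1 * t2 * a) ** 2
T_through = (t1 ** 2 - 2 * t1 * t2 * a * np.cos(phi) + (t2 * a) ** 2) / denom
T_drop = (1 - t1 ** 2) * (1 - t2 ** 2) * a / denom

print(f"minimum through-port transmission: {T_through.min():.3f}")
print(f"maximum drop-port transmission:    {T_drop.max():.3f}")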

    Moving Cast Shadow Detection


    Robust Vehicle Detection under Various Environments to Realize Road Traffic Flow Surveillance Using an Infrared Thermal Camera

    To realize road traffic flow surveillance under various environments, including poor visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition of the windshields and their surroundings to detect vehicles. However, the first method loses vehicle detection accuracy in the winter season. To maintain high vehicle detection accuracy in all seasons, we developed the second method, which uses the tires' thermal energy reflection areas on the road as the detection targets. The second method, however, did not achieve high detection accuracy for vehicles on the left-hand and right-hand lanes, only on the two center lanes. Therefore, we have developed a new method, based on the second method, to increase vehicle detection accuracy. This paper proposes the new method and shows that the detection accuracy for vehicles on all lanes is 92.1%. By combining the first method and the new method, high vehicle detection accuracy is maintained under various environments, and road traffic flow surveillance can be realized.
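    The sketch below illustrates only the generic idea of isolating warm regions in a single-channel thermal frame by thresholding and connected-component filtering; it is not the authors' windshield pattern-recognition or tire-reflection detector, and the threshold and area limits are assumed values.

# Generic warm-blob detector for an 8-bit thermal frame (illustrative only).
import cv2
import numpy as np

def detect_warm_blobs(thermal_frame: np.ndarray,
                      temp_threshold: int = 200,
                      min_area: int = 150):
    """Return bounding boxes (x, y, w, h) of sufficiently large warm regions."""
    _, mask = cv2.threshold(thermal_frame, temp_threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # suppress speckle noise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                                        # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes

# Example with a synthetic "thermal" frame containing one warm patch
frame = np.zeros((240, 320), dtype=np.uint8)
frame[100:130, 50:90] = 230
print(detect_warm_blobs(frame))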

    Automatic traffic data collection under varying lighting and temperature conditions in multimodal environments: thermal versus visible spectrum video-based systems

    Vision-based monitoring systems using visible spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and to evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data were collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better under low-visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in global accuracy of 48%. Thermal speed measurements were consistently more accurate than those from the regular video during both daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, reduced storage space, and lower processing requirements.
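    As a simple illustration of video-based speed measurement, the sketch below projects a tracked pixel trajectory onto the ground plane with a calibration homography and averages the per-frame displacement. The homography, frame rate, and trajectory here are assumed values for demonstration, not data or methods taken from this study.

# Speed estimation from a pixel track, assuming a known ground-plane homography.
import numpy as np

def pixel_to_ground(H: np.ndarray, pts_px: np.ndarray) -> np.ndarray:
    """Project Nx2 pixel coordinates onto ground-plane metres via homography H."""
    homog = np.hstack([pts_px, np.ones((len(pts_px), 1))])       # homogeneous coords
    world = (H @ homog.T).T
    return world[:, :2] / world[:, 2:3]                          # normalise

def mean_speed_kmh(track_px: np.ndarray, H: np.ndarray, fps: float) -> float:
    """Average speed over a pixel track sampled once per frame."""
    ground = pixel_to_ground(H, track_px)
    dists = np.linalg.norm(np.diff(ground, axis=0), axis=1)      # metres per frame
    return dists.mean() * fps * 3.6                              # m/s -> km/h

# Toy example: homography mapping 10 px to 1 m, 30 fps video
H = np.diag([0.1, 0.1, 1.0])
track = np.array([[100, 200], [104, 200], [108, 200], [112, 200]], dtype=float)
print(f"{mean_speed_kmh(track, H, fps=30.0):.1f} km/h")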

    Shadow removal utilizing multiplicative fusion of texture and colour features for surveillance image

    Automated surveillance systems often identify shadows as parts of a moving object, which jeopardizes subsequent image processing tasks such as object identification and tracking. In this thesis, an improved shadow elimination method for an indoor surveillance system is presented. The developed method is a fusion of several image processing methods. First, the image is segmented using the Statistical Region Merging algorithm to obtain the segmented potential shadow regions. Next, multiple shadow identification features, which include Normalized Cross-Correlation, Local Color Constancy, and Hue-Saturation-Value shadow cues, are applied to the images to generate feature maps. These feature maps are used to identify and remove cast shadows according to the segmented regions. The video dataset used is the Autonomous Agents for On-Scene Networked Incident Management dataset, which covers both indoor and outdoor video scenes. The benchmarking results indicate that the developed method is on par with several commonly used shadow detection methods. The developed method yields a mean score of 85.17% for the video sequence in which the strongest shadow is present and a mean score of 89.93% for the video having the most complex textured background. This research contributes to the development and improvement of a functioning shadow eliminator method that is able to cope with image noise and various illumination changes.
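    The sketch below illustrates the general idea of multiplicative fusion of shadow cues: an HSV brightness-ratio cue and a local normalized cross-correlation (texture) cue are combined so that a pixel is labelled shadow only when both cues agree. It does not reproduce the thesis pipeline (Statistical Region Merging segmentation, Local Color Constancy, or the exact cue definitions), and the window size and thresholds are assumed values.

# Multiplicative fusion of two simple per-pixel shadow cues (illustrative only).
import cv2
import numpy as np

def hsv_shadow_cue(frame_bgr, background_bgr, v_lo=0.4, v_hi=0.95):
    """1.0 where the pixel is darker than the background by a plausible ratio."""
    v_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[..., 2].astype(np.float32)
    v_b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV)[..., 2].astype(np.float32)
    ratio = v_f / (v_b + 1e-6)
    return ((ratio > v_lo) & (ratio < v_hi)).astype(np.float32)

def texture_cue(frame_bgr, background_bgr, win=7):
    """Local normalised cross-correlation between frame and background."""
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mu_f, mu_b = cv2.blur(f, (win, win)), cv2.blur(b, (win, win))
    cov = cv2.blur(f * b, (win, win)) - mu_f * mu_b
    var_f = cv2.blur(f * f, (win, win)) - mu_f ** 2
    var_b = cv2.blur(b * b, (win, win)) - mu_b ** 2
    ncc = cov / np.sqrt(np.maximum(var_f * var_b, 1e-6))
    return np.clip(ncc, 0.0, 1.0)        # shadows keep background texture -> high NCC

def shadow_mask(frame_bgr, background_bgr, fused_threshold=0.5):
    """Binary shadow mask: a pixel survives only if both cues agree."""
    fused = hsv_shadow_cue(frame_bgr, background_bgr) * texture_cue(frame_bgr, background_bgr)
    return (fused > fused_threshold).astype(np.uint8) * 255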

    Feature-based image patch classification for moving shadow detection

    Moving object detection is a first step towards many computer vision applications, such as human interaction and tracking, video surveillance, and traffic monitoring systems. Accurate estimation of the target object's size and shape is often required before higher-level tasks (e.g., object tracking or recognition) can be performed. However, these properties can be derived only when the foreground object is detected precisely. Background subtraction is a common technique to extract foreground objects from image sequences. The purpose of background subtraction is to detect changes in pixel values within a given frame. The main problem with background subtraction and other related object detection techniques is that cast shadows tend to be misclassified as either parts of the foreground objects (if objects and their cast shadows are bonded together) or independent foreground objects (if objects and shadows are separated). The reason for this phenomenon is the presence of similar characteristics between the target object and its cast shadow, i.e., shadows have similar motion, attitude, and intensity changes as the moving objects that cast them.

    Detecting shadows of moving objects is challenging because of problematic situations related to shadows, for example, chromatic shadows, shadow color blending, foreground-background camouflage, nontextured surfaces and dark surfaces. Various methods for shadow detection have been proposed in the literature to address these problems. Many of these methods use general-purpose image feature descriptors to detect shadows. These feature descriptors may be effective in distinguishing shadow points from the foreground object in a specific problematic situation; however, such methods often fail to distinguish shadow points from the foreground object in other situations. In addition, many of these moving shadow detection methods require prior knowledge of the scene conditions and/or impose strong assumptions, which make them excessively restrictive in practice. The aim of this research is to develop an efficient method capable of addressing possible environmental problems associated with shadow detection while simultaneously improving the overall accuracy and detection stability.

    In this research study, possible problematic situations for dynamic shadows are addressed and discussed in detail. On the basis of the analysis, a robust method, including change detection and shadow detection, is proposed to address these environmental problems. A new set of two local feature descriptors, namely, binary patterns of local color constancy (BPLCC) and light-based gradient orientation (LGO), is introduced to address the identified problematic situations by incorporating intensity, color, texture, and gradient information. The feature vectors are concatenated in a column-by-column manner to construct one dictionary for the objects and another dictionary for the shadows. A new sparse representation framework is then applied to find the nearest neighbor of the test image segment by computing a weighted linear combination of the reference dictionary. Image segment classification is then performed based on the similarity between the test image and the sparse representations of the two classes. The performance of the proposed framework on common shadow detection datasets is evaluated, and the method shows improved performance compared with state-of-the-art methods in terms of the shadow detection rate, discrimination rate, accuracy, and stability. By achieving these significant improvements, the proposed method demonstrates its ability to handle various problems associated with image processing and accomplishes the aim of this thesis.
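    The sketch below shows sparse-representation classification in the spirit of the framework described above: a test patch is coded over the concatenation of an object dictionary and a shadow dictionary, and the class with the smaller reconstruction residual wins. The proposed BPLCC and LGO descriptors are replaced by a placeholder feature, and the dictionary sizes and sparsity penalty are assumed values.

# Sparse-representation classification with object and shadow dictionaries (sketch).
import numpy as np
from sklearn.linear_model import Lasso

def extract_feature(patch: np.ndarray) -> np.ndarray:
    """Placeholder for a real descriptor such as BPLCC/LGO: unit-normalised flattening."""
    v = patch.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def classify_patch(patch, object_dict, shadow_dict, alpha=0.001):
    """Return 'object' or 'shadow' by comparing per-class reconstruction residuals."""
    D = np.hstack([object_dict, shadow_dict])            # columns are training features
    x = extract_feature(patch)
    code = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x).coef_
    n_obj = object_dict.shape[1]
    r_obj = np.linalg.norm(x - object_dict @ code[:n_obj])
    r_shd = np.linalg.norm(x - shadow_dict @ code[n_obj:])
    return "object" if r_obj < r_shd else "shadow"

# Toy usage: structured 8x8 "object" patches vs flat "shadow" patches
rng = np.random.default_rng(0)
edge = np.tile(np.linspace(0, 1, 8), (8, 1))              # object-like vertical gradient
flat = np.full((8, 8), 0.5)                               # shadow-like flat patch
obj_D = np.column_stack([extract_feature(edge + rng.normal(0, 0.05, (8, 8))) for _ in range(20)])
shd_D = np.column_stack([extract_feature(flat + rng.normal(0, 0.05, (8, 8))) for _ in range(20)])
print(classify_patch(edge + rng.normal(0, 0.05, (8, 8)), obj_D, shd_D))   # expect 'object'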

    Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For nighttime vehicle detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.
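    For context on the background-extraction step mentioned above, the sketch below shows a standard running-average background model with frame differencing; it is not the improved method proposed in this work, and the learning rate and threshold are assumed values.

# Running-average background model with simple frame differencing (illustrative only).
import cv2
import numpy as np

def foreground_mask(frame_gray: np.ndarray, background: np.ndarray,
                    learning_rate: float = 0.01, diff_threshold: int = 30):
    """Update the float32 background model in place and return a binary foreground mask."""
    cv2.accumulateWeighted(frame_gray.astype(np.float32), background, learning_rate)
    diff = cv2.absdiff(frame_gray.astype(np.float32), background)
    return (diff > diff_threshold).astype(np.uint8) * 255

# Typical use: initialise the model from the first frame, then update per frame.
# cap = cv2.VideoCapture("highway.mp4")        # hypothetical input video
# ok, frame = cap.read()
# bg = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
# while ok:
#     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#     mask = foreground_mask(gray, bg)
#     ok, frame = cap.read()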