31 research outputs found

    Vision-Based Lane Analysis: Exploration of Issues and Approaches for Embedded Realization

    Full text link
    Lane feature extraction is one of the key computational steps in lane analysis systems. In this paper, we propose a lane feature extraction method which enables different configurations of embedded solutions that address both accuracy and embedded systems' constraints. The proposed lane feature extraction process is evaluated in detail using real-world lane data, to explore its effectiveness for embedded realization and adaptability to varying contextual information like lane types and environmental conditions. 1. Role of Lane Analysis in IDAS: Intelligent driver assistance systems (IDAS) are increasingly becoming a part of modern automobiles. Reliable and trustworthy driver assistance systems require accurate and efficient means for capturing states of vehicle surroundings
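
    The abstract does not spell out the filter it uses, so the following is only a minimal sketch of the kind of row-wise lane-feature extraction such systems perform: a dark-light-dark intensity response followed by a threshold. The kernel width and threshold values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def lane_feature_map(gray, marking_width=8, thresh=30):
    """Row-wise dark-light-dark response for lane-marking candidates.

    gray:          2-D uint8 grayscale image (or a band of it)
    marking_width: expected lane-marking width in pixels (assumed value)
    thresh:        minimum response to keep a pixel (assumed value)
    """
    g = gray.astype(np.int32)
    w = marking_width
    # Compare each pixel with the road surface w pixels to its left and right;
    # bright markings on darker asphalt give a large positive response.
    left = np.roll(g, w, axis=1)
    right = np.roll(g, -w, axis=1)
    response = 2 * g - left - right - np.abs(left - right)
    response[:, :w] = 0   # discard wrap-around columns at the image borders
    response[:, -w:] = 0
    return (response > thresh).astype(np.uint8)
```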

    PolyFormer: Referring Image Segmentation as Sequential Polygon Generation

    Full text link
    In this work, instead of directly predicting the pixel-level segmentation masks, the problem of referring image segmentation is formulated as sequential polygon generation, and the predicted polygons can be later converted into segmentation masks. This is enabled by a new sequence-to-sequence framework, Polygon Transformer (PolyFormer), which takes a sequence of image patches and text query tokens as input, and outputs a sequence of polygon vertices autoregressively. For more accurate geometric localization, we propose a regression-based decoder, which predicts the precise floating-point coordinates directly, without any coordinate quantization error. In the experiments, PolyFormer outperforms the prior art by a clear margin, e.g., 5.40% and 4.52% absolute improvements on the challenging RefCOCO+ and RefCOCOg datasets. It also shows strong generalization ability when evaluated on the referring video segmentation task without fine-tuning, e.g., achieving a competitive 61.5% J&F on the Ref-DAVIS17 dataset.
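
    The abstract notes that the predicted polygons are later converted into segmentation masks. A minimal sketch of that conversion step (not PolyFormer's own code; the function name and the use of OpenCV are assumptions) could rasterize the predicted floating-point vertices like this:

```python
import numpy as np
import cv2

def polygons_to_mask(polygons, height, width):
    """Rasterize predicted polygons into a binary segmentation mask.

    polygons:      list of (N, 2) float arrays of (x, y) vertices
    height, width: size of the output mask
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    for poly in polygons:
        # fillPoly expects integer pixel coordinates, so the floating-point
        # vertices are rounded only at this final rasterization step.
        pts = np.round(np.asarray(poly)).astype(np.int32).reshape(-1, 1, 2)
        cv2.fillPoly(mask, [pts], 1)
    return mask
```

    Rounding happens only at rasterization, which is consistent with the abstract's point that the decoder itself regresses floating-point coordinates without quantization error.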

    Selective salient feature based lane analysis

    Full text link
    Lane analysis involves data-intensive processing of input video frames to extract lanes that form a small percentage of the entire input image data. In this paper, we propose lane analysis using selective regions (LASeR), which takes advantage of the saliency of the lane features to estimate and track lanes in a road scene captured by an on-board camera. The proposed technique processes selected bands in the image instead of the entire region of interest to extract sufficient lane features for efficient lane estimation. A detailed performance evaluation of the proposed approach is presented, which shows that such selective processing is sufficient to perform lane analysis with a high degree of accuracy.
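
    As an illustration of the selective-band idea, the sketch below runs a feature extractor only over a few horizontal bands of the frame rather than the full region of interest; the band positions, band height and default extractor are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def process_selected_bands(frame, band_rows, band_height=10, extract=None):
    """Apply a lane-feature extractor only to selected horizontal bands.

    frame:       2-D grayscale image (the region of interest)
    band_rows:   starting row of each scan band (assumed placement)
    band_height: number of rows per band (assumed value)
    extract:     feature function applied to each band; defaults to a simple
                 horizontal-gradient threshold purely for illustration.
    """
    if extract is None:
        extract = lambda band: np.abs(np.diff(band.astype(np.int32), axis=1)) > 40
    features = {}
    for r in band_rows:
        band = frame[r:r + band_height, :]
        features[r] = extract(band)   # only these rows are ever processed
    return features
```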

    Embedded computing techniques for vision-based lane change decision aid systems

    No full text
    Incorrect assessment of the positions and speeds of nearby vehicles will compromise road safety due to unsafe lane change decisions. Vision-based lane change decision aid systems (LCDAS) are being increasingly explored to facilitate the automatic assessment of the scene around the host vehicle. In this thesis, computationally efficient techniques that can operate in complex road-scene environments are proposed for a vision-based LCDAS.
    Detection of multiple lanes is challenging, particularly due to the varying prominence of the associated edge features depending on their perspective with respect to the camera. In this thesis, a novel method is proposed for the automatic detection of host and neighbor lanes in the near view. The proposed method systematically and iteratively investigates gradient magnitude histograms (GMH), gradient angle histograms (GAH) and the Hough Transform (HT) to cater for the varying prominence of lane markings. Evaluation of the proposed method (referred to as the GMH-GAH-HT method) on a dataset of more than 6000 images captured in various complex conditions shows that it achieves high detection rates of 97% and 95% for host and neighbor lanes respectively.
    Lanes in the far view are fainter and smaller than those in the near view. The positions of the host and neighbor lanes in the near view are therefore relied upon when deploying the GMH-GAH-HT method to systematically ascertain the far-view lanes, and selective processing of faint edges is introduced to enhance the robustness of detecting the host and neighbor lanes in the far view. A block-level HT computation process called the Additive HT (AHT) is also proposed to exploit the inherent parallelism of the HT, resulting in an order-of-magnitude speed-up. The computational complexity of combining the block-level Hough spaces is further reduced by a hierarchical derivative of the AHT called the HAHT. In addition, the proposed method for detecting multiple lanes in the far view estimates curved lanes by breaking them into a number of smaller straight-line segments.
    The detection of vehicles and their proximity in the RoI is tackled next. Two main characteristics, namely the ‘under-vehicle shadow’ and the ‘multiple edge symmetry’ of vehicles, are relied upon to first establish the presence of a vehicle. Detecting the under-vehicle shadow requires the automatic determination of a binarization threshold under varying lighting and road conditions; the proposed linear-regression-based technique is evaluated on an exhaustive dataset and shown to adapt to varying illumination. Selective deployment of the GMH-GAH-HT method makes it possible to extract multiple edge-symmetry cues that further confirm the presence of vehicles. The proposed vehicle detection method yields a high detection rate of 95% on a test dataset with images taken under varying road, illumination and weather conditions. The lane width in the immediate vicinity of the target vehicle is then employed to estimate the proximity of the vehicle to the host vehicle. Unlike conventional methods that rely on stereo vision and 3-D models for estimating the proximity of detected vehicles, the proposed method works with 2-D images, resulting in a notable reduction in computational complexity. Verification against ground truth confirms that the proposed method estimates the relative distance of a target vehicle from the ego vehicle with an accuracy of 1 m in the near view and 4 m in the far view.
    Unlike four-wheel vehicles such as cars and lorries, motorcycles cannot be detected by relying on prominent under-vehicle shadows, clearly defined edges and symmetry signatures. This motivated the development of a novel method to detect the tyre region of motorcycles. The proposed method adapts to local illumination conditions by examining the intensities immediately adjacent to lane markings, and the areas surrounding the tyre region are systematically explored to further strengthen the identification process. An early evaluation of the proposed technique shows promising results in complex scenarios.
    Existing LCDAS mainly focus on the blind-spot detection aspect of the lane change decision aid process, as comprehensive assessment of the 360-degree scene around the vehicle is still a rather complex and computationally demanding process. In this work, a two-camera system consisting of front- and rear-facing monocular cameras is employed to establish a near-360-degree field of view. The proximity of vehicles surrounding the host vehicle and their speeds are incorporated into a Gaussian risk function for estimating the risks posed by different vehicles in the front and rear views. A state machine is also introduced to monitor the blind-spot region by combining the risk information of vehicles in the front and rear RoIs. Finally, the proposed techniques lend themselves well to compute-efficient realizations, and simulations on real video sequences show that the integrated framework can be deployed to deterministically evaluate the risks associated with lane change maneuvers.
    DOCTOR OF PHILOSOPHY (SCE)
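
    The abstract does not give the actual formulas, but the two ideas at the end of the pipeline, monocular proximity estimated from the apparent lane width near a detected vehicle and a Gaussian risk weighting of that proximity, can be sketched as below. The pinhole relation, the 3.5 m nominal lane width and the risk parameters are illustrative assumptions, not values from the thesis.

```python
import math

NOMINAL_LANE_WIDTH_M = 3.5   # assumed physical lane width

def distance_from_lane_width(lane_width_px, focal_length_px,
                             lane_width_m=NOMINAL_LANE_WIDTH_M):
    """Monocular range estimate: the narrower the lane appears at the
    target vehicle's image row, the farther away that part of the road is."""
    return focal_length_px * lane_width_m / lane_width_px

def gaussian_risk(distance_m, closing_speed_mps, sigma=10.0, speed_gain=0.1):
    """Toy risk score in [0, 1]: near, fast-closing vehicles score highest."""
    proximity_risk = math.exp(-(distance_m ** 2) / (2.0 * sigma ** 2))
    speed_factor = 1.0 + speed_gain * max(closing_speed_mps, 0.0)
    return min(1.0, proximity_risk * speed_factor)
```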

    Fast finite field multipliers for public key cryptosystems

    No full text
    The security strength of Public Key Cryptosystems (PKCs) is attributed to the complex computations that are employed in the encryption and decryption algorithms. These algorithms are executed by cryptoprocessors, which are used as embedded co-processors in devices like smart cards, smart cameras, etc. Speed of operation, circuit area and security strength variability are vital design considerations in such applications. These criteria call for efficient hardware implementation of the underlying arithmetic operations of the algorithms. Multiplication in finite fields, widely known as finite field multiplication or modular multiplication, forms the core computational engine of all algorithms involved in PKCs. The research work in this thesis aims at developing efficient architectures for finite field multiplication in the prime field GF(N) and the extended binary field GF(2^m).
    MASTER OF ENGINEERING (EEE)
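
    As a plain-software reference for the operation such hardware multipliers accelerate (not the thesis's architecture, which targets efficient hardware realization), multiplication in GF(2^m) is a carry-less polynomial product followed by reduction modulo an irreducible polynomial. The bit-serial Python sketch below uses GF(2^8) with the AES polynomial purely as an example modulus.

```python
def gf2m_multiply(a, b, m=8, modulus=0x11B):
    """Multiply two elements of GF(2^m) given as integers (bit i = coefficient of x^i).

    m:       field degree (8 here, purely as an example)
    modulus: irreducible polynomial, e.g. x^8 + x^4 + x^3 + x + 1 = 0x11B
             (the AES field); the thesis targets GF(2^m) and GF(N) in general.
    """
    result = 0
    for _ in range(m):
        if b & 1:
            result ^= a            # carry-less "addition" is XOR
        b >>= 1
        a <<= 1
        if a & (1 << m):           # degree reached m: reduce by the modulus
            a ^= modulus
    return result

# Example: {57} * {83} = {C1} in the AES field (a standard test vector).
assert gf2m_multiply(0x57, 0x83) == 0xC1
```

    A hardware multiplier would unroll or parallelize this loop; the bit-serial form is shown only to make the underlying arithmetic explicit.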