6 research outputs found

    Effects of Adverse Weather Conditions on Object Detection and Time-to-Collision Estimation

    No full text
    Over the past few years, significant attention has been given to the development of Level 4 and Level 5 autonomous vehicles. These vehicles rely heavily on object detection technology for stable maneuvering. Object detection using sensors such as LiDAR, radar, and cameras must be both accurate and fast to ensure safe driving. However, detecting objects in adverse weather such as storms, snow, or fog poses a significant challenge and can reduce the time-to-collision (TTC), which can lead to dangerous driving. Therefore, it is crucial to address this challenge to ensure the safe deployment of autonomous vehicles.
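    The TTC the abstract refers to is, in its simplest constant-velocity form, the remaining range divided by the closing speed. A minimal sketch of that definition (function name and units are illustrative, not from the paper):

    ```python
    def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
        """Constant-velocity TTC: seconds until the gap closes to zero.

        range_m: current distance to the object in meters.
        closing_speed_mps: rate at which the gap shrinks (m/s).
        Returns infinity when the gap is not closing.
        """
        if closing_speed_mps <= 0.0:
            return float("inf")
        return range_m / closing_speed_mps

    # 30 m gap closing at 10 m/s -> 3 s to collision
    print(time_to_collision(30.0, 10.0))
    ```

    Degraded detections under bad weather inflate the range estimate's noise, which directly corrupts this ratio; that is the failure mode the abstract highlights.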

    An ECL Chip for Bioanalysis Using Autonomous Microbead Positioning

    No full text
    Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), Division of Electrical and Electronics Engineering, 2003.8, [56 p.]

    Adjusting the Brightness of Generated Image for Data Augmentation in Diverse Night Environments

    No full text
    As deep learning-based perception techniques continue to advance, research is being conducted to apply technologies such as obstacle detection, semantic segmentation, and depth estimation to autonomous vehicles. However, while most studies show good performance in daylight conditions, performance frequently degrades in nighttime environments. To address this, a nighttime dataset is needed, but acquiring such data directly is time-consuming and difficult. Therefore, other studies have used image-to-image translation models to generate nighttime data. While these models can generate well-formed nighttime images, the resulting images lack consistent brightness and can suffer from noise-induced artifacts. In this study, a Y-Control Loss and a self-attention module were added to the existing CycleGAN model to address this problem.
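    The paper does not spell out the Y-Control Loss, but its name suggests a constraint on the luma (Y) channel of the generated image. A hedged sketch of one plausible form, assuming ITU-R BT.601 luma coefficients and a target mean brightness (both are my assumptions, not the authors' definition):

    ```python
    import numpy as np

    def luma(rgb: np.ndarray) -> np.ndarray:
        # ITU-R BT.601 luma from an RGB image with values in [0, 1]
        return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    def y_control_loss(generated_rgb: np.ndarray, target_mean_y: float) -> float:
        # Hypothetical: penalize deviation of the generated image's
        # mean luma from a desired nighttime brightness level.
        return abs(float(luma(generated_rgb).mean()) - target_mean_y)

    dark_img = np.full((64, 64, 3), 0.1)   # uniformly dark toy image
    print(y_control_loss(dark_img, 0.1))   # zero loss at the target brightness
    ```

    In training, such a term would be added to the usual CycleGAN adversarial and cycle-consistency losses, steering the generator toward a controllable nighttime brightness.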

    Unsupervised domain adaptation for stereo matching using epipolar line based multiple horizontal attention module

    No full text
    The development of convolutional neural networks (CNNs) has brought about significant progress in a variety of computer vision tasks. Among them, stereo matching is an important area of research that allows for the reconstruction of depth and 3D information, which is difficult to obtain with a single camera. However, CNNs have limitations, particularly their susceptibility to domain shift: the performance of state-of-the-art CNN-based stereo matching networks is known to degrade when the domain changes. In addition, collecting real-world ground truth to address this issue is time-consuming and expensive compared to synthetic ground truth. To solve this problem, this study proposes an end-to-end framework that employs image-to-image translation to bridge the domain gap in stereo matching. Specifically, it proposes a horizontal attentive generation (HAG) module that takes the epipolar constraint of the content into account when generating target-stylized left-right views. By using a horizontal attention mechanism during generation, the proposed method mitigates the small-receptive-field problem by aggregating more information from each view without using the entire feature map. As a result, the network can maintain consistency between the left and right views during image generation, making it more robust across different datasets.
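    The key idea — attending only along the epipolar line, which for rectified stereo pairs is the same image row — can be sketched as row-wise dot-product attention between left and right feature maps. This is a minimal NumPy illustration of the mechanism, not the authors' HAG module:

    ```python
    import numpy as np

    def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def horizontal_attention(feat_left: np.ndarray, feat_right: np.ndarray) -> np.ndarray:
        """For each left-view pixel, attend only over the same row
        (the epipolar line) of the right-view feature map.

        feat_left, feat_right: (H, W, C) features from rectified views.
        Returns right-view features aggregated per left pixel, shape (H, W, C).
        """
        H, W, C = feat_left.shape
        out = np.empty_like(feat_left)
        for y in range(H):
            # (W, W) similarity between all left/right pixels in row y only
            scores = feat_left[y] @ feat_right[y].T / np.sqrt(C)
            out[y] = softmax(scores, axis=-1) @ feat_right[y]
        return out
    ```

    Restricting attention to one row keeps the cost at O(H·W²·C) instead of the O((H·W)²·C) of full self-attention, which is why the abstract can claim wider aggregation "without using the entire feature map".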

    Challenges of Lane Detection Using Deep Neural Networks in Severe Heavy Rain: A Synthetic Evaluation Dataset Based on the CARLA Simulator

    No full text
    Autonomous driving technology nowadays targets Level 4 or beyond, but researchers face limitations in developing reliable driving algorithms under diverse challenges. For autonomous vehicles to spread widely, the safety issues of this technology must be properly addressed. Among various safety concerns, sensor blockage caused by severe weather conditions is one of the most frequent threats to lane detection algorithms during autonomous driving. To handle this problem, generating proper datasets is becoming increasingly important. In this paper, a synthetic lane dataset with sensor blockage is presented in a lane detection evaluation format. Rain streaks for each frame were produced by an experimentally established equation. Using this dataset, the degradation of diverse lane detection methods has been verified, and the trend of performance degradation of deep neural network-based lane detection methods has been analyzed in depth. Finally, the limitations and future directions of the network-based methods are presented.
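    The paper's streak equation is not reproduced in the abstract, so the following is only a toy stand-in showing the general idea of degrading clean frames with synthetic rain streaks (streak count, length, slant, and intensity are all illustrative parameters):

    ```python
    import numpy as np

    def add_rain_streaks(image: np.ndarray, n_streaks: int = 200,
                         length: int = 12, intensity: float = 0.4,
                         seed: int = 0) -> np.ndarray:
        """Overlay short, slightly slanted bright streaks on a grayscale
        image in [0, 1]. A hypothetical sketch, NOT the paper's model.
        """
        rng = np.random.default_rng(seed)
        h, w = image.shape[:2]
        out = image.copy()
        for _ in range(n_streaks):
            x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
            for t in range(length):
                yy, xx = y + t, x + t // 3  # mild slant, roughly vertical fall
                if yy < h and xx < w:
                    out[yy, xx] = min(out[yy, xx] + intensity, 1.0)
        return out
    ```

    Evaluating a lane detector on clean frames versus `add_rain_streaks(frame)` outputs gives the kind of degradation curve the paper analyzes, here on synthetic rather than CARLA-rendered data.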