8 research outputs found

    End-to-end deep multi-score model for No-reference stereoscopic image quality assessment

    Full text link
    Deep learning-based quality metrics have recently brought significant improvements to Image Quality Assessment (IQA). In stereoscopic vision, information is distributed almost evenly between the left and right eyes, with a slight disparity. However, under asymmetric distortion, the objective quality ratings for the left and right images differ, necessitating the learning of distinct quality indicators for each view. Unlike existing stereoscopic IQA measures, which focus mainly on estimating a global human score, we suggest incorporating left, right, and stereoscopic objective scores to extract the corresponding properties of each view and thereby estimate stereoscopic image quality without reference. To this end, we use a deep multi-score Convolutional Neural Network (CNN). Our model is trained to perform four tasks: first, predict the quality of the left view; second, predict the quality of the right view; third and fourth, predict the quality of the stereo view and the global quality, respectively, with the global score serving as the final quality estimate. Experiments are conducted on the Waterloo IVC 3D Phase 1 and Phase 2 databases. The results show the superiority of our method compared with the state of the art. The implementation code can be found at: https://github.com/o-messai/multi-score-SIQ
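    The four-task idea above can be sketched as a shared feature extractor feeding four regression heads. The sketch below uses plain numpy with hypothetical layer sizes and random weights purely for illustration; the actual model in the paper is a deep CNN (see the linked repository).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's backbone is a deep CNN.
FEAT_IN, FEAT_OUT = 32 * 32, 16

W_shared = rng.standard_normal((FEAT_IN, FEAT_OUT)) * 0.01
# One regression head per task: left, right, stereo, and global quality.
heads = {t: rng.standard_normal(FEAT_OUT) * 0.1
         for t in ("left", "right", "stereo", "global")}

def extract(patch):
    """Shared-backbone stand-in: flatten + dense layer + ReLU."""
    return np.maximum(patch.ravel() @ W_shared, 0.0)

def predict_scores(left_patch, right_patch):
    """Fuse left/right features, then emit the four per-task scores."""
    f = 0.5 * (extract(left_patch) + extract(right_patch))
    return {task: float(f @ w) for task, w in heads.items()}

scores = predict_scores(rng.random((32, 32)), rng.random((32, 32)))
# The 'global' head serves as the final quality estimate.
final_quality = scores["global"]
```

    The point of the multi-head layout is that the left and right heads force the shared features to encode per-view degradation, which matters under asymmetric distortion.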

    Real-time drone detection and tracking in distorted infrared images

    No full text
    Session: Infrared imaging-based drone detection and tracking in distorted surveillance videos. Certificate of Appreciation: this award was presented to Team SPIN for securing second position in the grand challenge session "Infrared Imaging-based Drone Detection and Tracking in Distorted Surveillance Videos" organized at ICIP 2023, Kuala Lumpur, Malaysia. With the increasing use of drones for various applications, their detection and tracking have become critical for ensuring safety and security. In this paper, we propose an algorithm for detecting and tracking drones in infrared (IR) images under challenging conditions such as noise and distortion. Our algorithm uses YOLOv7 for drone detection and the SORT algorithm for real-time tracking. To detect distortion in the drone images, we employ a vision transformer in parallel with a customized CNN. The experimental results demonstrate the effectiveness of our approach in challenging conditions and highlight the potential for future developments in drone detection and tracking using deep learning techniques. We achieve a precision of 94.2%, a recall of 92.64%, and a mean average precision (mAP) of 92.6% on the provided test data. The implementation code can be found at: https://github.com/a-bentamou/Drone-detectionand-tracking
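    The detect-then-track pipeline hinges on associating each frame's detections with existing tracks. The sketch below shows a minimal IoU-based association step in pure Python; it is a simplified stand-in (greedy matching) for SORT, which proper uses Kalman-filter motion prediction and Hungarian assignment.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    def area(r):
        return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy IoU matching of tracks to detections (simplified SORT step)."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, thresh
        for di, d in enumerate(detections):
            if di in used:
                continue
            s = iou(t, d)
            if s > best_iou:
                best, best_iou = di, s
        if best is not None:
            matches.append((ti, best))
            used.add(best)
    return matches

tracks = [(10, 10, 50, 50)]
dets = [(200, 200, 240, 240), (12, 11, 52, 49)]
matches = associate(tracks, dets)  # [(0, 1)]: the nearby box continues track 0
```

    In the full pipeline, YOLOv7's per-frame boxes would play the role of `detections`, and unmatched detections spawn new tracks.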

    Enhancing Object Detection in Distorted Environments: A Seamless Integration of Classical Image Processing with Deep Learning Models

    No full text
    Session: Computer Vision and Image Processing. Computer vision tasks are directly influenced by the conditions of image acquisition, especially in the context of object detection. Often, these conditions are beyond our control. In this paper, we introduce a method that integrates seamlessly with any deep learning-based computer vision model to enhance its performance in distorted environments. Our method effectively mitigates the effects of various types of image distortion. It relies on classical image processing techniques capable of reducing distortions and enhancing image quality in a general manner, without requiring specific knowledge of the applied distortion type. Integration into any model during the preprocessing stage is straightforward. Furthermore, we add new layers that analyze the enhanced image in a depthwise manner, running in parallel with the model backbone. We tested the method on the object detection task using the well-known computer vision model You Only Look Once (YOLO), and the results reveal a significant improvement in mean Average Precision (mAP). The implementation code can be found at: https://github.com/abbass-zain-eddine/Object-detectionunder-uncontrolled-acquisition-environment.gi
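    A distortion-agnostic classical preprocessing stage of the kind described can be as simple as a percentile contrast stretch followed by gamma correction. The sketch below is an illustrative stand-in, not the paper's exact operator chain (which is given in the repository); the function name and parameters are assumptions.

```python
import numpy as np

def enhance(img, low_pct=2, high_pct=98, gamma=0.9):
    """Distortion-agnostic enhancement: percentile contrast stretch + gamma.

    Illustrative pre-processing sketch: clips the darkest/brightest
    percentiles, rescales to [0, 1], then applies a mild gamma lift.
    """
    img = img.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (stretched ** gamma * 255.0).astype(np.uint8)

# The enhanced frame is then fed to the unchanged detector, e.g.:
# detections = yolo_model(enhance(frame))
```

    Because the operator only looks at the image's own intensity statistics, it needs no knowledge of which distortion (blur, noise, low contrast) was applied, which is what makes it easy to bolt onto any model's preprocessing stage.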

    Activating Frequency and VIT for 3D Point Cloud Quality Assessment without Reference

    No full text
    Session: Point Cloud Visual Quality Assessment Grand Challenge. Deep learning-based quality assessment has significantly enhanced perceptual multimedia quality assessment; however, it is still in its early stages for 3D visual data such as 3D point clouds (PCs). Due to the high volume of 3D-PCs, such data are frequently compressed for transmission and viewing, which may affect perceived quality. Therefore, we propose a no-reference quality metric for a given 3D-PC. Compared to existing methods, which mostly focus on geometry or color aspects, we propose integrating frequency magnitudes as an indicator of the spatial degradation patterns caused by compression. To map the input attributes to a quality score, we use a lightweight hybrid deep model combining a Deformable Convolutional Network (DCN) and a Vision Transformer (ViT). Experiments are carried out on the ICIP20 [1] and PointXR [2] datasets, and a new large dataset called BASICS [3]. The results show that our approach outperforms state-of-the-art NR-PCQA measures and even some FR-PCQA measures on PointXR. The implementation code can be found at: https://github.com/o-messai/3D-PCQA
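    The intuition behind frequency magnitudes as a degradation indicator is that compression smooths fine geometry, depleting high-frequency energy. The sketch below computes radially binned FFT magnitudes from a 2D projection (e.g. a depth map) of a point cloud; it is a simplified illustration, and the function name and binning scheme are assumptions, not the paper's exact feature.

```python
import numpy as np

def frequency_feature(depth_map, n_bins=4):
    """Radially binned FFT magnitudes of a 2D projection of a point cloud.

    Low bins hold coarse structure; high bins hold fine detail that
    compression tends to erase.
    """
    mag = np.abs(np.fft.fftshift(np.fft.fft2(depth_map)))
    h, w = mag.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([mag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))
# Crude smoothing stands in for compression-induced blur.
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3
f_sharp, f_blur = frequency_feature(sharp), frequency_feature(blurred)
# The highest-frequency bin shrinks after smoothing.
```

    In the paper, features of this kind are concatenated with other attributes and mapped to a quality score by the DCN + ViT hybrid.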

    Image compression of surface defects of the hot-rolled steel strip using Principal Component Analysis

    No full text
    The quality control of steel products by human vision remains tedious, fatiguing, slow, insufficiently robust, superficial, dangerous, or even impossible. For these reasons, the use of artificial vision in quality control has become more than necessary. However, the acquired images are often large in both quantity and size, which becomes a problem in quality control centers, where engineers are unable to store them all. Efficient compression techniques are therefore necessary for archiving and transmitting the images: reducing the file size allows more images to be stored in a given disk or memory space. The present paper proposes an effective technique for redundancy extraction using the Principal Component Analysis (PCA) approach. Furthermore, it studies the effect of the number of eigenvectors employed in the PCA compression on the quality of the compressed image. The results reveal that using only 25% of the eigenvectors produces compressed images very similar to the originals in terms of quality, with high compression ratios and a small storage footprint.
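    PCA compression of this kind amounts to projecting the centered image onto its leading eigenvectors and storing only the coefficients, the retained basis, and the mean. A minimal numpy sketch, assuming a grayscale image and the 25% retention ratio reported above (the function and variable names are illustrative):

```python
import numpy as np

def pca_compress(img, keep_ratio=0.25):
    """Compress a grayscale image by projecting its rows onto the
    leading principal components, keeping `keep_ratio` of the
    eigenvectors."""
    mean = img.mean(axis=0)
    centered = img - mean
    # Eigenvectors of the covariance matrix via SVD of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    k = max(1, int(keep_ratio * vt.shape[0]))
    basis = vt[:k]                  # top-k principal directions
    coeffs = centered @ basis.T     # compressed representation
    recon = coeffs @ basis + mean   # decompression
    return recon, coeffs, basis

rng = np.random.default_rng(0)
img = rng.random((64, 64))
recon, coeffs, basis = pca_compress(img, keep_ratio=0.25)
# Storage cost: coefficients + basis + column means.
stored = coeffs.size + basis.size + img.shape[1]
ratio = img.size / stored
```

    For a 64x64 image at 25% retention this stores roughly half the original values, and on structured defect images (unlike random data) the leading components capture far more of the variance, which is why the reconstructions remain visually close to the originals.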

    Abstracts of the First International Conference on Advances in Electrical and Computer Engineering 2023

    No full text
    This book presents extended abstracts of the selected contributions to the First International Conference on Advances in Electrical and Computer Engineering (ICAECE'2023), held on 15-16 May 2023 by the Faculty of Science and Technology, Department of Electrical Engineering, University of Echahid Cheikh Larbi Tebessi, Tebessa, Algeria. ICAECE'2023 was delivered in person and virtually and was open to researchers, engineers, academics, and industrial professionals from around the world interested in new trends and advances in current topics of Electrical and Computer Engineering. Conference Title: First International Conference on Advances in Electrical and Computer Engineering 2023. Conference Acronym: ICAECE'2023. Conference Date: 15-16 May 2023. Conference Venue: University of Echahid Cheikh Larbi Tebessi, Tebessa, Algeria. Conference Organizer: Faculty of Science and Technology, Department of Electrical Engineering, University of Echahid Cheikh Larbi Tebessi, Tebessa, Algeria.