789 research outputs found

    Smart environment monitoring through micro unmanned aerial vehicles

    Get PDF
    In recent years, improvements in small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have promoted the development of a wide range of practical applications. In aerial video surveillance, monitoring broad areas remains challenging because several tasks, including mosaicking, change detection, and object detection, must be performed in real time. In this thesis work, a small-scale-UAV-based vision system for maintaining regular surveillance over target areas is proposed. The system works in two modes. The first mode monitors an area of interest over several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area and classifies all the known elements (e.g., persons) found on the ground using a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches for any changes (e.g., the disappearance of persons) that may have occurred in the mosaic, using an algorithm based on histogram equalization and RGB Local Binary Patterns (RGB-LBP); if changes are present, the mosaic is updated. The second mode performs real-time classification with the same improved Faster R-CNN model, which is useful for time-critical operations. Thanks to several design features, the system works in real time and performs the mosaicking and change detection tasks at low altitude, allowing even small objects to be classified. The proposed system was tested on the full set of challenging video sequences in the UAV Mosaicking and Change Detection (UMCD) dataset and on other public datasets. Evaluation with well-known performance metrics shows remarkable results in mosaic creation and updating, as well as in change detection and object detection.
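The change-detection step above pairs histogram equalization with RGB Local Binary Patterns. As an illustrative sketch only (not the thesis's actual implementation), the per-channel LBP coding and a chi-square distance between LBP histograms can be written in plain NumPy:

```python
import numpy as np

def lbp_codes(channel):
    """Compute 8-neighbour Local Binary Pattern codes for one image channel."""
    c = channel[1:-1, 1:-1]  # centre pixels (border excluded)
    # 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = channel[1 + dy:channel.shape[0] - 1 + dy,
                     1 + dx:channel.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def rgb_lbp_distance(img_a, img_b):
    """Chi-square distance between per-channel LBP histograms of two RGB patches;
    a large value flags the patch pair as a change."""
    dist = 0.0
    for ch in range(3):
        ha = np.bincount(lbp_codes(img_a[:, :, ch]).ravel(), minlength=256).astype(float)
        hb = np.bincount(lbp_codes(img_b[:, :, ch]).ravel(), minlength=256).astype(float)
        ha /= ha.sum()
        hb /= hb.sum()
        denom = ha + hb
        mask = denom > 0
        dist += 0.5 * np.sum((ha[mask] - hb[mask]) ** 2 / denom[mask])
    return dist
```

Identical patches give a distance of zero; the change threshold and patch gridding used in the thesis are not specified in the abstract.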

    2D Reconstruction of Small Intestine's Interior Wall

    Full text link
    Examining and interpreting a large number of wireless endoscopic images from the gastrointestinal tract is a tiresome task for physicians. A practical solution is to automatically construct a two-dimensional representation of the gastrointestinal tract for easy inspection. However, little work has been done on wireless endoscopic image stitching, let alone systematic investigation. The proposed wireless endoscopic image stitching method consists of two main steps that improve the accuracy and efficiency of image registration. First, keypoints are extracted by the Principal Component Analysis Scale-Invariant Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood Estimation SAmple Consensus (MLESAC) outlier removal to retain the most reliable keypoints. Second, the transformation parameters obtained in the first step are fed to the Normalised Mutual Information (NMI) algorithm as an initial solution. With a modified Levenberg-Marquardt search strategy in a multiscale framework, the NMI algorithm can find the optimal transformation parameters in the shortest time. The proposed methodology has been tested on two different datasets: one with real wireless endoscopic images and another with images obtained from Micro-Ball (a new wireless cubic endoscopy system with six image sensors). The results demonstrate the accuracy and robustness of the proposed methodology both visually and quantitatively.
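The second registration step maximises Normalised Mutual Information. A minimal NumPy sketch of the NMI score itself (the modified search strategy and multiscale framework are omitted here) could look like:

```python
import numpy as np

def normalised_mutual_information(img_a, img_b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B); it peaks at 2.0 when the two
    images have identical intensity structure and falls toward 1.0 as they
    become statistically independent."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint intensity distribution
    px = pxy.sum(axis=1)             # marginal of A
    py = pxy.sum(axis=0)             # marginal of B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A registration loop would evaluate this score for candidate transformation parameters and keep the maximiser.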

    GPU Accelerated Color Correction and Frame Warping for Real-time Video Stitching

    Full text link
    Traditional image stitching focuses on a single panoramic frame without considering the spatio-temporal consistency of videos. A straightforward image stitching approach causes temporal flickering and color inconsistency when applied to the video stitching task. In addition, inaccurate camera parameters cause artifacts in the image warping. In this paper, we propose a real-time system that stitches multiple video sequences into a panoramic video, based on GPU-accelerated color correction and frame warping that does not require accurate camera parameters. We extend the traditional 2D-Matrix (2D-M) color correction approach and present a spatio-temporal 3D-Matrix (3D-M) color correction method for the local overlap regions, with online color balancing using a piecewise function on global frames. Furthermore, we use pairwise homography matrices given by coarse camera calibration for global warping, followed by accurate local warping based on optical flow. Experimental results show that our system can generate high-quality panoramic videos in real time.
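The paper's 2D-M/3D-M colour correction is matrix-based and temporally regularised; as a much-simplified stand-in, a per-channel linear gain/offset fitted on the overlap region illustrates the basic idea of matching one view's colours to another's (function names and the mean/std fitting rule are assumptions, not the paper's method):

```python
import numpy as np

def colour_gain(src_overlap, ref_overlap, eps=1e-6):
    """Per-channel linear gain/offset mapping the source overlap statistics
    (mean, std) onto the reference overlap statistics."""
    g = (ref_overlap.std(axis=(0, 1)) + eps) / (src_overlap.std(axis=(0, 1)) + eps)
    b = ref_overlap.mean(axis=(0, 1)) - g * src_overlap.mean(axis=(0, 1))
    return g, b

def correct_frame(frame, g, b):
    """Apply the correction to a whole frame, clamped to the valid range."""
    return np.clip(frame.astype(float) * g + b, 0, 255)
```

The real system additionally smooths these corrections over time to avoid the flickering the abstract mentions.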

    Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    Get PDF
    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings up to 40 cm tall with a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
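Step (2), counting plants from projection histograms, can be sketched as follows: sum a binary plant mask column-wise and count the sufficiently wide runs of non-zero columns. The `min_width` threshold and run-counting rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def count_plants(mask, min_width=2):
    """Count plants as runs of columns whose column-wise foreground
    projection is non-zero and at least `min_width` columns wide."""
    column_hist = mask.sum(axis=0)                 # the projection histogram
    active = column_hist > 0
    count, width = 0, 0
    for a in np.concatenate([active, [False]]):    # sentinel closes the last run
        if a:
            width += 1
        else:
            if width >= min_width:
                count += 1
            width = 0
    return count
```

Runs narrower than `min_width` are treated as noise, which gives the counter some robustness to isolated misclassified columns.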

    Robust Techniques for Feature-based Image Mosaicing

    Get PDF
    Over the last few decades, image mosaicing for real-time applications has been a challenging field for image processing experts. It has wide applications in video conferencing, 3D image reconstruction, satellite imaging, and several medical and computer vision fields. It can also be used for mosaic-based localization, motion detection and tracking, augmented reality, resolution enhancement, generating a large field of view, etc. In this research work, a feature-based image mosaicing technique using image fusion is proposed. Image mosaicing algorithms fall into two broad categories: direct methods and feature-based methods. Direct methods need a good initialization, whereas feature-based methods do not require initialization during registration. Feature-based techniques typically follow four steps: feature detection, feature matching, transformation model estimation, and image resampling and transformation. SIFT and SURF are feature-detection-based algorithms for image mosaicing, but each has its own limitations as well as advantages depending on the application. The proposed method employs these two feature-based image mosaicing techniques to generate an output image that overcomes the limitations of both in terms of image quality. The developed robust algorithm handles the combined effect of rotation, illumination, noise variation, and other minor variations. Initially, the input images are stitched together using the popular stitching algorithms, i.e., the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). To extract the best features from the stitching results, blending is performed by means of the Discrete Wavelet Transform (DWT), using the maximum-selection rule for both the approximation and detail components.
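The final fusion step, DWT blending with the maximum-selection rule, can be sketched with a hand-rolled one-level Haar transform. The abstract does not name the wavelet; Haar is assumed here purely for self-containedness:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0            # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0            # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def fuse_max(img1, img2):
    """Fuse two aligned images by keeping, per subband, the coefficient with
    the larger absolute value (the maximum-selection rule)."""
    bands = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
             for b1, b2 in zip(haar2d(img1), haar2d(img2))]
    return ihaar2d(*bands)
```

Applying the rule to both the approximation (LL) and detail (LH/HL/HH) subbands mirrors the abstract's description.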

    Drone-based panorama stitching: A study of SIFT, FLANN, and RANSAC techniques

    Get PDF
    This paper documents the tasks I accomplished during my internship and project at UPC. It provides an overview of the project's structure, objectives, and task distribution. A summary is given for the Web Application part of the project, which was handled by my teammate. This paper also details the drone and payloads used in the project and their functionalities. In the parts I was responsible for, I conducted thorough investigations and tests on the Raspberry Pi camera to obtain the best image quality during every flight test. I delved into the entire process of basic panorama stitching, encompassing feature detection, descriptor matching, and transformation estimation based on the homography matrix. I compared popular feature detectors and descriptor matchers in terms of processing speed and performance, and subsequently developed a panorama stitching algorithm for images captured by the drone. Finally, I provide a detailed discussion of some extra tasks that were not completed and points that could be improved upon. The paper not only stands as a detailed account of our contributions but also serves as an inspiration and a guide for future enhancements of drone-based panorama stitching.
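The transformation-estimation stage named above typically combines a four-point Direct Linear Transform with RANSAC (in practice OpenCV's `cv2.findHomography` does this job). A plain-NumPy sketch of that combination, not the report's exact code:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (n >= 4 points) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)           # null-space vector of the DLT system
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an (n, 2) array of points."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=3.0, rng=None):
    """RANSAC: fit on random 4-point samples, keep the largest consensus set,
    then refit on all inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

With FLANN-matched SIFT correspondences as `src`/`dst`, the returned homography is what the stitcher uses to warp one image onto the other.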

    A comparison of feature extractors for panorama stitching in an autonomous car architecture.

    Get PDF
    Panorama stitching consists of putting frames together to create a 360° view. This technique is proposed for implementation in autonomous vehicles instead of using an external 360° camera, mostly due to its reduced cost and improved aerodynamics. This strategy requires a fast and robust set of features to be extracted from the images obtained by the cameras located around the inside of the car, in order to effectively compute the panoramic view in real time and avoid hazards on the road. In this paper, we compare and discuss three feature extraction methods (i.e., SIFT, BRISK, and SURF) in order to decide which one is more suitable for a panorama stitching application in an autonomous car architecture. Experimental validation shows that SURF exhibits improved performance under a variety of image transformations and thus appears to be the most suitable of the three methods, given its accuracy when comparing features between images while maintaining low time consumption. Furthermore, a comparison of our results with those of similar work increases the reliability of our methodology and the reach of our conclusions.
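The abstract does not detail the matching criterion used in the comparison; a common ingredient in such evaluations is Lowe's ratio test, which keeps a correspondence only when its nearest neighbour is clearly better than the runner-up. A brute-force NumPy sketch, offered as an assumed baseline rather than the paper's method:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Brute-force descriptor matching with Lowe's ratio test: accept a match
    only if the best distance is well below the second-best distance."""
    # squared Euclidean distances between every pair of descriptors
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = np.sqrt(d2[rows, best]) < ratio * np.sqrt(d2[rows, second])
    return [(int(i), int(best[i])) for i in np.where(keep)[0]]
```

The same harness works for SIFT, BRISK, or SURF descriptors, which makes it convenient for timing and accuracy comparisons across extractors.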

    Image Mosaicing for Wide Angle Panorama

    Get PDF
    Images are an integral part of our daily lives. With a normal camera it is not possible to obtain a high-resolution wide-angle panorama. Image mosaicing is a technique for combining two or more images of the same scene, taken from different views, into one image. In dark areas the obtained panoramic image is high-resolution and mask-free, but in brightly lit areas the resulting image exhibits mask (seam) artifacts. To obtain a wide-angle panorama, the existing system extracts feature points, finds the best stitching line, and uses Cluster Analysis (CA) and Dynamic Programming (DP) methods. A Weighted Average (WA) method is also used to smooth the stitching results and effectively eliminate intensity seams. In the proposed system, the Scale-Invariant Feature Transform (SIFT) algorithm is used for feature extraction and feature matching. This process can generate outliers, so Random Sample Consensus (RANSAC) is used to detect the outliers in the resulting image. Masking is significantly reduced by using Algebraic Reconstruction Techniques (ART).
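The Weighted Average smoothing mentioned above is commonly implemented as a linear ramp across the overlap strip. A minimal sketch, assuming two already-registered images that overlap horizontally by a known number of pixels:

```python
import numpy as np

def weighted_average_blend(left, right, overlap):
    """Blend two horizontally adjacent, already-registered images over an
    `overlap`-pixel strip with linearly ramped weights (a weighted average),
    removing the hard intensity seam a simple paste would leave."""
    h, wl = left.shape[:2]
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap) + left.shape[2:], dtype=float)
    out[:, :wl - overlap] = left[:, :wl - overlap]        # left-only region
    out[:, wl:] = right[:, overlap:]                      # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)                # weight of the left image
    a = alpha.reshape((1, overlap) + (1,) * (left.ndim - 2))
    out[:, wl - overlap:wl] = a * left[:, wl - overlap:] + (1 - a) * right[:, :overlap]
    return out
```

Across the overlap the result transitions smoothly from the left image's intensity to the right image's, which is exactly the seam-suppression effect the WA method is used for.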

    Image Local Features Description through Polynomial Approximation

    Get PDF
    This work introduces a novel local patch descriptor that remains invariant under varying conditions of orientation, viewpoint, scale, and illumination. The proposed descriptor incorporates polynomials of various degrees to approximate the local patch within the image. Before feature detection and approximation, the image micro-texture is eliminated with a guided image filter, which preserves object edges. Rotation invariance is achieved by aligning the local patch around the Harris corner through a dominant-orientation-shift algorithm. Weighted threshold histogram equalization (WTHE) is employed to make the descriptor insensitive to illumination changes. The correlation coefficient is used instead of the Euclidean distance to improve matching accuracy. The proposed descriptor has been extensively evaluated on Oxford's affine covariant regions dataset and on an absolute and transition tilt dataset. The experimental results show that the proposed descriptor categorizes features with more distinctiveness than state-of-the-art descriptors. This work was supported by the Qatar National Library.
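A least-squares polynomial fit over a normalised patch grid, matched by correlation coefficient, gives the flavour of the approach; the degree, coordinate normalisation, and function names below are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def polynomial_descriptor(patch, degree=2):
    """Describe a square patch by the least-squares coefficients of a 2D
    polynomial fitted to its intensities over normalised coordinates."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel() / (w - 1) - 0.5          # centre the coordinates
    y = y.ravel() / (h - 1) - 0.5
    # monomial basis 1, y, y^2, x, x*y, x^2 (for degree=2)
    cols = [x ** i * y ** j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
    return coeffs

def match_score(d1, d2):
    """Correlation coefficient between two descriptors (higher is better),
    used in place of Euclidean distance."""
    return np.corrcoef(d1, d2)[0, 1]
```

A degree-2 fit over a patch yields a compact 6-dimensional descriptor; higher degrees trade compactness for fidelity.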

    Identifying Robust SIFT Features for Improved Image Alignment

    Get PDF
    In this thesis, we study different ways to improve feature matching by increasing the quality and reducing the number of SIFT features. We created an algorithm to identify robust SIFT features by evaluating how invariant individual feature points are to changes in scale. This allows us to exclude poor SIFT feature points from the matching process and obtain better matching results in reduced time. We also developed techniques that consider scale ratios and changes in object orientation when performing feature matching. This allows us to exclude false-positive feature matches and obtain better image alignment results.
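One way to realise the scale-ratio idea is to estimate a global zoom factor from the median keypoint-scale ratio of all matches and drop matches that disagree with it. This is a hedged sketch of that filtering step, not the thesis's algorithm; the tolerance is an assumed parameter:

```python
import numpy as np

def filter_by_scale_ratio(matches, scales_a, scales_b, tol=0.25):
    """Keep only matches whose keypoint scale ratio agrees with the median
    scale ratio of all matches (a robust global zoom estimate) within `tol`."""
    matches = list(matches)
    ratios = np.array([scales_b[j] / scales_a[i] for i, j in matches])
    med = np.median(ratios)
    keep = np.abs(ratios / med - 1.0) <= tol
    return [m for m, k in zip(matches, keep) if k]
```

Because a correct image pair relates all true matches by roughly the same zoom, matches with outlying scale ratios are likely false positives and can be discarded before alignment.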