    An evaluation of image feature detectors based on spatial density and temporal robustness in microsurgical image processing

    Optical image processing is part of many applications used in brain surgery. Movement of the microscope camera or of the patient, such as brain motion caused by the pulse or a change in cerebrospinal fluid, can cause the image processing to fail. One option for compensating movement is feature detection and spatial allocation. This allocation is based on image features: the frame-wise matched features are used to calculate the transformation matrix between frames. The goal of this project was to evaluate different feature detectors based on spatial density and temporal robustness in order to identify the most appropriate one. The feature detectors included corner and blob detectors and were applied to nine videos. These videos were recorded during brain surgery with surgical microscopes and include the RGB channels. The evaluation showed that each detector detected up to 10 features across nine frames. The feature detector KAZE proved to be the best in both density and robustness.
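    The core step the abstract describes is using frame-wise matched features to calculate a transformation matrix between frames. A minimal sketch of that step, assuming NumPy and using a simple least-squares affine fit on synthetic point correspondences (the actual paper's detectors, such as KAZE, would supply the matched points):

    ```python
    import numpy as np

    def estimate_affine(src_pts, dst_pts):
        """Estimate a 2x3 affine transformation matrix from matched
        keypoints via least squares -- a simplified stand-in for the
        frame-to-frame registration described in the abstract."""
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        # Each source point contributes a row [x, y, 1] to the design matrix.
        A = np.hstack([src, np.ones((len(src), 1))])
        # Solve A @ X ~= dst; the transpose of X is the 2x3 affine matrix.
        X, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return X.T

    # Synthetic matched features: the second frame is shifted by (5, -3).
    src = [(10, 10), (50, 20), (30, 60), (80, 80)]
    dst = [(x + 5, y - 3) for x, y in src]
    M = estimate_affine(src, dst)
    # M recovers the translation: approximately [[1, 0, 5], [0, 1, -3]]
    ```

    In practice a robust estimator (e.g. RANSAC) would replace the plain least-squares fit, since real feature matches contain outliers.
    
    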

    Tracking Keypoints from Consecutive Video Frames Using CNN Features for Space Applications

    Hard time constraints in space missions raise the problem of fast video processing for numerous autonomous tasks. Video processing involves separating distinct image frames, extracting image descriptors, and applying machine learning algorithms for object detection, obstacle avoidance, and many other tasks involved in the automatic maneuvering of a spacecraft. These tasks require the most informative descriptions of an image within the time constraints. Tracking these informative points across consecutive image frames is needed in flow-estimation applications. Classical algorithms such as SIFT and SURF are milestones in the development of feature description, but their computational complexity and high runtime keep critical missions from adopting them for real-time processing. Hence a time-conservative and less complex pre-trained Convolutional Neural Network (CNN) model is chosen in this paper as a feature descriptor. A 7-layer CNN model is designed and implemented with pre-trained VGG model parameters, and these CNN features are used to match the points of interest in consecutive image frames of a lunar-descent video. The performance of the system is evaluated based on visual and empirical keypoint matching. The match scores between two consecutive images from the video using CNN features are then compared with state-of-the-art algorithms such as SIFT and SURF. The results show that CNN features are more reliable and robust for time-critical keypoint-tracking tasks in space missions.
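    The matching stage the abstract describes, pairing points of interest between consecutive frames by descriptor similarity, can be sketched independently of the descriptor's origin. A minimal sketch assuming NumPy, with random vectors standing in for the CNN feature descriptors (nearest-neighbour matching with Lowe's ratio test, the usual scheme for SIFT/SURF-style comparisons):

    ```python
    import numpy as np

    def match_descriptors(desc_a, desc_b, ratio=0.8):
        """Match descriptors from two frames by nearest-neighbour Euclidean
        distance with a ratio test: accept a match only when the closest
        candidate is clearly better than the second closest."""
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            nearest, second = np.argsort(dists)[:2]
            if dists[nearest] < ratio * dists[second]:
                matches.append((i, int(nearest)))
        return matches

    # Stand-in descriptors: 20 keypoints with 64-dim features; the second
    # frame's descriptors are the first's plus small noise, mimicking
    # slowly changing appearance between consecutive video frames.
    rng = np.random.default_rng(0)
    frame1 = rng.normal(size=(20, 64))
    frame2 = frame1 + rng.normal(scale=0.01, size=frame1.shape)
    matches = match_descriptors(frame1, frame2)
    # With near-identical descriptors, every keypoint matches its counterpart.
    ```

    A real pipeline would take `desc_a` and `desc_b` from an intermediate layer of the CNN (or from SIFT/SURF for the baseline comparison) and feed the matches into a tracking or flow-estimation step.
    
    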