An evaluation of image feature detectors based on spatial density and temporal robustness in microsurgical image processing
Optical image processing is part of many applications used in brain surgery. Camera motion or patient motion, such as brain movement caused by the pulse or by changes in the cerebrospinal fluid, can cause the image processing to fail. One way to compensate for this motion is feature detection and spatial allocation, which is based on image features: the frame-wise matched features are used to calculate a transformation matrix. The goal of this project was to evaluate different feature detectors with respect to spatial density and temporal robustness in order to identify the most appropriate one. The detectors included corner and blob detectors and were applied to nine videos. These videos were recorded during brain surgery with surgical microscopes and include the RGB channels. The evaluation showed that each detector detected up to 10 features across nine frames. The KAZE detector proved to be the best feature detector in both density and robustness.
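The motion-compensation step the abstract describes (matched features across frames yielding a transformation matrix) can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the detector stage (e.g. KAZE) is assumed to have already produced matched keypoint coordinates, and the transform is simplified to a 2-D affine model fitted by least squares. All function and variable names here are illustrative.

```python
import numpy as np

def estimate_affine(src, dst):
    """Fit a 2x3 affine matrix M mapping src points to dst points
    by linear least squares: [x y 1] @ M.T ~= [x' y']."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])      # (n, 3) homogeneous coords
    M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                  # (2, 3) affine transform

# Synthetic "matched keypoints": frame t+1 is frame t rotated by 5 degrees
# and shifted, mimicking a small pulse-induced brain movement.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -1.5])
src = np.random.default_rng(0).uniform(0, 100, (10, 2))
dst = src @ R.T + t

M = estimate_affine(src, dst)   # recovers rotation block and translation
```

In a real pipeline the matched coordinates would come from a detector/descriptor such as KAZE with a nearest-neighbour matcher, and a robust estimator (e.g. RANSAC) would replace plain least squares to reject mismatches.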
Tracking Keypoints from Consecutive Video Frames Using CNN Features for Space Applications
Hard time constraints in space missions create the problem of fast video processing for numerous autonomous tasks. Video processing involves separating distinct image frames, extracting image descriptors, and applying machine learning algorithms for object detection, obstacle avoidance, and many other tasks involved in the automatic maneuvering of a spacecraft. These tasks require the most informative description of an image within the time constraints, and tracking these informative points across consecutive image frames is needed in flow-estimation applications. Classical algorithms such as SIFT and SURF are milestones in the development of feature description, but their computational complexity and high time requirements prevent critical missions from adopting them for real-time processing. Hence a time-conservative and less complex pre-trained Convolutional Neural Network (CNN) model is chosen in this paper as the feature descriptor. A 7-layer CNN model is designed and implemented with pre-trained VGG model parameters, and these CNN features are then used to match points of interest across consecutive image frames of a lunar-descent video. The performance of the system is evaluated by visual and empirical keypoint matching. The match scores between two consecutive video frames using CNN features are then compared with state-of-the-art algorithms such as SIFT and SURF. The results show that CNN features are more reliable and robust for time-critical keypoint-tracking applications in space missions.
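The matching step described above, pairing points of interest between consecutive frames by comparing their CNN descriptors, can be sketched in a few lines of numpy. This is a generic mutual-nearest-neighbour matcher on cosine similarity, not the paper's exact scoring; the descriptors here are random stand-ins for channel vectors that would be sampled from a VGG feature map.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Mutual nearest-neighbour matching on cosine similarity.

    desc_a, desc_b: (n, d) arrays of per-keypoint descriptors.
    Returns (i, j) pairs that are each other's best match in both directions.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                    # (n_a, n_b) cosine similarities
    fwd = sim.argmax(axis=1)         # best b-index for each a
    bwd = sim.argmax(axis=0)         # best a-index for each b
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Frame t+1 descriptors are a permuted, slightly noisy copy of frame t's,
# simulating the same keypoints re-detected in the next frame.
rng = np.random.default_rng(1)
desc_a = rng.normal(size=(5, 8))
perm = [2, 0, 4, 1, 3]
desc_b = desc_a[perm] + 0.01 * rng.normal(size=(5, 8))

matches = match_descriptors(desc_a, desc_b)
```

The mutual-consistency check (matching both a→b and b→a) is a standard cheap filter against ambiguous correspondences; a ratio test or geometric verification would typically follow in a full pipeline.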
Autonomous Cooperative Visual Navigation for Planetary Exploration Robots
Planetary robotics navigation has attracted great attention from many researchers in recent years. Localization is one of the most important problems for robots on another planet, where GPS is unavailable: to work and communicate together, the robots need to know their own location and a map of the surrounding environment concurrently. In the current work, a novel algorithm is designed to cooperatively localize a team of robots on another planet. A robust algorithm is developed for cooperative Visual Odometry (VO) that localizes each robot in a planetary environment while detecting both intra-loop closures and inter-loop closures, using areas previously observed by the robot itself and areas shared by other robots, respectively. To validate the proposed algorithm, a comparison is provided between the cooperative VO and the single-robot version of VO, and a real planetary-analogue dataset is employed to investigate its accuracy. The results support the concept that cooperative VO significantly increases localization accuracy.
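At its core, visual odometry accumulates a chain of frame-to-frame relative poses into an absolute pose, which is what loop closures later correct for drift. A minimal 2-D sketch of that composition step, using homogeneous pose matrices (not the paper's cooperative algorithm, and with made-up step values):

```python
import numpy as np

def se2(theta, tx, ty):
    """Homogeneous 2-D rigid pose (rotation theta, translation tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Each VO step estimates a relative pose between consecutive frames;
# the absolute pose is the running product of these transforms.
relative_steps = [se2(np.deg2rad(10.0), 1.0, 0.0) for _ in range(9)]

pose = np.eye(3)
for T in relative_steps:
    pose = pose @ T   # nine 10-degree turns accumulate to a 90-degree heading
```

Because each step's error is multiplied into every later pose, drift grows along the trajectory; detecting intra- and inter-robot loop closures, as the abstract describes, provides the constraints needed to correct it.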