
    Fast and robust image feature matching methods for computer vision applications

    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. Such tasks clearly cannot be modeled in all necessary detail as easily as industrial robot tasks; a service robotic system therefore has to be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems that limit the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich sensors, are relatively inexpensive, and can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to devise algorithms that can reliably and quickly extract and match the visual information needed to interpret the environment automatically in real time. Although considerable research has been conducted in recent years on algorithms for computer and robot vision problems, open research challenges remain with respect to reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes. In addition, SIFT features are relatively easy to extract and to match against a large database of local features. There are, however, two main drawbacks of the SIFT algorithm: first, its computational complexity increases rapidly with the number of key-points, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor; second, SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of the SIFT algorithm for robot vision applications, which often require real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address the constraints faced when using SIFT features for robot vision applications: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications
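    To make the matching step discussed above concrete, the following is a minimal sketch (not the dissertation's implementation) of SIFT extraction and descriptor matching with Lowe's ratio test, using OpenCV; the image file names and the 0.75 ratio threshold are placeholders.

```python
import cv2

# Two grayscale views of the same scene (placeholder file names).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT key-points and compute their 128-dimensional descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching of the high-dimensional descriptors; this step
# dominates the running time as the number of key-points grows.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

print(f"{len(good)} putative matches from {len(kp1)} / {len(kp2)} key-points")
```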

    Fuzzy Free Path Detection from Disparity Maps by Using Least-Squares Fitting to a Plane

    A method to detect obstacle-free paths in real time, working as part of a cognitive navigation aid system for visually impaired people, is proposed. It is based on the analysis of disparity maps obtained from a stereo vision system carried by the blind user. The presented detection method consists of a fuzzy logic system that assigns to each group of pixels a certainty of belonging to a free path, depending on the parameters of a planar-model fit. We also present experimental results on different real outdoor scenarios showing that our method is the most reliable in the sense that it minimizes the false-positive rate. N. Ortigosa acknowledges the support of Universidad Politecnica de Valencia under grant FPI-UPV 2008 and the Spanish Ministry of Science and Innovation under grant MTM2010-15200. S. Morillas acknowledges the support of Universidad Politecnica de Valencia under grant PAID-05-12-SP20120696. Ortigosa Araque, N.; Morillas Gómez, S. (2014). Fuzzy Free Path Detection from Disparity Maps by Using Least-Squares Fitting to a Plane. Journal of Intelligent and Robotic Systems 75(2), 313-330. https://doi.org/10.1007/s10846-013-9997-1
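    To illustrate the plane-fitting idea, here is a minimal sketch, assuming a dense disparity patch is available as a NumPy array; it fits a plane d = a*x + b*y + c by least squares and converts the fit residual into a rough [0, 1] certainty with a simple linear membership function. The function name and the residual_tol threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def plane_fit_certainty(disparity_patch, residual_tol=1.0):
    """Fit d = a*x + b*y + c to a disparity patch by least squares and
    map the RMS residual to a crude free-path certainty in [0, 1]."""
    h, w = disparity_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    d = disparity_patch.ravel()

    # Least-squares plane parameters (a, b, c).
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    residual = np.sqrt(np.mean((A @ coeffs - d) ** 2))

    # Simple membership: 1 for a perfect planar fit, falling linearly to 0
    # once the RMS residual reaches residual_tol (illustrative value).
    certainty = max(0.0, 1.0 - residual / residual_tol)
    return coeffs, certainty

# Example with a synthetic, nearly planar disparity patch.
patch = 0.1 * np.arange(20)[None, :] + 0.05 * np.arange(15)[:, None] + 8.0
patch += 0.02 * np.random.randn(*patch.shape)
print(plane_fit_certainty(patch))
```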

    Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review

    Alignment control is highly critical for robotic technologies that rely on visual feedback. Robot systems that rely on pre-programmed trajectories and paths tend to be unresponsive when the environment changes or an expected object is absent. This review paper aims to provide a comprehensive survey of recent applications of visual servoing and deep neural networks (DNNs). Position-based visual servoing (PBVS) and MobileNet-SSD were the algorithms chosen for alignment control of the film handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework of feature extraction and description, visual servoing, and MobileNet-SSD. Likewise, the latest applications of visual servoing and DNNs are summarized, including a comparison of MobileNet-SSD with other sophisticated models. The previous studies reviewed indicate that visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including cases where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing scheme and the deep neural network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description method, and the DNN used to construct a robust state estimator. Therefore, visual servoing and MobileNet-SSD are parameterized concepts that require careful optimization to achieve a specific purpose with distinct tools
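    For context, a position-based scheme commands camera velocities proportional to the estimated pose error. The short sketch below shows such a proportional law in the spirit of PBVS, assuming the object pose error has already been estimated upstream (for example by a MobileNet-SSD detection plus a pose-estimation step); the gain and the error values are placeholders, not parameters from the reviewed systems.

```python
import numpy as np

def proportional_pbvs_velocity(t_err, axis_angle_err, gain=0.5):
    """Proportional position-based control law: command translational and
    rotational camera velocities proportional to the pose error.
    t_err: 3-vector translation error; axis_angle_err: 3-vector theta*u."""
    v = -gain * np.asarray(t_err, dtype=float)            # translational velocity
    w = -gain * np.asarray(axis_angle_err, dtype=float)   # rotational velocity
    return v, w

# Placeholder pose error, e.g. produced by a detection + pose-estimation stage.
v, w = proportional_pbvs_velocity(t_err=[0.10, -0.02, 0.30],
                                  axis_angle_err=[0.0, 0.05, 0.0])
print("v =", v, "w =", w)
```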

    Visual Servoing

    The goal of this book is to present current vision applications by excellent researchers from around the world and to offer knowledge that can also be applied widely in other fields. The book collects the main current studies on machine vision worldwide and makes a strong case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. For beginners, the developments in visual servoing are easy to follow; engineers, professors, and researchers can study the chapters and then apply the methods in other applications

    Advances in Stereo Vision

    Stereopsis is a vision process whose geometrical foundation has been known for a long time, ever since the experiments by Wheatstone, in the 19th century. Nevertheless, its inner workings in biological organisms, as well as its emulation by computer systems, have proven elusive, and stereo vision remains a very active and challenging area of research nowadays. In this volume we have attempted to present a limited but relevant sample of the work being carried out in stereo vision, covering significant aspects both from the applied and from the theoretical standpoints

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. The automated mechanical part assembly system contributes a major share of the production process. An appropriate vision-guided robotic assembly system further minimizes lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. An approach is presented for the development of a robotic part assembly system with the aid of an industrial vision system, accomplished in three phases. The first phase of the research focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules and the wavelet transform. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, mathematical-morphology-based, Roberts, Laplacian of Gaussian, and wavelet-transform-based detectors. A comparative study is performed to choose a suitable corner detection method; the corner detection techniques considered are curvature scale space, Wang-Brady, and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, translation, and blurring due to camera or robot motion. Considering this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants. The suggested method uses a selection process for the moment order to reconstruct the affected image, which makes the object detection method efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system, and the proposed feature extraction and object detection methods are tested and found efficient for the purpose. In the third phase, robot navigation based on visual feedback is proposed. In the control scheme, general moment invariants, Legendre moments, and Zernike moment invariants are used. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features, which yields the combination that makes image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants, as these moments are robust to noise. The control laws based on these three global image features perform efficiently in navigating the robot in the desired environment
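    As a rough illustration of the detector comparison in the first phase, the sketch below runs two standard reference detectors from OpenCV on a synthetic test image; it is not the hybrid fuzzy-wavelet detector developed in the thesis, and the image content and thresholds are placeholder assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-in for a part image: a bright square on a dark background,
# which provides clear edges and corners for both detectors.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(img, (30, 30), (70, 70), 255, -1)

# Canny edges: one of the reference edge detectors compared in the study.
edges = cv2.Canny(img, threshold1=50, threshold2=150)

# Harris corner response: one of the corner detectors considered in the study.
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = harris > 0.01 * harris.max()

print("edge pixels:", int((edges > 0).sum()),
      "corner candidates:", int(corners.sum()))
```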