
    Fuzzy Free Path Detection from Disparity Maps by Using Least-Squares Fitting to a Plane

    Full text link
    A method to detect obstacle-free paths in real time, working as part of a cognitive navigation aid system for visually impaired people, is proposed. It is based on the analysis of disparity maps obtained from a stereo vision system carried by the blind user. The detection method consists of a fuzzy logic system that assigns to each group of pixels a certainty of belonging to a free path, depending on the parameters of a planar-model fit. We also present experimental results on different real outdoor scenarios showing that our method is the most reliable in the sense that it minimizes the false-positive rate.

    N. Ortigosa acknowledges the support of Universidad Politecnica de Valencia under grant FPI-UPV 2008 and the Spanish Ministry of Science and Innovation under grant MTM2010-15200. S. Morillas acknowledges the support of Universidad Politecnica de Valencia under grant PAID-05-12-SP20120696.

    Ortigosa Araque, N.; Morillas Gómez, S. (2014). Fuzzy Free Path Detection from Disparity Maps by Using Least-Squares Fitting to a Plane. Journal of Intelligent and Robotic Systems 75(2), 313-330. https://doi.org/10.1007/s10846-013-9997-1
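    The abstract does not give the fitting details, but its core step, fitting a plane to the disparity values of a pixel group by least squares and turning the fit quality into a fuzzy certainty, can be sketched as below. The function names, the residual threshold, and the linear membership function are illustrative assumptions, not the authors' actual system:

```python
import numpy as np

def fit_plane(u, v, d):
    """Least-squares fit of a plane d = a*u + b*v + c to the disparity
    values d of a group of pixels at image coordinates (u, v).
    Returns the plane coefficients and the RMS fitting residual."""
    A = np.column_stack([u, v, np.ones_like(u)])
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    residual = np.sqrt(np.mean((A @ coeffs - d) ** 2))
    return coeffs, residual

def free_path_certainty(residual, max_residual=2.0):
    """Toy fuzzy membership: certainty decreases linearly with the
    plane-fitting residual (a flat, well-fitted region is more likely
    to be a free path). The threshold is illustrative only."""
    return float(np.clip(1.0 - residual / max_residual, 0.0, 1.0))

# Example: a synthetic, nearly planar patch of disparities.
rng = np.random.default_rng(0)
u = rng.uniform(0, 100, 200)
v = rng.uniform(0, 100, 200)
d = 0.05 * u + 0.3 * v + 10 + rng.normal(0, 0.2, 200)
coeffs, res = fit_plane(u, v, d)
print(coeffs, res, free_path_certainty(res))
```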

    On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration

    Get PDF
    Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and Mars (NASA's Sojourner and the two MERs) has been carried out very successfully for many years, specifically concerning operations over long periods. However, despite this success, the explored surface area has been very small: the total driving distance was about 8 km (Spirit) and 21 km (Opportunity) over 6 years of operation. Moreover, ESA will send its ExoMars rover to Mars in 2018, and NASA its MSL rover probably this year. However, all these rovers lack sufficient on-board intelligence to cover longer distances, drive much faster, and decide autonomously on path planning for the best trajectory to follow. To increase the scientific output of a rover mission, it seems necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronics functionalities to develop an intelligent mobile wheeled rover with four or six wheels, with specific kinematics and locomotion suspension depending on the terrain the rover is to operate in. DLR's Robotics and Mechatronics Center has a long tradition in developing advanced components in the fields of light-weight motion actuation, intelligent and soft manipulation, skilled hands and tools, and perception and cognition, and in increasing the autonomy of any kind of mechatronic system. The whole design is supported by and based upon detailed modeling, optimization, and simulation tasks. We have developed efficient software tools to simulate rover driveability performance on various terrain types, such as soft sandy and hard rocky terrain as well as inclined planes, where wheel and grouser geometry plays a dominant role. Moreover, rover optimization is performed to support the best engineering intuitions; it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and makes use of realistic cost functions such as mass and consumed-energy minimization, static stability, and more. For self-localization and safe navigation through unknown terrain we make use of fast 3D stereo algorithms that have been used successfully, e.g., in unmanned air vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable to lunar as well as Martian surface exploration. A first mobility concept approach for a lunar vehicle will be presented.
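    The abstract mentions design optimization against cost functions such as mass, consumed energy, and static stability, without giving the formulation. A minimal sketch of that kind of scalarized design cost is shown below; the fields, weights, and 20-degree stability requirement are made-up assumptions for illustration, not DLR's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class RoverDesign:
    mass_kg: float             # structural plus payload mass
    energy_wh_per_km: float    # simulated locomotion energy use
    tipover_margin_deg: float  # static stability margin on slopes

def design_cost(d: RoverDesign,
                w_mass=1.0, w_energy=0.5, w_stability=2.0) -> float:
    """Lower is better; the stability margin enters as a penalty
    when it falls below a (hypothetical) 20-degree requirement."""
    stability_penalty = max(0.0, 20.0 - d.tipover_margin_deg)
    return (w_mass * d.mass_kg
            + w_energy * d.energy_wh_per_km
            + w_stability * stability_penalty)

# Compare two candidate designs and keep the cheaper one.
candidates = [RoverDesign(120, 85, 25), RoverDesign(95, 110, 14)]
print(min(candidates, key=design_cost))
```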

    Monocular Vision as a Range Sensor

    Get PDF
    One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions. Building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, and consequently the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in any direction in the robot's field of view; the resulting set of ranges is referred to as the Radial Obstacle Profile (ROP). The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
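    The abstract does not say how the two profiles are aligned; one plausible sketch is a brute-force circular alignment that picks the shift minimizing the squared difference between successive ROPs. The 1-degree binning and the SSD criterion are assumptions for illustration:

```python
import numpy as np

def estimate_rotation(rop_prev, rop_curr):
    """Estimate the robot's rotation between two frames as the circular
    shift that best aligns two Radial Obstacle Profiles (range to the
    nearest obstacle per bearing bin). Assumes 1-degree bins and
    returns a signed rotation in degrees."""
    n = len(rop_prev)
    errors = [np.sum((np.roll(rop_prev, s) - rop_curr) ** 2)
              for s in range(n)]
    shift = int(np.argmin(errors))
    return shift if shift <= n // 2 else shift - n

# Example: a synthetic 360-bin profile rotated by 15 degrees.
rop = 2.0 + np.sin(np.linspace(0, 2 * np.pi, 360, endpoint=False))
print(estimate_rotation(rop, np.roll(rop, 15)))  # -> 15
```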

    Viewfinder: final activity report

    Get PDF
    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers operating there.

    A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    Get PDF
    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever, such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high-level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high-level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
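    The planning step described above (objective in, action sequence out) can be pictured with a minimal dispatch sketch. The objective names, action strings, and selection rules below are hypothetical stand-ins, not the actual VSP's model-based reasoner:

```python
def plan_visual_objective(objective: str, target_visible: bool) -> list[str]:
    """Assemble an illustrative sequence of sensing actions for a
    visual objective issued by the high-level task planner."""
    plan = []
    if not target_visible:
        # Cannot see the target: ask the task planner to move the EVAR.
        plan.append("request_reposition")
    plan.append("select_sensor:camera")
    if objective == "recognize_object":
        plan += ["zoom:increase_resolution", "run:recognition_algorithm"]
    elif objective == "locate_object":
        plan += ["reorient_sensor", "run:range_estimation"]
    elif objective == "detect_obstacles":
        plan += ["run:obstacle_detection"]
    return plan

print(plan_visual_objective("recognize_object", target_visible=False))
```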

    Spatial context-aware person-following for a domestic robot

    Get PDF
    Domestic robots are the focus of research as service providers in households and even as robotic companions that share the living space with humans. A major capability of mobile domestic robots is the joint exploration of space. One challenge in this task is how to let robots move in space in reasonable, socially acceptable ways that support interaction and communication as part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior considering these social aspects and applied it together with a multi-modal person-tracking method to switch between three basic following approaches, namely direction-following, path-following, and parallel-following. These are derived from the observation of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. A combination of the elementary behaviors is performed in real time with our mobile robot in different environments. First experimental results are provided to demonstrate the practicability of the proposed approach.
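    The switching logic described above can be sketched as a simple rule over the sensed spatial context and the person's relative position. The thresholds and the particular selection rule are illustrative assumptions, not the system's actual arbitration:

```python
def select_following_behavior(free_space_width_m: float,
                              person_bearing_deg: float) -> str:
    """Pick one of the three elementary following behaviors from the
    free space around the robot and the person's bearing (0 = ahead)."""
    if free_space_width_m > 2.0 and abs(person_bearing_deg) > 45:
        # Wide area and the person walks beside the robot.
        return "parallel-following"
    if free_space_width_m < 1.0:
        # Narrow passage: retrace the person's exact path.
        return "path-following"
    # Default: head toward the person's current direction.
    return "direction-following"

print(select_following_behavior(2.5, 60))  # parallel-following
print(select_following_behavior(0.8, 5))   # path-following
```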
