7 research outputs found

    Sequential Action Selection for Budgeted Localization in Robots

    Recent years have seen fast growth in the number of applications of Machine Learning algorithms from Computer Science to Robotics. Nevertheless, while most such attempts succeed in maximizing robot performance after a long learning phase, to our knowledge none of them explicitly takes the budget into account when evaluating the algorithm: e.g. a limit on the learning duration or on the maximum number of actions the robot may take. In this paper we introduce an algorithm for robot spatial localization based on image classification within a sequential budgeted learning framework, which allows policies to be learned under an explicit budget. Here our model uses a constraint on the number of actions the robot can take. We apply this algorithm to a localization problem in a simulated environment; our approach reduces the problem to a classification task under a budget constraint. The model has been compared, on the one hand, to simple neural networks for the classification part and, on the other hand, to different policy-selection techniques. The results show that the model can effectively learn an efficient policy (i.e. alternating between sensor measurements and movements that gather additional information from different positions) in order to optimize its localization performance under each tested fixed budget.
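
    The sense/move trade-off under a hard action budget can be illustrated with a minimal sketch. The paper learns its policy within a sequential budgeted learning framework; the greedy information-gain heuristic below, and the `env`/`classifier` interfaces, are hypothetical stand-ins used only to make the loop concrete.

```python
import numpy as np

def budgeted_localize(env, classifier, budget, move_cost=1, sense_cost=1):
    """Alternate between sensing and moving until the action budget runs out,
    then classify the pooled observations into a discrete position label.

    `env` and `classifier` are assumed interfaces, not from the paper."""
    observations = []
    remaining = budget
    while remaining > 0:
        # Pick whichever action is expected to buy the most information
        # per unit of budget spent (hypothetical env.expected_gain API).
        gain_sense = env.expected_gain("sense")
        gain_move = env.expected_gain("move")
        if gain_sense / sense_cost >= gain_move / move_cost and remaining >= sense_cost:
            observations.append(env.sense())  # take an image measurement here
            remaining -= sense_cost
        elif remaining >= move_cost:
            env.move()  # relocate to gather information from a new position
            remaining -= move_cost
        else:
            break
    features = np.mean(observations, axis=0)             # pool the measurements
    return classifier.predict(features.reshape(1, -1))   # sklearn-style call
```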

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and of the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the autonomous navigation field evolves fast, so writing survey papers regularly is crucial to keep the research community well aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give appropriate treatment to the role of deep learning in autonomous navigation, which this paper also covers. Future work and research gaps are also discussed.

    Position control of a quadrotor UAV using stereo computer vision

    Quadrotors have recently become a popular consumer product, prompting further developments in combining attitude and position control for such unmanned aerial vehicles (UAVs). The quadrotor's attitude (orientation) can be controlled using measurements from small and inexpensive sensors such as gyroscopes, accelerometers, and magnetometers combined in a complementary filter. The quadrotor's position can be controlled using a global positioning system (GPS); this has been shown to be an effective method but is not reliable in urban or indoor environments where quadrotors may be flown. An alternative method such as visual servoing may provide reliable position control in these environments. Visual servoing uses one or more on-board cameras to detect visual features in their field of view; if more than one camera is used, a more accurate position can be determined. This work discusses the application of stereo visual servoing to a quadrotor for position control, combined with IMU readings in a Kalman filter to provide a better estimate of its position.
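
    Two standard building blocks underpin this kind of system: pinhole-model stereo depth from disparity, and a Kalman filter that fuses IMU acceleration with the vision-based position fix. The sketch below shows both along a single axis; the constants, noise values, and class names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

FOCAL_PX = 700.0   # focal length in pixels (assumed calibration value)
BASELINE_M = 0.12  # distance between the stereo cameras in metres (assumed)

def stereo_depth(disparity_px):
    """Pinhole stereo model: depth Z = f * B / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

class PositionKF:
    """Constant-velocity Kalman filter along one axis: IMU acceleration
    drives the prediction, the stereo position fix corrects it."""
    def __init__(self, dt):
        self.x = np.zeros(2)                      # state: [position, velocity]
        self.P = np.eye(2)                        # state covariance
        self.F = np.array([[1, dt], [0, 1]])      # state transition
        self.B = np.array([0.5 * dt**2, dt])      # control (acceleration) input
        self.H = np.array([[1.0, 0.0]])           # we measure position only
        self.Q = 0.01 * np.eye(2)                 # process noise (tuning value)
        self.R = np.array([[0.05]])               # stereo measurement noise

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

    Usage: call `predict(accel)` at the IMU rate and `update(np.array([x_meas]))` whenever a stereo fix arrives; the filter then smooths over vision dropouts between fixes.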

    Mobile robot navigation using a vision based approach

    PhD thesis. This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems, namely vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance; however, the current view is that this technique is too sensitive to noise and distortion under real conditions, so practical applications in real-time robotics remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance using a hybrid architecture, which integrates an appearance-based obstacle detection method into an optical flow architecture built on a behavioural control strategy that includes a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to move around successfully without experiencing collisions. Behaviour-based approaches have become the dominant methodologies for designing control strategies for robot navigation. Two different behaviour-based navigation architectures are proposed for the second problem, both using monocular vision as the primary sensor supplemented by a 2-D range finder, and both utilizing an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. The latter allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture while conserving the same set of behaviours and system structure as the previously defined architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm: scale parameters and a corresponding zooming factor are used as inputs to train a neural network that determines physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural-network-based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to complete its task successfully in a smooth and robust manner without experiencing collisions. MS Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems. KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework
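
    The distance-estimation idea (feeding SIFT scale parameters and a zoom factor into a neural network that outputs physical distance) can be sketched with a small regressor. The data values and network shape below are made up for illustration; in the thesis the training pairs come from calibration runs, and at run time the mean keypoint scale would be extracted from live frames (e.g. with OpenCV's `cv2.SIFT_create()`).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: [mean SIFT keypoint scale, camera zoom factor]
# measured at known ranges. These numbers are invented for illustration only;
# keypoints appear smaller (lower scale) as the object moves farther away.
X = np.array([[12.0, 1.0], [8.5, 1.0], [6.1, 1.0], [12.0, 2.0], [8.5, 2.0]])
y = np.array([0.5, 1.0, 1.5, 1.0, 2.0])  # physical distance in metres

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

# At run time: extract the mean keypoint scale from the current frame and
# query the network for an estimated distance to the observed object.
print(net.predict([[10.0, 1.0]]))
```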

    A hierarchical active binocular robot vision architecture for scene exploration and object appearance learning

    This thesis presents an investigation of a computational model of hierarchical visual behaviours within an active binocular robot vision architecture. The robot vision system is able to localise multiple instances of the same object class, while simultaneously maintaining vergence and directing its gaze to attend to and recognise objects within cluttered, complex scenes. This is achieved by implementing all image analysis in an egocentric symbolic space, without creating explicit pixel-space maps and without the need for calibration or other knowledge of the camera geometry. One important aspect of the active binocular vision paradigm is that visual features in both camera eyes must be bound together in order to drive visual search to saccade, locate and recognise putative objects or salient locations in the robot's field of view. The system structure is based on the “attentional spotlight” metaphor of biological systems and a collection of abstract and reactive visual behaviours arranged in a hierarchical structure. Several studies have shown that the human brain represents and learns objects for recognition from snapshots of 2-dimensional views of the imaged scene that happen to contain the object of interest during active interaction with (exploration of) the environment. Likewise, psychophysical findings indicate that the primate visual cortex represents common everyday objects by a hierarchical structure of their parts or sub-features and, consequently, recognises them via simple but imperfect 2D view approximations of object parts. This thesis incorporates the above observations into an active visual learning behaviour within the hierarchical active binocular robot vision architecture. By actively exploring the object viewing sphere (as higher mammals do), the robot vision system automatically synthesises and creates its own part-based object representation from multiple observations while a human teacher indicates the object and supplies a classification name. It is proposed to adopt the computational concepts of a visual learning exploration mechanism that controls the accumulation of visual evidence and directs attention towards the spatially salient object parts. The behavioural structure of the binocular robot vision architecture is loosely modelled on the WHAT and WHERE visual streams. The WHERE stream maintains and binds spatial attention on the object part coordinates that egocentrically characterise the location of the object of interest, and extracts spatio-temporal properties of feature coordinates and descriptors. The WHAT stream either determines the identity of an object or triggers a learning behaviour that stores view-invariant feature descriptions of the object part. The robot vision system is therefore capable of performing a collection of specific visual tasks such as vergence, detection, discrimination, recognition, localisation and multiple same-instance identification. This classification of tasks enables the robot vision system to execute and fulfil specified high-level tasks, e.g. autonomous scene exploration and active object appearance learning.
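
    As a rough illustration of the WHAT/WHERE split described above, the loop below keeps spatial attention bound to salient object parts (WHERE) and either recognises each part or triggers learning of a new part-based view (WHAT). Every name in this sketch is hypothetical; the actual architecture is a hierarchy of reactive visual behaviours, not a single loop.

```python
def explore_scene(vision, memory):
    """Hypothetical sketch of one scene-exploration pass (assumed APIs)."""
    for part in vision.salient_parts():     # WHERE: attend to a candidate part
        vision.verge_on(part)               # maintain binocular vergence on it
        descriptor = vision.describe(part)  # view-invariant feature description
        label = memory.match(descriptor)    # WHAT: is this part already known?
        if label is None:                   # unknown: trigger learning behaviour
            label = memory.learn(descriptor)
        yield part.coordinates, label       # egocentric location plus identity
```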

    Visual Behaviours for Binocular Tracking

    Recent advances in robotics research have applied visual tracking as a means to accomplish tasks such as trajectory reconstruction, object recognition [4], navigation and egomotion estimation [9]. When objects have constrained shapes or motions, or move against simple backgrounds, some tracking systems have been employed successfully [12,1]. However, in complex and dynamic environments, most systems still lack robustness, reactivity and flexibility. In nature, biological systems interact constantly with unstructured environments and show high reliability in executing many important tasks. This fact has led some researchers to base their particular applications on biological evidence [20]. (Partially funded by projects JNICT-PBIC/TPR/2550/95 and PRAXIS/3/3.1/TPR/23/94.)