
    Vision-based interface applied to assistive robots

    This paper presents two vision-based interfaces that allow disabled people to command a mobile robot for personal assistance. The two interfaces differ in the image-processing algorithm used to detect and track a particular body region. The first interface detects and tracks movements of the user's head, and the second detects and tracks movements of the user's hand; in both cases the movements are transformed into linear and angular velocities used to command a mobile robot. The paper also presents the control laws for the robot. The experimental results demonstrate good performance and a balance between complexity and feasibility for real-time applications.
    Authors: Pérez Berenguer, María Elisa; Soria, Carlos Miguel; López Celani, Natalia Martina; Nasisi, Oscar Herminio; Mut, Vicente Antonio (Universidad Nacional de San Juan, Facultad de Ingeniería; CONICET, Argentina).
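    The mapping from tracked body motion to robot commands can be pictured with a short sketch. The snippet below is an illustration only, not the authors' implementation: OpenCV's Haar face detector stands in for the head detection and tracking stage, and the gains K_V and K_W are assumed values.

    ```python
    # Minimal sketch: map the tracked head position in the image to (v, w)
    # robot velocity commands. Detector and gains are assumptions, not the
    # paper's actual pipeline.
    import cv2

    K_V = 0.004   # linear gain  [m/s per pixel]  (assumed)
    K_W = 0.008   # angular gain [rad/s per pixel] (assumed)

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def head_to_velocities(frame):
        """Return (v, w) from the head position in a BGR frame, or (0, 0) if no face."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
        if len(faces) == 0:
            return 0.0, 0.0
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
        cx, cy = x + w / 2.0, y + h / 2.0
        img_h, img_w = gray.shape
        dx = cx - img_w / 2.0          # horizontal offset -> turning command
        dy = img_h / 2.0 - cy          # vertical offset   -> forward/backward command
        return K_V * dy, -K_W * dx     # (linear velocity v, angular velocity w)

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        v, w = head_to_velocities(frame)
        print(f"v = {v:.3f} m/s, w = {w:.3f} rad/s")
    cap.release()
    ```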

    Efficient and secure real-time mobile robots cooperation using visual servoing

    This paper addresses the challenging problem of navigating a fleet of mobile robots in formation. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the robots. To construct the system, we develop the interaction matrix that relates image moments to robot velocities, and we estimate the depth between each robot and the target object. This is done without any communication between the robots, which eliminates the influence of each robot's errors on the rest of the fleet. For reliable visual servoing, we propose a mechanism to execute the robots' navigation safely, exploiting a robot accident reporting system built on a Raspberry Pi 3. In case of a problem, a robot accident detection and reporting testbed sends an accident notification in the form of a specific message. Experimental results obtained with nonholonomic mobile robots equipped with on-board real-time cameras show the effectiveness of the proposed method.
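    The velocity computation in this kind of scheme follows the classical image-based visual servoing law, in which the robot velocity is obtained from the image-feature error through the pseudo-inverse of an interaction matrix. The sketch below uses the textbook interaction matrix for image points at an estimated depth; the paper itself builds the interaction matrix from image moments, so the code is purely illustrative and the gain is assumed.

    ```python
    # Classical IBVS law v = -lambda * pinv(L) * (s - s*), shown for point
    # features at estimated depth Z (textbook case, not the paper's moments).
    import numpy as np

    LAMBDA = 0.5   # control gain (assumed)

    def point_interaction_matrix(x, y, Z):
        """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(features, desired, depths):
        """Stack per-feature interaction matrices and return the 6-DOF velocity screw."""
        L = np.vstack([point_interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(desired)).ravel()
        return -LAMBDA * np.linalg.pinv(L) @ error

    # Toy usage: four tracked points, their desired positions, and estimated depths.
    s      = [(0.10, 0.05), (-0.08, 0.04), (0.09, -0.06), (-0.07, -0.05)]
    s_star = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
    Z      = [1.2, 1.1, 1.3, 1.0]
    print(ibvs_velocity(s, s_star, Z))   # [vx, vy, vz, wx, wy, wz]
    ```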

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive changes in unstructured environments and to modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor controllers and the multi-sensor controllers that combine several of them.

    Trajectory Servoing: Image-Based Trajectory Tracking without Absolute Positioning

    The thesis describes an image-based visual servoing (IBVS) system that lets a non-holonomic robot achieve good trajectory following without real-time robot pose information and without a known visual map of the environment; we call this trajectory servoing. The critical component is a feature-based, indirect SLAM method that provides a pool of available features with estimated depth and covariance, so that they can be propagated forward in time to generate image-feature trajectories, with uncertainty information, for visual servoing. Short- and long-distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than SLAM pose-based feedback, and it is further improved by a weighted least-squares controller that uses covariance from the underlying SLAM system.
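    One way to picture the weighted least-squares controller is as a covariance-weighted solve of the servoing equation, so that features whose estimates the SLAM back end is unsure about contribute less to the command. The sketch below rests on that reading and is not the thesis code; the gain, damping term and toy numbers are assumptions.

    ```python
    # Covariance-weighted IBVS solve: minimize || L v + lambda * e ||_W with
    # W built from the inverse per-feature covariances.
    import numpy as np
    from scipy.linalg import block_diag

    LAMBDA = 0.4  # control gain (assumed)

    def weighted_ibvs_velocity(L, error, covariances, damping=1e-6):
        """
        L           : (2N x m) stacked interaction matrix for N features
        error       : (2N,) stacked feature error s - s*
        covariances : N image-space 2x2 covariance matrices (e.g. from SLAM)
        Uncertain features get small weights and so influence the command less.
        """
        W = block_diag(*[np.linalg.inv(C) for C in covariances])
        A = L.T @ W @ L + damping * np.eye(L.shape[1])
        return -LAMBDA * np.linalg.solve(A, L.T @ W @ error)

    # Toy usage with two features and a 2-DOF (v, w) mobile-robot command.
    L = np.array([[-1.0,  0.3],
                  [ 0.0,  1.1],
                  [-1.0, -0.2],
                  [ 0.0,  0.9]])
    e = np.array([0.04, -0.02, 0.05, 0.01])
    C = [np.diag([1e-4, 1e-4]), np.diag([4e-3, 4e-3])]   # second feature is noisier
    print(weighted_ibvs_velocity(L, e, C))
    ```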

    Mobile robot visual navigation based on fuzzy logic and optical flow approaches

    This paper presents the design of a mobile robot visual navigation system for indoor environments based on fuzzy logic controllers (FLC) and an optical flow (OF) approach. The proposed control system contains two Takagi–Sugeno fuzzy logic controllers, for obstacle avoidance and goal seeking, driven by video acquisition and an image-processing algorithm. The first (steering) controller uses OF values computed by the Horn–Schunck algorithm to detect obstacles and estimate their positions; to extract information about the environment, the image is divided into two parts. The second FLC guides the robot toward the final destination. The efficiency of the proposed approach is verified in simulation using the Virtual Reality Toolbox. Simulation results demonstrate that the vision-based control system allows autonomous navigation without any collision with obstacles.
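    The obstacle-avoidance idea can be sketched as follows: sum the optical-flow magnitude over the left and right image halves and steer away from the side with more flow. The snippet below uses OpenCV's Farneback flow as a stand-in for the Horn–Schunck algorithm used in the paper, and the two-rule Takagi–Sugeno-style blend is illustrative rather than the authors' rule base.

    ```python
    # Steering from the left/right optical-flow imbalance. Farneback flow is a
    # stand-in for Horn-Schunck; the fuzzy blend and sign convention are assumed.
    import cv2
    import numpy as np

    def steering_from_flow(prev_gray, gray, max_turn=1.0):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        half = mag.shape[1] // 2
        left, right = mag[:, :half].sum(), mag[:, half:].sum()
        # Takagi-Sugeno-style blend: memberships of "obstacle on left/right"
        # weight two crisp consequents (turn right / turn left).
        total = left + right + 1e-9
        mu_left, mu_right = left / total, right / total
        # Convention (assumed): positive command = counter-clockwise turn.
        return mu_left * (-max_turn) + mu_right * (+max_turn)

    cap = cv2.VideoCapture(0)
    ok1, f1 = cap.read()
    ok2, f2 = cap.read()
    if ok1 and ok2:
        g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
        print("angular command:", steering_from_flow(g1, g2))
    cap.release()
    ```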

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that, in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot which repeats a previously taught path can simply 'replay' the learned velocities, while using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot which traverses a taught path by only correcting its heading. We then outline a mathematical proof which shows that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily-shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. The navigation system and the gathered datasets are provided at http://www.github.com/gestom/stroll_bearnav. (Presented at IROS 2018 in Madrid.)
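    The repeat-phase correction can be illustrated with a short sketch: replay the recorded velocities and add an angular correction proportional to the horizontal shift between features seen now and features stored during the teach phase. ORB features and brute-force matching below stand in for whatever feature pipeline the released system uses, and the gain K_HEADING is an assumed value.

    ```python
    # Heading-only correction during the repeat phase: recorded (v, w) plus a
    # small angular term that cancels the horizontal feature shift.
    import cv2
    import numpy as np

    K_HEADING = 0.002   # rad/s per pixel of horizontal offset (assumed)
    orb = cv2.ORB_create(500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def heading_correction(taught_img, current_img):
        """Angular-velocity correction from the median horizontal feature shift."""
        k1, d1 = orb.detectAndCompute(taught_img, None)
        k2, d2 = orb.detectAndCompute(current_img, None)
        if d1 is None or d2 is None:
            return 0.0
        matches = matcher.match(d1, d2)
        if not matches:
            return 0.0
        shifts = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
        return -K_HEADING * float(np.median(shifts))   # steer to cancel the shift

    def repeat_command(recorded_v, recorded_w, taught_img, current_img):
        """Replay the taught velocities with the visual heading correction added."""
        return recorded_v, recorded_w + heading_correction(taught_img, current_img)
    ```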

    Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios

    Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motion or in scenes characterized by high dynamic range. However, event cameras output little information when the amount of motion is limited, such as when the camera is almost still. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and well-lit scenarios), but they fail severely during fast motion or in difficult lighting, such as high-dynamic-range or low-light scenes. In this paper, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly-coupled manner. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes.
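    The spirit of a tightly-coupled fusion, as opposed to fusing separate per-sensor estimates, is that residuals from all sensor streams enter one joint optimization over the state. The toy example below illustrates only that structural idea on a synthetic 1-D trajectory with position-like ("frame"), motion-like ("event") and acceleration-like ("IMU") measurements; it bears no resemblance to the actual pipeline's sensor models.

    ```python
    # Toy joint least-squares over a 1-D trajectory: all three synthetic
    # measurement streams are stacked into one residual vector and solved
    # together, which is the structural idea behind tight coupling.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    T = 20
    true_x = np.cumsum(rng.normal(0.0, 0.5, T))                    # ground-truth positions
    frame_meas = true_x + rng.normal(0.0, 0.05, T)                 # position-like ("frames")
    event_meas = np.diff(true_x) + rng.normal(0.0, 0.02, T - 1)    # motion-like ("events")
    imu_meas   = np.diff(true_x, 2) + rng.normal(0.0, 0.1, T - 2)  # accel-like ("IMU")

    def residuals(x):
        r_frame = (x - frame_meas) / 0.05
        r_event = (np.diff(x) - event_meas) / 0.02
        r_imu   = (np.diff(x, 2) - imu_meas) / 0.1
        return np.concatenate([r_frame, r_event, r_imu])   # one stacked problem

    sol = least_squares(residuals, np.zeros(T))
    print("RMS error:", np.sqrt(np.mean((sol.x - true_x) ** 2)))
    ```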