28 research outputs found

    Wheeled Mobile Robots: State of the Art Overview and Kinematic Comparison Among Three Omnidirectional Locomotion Strategies

    In the last decades, mobile robotics has become a very interesting research topic in the field of robotics, mainly because of population ageing and the recent pandemic emergency caused by Covid-19. Against this background, the paper presents an overview of wheeled mobile robots (WMRs), which play a central role in today's scenario. In particular, the paper describes the most commonly adopted locomotion strategies, perception systems, control architectures and navigation approaches. After analysing the state of the art, the paper focuses on the kinematics of three omnidirectional platforms: a four-mecanum-wheel robot (4WD), a three-omni-wheel platform (3WD) and a two-swerve-drive system (2SWD). Through a dimensionless approach, these three platforms are compared to understand how their mobility is affected by the wheel speed limitations present in every practical application. This original comparison has not previously been presented in the literature; it can be used to improve our understanding of the kinematics of these mobile robots and to guide the selection of the most appropriate locomotion system for the specific application.
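    The wheel-speed limitation analysed in the paper can be illustrated with a short sketch. Below is a minimal Python example, not taken from the paper, of the standard inverse kinematics of a four-mecanum-wheel platform (4WD) and of how a wheel speed limit forces the commanded body twist to be scaled; the radius, geometry and speed limit are assumed values.

```python
# Minimal sketch (assumed geometry, not the paper's code): inverse
# kinematics of a four-mecanum-wheel platform and a check of how a wheel
# speed limit constrains the achievable body twist.
import numpy as np

R = 0.05              # wheel radius [m] (assumed)
LX, LY = 0.20, 0.15   # half wheelbase / half track [m] (assumed)
W_MAX = 20.0          # wheel speed limit [rad/s] (assumed)

# Standard mecanum inverse kinematics: wheel speeds from the body twist
# (vx, vy, wz). Wheels ordered front-left, front-right, rear-left, rear-right.
J = (1.0 / R) * np.array([
    [1.0, -1.0, -(LX + LY)],   # front-left
    [1.0,  1.0,  (LX + LY)],   # front-right
    [1.0,  1.0, -(LX + LY)],   # rear-left
    [1.0, -1.0,  (LX + LY)],   # rear-right
])

def wheel_speeds(vx, vy, wz):
    """Wheel angular velocities [rad/s] for a requested body twist."""
    return J @ np.array([vx, vy, wz])

def scale_to_limit(vx, vy, wz):
    """Uniformly scale the twist so that no wheel exceeds W_MAX."""
    w = wheel_speeds(vx, vy, wz)
    peak = np.max(np.abs(w))
    s = min(1.0, W_MAX / peak) if peak > 0 else 1.0
    return s * np.array([vx, vy, wz])

print(scale_to_limit(1.0, 0.0, 0.0))   # pure forward: within the limit
print(scale_to_limit(1.0, 1.0, 0.0))   # forward + lateral: scaled by 0.5
```

    A pure forward request of 1 m/s stays within the limit, while a combined forward-plus-lateral request of the same per-axis magnitude saturates two wheels and is scaled down; this is the kind of mobility restriction the dimensionless comparison in the paper quantifies.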

    Remarks on the classification of wheeled mobile robots


    Whole-Body Impedance Control of Wheeled Humanoid Robots


    Enhanced vision-based localization and control for navigation of non-holonomic omnidirectional mobile robots in GPS-denied environments

    New Zealand’s economy relies to a great extent on primary production, where the use of technological advances can have a significant impact on productivity. Robotics and automation can play a key role in increasing productivity in the primary sector, boosting the national economy. This thesis investigates novel methodologies for the design, control, and navigation of a mobile robotic platform aimed at field service applications, specifically in agricultural environments such as orchards, to automate agricultural tasks. The design process of this robotic platform, a non-holonomic omnidirectional mobile robot, includes an innovative integrated application of CAD, CAM, CAE, and RP for the development and manufacturing of the platform. The Robot Operating System (ROS) is employed for the design and development of the embedded software system that enables control, sensing, and navigation of the platform. 3D modelling and simulation of the robotic system are performed by interfacing ROS with the Gazebo simulator, aiming at off-line programming, optimal control system design, and system performance analysis. Gazebo provides 3D simulation of the robotic system, sensors, and control interfaces. It also enables simulation of the world environment, allowing the simulated robot to operate in a modelled environment. The model-based controller for kinematic control of the non-holonomic omnidirectional platform is tested and validated through experimental results obtained from the simulated and the physical robot. The challenges of the kinematic model-based controller, including the mathematical and kinematic singularities, are discussed, and a solution enabling an optimal kinematic model-based controller is presented. The kinematic singularity associated with non-holonomic omnidirectional robots is resolved using a novel fuzzy-logic-based approach, which is validated through simulation and experimental results. A reliable localization system is developed to enable navigation of the platform in GPS-denied environments such as orchards. To this end, stereo visual odometry (SVO) is adopted as the core of the non-GPS localization system. The challenges of SVO are introduced, with the accumulative drift identified as the main challenge to overcome; it is decomposed into rotational and translational drift. Sensor fusion of the IMU and SVO is employed to reduce the rotational drift. A novel machine learning approach, based on a neuro-fuzzy system and an RBF neural network, is proposed to reduce the translational drift. The machine learning system is formulated as a drift estimator for each image frame, and a correction is applied at that frame so that the drift does not accumulate over time. Experimental results and analyses validate the effectiveness of the methodology in improving SVO accuracy. An enhanced SVO is obtained by combining the sensor fusion and machine learning methods to reduce both rotational and translational drift. Furthermore, to achieve a robust non-GPS localization system for the platform, the wheel odometry is fused with the enhanced SVO, increasing the accuracy and robustness of the overall system. Experimental results and analyses are presented to support the methodology.
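    As a rough illustration of the drift-reduction idea described above (not the thesis implementation), the sketch below fuses gyro-integrated yaw with the SVO heading through a simple complementary filter and applies a per-frame translational drift correction supplied by an external estimator; the filter gain, signal names, and the estimator interface are assumptions.

```python
# Minimal sketch, assuming a planar (yaw-only) case: complementary-filter
# fusion of gyro and SVO headings, plus per-frame subtraction of a learned
# translational drift estimate so it does not accumulate over time.
import numpy as np

ALPHA = 0.98   # weight on the gyro-propagated heading (assumed gain)

def fuse_yaw(yaw_prev, gyro_rate, dt, yaw_svo):
    """Complementary filter: gyro integration corrected by the SVO heading."""
    yaw_gyro = yaw_prev + gyro_rate * dt
    return ALPHA * yaw_gyro + (1.0 - ALPHA) * yaw_svo

def correct_translation(delta_xy_svo, drift_estimate):
    """Subtract a per-frame drift estimate (e.g. from an RBF network)
    from the SVO translation increment before it is accumulated."""
    return np.asarray(delta_xy_svo) - np.asarray(drift_estimate)

# Example frame update with illustrative numbers.
yaw = fuse_yaw(yaw_prev=0.10, gyro_rate=0.02, dt=0.05, yaw_svo=0.099)
step = correct_translation([0.051, -0.002], drift_estimate=[0.001, 0.0005])
print(yaw, step)
```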

    Support polygon in the hybrid legged-wheeled CENTAURO robot: modelling and control

    The search for robots capable of performing well in the real world has sparked interest in hybrid locomotion systems. Hybrid legged-wheeled robots combine the advantages of standard legged and wheeled platforms by switching between quick, efficient wheeled motion on flat ground and more versatile legged mobility on unstructured terrain. With the locomotion flexibility offered by hybrid mobility and appropriate control tools, these systems have high potential to excel in practical applications, adapting effectively to the real world during loco-manipulation operations. In contrast to their well-studied standard counterparts, the kinematics of this newer type of robotic platform is not yet fully understood. This gap may lead to unexpected results when standard locomotion methods are applied to hybrid legged-wheeled robots. To better understand the mobility of hybrid legged-wheeled robots, this thesis proposes and analyses a model that describes the support polygon of a general hybrid legged-wheeled robot as a function of the wheel angular velocities, without assumptions on the robot kinematics or wheel camber angle. Based on the analysis of the developed support polygon model, a robust omnidirectional driving scheme has been designed. Continuous wheel motion is resolved through an Inverse Kinematics (IK) scheme, which generates robot motion compliant with the Non-Sliding Pure-Rolling (NSPR) condition. A higher-level scheme resolves the steering motion to comply with the non-holonomic constraint and to tackle the structural singularity. To improve the robot's performance in the presence of unpredicted circumstances, the IK scheme has been enhanced with a new reactive support polygon adaptation task. To this end, a novel quadratic programming task has been designed to push the system's Support Polygon Vertices (SPVs) away from the robot's Centre of Mass (CoM) while respecting the leg workspace limits. The proposed task is expressed through the developed SPV model to account for the hardware limits. The omnidirectional driving and reactive control schemes have been verified in simulation and hardware experiments. To that end, a simulator for the CENTAURO robot that models the actuation dynamics, as well as a software framework for locomotion research, have been developed.
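    The reactive adaptation task described above relies on knowing where the projected CoM sits inside the support polygon. The sketch below, which is illustrative and not the thesis model, computes the support polygon as the convex hull of the wheel contact points and the margin of the projected CoM to its nearest edge, the quantity such a task would try to keep large; the contact coordinates are assumed numbers.

```python
# Minimal sketch (illustrative, not the thesis model): support polygon as
# the convex hull of the wheel contact points and the stability margin of
# the ground-projected CoM.
import numpy as np
from scipy.spatial import ConvexHull

def support_margin(contact_points_xy, com_xy):
    """Smallest distance from the projected CoM to a support-polygon edge
    (positive when the CoM projection lies inside the polygon)."""
    pts = np.asarray(contact_points_xy, dtype=float)
    com = np.asarray(com_xy, dtype=float)
    hull = ConvexHull(pts)            # 2-D hull, vertices in CCW order
    margins = []
    for i in range(len(hull.vertices)):
        a = pts[hull.vertices[i]]
        b = pts[hull.vertices[(i + 1) % len(hull.vertices)]]
        edge = b - a
        inward = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)
        margins.append(float(np.dot(com - a, inward)))
    return min(margins)

# Four wheel contacts of a nominal stance and an offset CoM projection.
contacts = [(0.4, 0.3), (0.4, -0.3), (-0.4, -0.3), (-0.4, 0.3)]
print("stability margin [m]:", support_margin(contacts, (0.1, 0.05)))
```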

    Modeling, Analysis, and Control of a Mobile Robot for In Vivo Fluoroscopy of Human Joints during Natural Movements

    In this dissertation, the modeling, analysis, and control of a multi-degree-of-freedom (MDOF) robotic fluoroscope were investigated. A prototype robotic fluoroscope exists and consists of a 3-DOF mobile platform with two 2-DOF Cartesian manipulators mounted symmetrically on opposite sides of the platform. One Cartesian manipulator positions the x-ray generator and the other positions the x-ray imaging device. The robotic fluoroscope is used to x-ray skeletal joints of interest of human subjects performing natural movement activities. To collect the data, the Cartesian manipulators must keep the x-ray generation and imaging devices accurately aligned while dynamically tracking the skeletal joint of interest. In addition to the joint tracking, this requires the robotic platform to move along with the subject so that the manipulators operate within their ranges of motion. A comprehensive dynamic model of the robotic fluoroscope prototype was created, incorporating the dynamic coupling of the system. Empirical data collected from an RGB-D camera were used to create a human kinematic model that can simulate the dynamics of the joint-of-interest target. This model was incorporated into a computer simulation that was validated by comparing the simulation results with prototype experiments using the same human kinematic model inputs. The computer simulation was used in a comprehensive dynamic analysis of the prototype and in the development and evaluation of sensing, control, and signal processing approaches that optimize the subject- and joint-tracking performance. The modeling and simulation results were used to develop real-time control strategies, including decoupling techniques that reduce tracking error on the prototype. For a normal walking activity, the joint tracking error was less than 20 mm, and the subject tracking error was less than 140 mm.
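    To make the tracking requirement concrete, the sketch below (not the dissertation's controller) simulates a single Cartesian axis following a sinusoidal joint trajectory with a PD law on the tracking error; the mass, gains, and trajectory are assumed numbers used only to show how a peak tracking error would be evaluated.

```python
# Minimal sketch (assumed mass, gains, and trajectory): one manipulator
# axis tracking a moving joint target with a discrete PD loop, reporting
# the peak tracking error over the motion.
import math

def track(joint_traj, dt=0.005, mass=5.0, kp=2000.0, kd=200.0):
    """Simulate one axis tracking a joint trajectory [m]; return peak error."""
    pos, vel = joint_traj[0], 0.0
    prev_err, peak_err = 0.0, 0.0
    for target in joint_traj:
        err = target - pos
        derr = (err - prev_err) / dt
        u = kp * err + kd * derr          # PD on the tracking error
        acc = u / mass                    # simplified single-mass axis
        vel += acc * dt
        pos += vel * dt
        prev_err = err
        peak_err = max(peak_err, abs(err))
    return peak_err

# Sinusoidal joint motion roughly resembling a 1 Hz gait component.
traj = [0.1 * math.sin(2 * math.pi * 1.0 * k * 0.005) for k in range(2000)]
print("peak tracking error [m]:", track(traj))
```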

    Stabilization of Mobile Manipulators

    The focus of this work is to develop a method of stabilization for a system created by combining a mobile robot with a manipulator. While the stability of a rigid manipulator is a solved problem, the introduction of flexibilities into the manipulator base structure simultaneously introduces an unmodeled, induced, oscillatory disturbance to the manipulator system from the mobile base suspension and mounting. Under normal circumstances, the disturbance can be modeled through experimentation, and a form of vibration suppression control can then be employed to damp the induced oscillations in the base. This approach is satisfactory for disturbances that are measured; however, the hardware necessary to measure the induced oscillations in the manipulator base is generally not included in mobile manipulation systems. Because of this lack of sensing hardware, it becomes difficult to directly compensate for the induced disturbances in the system. Rather than developing a direct method of compensation, efforts are made to find postures of the manipulator in which the flexibilities of the system are passive. In these postures the manipulator behaves as if it were on a rigid base, which allows the use of higher feedback gains and simpler control architectures.
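    As a loose illustration of the posture-selection idea (not the method of this work), the sketch below evaluates, for a planar two-link arm on a compliant base, how sensitive the gravity moment transmitted to the base is to small joint motions, and searches for postures where that sensitivity is small; the link parameters and the sensitivity metric are assumptions.

```python
# Minimal sketch (assumed parameters and metric): find arm postures where
# small joint motions barely change the gravity moment passed to the base,
# i.e. postures least likely to excite the base flexibility.
import math

L1, L2 = 0.5, 0.4       # link lengths [m] (assumed)
M1, M2 = 4.0, 3.0       # link masses [kg] (assumed)
G = 9.81

def base_moment(q1, q2):
    """Static gravity moment about the base mounting axis [N m]."""
    x1 = 0.5 * L1 * math.cos(q1)                             # link-1 CoM
    x2 = L1 * math.cos(q1) + 0.5 * L2 * math.cos(q1 + q2)    # link-2 CoM
    return G * (M1 * x1 + M2 * x2)

def sensitivity(q1, q2, eps=1e-4):
    """Magnitude of the numerical gradient of the base moment w.r.t. joints."""
    dq1 = (base_moment(q1 + eps, q2) - base_moment(q1 - eps, q2)) / (2 * eps)
    dq2 = (base_moment(q1, q2 + eps) - base_moment(q1, q2 - eps)) / (2 * eps)
    return math.hypot(dq1, dq2)

# Grid search for low-sensitivity postures within a workspace slice.
best = min(((sensitivity(q1, q2), q1, q2)
            for q1 in [i * 0.1 for i in range(0, 32)]
            for q2 in [i * 0.1 for i in range(0, 32)]),
           key=lambda t: t[0])
print("lowest sensitivity %.3f at q1=%.1f rad, q2=%.1f rad" % best)
```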

    Contact aware robust semi-autonomous teleoperation of mobile manipulators

    In the context of human-robot collaboration, cooperation, and teaming, mobile manipulators are widely used in applications involving environments that are unpredictable or hazardous for human operators, such as space operations, waste management, and search and rescue in disaster scenarios. In these applications, the manipulator's motion is controlled remotely by specialized operators. Teleoperation of manipulators is not a straightforward task and in many practical cases is a common source of failures. Common issues during the remote control of manipulators are: control complexity that grows with the number of mechanical degrees of freedom; inadequate or incomplete feedback to the user (i.e. limited visualization or knowledge of the environment); and predefined motion directives that may be incompatible with constraints or obstacles imposed by the environment. In the latter case, part of the manipulator may get trapped or blocked by an obstacle in the environment, a failure that cannot easily be detected, isolated, or counteracted remotely. While control complexity can be reduced by introducing motion directives or by abstracting the robot motion, the real-time constraint of the teleoperation task requires transferring the least possible amount of data over the system's network, thus limiting the number of physical sensors that can be used to model the environment. It is therefore fundamental to define alternative perceptive strategies that accurately characterize different interactions with the environment without relying on specific sensory technologies. In this work, we present a novel approach for safe teleoperation that takes advantage of model-based proprioceptive measurement of the robot dynamics to robustly identify unexpected collisions or contact events with the environment. Each identified collision is translated on-the-fly into a set of local motion constraints, allowing the system redundancies to be exploited to compute intelligent control laws for automatic reaction, without requiring human intervention and while minimizing the disturbance to task execution (or, equivalently, the operator's effort). More precisely, the described system consists of two building blocks: a perceptive block, which detects unexpected interactions with the environment, and a control block, which provides intelligent and autonomous reaction after the stimulus. The perceptive block is responsible for contact event identification. In short, the approach is based on the claim that a sensorless collision detection method for robot manipulators can be extended to the field of mobile manipulators by embedding it within a statistical learning framework. The control block deals with the intelligent and autonomous reaction after the contact or impact with the environment occurs, and consists of a motion abstraction controller with a prioritized set of constraints, where the highest priority corresponds to the robot reconfiguration after a collision is detected; once all related dynamical effects have been compensated, the controller switches back to the basic control mode.
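    The perceptive block builds on sensorless collision detection. A standard technique of this kind is the generalized-momentum residual observer sketched below; the observer gain, the threshold, and the assumption of starting at rest are illustrative, and in the described system the raw residual is further processed by a statistical learning stage to cope with the mobile base.

```python
# Minimal sketch (not the paper's implementation): a generalized-momentum
# residual observer for sensorless collision detection on a manipulator.
import numpy as np

class MomentumObserver:
    def __init__(self, n_joints, gain=25.0, threshold=3.0):
        self.K = gain * np.eye(n_joints)    # observer gain (assumed)
        self.threshold = threshold          # residual norm threshold (assumed)
        self.r = np.zeros(n_joints)         # residual = external torque estimate
        self.integral = np.zeros(n_joints)  # assumes the robot starts at rest

    def update(self, M, C, g, q_dot, tau, dt):
        """One residual update.
        M: joint-space inertia matrix, C: Coriolis matrix, g: gravity vector,
        q_dot: joint velocities, tau: commanded joint torques, dt: time step."""
        p = M @ q_dot                                   # generalized momentum
        self.integral += (tau + C.T @ q_dot - g + self.r) * dt
        self.r = self.K @ (p - self.integral)
        return self.r

    def collision_detected(self):
        """True when the residual norm exceeds the (assumed) threshold."""
        return bool(np.linalg.norm(self.r) > self.threshold)
```

    At each control cycle the observer is fed the current model terms and commanded torques; a triggered detection would then activate the prioritized reaction described above.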