
    Trends in the control of hexapod robots: a survey

    The static stability of hexapods motivates their design for tasks that require stable locomotion, such as navigation across complex environments. This task is of high interest due to the possibility of replacing human beings in exploration, surveillance and rescue missions. For this application, the control system must adapt the actuation of the limbs to the surroundings to ensure that the hexapod does not tumble during locomotion. The most traditional approach treats the limbs as robotic manipulators and relies on mechanical models to actuate them. However, increasing interest in model-free approaches to the control of these systems has led to the design of novel solutions. Through a systematic literature review, this paper surveys the trends in this field of research and assesses the current stage of development of autonomous and adaptable controllers for hexapods.

    The first author received funding through a doctoral scholarship from the Portuguese Foundation for Science and Technology (FCT) (Grant No. SFRH/BD/145818/2019), with funds from the Portuguese Ministry of Science, Technology and Higher Education and the European Social Fund through the Programa Operacional Regional Norte. This work has been supported by FCT national funds, under the national support to R&D units grant, through the reference projects UIDB/04436/2020 and UIDP/04436/2020.
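    As a point of reference for the controllers the survey covers, the sketch below shows, in Python, the alternating tripod gait that statically stable hexapod locomotion typically relies on; the leg grouping, cycle period, and stance threshold are illustrative assumptions, not taken from the paper.

    ```python
    # A minimal sketch of an alternating tripod gait: two sets of three
    # legs driven by a shared phase, half a cycle apart. Illustrative only;
    # the surveyed controllers are far richer (model-based and model-free).
    import numpy as np

    TRIPOD_A, TRIPOD_B = (0, 3, 4), (1, 2, 5)   # assumed leg grouping

    def leg_phases(t, period_s=1.0):
        """Return the gait phase in [0, 1) of each of the six legs at time t."""
        phase = (t / period_s) % 1.0
        phases = np.zeros(6)
        phases[list(TRIPOD_A)] = phase                 # tripod A leads
        phases[list(TRIPOD_B)] = (phase + 0.5) % 1.0   # tripod B half a cycle behind
        return phases

    # Legs with phase < 0.5 are in stance (on the ground); the half-cycle
    # offset keeps one full tripod planted at all times, preserving static
    # stability.
    stance = leg_phases(t=0.25) < 0.5
    ```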

    Machine learning-based agoraphilic navigation algorithm for use in dynamic environments with a moving goal

    This paper presents a new machine learning-based control system for the Agoraphilic (free-space attraction) concept of navigating robots in unknown dynamic environments with a moving goal. It also presents a new methodology for generating the training and testing datasets used to develop a machine learning-based module that improves the performance of Agoraphilic algorithms. The algorithm utilises the free-space attraction (Agoraphilic) concept to safely navigate a mobile robot in a dynamically cluttered environment with a moving goal. It uses tracking and prediction strategies to estimate the position and velocity vectors of detected moving obstacles and of the goal. This predictive methodology enables the algorithm to identify and incorporate potential future growing free-space passages towards the moving goal. It is supported by a new machine learning-based controller designed specifically to account for the high uncertainties inherent in the robot's operational environment with a moving goal, at a reduced computational cost. The paper also includes comparative and experimental results that demonstrate the improvements gained by introducing the machine learning technique. The experiments demonstrate the success of the algorithm in navigating robots in dynamic environments with the challenge of a moving goal.
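    The tracking-and-prediction step can be pictured with a constant-velocity forecast. The sketch below is a minimal Python illustration of that idea; the state layout, horizon, and time step are assumptions, not the paper's estimator.

    ```python
    # A minimal sketch of forecasting where a tracked obstacle (or the
    # moving goal) will be over a short horizon, so growing free-space
    # passages can be identified ahead of time. Constant-velocity model
    # and parameters are illustrative assumptions.
    import numpy as np

    def predict_track(position, velocity, horizon_s, dt=0.1):
        """Forecast future positions of a tracked object under constant velocity."""
        steps = int(horizon_s / dt)
        return np.array([position + velocity * dt * (k + 1) for k in range(steps)])

    goal_pos = np.array([4.0, 3.0])    # last estimated goal position (m)
    goal_vel = np.array([0.3, -0.1])   # estimated goal velocity (m/s)
    future = predict_track(goal_pos, goal_vel, horizon_s=1.0)
    ```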

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degrade to accuracy levels below their design specification. This observation necessitates methods that integrate multi-sensor data while accounting for sensor conflict, performance degradation and potential failure during operation. Our work in this dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy over the sensors within the multi-agent 3D mapping system, allowing it to survive and counter failures in challenging operating conditions. In addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, the information-theoretic framework minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse sensors or to choose the believable ones. We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
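    The "fuse or choose believable sensors" decision can be pictured with a small gating-plus-weighting sketch. The Python below is a minimal illustration under assumed 1D readings and a median-based consensus test; the gate merely stands in for the dissertation's information-theoretic framework.

    ```python
    # A minimal sketch of gated fusion: discard sensors whose readings
    # disagree with the consensus beyond a threshold, then combine the
    # survivors with inverse-variance weights. Threshold and consensus
    # test are illustrative assumptions.
    import numpy as np

    def fuse_believable(readings, variances, gate=3.0):
        """Fuse 1D sensor readings, discarding outliers against the median."""
        readings, variances = np.asarray(readings, float), np.asarray(variances, float)
        ref = np.median(readings)
        keep = np.abs(readings - ref) < gate * np.sqrt(variances)
        if not keep.any():                 # all sensors conflict: fall back to fusing all
            keep = np.ones_like(readings, bool)
        w = 1.0 / variances[keep]
        return np.sum(w * readings[keep]) / np.sum(w)

    # Three range sensors; the third has intermittently degraded.
    estimate = fuse_believable([10.2, 10.4, 14.9], [0.1, 0.2, 0.1])
    ```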

    Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots

    A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) and proprioceptive sensory data from a skid-steer mobile robot to enhance accuracy and reduce the variance of the estimated states. The slip-aware localization framework includes: a visual thread that detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation using a region of interest; and an ego-motion thread that uses a slip-aware odometry mechanism to estimate the robot pose with a motion model that accounts for wheel slip. Covariance intersection is used to fuse the pose prediction (using proprioceptive data) and the visual thread, such that the updated estimate remains consistent. As confirmed by experiments on a skid-steer mobile robot, the designed localization framework addresses state estimation challenges for indoor/outdoor autonomous mobile robots that experience high slip, uneven torque distribution at each wheel (by the motion planner), or occlusion when observed by an infrastructure-mounted camera. The proposed system is real-time capable and scalable to multiple robots and multiple environmental cameras.
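    Covariance intersection has a compact closed form, so a small sketch can make the fusion step concrete. The Python below is a minimal illustration, assuming planar pose estimates (x, y, yaw) and a brute-force search for the mixing weight; the variable names and noise values are illustrative, not the paper's implementation.

    ```python
    # A minimal sketch of covariance intersection (CI), which yields a
    # consistent fused estimate even when the cross-correlation between
    # the two sources is unknown. The weight search minimizes the trace
    # of the fused covariance; parameters are illustrative.
    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, n_omega=50):
        """Fuse estimates (x1, P1) and (x2, P2); returns a consistent (x, P)."""
        best = None
        for w in np.linspace(0.01, 0.99, n_omega):
            P = np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2))
            if best is None or np.trace(P) < np.trace(best[1]):
                x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
                best = (x, P)
        return best

    # Example: fuse a slip-aware odometry pose with a camera-based pose (x, y, yaw).
    odom_pose = np.array([1.00, 2.00, 0.10]); odom_cov = np.diag([0.20, 0.20, 0.05])
    cam_pose  = np.array([1.05, 1.95, 0.12]); cam_cov  = np.diag([0.05, 0.05, 0.02])
    fused_x, fused_P = covariance_intersection(odom_pose, odom_cov, cam_pose, cam_cov)
    ```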

    Perceptual compasses: spatial navigation in multisensory environments

    Moving through space is a crucial activity in daily human life. The main objective of my Ph.D. project consisted of investigating how people exploit the multisensory sources of information available (vestibular, visual, auditory) to navigate efficiently. Specifically, my Ph.D. aimed at i) examining the multisensory integration mechanisms underlying spatial navigation; ii) establishing the crucial role of vestibular signals in spatial encoding and processing, and their interaction with environmental landmarks; iii) providing the neuroscientific basis to develop tailored assessment protocols and rehabilitation procedures to enhance orientation and mobility based on the integration of different sensory modalities, especially aimed at improving the compromised navigational performance of visually impaired (VI) people. To achieve these aims, we conducted behavioral experiments on adult participants, including psychophysics procedures, galvanic stimulation, and modeling. In particular, the experiments involved active spatial navigation tasks with audio-visual landmarks and self-motion discrimination tasks with and without acoustic landmarks, using a motion platform (Rotational-Translational Chair) and an acoustic virtual reality tool. We also applied Galvanic Vestibular Stimulation to directly modulate signals coming from the vestibular system during behavioral tasks that involved interaction with audio-visual landmarks. In addition, when appropriate, we compared the obtained results with predictions from the Maximum Likelihood Estimation model, to verify the potentially optimal integration of the available multisensory cues.

    i) Results on multisensory navigation showed a sub-group of integrators and another of non-integrators, revealing inter-individual differences in audio-visual processing while moving through the environment. Finding these idiosyncrasies in a homogeneous sample of adults emphasizes the role of individual perceptual characteristics in multisensory perception, highlighting how important it is to plan tailored rehabilitation protocols that consider each individual's perceptual preferences and experiences. ii) We also found a robust inherent overestimation bias when estimating passive self-motion stimuli. This finding sheds new light on how our brain processes and elaborates the available cues to build a more functional representation of the world. We also demonstrated a novel impact of vestibular signals on the encoding of visual environmental cues in the absence of actual self-motion information. The role that vestibular inputs play in the perception of visual cues and in space encoding has multiple consequences for humans' ability to functionally navigate in space and interact with environmental objects, especially when vestibular signals are impaired due to intrinsic (vestibular disorders) or environmental conditions (altered gravity, e.g. spaceflight missions). Finally, iii) the combination of the Rotational-Translational Chair and the acoustic virtual reality tool revealed a slight improvement in self-motion perception for VI people when exploiting acoustic cues. This approach proves to be a successful technique for evaluating audio-vestibular perception and improving the spatial representation abilities of VI people, providing the basis for new rehabilitation procedures focused on multisensory perception. Overall, the findings resulting from my Ph.D. project broaden the scientific knowledge about spatial navigation in multisensory environments, yielding new insights into the brain mechanisms associated with mobility, orientation, and locomotion abilities.
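    The Maximum Likelihood Estimation prediction mentioned above has a simple closed form for two independent Gaussian cues, sketched below in Python; the cue values are illustrative, not experimental data.

    ```python
    # A minimal sketch of MLE cue integration: two independent Gaussian
    # cues are combined with inverse-variance weights, so the integrated
    # estimate has lower variance than either cue alone.

    def mle_integrate(mu_a, var_a, mu_b, var_b):
        """Optimal (minimum-variance) combination of two Gaussian cues."""
        w_a = var_b / (var_a + var_b)            # weight grows as cue a gets more reliable
        mu = w_a * mu_a + (1 - w_a) * mu_b
        var = (var_a * var_b) / (var_a + var_b)  # always below min(var_a, var_b)
        return mu, var

    # Example: visual and auditory estimates of a landmark's direction (deg).
    mu, var = mle_integrate(mu_a=10.0, var_a=4.0, mu_b=14.0, var_b=9.0)
    ```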

    Deep Haptic Model Predictive Control for Robot-Assisted Dressing

    Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real-world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2 s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance, with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.

    Comment: 8 pages, 12 figures, 1 table; 2018 IEEE International Conference on Robotics and Automation (ICRA).
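    The control loop pairs the learned force predictor with model predictive control. The sketch below is a hedged Python illustration of a sampling-based receding-horizon loop of that kind; `force_model`, the sampling scheme, and the cost weights are stand-ins for the paper's recurrent network and objective, not the authors' implementation.

    ```python
    # A minimal sketch of MPC with a learned force predictor: sample
    # candidate end-effector motions, score each by predicted force and
    # progress toward the goal, then execute only the first step of the
    # best candidate. All parameters are illustrative assumptions.
    import numpy as np

    def mpc_step(state, force_model, goal_dir, n_samples=64, horizon=4, w_force=10.0):
        """Pick the action minimizing predicted force while making progress."""
        best_cost, best_action = np.inf, None
        for _ in range(n_samples):
            action = goal_dir + 0.02 * np.random.randn(horizon, 3)  # perturbed motions
            forces = force_model(state, action)                     # predicted force per step
            cost = w_force * np.sum(np.maximum(forces, 0.0)) - np.sum(action @ goal_dir)
            if cost < best_cost:
                best_cost, best_action = cost, action
        return best_action[0]   # receding horizon: execute only the first step

    # Stand-in predictor: penalizes fast motion as a crude proxy for high force.
    dummy_model = lambda s, a: np.linalg.norm(a, axis=1)
    next_move = mpc_step(state=None, force_model=dummy_model,
                         goal_dir=np.array([0.0, 0.0, 0.05]))
    ```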
