Virtual Robot Locomotion on Variable Terrain with Adversarial Reinforcement Learning
Reinforcement Learning (RL) is a machine learning technique in which an agent learns to perform a complex task through a repeated process of trial and error, maximizing a well-defined reward function. This form of learning has found applications in robot locomotion, where it has been used to teach robots to traverse complex terrain. While RL algorithms may work well for training robot locomotion, they tend not to generalize well when the agent is placed in an environment it has never encountered before. Possible solutions from the literature include training a destabilizing adversary alongside the locomotive learning agent. The adversary aims to destabilize the agent by applying external forces to it, which may help the locomotive agent learn to handle unexpected scenarios. For this project, we train a robust, simulated quadruped robot to traverse variable terrain. We compare and analyze Proximal Policy Optimization (PPO) with and without an adversarial agent, and determine which use of PPO produces the best results.
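The adversarial setup the abstract describes can be illustrated with a minimal sketch: a stabilizing agent and an adversary that applies external forces share a zero-sum reward, so stronger perturbations directly lower the agent's return. The environment dynamics, policy, and force magnitudes below are illustrative assumptions, not details from the paper.

```python
import random

# Toy 1-D environment: the agent tries to keep its body state near 0
# while an adversary injects destabilizing external forces.
def step(state, agent_action, adversary_force):
    """One simulation step: agent pushes toward 0, adversary pushes it away."""
    new_state = state + agent_action + adversary_force
    agent_reward = -abs(new_state)        # agent wants the state near 0
    adversary_reward = -agent_reward      # zero-sum: adversary wants instability
    return new_state, agent_reward, adversary_reward

def run_episode(adversary_strength, steps=50, seed=0):
    """Roll out a fixed stabilizing policy against a random-force adversary."""
    rng = random.Random(seed)
    state, total = 0.0, 0.0
    for _ in range(steps):
        agent_action = -0.5 * state       # simple proportional stabilizer
        adversary_force = rng.uniform(-adversary_strength, adversary_strength)
        state, reward, _ = step(state, agent_action, adversary_force)
        total += reward
    return total

# A stronger adversary makes the same policy earn less reward; this is the
# pressure that pushes the trained policy toward robustness.
weak = run_episode(adversary_strength=0.1)
strong = run_episode(adversary_strength=1.0)
```

In the actual method, both the locomotion policy and the adversary would be trained with PPO rather than fixed as here; the sketch only shows the zero-sum reward structure that couples them.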
Towards an Autonomous Walking Robot for Planetary Surfaces
In this paper, recent progress in the development of the DLR Crawler, a six-legged, actively compliant walking robot prototype, is presented. The robot implements a walking layer with a simple tripod and a more complex biologically inspired gait. Using a variety of proprioceptive sensors, different reflexes for reactively crossing obstacles within the walking height are realised. On top of the walking layer, a navigation layer provides the ability to autonomously navigate to a predefined goal point in unknown rough terrain using a stereo camera. A model of the environment is created, the terrain traversability is estimated, and an optimal path is planned. The difficulty of the path can be influenced by behavioral parameters. Motion commands are sent to the walking layer and the gait pattern is switched according to the estimated terrain difficulty. The interaction between the walking layer and the navigation layer was tested in different experimental setups.
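The handoff the abstract describes, in which the navigation layer's terrain-difficulty estimate drives the walking layer's gait switch, can be sketched as a simple mapping. The difficulty scale, threshold, and gait labels below are assumptions for illustration, not values from the paper.

```python
def select_gait(terrain_difficulty):
    """Map an estimated terrain difficulty in [0, 1] to a gait pattern.

    The threshold of 0.3 is an illustrative assumption; the real system
    would tune this from traversability estimates.
    """
    if terrain_difficulty < 0.3:
        return "tripod"                  # fast gait for easy terrain
    return "biologically_inspired"       # slower, more careful gait for rough terrain
```

A usage sketch: `select_gait(0.1)` picks the tripod gait on flat ground, while `select_gait(0.8)` switches to the biologically inspired gait on rough terrain.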
Web crawler research methodology
In the economic and social sciences it is crucial to test theoretical models against reliable and sufficiently large databases. The general research challenge is to build a well-structured database that suits the given research question and is cost efficient at the same time. In this paper we focus on crawler programs, which have proved to be an effective tool for database building in very different problem settings. First we explain how crawler programs work and illustrate a complex research process that maps business relationships using social media information sources. In this case we illustrate how search robots can be used to collect data for mapping complex network relationships in order to characterize business relationships in a well-defined environment. After that we extend the case and present a framework of three structurally different research models in which crawler programs can be applied successfully: exploration, classification, and time series analysis. In the case of exploration we present findings about the Hungarian web agency industry, for which no previous statistical data were available. For classification we show how the top visited Hungarian web domains can be divided into predefined categories of e-business models. In the third study we used a crawler to gather the values of concrete predefined records containing ticket prices of low-cost airlines from a single site. Based on these experiences we highlight some conceptual conclusions and opportunities of crawler-based research in e-business. --e-business research, web search, web crawler, Hungarian web, social network analysis
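The crawl loop underlying such data collection can be sketched minimally: a frontier queue, a visited set, and per-page link extraction. To keep the sketch self-contained and runnable, pages come from an in-memory dictionary instead of live HTTP requests; the URLs and page contents are illustrative, not from the paper.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

# Stand-in for the web: URL -> HTML body (hypothetical example pages).
PAGES = {
    "http://example.hu/":  '<a href="/a">A</a> <a href="/b">B</a>',
    "http://example.hu/a": '<a href="/b">B again</a>',
    "http://example.hu/b": "no links here",
}

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl; returns URLs in the order they were visited."""
    frontier, visited, order = deque([start_url]), set(), []
    while frontier:
        url = frontier.popleft()
        if url in visited or url not in PAGES:
            continue
        visited.add(url)
        order.append(url)
        parser = LinkExtractor()
        parser.feed(PAGES[url])
        for href in parser.links:
            frontier.append(urljoin(url, href))  # resolve relative links
    return order

order = crawl("http://example.hu/")
```

A real research crawler would replace the `PAGES` lookup with HTTP fetches, respect `robots.txt`, and add the record-extraction step (e.g. ticket prices) the paper's third study relies on.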