DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics
Robots are still limited to controlled conditions that the robot designer knows in enough detail to endow the robot with the appropriate models or behaviors. Learning algorithms add some flexibility through the ability to discover the appropriate behavior given either demonstrations or a reward to guide exploration with a reinforcement learning algorithm. Reinforcement learning algorithms rely on the definition of state and action spaces that determine which behaviors are reachable. Their adaptation capability critically depends on the representations of these spaces: small, discrete spaces result in fast learning, while large, continuous spaces are challenging and either require a long training period or prevent the robot from converging to an appropriate behavior. Besides the operational cycle of policy execution and the learning cycle, which works at a slower time scale to acquire new policies, we introduce the redescription cycle, a third cycle working at an even slower time scale to generate or adapt the representations required by the robot, its environment, and the task. We introduce the challenges raised by this cycle and present DREAM (Deferred Restructuring of Experience in Autonomous Machines), a developmental cognitive architecture that bootstraps this redescription process stage by stage, builds new state representations with appropriate motivations, and transfers the acquired knowledge across domains, tasks, or even robots. We describe the results obtained so far with this approach and conclude with a discussion of the questions it raises in neuroscience.
Robot Consciousness: Physics and Metaphysics Here and Abroad
Interest in the study of consciousness, both theoretical and applied, has been renewed following developments in 20th and early 21st-century logic, metamathematics, computer science, and the brain sciences. In this evolving narrative, I explore several theoretical questions about the types of artificial intelligence and offer several conjectures about how they affect possible future developments in this exceptionally transformative field of research. I also address the practical significance of advances in artificial intelligence in view of the cautions issued by prominent scientists, politicians, and ethicists about the possible dangers of sufficiently advanced general intelligence, including, by implication, the search for extraterrestrial intelligence.
The Rise Of The Machines: Robotis, A Frontier In Educational And Industrial Robots In Korea
The robot industry is quickly becoming one of the fastest-growing markets in the world. Already used in various fields, robots are replacing more and more human labor. The prospects for this industry are quite bright, since many countries are adopting programs and policies to develop their robot markets. In this paper, we look into a Korean venture firm that is growing together with the robot industry: ROBOTIS. Beginning with the growth story of ROBOTIS, we analyze the business environment the firm faces. We also look into the main products of ROBOTIS and how they correspond with trends in the robot industry.
On the utility of dreaming: a general model for how learning in artificial agents can benefit from data hallucination
We consider the benefits of dream mechanisms (that is, the ability to simulate new experiences based on past ones) in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize "dreaming" as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data.
We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism.
We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
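The data-hallucination idea described in this abstract resembles classic model-based replay (as in Dyna-style reinforcement learning). The sketch below is illustrative only and does not reproduce the paper's framework: an agent records real transitions in a learned world model, then "dreams" by replaying imagined transitions drawn from that model to refine its value estimates. All class and method names here are hypothetical.

```python
import random

class DreamingAgent:
    """Tabular Q-learner with a Dyna-style 'dreaming' phase (illustrative sketch)."""

    def __init__(self, n_states, n_actions, alpha=0.5, gamma=0.9):
        self.q = {(s, a): 0.0 for s in range(n_states) for a in range(n_actions)}
        self.model = {}  # world model: (state, action) -> (reward, next_state)
        self.alpha, self.gamma = alpha, gamma
        self.n_actions = n_actions

    def _q_update(self, s, a, r, s2):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(s2, b)] for b in range(self.n_actions))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    def observe(self, s, a, r, s2):
        # Learn from a real transition and memorise it in the world model.
        self._q_update(s, a, r, s2)
        self.model[(s, a)] = (r, s2)

    def dream(self, n_steps):
        # Hallucinate transitions from the learned model and learn from them.
        # Note the risk flagged in the abstract: if the model is wrong, dreaming
        # trains the agent on spurious imagined data.
        for _ in range(n_steps):
            (s, a), (r, s2) = random.choice(list(self.model.items()))
            self._q_update(s, a, r, s2)
```

The key design point the abstract raises is visible even in this toy: the value of `dream()` depends entirely on the fidelity of `self.model`, since imagined replays can amplify model error as easily as they accelerate learning.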
- …