92,034 research outputs found
Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning
Many animals, and an increasing number of artificial agents, display
sophisticated capabilities to perceive and manipulate objects. But human beings
remain distinctive in their capacity for flexible, creative tool use -- using
objects in new ways to act on the world, achieve a goal, or solve a problem. To
study this type of general physical problem solving, we introduce the Virtual
Tools game. In this game, people solve a large range of challenging physical
puzzles in just a handful of attempts. We propose that the flexibility of human
physical problem solving rests on an ability to imagine the effects of
hypothesized actions, while the efficiency of human search arises from rich
action priors which are updated via observations of the world. We instantiate
these components in the "Sample, Simulate, Update" (SSUP) model and show that
it captures human performance across 30 levels of the Virtual Tools game. More
broadly, this model provides a mechanism for explaining how people condense
general physical knowledge into actionable, task-specific plans to achieve
flexible and efficient physical problem-solving.
Comment: This manuscript is in press at PNAS. It is an extended version of a paper, "Rapid Trial-and-Error Learning in Physical Problem Solving", accepted for oral presentation at the 41st Annual Meeting of the Cognitive Science Society (2019). It represents ongoing work on the part of the author
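The three components the abstract names (sample actions from a rich prior, simulate their effects, update the prior from observed outcomes) map naturally onto a short loop. The sketch below is a toy illustration only, not the authors' SSUP implementation: the "physics" is an invented one-dimensional stand-in (distance equals twice the launch force) and the prior-update rule is a crude recentering, both my own assumptions.

```python
import random

def simulate(action):
    # Toy stand-in for a noisy internal physics simulator: predicted landing
    # distance for a normalized launch force in [0, 1].
    return 2.0 * action + random.gauss(0.0, 0.03)

def act_in_world(action):
    # The (deterministic, toy) real world the agent acts in.
    return 2.0 * action

def ssup(target, mu=0.5, sigma=0.3, n_samples=50, max_attempts=20, tol=0.1):
    """Sample, Simulate, Update: a minimal sketch of the loop."""
    best = mu
    for attempt in range(1, max_attempts + 1):
        # 1. Sample candidate actions from the current Gaussian action prior.
        candidates = [min(1.0, max(0.0, random.gauss(mu, sigma)))
                      for _ in range(n_samples)]
        # 2. Simulate each candidate; keep the one predicted to land closest.
        best = min(candidates, key=lambda a: abs(simulate(a) - target))
        # 3. Act in the world and observe the real outcome.
        outcome = act_in_world(best)
        if abs(outcome - target) < tol:
            return best, attempt, True
        # 4. Update the prior from the observation: recenter toward the
        #    target (outcome = 2 * action, hence the factor 1/2) and narrow.
        mu = min(1.0, max(0.0, mu + (target - outcome) / 2.0))
        sigma = max(0.05, 0.8 * sigma)
    return best, max_attempts, False
```

In this toy version, success in "a handful of attempts" comes from exactly the two ingredients the abstract highlights: the prior concentrates the search, and each observed outcome refines it.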
Response Time and Puzzle Solving Skills in Gamers vs. Non-Gamers
Video gaming requires rapid response times, problem-solving skills, adaptive learning, and attention to detail by continuously engaging cognitive and physical reactions to visual cues. Gaming more than nine hours a week has been reported to positively affect individuals' reaction times and problem-solving skills. Given the advancement of video gaming technology, research on the effects gaming has on motor and cognitive skills is still emerging. PURPOSE: To compare response times and problem-solving skills between gamers and non-gamers. METHODS: Subjects (N=68) were required to complete a survey, the Tower of Hanoi puzzle, and a set of ten trials on a MOART board designed to measure response time. Subjects were grouped as gamers, 9+ hrs/wk (N=24); sometimes gamers, 1-8 hrs/wk (N=18); and non-gamers, 0 hrs/wk (N=26). On day one, participants completed a series of ten trials on the MOART board, which measured their reaction and movement times. On day two, they completed three trials on the Tower of Hanoi, which measured problem-solving skills. Their objective was to move the stack of blocks from peg one to peg three while following two rules: move only one block at a time, and never stack a bigger block on top of a smaller block. A one-way ANOVA (α=.05) was used to compare the aggregated mean scores on the Tower of Hanoi puzzle and the response times on the MOART board. RESULTS: There was no statistically significant difference between the groups in puzzle completion and error time on the Tower of Hanoi until the third trial, on which the difference between gamers and non-gamers was significant (p=0.016). Response time was statistically significant only when comparing gamers and non-gamers (p=0.007). CONCLUSION: There was no statistically significant difference between gamers and non-gamers in many of the trials. However, there was a notable trend in the percentage of subjects completing the trial.
By trial 3, 80% of gamers completed the tower compared to only 38% of non-gamers. Not only were gamers solving the puzzle faster than the sometimes gamers and non-gamers, but more gamers solved the puzzle than any other group. There was no significant difference between gamers and sometimes gamers (p=0.130) or between sometimes gamers and non-gamers (p=0.620); however, the difference between gamers and non-gamers was significant (p=0.014).
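For reference, the Tower of Hanoi task used in this study has a well-known recursive optimal solution requiring 2^n - 1 moves for n disks, which is the baseline against which completion and errors are scored. A minimal sketch (the abstract does not state how many blocks were used; three disks here is purely illustrative):

```python
def hanoi(n, source="peg1", target="peg3", spare="peg2"):
    """Return the minimal move sequence for n disks: 2**n - 1 moves."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then restack on top.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3)
print(len(moves))  # 7: the optimal baseline for a 3-disk tower
```

Both of the study's rules (one block at a time; never a bigger block on a smaller one) are satisfied by this sequence by construction.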
Design of the Artificial: lessons from the biological roots of general intelligence
Our desire and fascination with intelligent machines dates back to the
antiquity's mythical automaton Talos, Aristotle's mode of mechanical thought
(syllogism) and Heron of Alexandria's mechanical machines and automata.
However, the quest for Artificial General Intelligence (AGI) has been troubled
by repeated failures of strategies and approaches throughout history. This
decade has seen a shift in interest towards bio-inspired software and hardware,
with the assumption that such mimicry entails intelligence. Though these steps
are fruitful in certain directions and have advanced automation, their singular
design focus renders them highly inefficient in achieving AGI. What set of
requirements must be met in the design of AGI? What are the limits in the
design of the artificial? Here, a careful examination of computation in
biological systems hints that evolutionary tinkering of contextual processing
of information enabled by a hierarchical architecture is the key to building AGI.
Comment: Theoretical perspective on AGI (Artificial General Intelligence)
Analysis of Students' Metacognition Level in Solving Scientific Literacy on the Topic of Static Fluid
The purpose of this study is to describe students' metacognition levels in solving scientific literacy problems. The research uses a descriptive method. The subjects are 99 grade XI students at SMA Batik 2 Surakarta. Data were collected with tests whose instruments are based on indicators of scientific literacy and metacognitive ability, and were analyzed with quantitative descriptive techniques. The results show that achievement of scientific literacy in science as a body of knowledge, science as a way of thinking, science as a way of investigating, and science as an interaction between technology and society is still low, at below 35%. This is because 84% of the students occupy low metacognition levels (30% at the tacit use level and 54% at the aware use level), and only 16% of the students occupy a high metacognition level, the strategic use level.
Unpacking capabilities underlying design (thinking) process
Engineering graduates must know how to frame and solve non-routine problems. While design classes explicitly teach problem framing and solving, such instruction is lacking throughout much of the rest of the engineering curriculum and is often relegated to capstone classes at the end of students' educational experience. This paper explores problem framing and solving through the lens of experiential learning theory. It captures core problem framing and solving approaches from critical, design, and systems thinking, and concludes with a table of learning outcomes that might be drawn upon in designing an engineering curriculum that more fully develops the problem framing and solving capabilities of its students.
Reset-free Trial-and-Error Learning for Robot Damage Recovery
The high probability of hardware failures prevents many advanced robots
(e.g., legged robots) from being confidently deployed in real-world situations
(e.g., post-disaster rescue). Instead of attempting to diagnose the failures,
robots could adapt by trial-and-error in order to be able to complete their
tasks. In this situation, damage recovery can be seen as a Reinforcement
Learning (RL) problem. However, the best RL algorithms for robotics require the
robot and the environment to be reset to an initial state after each episode,
that is, the robot is not learning autonomously. In addition, most of the RL
methods for robotics do not scale well with complex robots (e.g., walking
robots) and either cannot be used at all or take too long to converge to a
solution (e.g., hours of learning). In this paper, we introduce a novel
learning algorithm called "Reset-free Trial-and-Error" (RTE) that (1) breaks
the complexity by pre-generating hundreds of possible behaviors with a dynamics
simulator of the intact robot, and (2) allows complex robots to quickly recover
from damage while completing their tasks and taking the environment into
account. We evaluate our algorithm on a simulated wheeled robot, a simulated
six-legged robot, and a real six-legged walking robot that are damaged in
several ways (e.g., a missing leg, a shortened leg, faulty motor, etc.) and
whose objective is to reach a sequence of targets in an arena. Our experiments
show that the robots can recover most of their locomotion abilities in an
environment with obstacles, and without any human intervention.
Comment: 18 pages, 16 figures, 3 tables, 6 pseudocodes/algorithms, video at
https://youtu.be/IqtyHFrb3BU, code at
https://github.com/resibots/chatzilygeroudis_2018_rt
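The two ingredients the abstract names, a repertoire of behaviors pre-generated with a simulator of the intact robot and trial-and-error correction of those predictions on the damaged robot without resets, can be caricatured in a few lines. This is a toy sketch under invented dynamics (a uniform "damage factor"), not RTE itself; the actual algorithm builds a behavior-performance map and refines it with Gaussian-process corrections and probabilistic planning.

```python
import math

def make_repertoire(n=100):
    # Offline: pre-generate candidate behaviors with a simulator of the
    # intact robot. Here each behavior is a heading angle paired with the
    # displacement the intact-robot simulator predicts for it.
    return [{"angle": 2 * math.pi * i / n, "predicted": 1.0, "correction": 0.0}
            for i in range(n)]

def execute(behavior, damage_factor=0.4):
    # Stand-in for the damaged robot: the actual displacement differs from
    # the intact-robot prediction by an unknown factor.
    return behavior["predicted"] * damage_factor

def step_toward(target_dist, repertoire, alpha=0.5):
    """One trial of a reset-free adaptation loop (toy sketch)."""
    # Select the behavior whose corrected prediction best matches the need.
    best = min(repertoire,
               key=lambda b: abs((b["predicted"] + b["correction"]) - target_dist))
    observed = execute(best)
    # Update this behavior's prediction from the observed outcome; the robot
    # keeps going from wherever it ended up, with no reset between trials.
    best["correction"] += alpha * (observed - (best["predicted"] + best["correction"]))
    return best, observed
```

After a handful of trials the corrected predictions for the behaviors actually tried converge to what the damaged robot really does, which is the sense in which trial-and-error substitutes for explicit damage diagnosis.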
Technology assessment of advanced automation for space missions
Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.
Inductive machine learning of optimal modular structures: Estimating solutions using support vector machines
Structural optimization is usually handled by iterative methods requiring repeated samples of a physics-based model, but this process can be computationally demanding. Given a set of previously optimized structures of the same topology, this paper uses inductive learning to replace this optimization process entirely by deriving a function that directly maps any given load to an optimal geometry. A support vector machine is trained to determine the optimal geometry of individual modules of a space frame structure given a specified load condition. Structures produced by learning are compared against those found by a standard gradient descent optimization, both as individual modules and then as a composite structure. The primary motivation for this is speed, and results show the process is highly efficient for cases in which similar optimizations must be performed repeatedly. The function learned by the algorithm can approximate the result of optimization very closely after sufficient training, and has also been found effective at generalizing the underlying optima to produce structures that perform better than those found by standard iterative methods.
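The core idea, replacing a repeated iterative optimization with a learned load-to-geometry map, can be sketched with any smooth regressor. The stand-in below uses stdlib-only kernel-weighted (Nadaraya-Watson) regression in place of the paper's support vector machine, and an invented square-root "optimizer" as toy ground truth; both substitutions are my own, for illustration.

```python
import math

def optimize_module(load):
    # Stand-in for the expensive physics-based optimization: in this toy,
    # the "optimal" module depth grows with the square root of the load.
    return math.sqrt(load)

# Offline: a training set of previously optimized structures (load, geometry).
train_loads = [float(l) for l in range(1, 21)]
train_depths = [optimize_module(l) for l in train_loads]

def predict_depth(load, bandwidth=1.0):
    """Map a load directly to a geometry, skipping the optimizer entirely.

    Kernel-weighted regression here plays the role the paper assigns to a
    support vector machine: both learn a smooth function from previously
    optimized (load, geometry) pairs.
    """
    weights = [math.exp(-((load - l) ** 2) / (2 * bandwidth ** 2))
               for l in train_loads]
    return sum(w * d for w, d in zip(weights, train_depths)) / sum(weights)
```

Once trained, each new load condition costs one function evaluation instead of a full gradient-descent run, which is the speed argument the abstract makes for repeated similar optimizations.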