823 research outputs found
Introduction to Psychology
Introduction to Psychology is a modified version of Psychology 2e - OpenStax
Once More, With Feeling: Partnering With Learners to Re-see the College Experience Through Metaphor and Sensory Language
This study focuses on better understanding students and their internal worlds through conceptual metaphor theory and sensory language. Using a phenomenological and arts-based approach, I examined students’ metaphorical constructions of their college experiences and the sensory language and information informing those constructions. By engaging participants in a multimodal process to re-see their experience through connoisseurship and criticism, I explored the following research questions: How do students metaphorically structure their college experience? What sensory language do college students use to describe the metaphorical dimensions of their college experience? How does sensory information shape the metaphorical structuring of their college experience? Through conversations centered on participant-generated images and chosen sensory language, I identified five complex metaphors that represented participants’ constructions of their college experience: college is an unwieldy package; college is up, forward, and out; college is current and future nostalgia; college is a prism; and college is a movie and peers are the soundtrack. By considering these themes, it may be possible for educators to better partner with diverse learners to design personally meaningful experiences that support student development and success. This dissertation is available in open access at AURA (https://aura.antioch.edu) and OhioLINK ETD Center (https://etd.ohiolink.edu)
Prioritized Planning for Target-Oriented Manipulation via Hierarchical Stacking Relationship Prediction
In scenarios involving the grasping of multiple targets, learning the stacking relationships between objects is fundamental for robots to execute tasks safely and efficiently. However, current methods do not subdivide stacking relationships into types, so in scenes where objects are mostly stacked in an orderly manner they cannot make human-like, highly efficient grasping decisions. This paper proposes a perception-planning method that distinguishes different stacking types between objects and generates prioritized manipulation-order decisions from given target designations. We utilize a Hierarchical Stacking Relationship Network (HSRN) to discriminate the hierarchy of stacking and generate a refined Stacking Relationship Tree (SRT) for relationship description. Considering that objects with high stacking stability can be grasped together if necessary, we introduce an elaborate decision-making planner based on the Partially Observable Markov Decision Process (POMDP), which leverages observations to generate a robust decision chain that consumes the fewest grasps and supports specifying multiple targets simultaneously. To verify our work, we set the scene to a dining table and augment the REGRAD dataset with a set of common tableware models for network training. Experiments show that our method effectively generates grasping decisions that conform to human requirements and improves execution efficiency over existing methods while maintaining the success rate.
Comment: 8 pages, 8 figures
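To make the prioritized manipulation order concrete, here is a minimal sketch of extracting a grasp sequence from a stacking tree, assuming a hypothetical edge convention in which a child object rests on its parent and must be removed first; it illustrates the ordering idea only, not the authors' HSRN or POMDP planner.

    from collections import defaultdict

    def grasp_order(on_top_of, targets):
        """Grasp sequence that clears everything stacked above each target."""
        children = defaultdict(list)
        for obj, support in on_top_of.items():
            if support is not None:
                children[support].append(obj)

        order = []
        def clear(obj):
            for above in children[obj]:   # remove occluding objects first
                clear(above)
                if above not in order:
                    order.append(above)

        for t in targets:
            clear(t)
            if t not in order:
                order.append(t)
        return order

    # Example: a cup rests on a plate, which rests on the table.
    print(grasp_order({"plate": "table", "cup": "plate", "table": None}, {"plate"}))
    # -> ['cup', 'plate']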
EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation
In this paper, we explore the dynamic grasping of moving objects through
active pose tracking and reinforcement learning for hand-eye coordination
systems. Most existing vision-based robotic grasping methods implicitly assume
target objects are stationary or moving predictably. Performing grasping of
unpredictably moving objects presents a unique set of challenges. For example,
a pre-computed robust grasp can become unreachable or unstable as the target
object moves, and motion planning must also be adaptive. In this work, we
present a new approach, Eye-on-hAnd Reinforcement Learner (EARL), for enabling
coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time
active pose tracking and dynamic grasping of novel objects without explicit
motion prediction. EARL readily addresses many thorny issues in automated
hand-eye coordination, including fast-tracking of 6D object pose from vision,
learning a control policy for a robotic arm to track a moving object while
keeping the object in the camera's field of view, and performing dynamic
grasping. We demonstrate the effectiveness of our approach in extensive
experiments validated on multiple commercial robotic arms in both simulations
and complex real-world tasks.
Comment: Presented at IROS 2023. Corresponding author Siddarth Jai
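As a loose illustration of the hand-eye coordination objective described above (keeping the target in view while approaching a grasp), the following toy reward combines an image-centering term with a reaching term; the weights, names, and normalization are assumptions for illustration, not EARL's actual training signal.

    import numpy as np

    def tracking_reward(obj_px, img_wh, ee_pos, grasp_pos,
                        w_center=1.0, w_reach=0.5):
        # Penalize the target drifting from the image center (normalized so
        # the image border costs 1 per axis), and distance to the grasp pose.
        center_err = np.linalg.norm((obj_px - img_wh / 2) / (img_wh / 2))
        reach_err = np.linalg.norm(ee_pos - grasp_pos)
        return -(w_center * center_err + w_reach * reach_err)

    print(tracking_reward(np.array([400., 300.]), np.array([640., 480.]),
                          np.array([0.3, 0.0, 0.4]), np.array([0.5, 0.1, 0.2])))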
The Artful Eye: Exploring Visual Engagement with Artworks in Different Contexts
Artworks are increasingly experienced in non-traditional platforms, from digital collections on museum websites to virtual gallery tours, making it important to investigate the context-dependent and context-independent aspects of aesthetic experience. While some studies have shown that artworks in the museum elicit a higher visual engagement than when presented on a screen, others reported divergent findings. This thesis suggests that such discrepancies may be due to the interaction between the artwork's physical and contextual characteristics and investigates how diverse aspects of viewing behaviour change between the museum, on-screen laboratory, and virtual gallery laboratory contexts. Fifteen paintings by different Australian artists from the Art Gallery of New South Wales (AGNSW) were included as stimuli for the studies in this thesis. Mobile and screen-based eye movement recordings were used to index visual engagement (number of fixations, total and average fixation duration) with artworks across the three different contexts.
Our first study (Chapter 2) compared the visual engagement of museum visitors in the AGNSW to that of participants looking at their digital reproductions in the laboratory. We focused on how aspects of viewing behaviour, including viewing distance in the gallery condition and eye gaze measures such as fixation count, total fixation duration and average fixation duration, are affected by the artworks’ physical characteristics, including size and image statistics properties such as Fourier amplitude spectrum, fractal dimension and entropy. The effects of these factors on visual engagement were then explored in a virtual gallery replica of the exhibition (Chapter 3). In a virtual gallery context, we also tested the impact of two additional context-dependent factors: the curatorial arrangement and further manipulations of the relative size of the paintings. Overall, the results show significant differences in viewing behaviour across different contexts, but also that the effects of presentation contexts are modulated by the artworks’ physical characteristics.
In the final two studies, the thesis explores the effect of mere exposure on viewing behaviour in different contexts (Chapter 4) and the spatial and temporal image statistics of fixated compared to non-fixated regions of artworks in both the museum and on-screen viewing contexts (Chapter 5). The results show that visual engagement in the museum, but not on-screen, is enhanced by previous exposure to digital reproductions of artworks. Finally, Chapter 5 demonstrates that fixated and randomly selected regions differed in both spatial and temporal image statistics, with more pronounced differences in the on-screen viewing condition.
In sum, the thesis demonstrates that a combination of context-dependent variables (e.g., navigation, curatorial setting and relative size) and the low-level properties (e.g., fractal dimension, amplitude spectrum, entropy) of artworks influence visual engagement.
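For reference, the three gaze measures used throughout the thesis are straightforward to compute from per-artwork fixation records; a minimal sketch, assuming fixation durations in milliseconds:

    def gaze_measures(fixation_durations_ms):
        n = len(fixation_durations_ms)
        total = sum(fixation_durations_ms)
        return {
            "fixation_count": n,
            "total_fixation_ms": total,
            "mean_fixation_ms": total / n if n else 0.0,
        }

    print(gaze_measures([180.0, 240.0, 300.0]))
    # {'fixation_count': 3, 'total_fixation_ms': 720.0, 'mean_fixation_ms': 240.0}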
CabiNet: Scaling Neural Collision Detection for Object Rearrangement with Procedural Scene Generation
We address the important problem of generalizing robotic rearrangement to
clutter without any explicit object models. We first generate over 650K
cluttered scenes - orders of magnitude more than prior work - in diverse
everyday environments, such as cabinets and shelves. We render synthetic
partial point clouds from this data and use it to train our CabiNet model
architecture. CabiNet is a collision model that accepts object and scene point
clouds, captured from a single-view depth observation, and predicts collisions
for SE(3) object poses in the scene. Our representation has a fast inference
speed of 7 microseconds per query with nearly 20% higher performance than
baseline approaches in challenging environments. We use this collision model in
conjunction with a Model Predictive Path Integral (MPPI) planner to generate
collision-free trajectories for picking and placing in clutter. CabiNet also
predicts waypoints, computed from the scene's signed distance field (SDF), that
allow the robot to navigate tight spaces during rearrangement. This improves
rearrangement performance by nearly 35% compared to baselines. We
systematically evaluate our approach, procedurally generate simulated
experiments, and demonstrate that our approach directly transfers to the real
world, despite training exclusively in simulation. Robot experiment demos in
completely unknown scenes and objects can be found at
https://cabinet-object-rearrangement.github.io
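CabiNet replaces geometric collision checking with a learned model; as a stand-in that shows the same kind of query interface (object points, scene points, and a candidate SE(3) pose in; a collision verdict out), here is a naive brute-force check. The names and the clearance threshold are assumptions for illustration.

    import numpy as np

    def in_collision(obj_pts, scene_pts, pose, clearance=0.01):
        """obj_pts (N,3), scene_pts (M,3), pose (4,4) homogeneous SE(3)."""
        obj_world = obj_pts @ pose[:3, :3].T + pose[:3, 3]
        # Minimum distance from any object point to the scene cloud.
        d = np.linalg.norm(obj_world[:, None, :] - scene_pts[None, :, :], axis=-1)
        return bool(d.min() < clearance)

    pose = np.eye(4); pose[:3, 3] = [0.0, 0.0, 0.2]   # lift the object 20 cm
    obj = np.random.rand(64, 3) * 0.05                # ~5 cm blob of points
    scene = np.random.rand(256, 3) * 0.05
    print(in_collision(obj, scene, pose))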
Human-in-the-Loop Task and Motion Planning for Imitation Learning
Imitation learning from human demonstrations can teach robots complex
manipulation skills, but is time-consuming and labor intensive. In contrast,
Task and Motion Planning (TAMP) systems are automated and excel at solving
long-horizon tasks, but they are difficult to apply to contact-rich tasks. In
this paper, we present Human-in-the-Loop Task and Motion Planning (HITL-TAMP),
a novel system that leverages the benefits of both approaches. The system
employs a TAMP-gated control mechanism, which selectively gives and takes
control to and from a human teleoperator. This enables the human teleoperator
to manage a fleet of robots, maximizing data collection efficiency. The
collected human data is then combined with an imitation learning framework to
train a TAMP-gated policy, leading to superior performance compared to training
on full task demonstrations. We compared HITL-TAMP to a conventional
teleoperation system -- users gathered more than 3x the number of demos given
the same time budget. Furthermore, proficient agents (75%+ success) could be
trained from just 10 minutes of non-expert teleoperation data. Finally, we
collected 2.1K demos with HITL-TAMP across 12 contact-rich, long-horizon tasks
and show that the system often produces near-perfect agents. Videos and
additional results at https://hitltamp.github.io
Comment: Conference on Robot Learning (CoRL) 2023
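The TAMP-gated mechanism is, at its core, a dispatch loop: autonomous execution for planned segments, human teleoperation for contact-rich ones. A schematic sketch with hypothetical segment flags and handlers:

    from dataclasses import dataclass

    @dataclass
    class Segment:
        name: str
        contact_rich: bool   # gate: True -> human teleoperates

    def run_episode(segments, tamp_execute, human_teleop):
        for seg in segments:
            (human_teleop if seg.contact_rich else tamp_execute)(seg)

    run_episode(
        [Segment("reach", False), Segment("insert peg", True), Segment("retract", False)],
        tamp_execute=lambda s: print(f"TAMP executes: {s.name}"),
        human_teleop=lambda s: print(f"Human demonstrates: {s.name}"),
    )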
In-Hand 3D Object Scanning from an RGB Sequence
We propose a method for in-hand 3D scanning of an unknown object with a
monocular camera. Our method relies on a neural implicit surface representation
that captures both the geometry and the appearance of the object; however, in
contrast with most NeRF-based methods, we do not assume that the camera-object
relative poses are known. Instead, we simultaneously optimize both the object
shape and the pose trajectory. As direct optimization over all shape and pose
parameters is prone to fail without coarse-level initialization, we propose an
incremental approach that starts by splitting the sequence into carefully
selected overlapping segments within which the optimization is likely to
succeed. We reconstruct the object shape and track its poses independently
within each segment, then merge all the segments before performing a global
optimization. We show that our method is able to reconstruct the shape and
color of both textured and challenging texture-less objects, that it
outperforms classical methods relying only on appearance features, and that
its performance is close to that of recent methods that assume known camera poses.
Comment: CVPR 2023
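The incremental strategy hinges on splitting the sequence into overlapping segments that are individually easy to optimize. A minimal splitter with fixed length and overlap (the paper selects segments more carefully):

    def split_overlapping(n_frames, seg_len=30, overlap=10):
        step = seg_len - overlap
        segments, start = [], 0
        while start < n_frames:
            segments.append(range(start, min(start + seg_len, n_frames)))
            if start + seg_len >= n_frames:
                break
            start += step
        return segments

    for seg in split_overlapping(75):
        print(seg.start, seg.stop)   # 0 30 / 20 50 / 40 70 / 60 75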
Demonstrating Large-Scale Package Manipulation via Learned Metrics of Pick Success
Automating warehouse operations can reduce logistics overhead costs,
ultimately driving down the final price for consumers, increasing the speed of
delivery, and enhancing the resiliency to workforce fluctuations. The past few
years have seen increased interest in automating such repeated tasks but mostly
in controlled settings. Tasks such as picking objects from unstructured,
cluttered piles have only recently become robust enough for large-scale
deployment with minimal human intervention.
This paper demonstrates large-scale package manipulation from unstructured
piles in Amazon Robotics' Robot Induction (Robin) fleet, which utilizes a pick
success predictor trained on real production data. Specifically, the system was
trained on over 394K picks. It is used for singulating up to 5 million packages
per day and has manipulated over 200 million packages during this paper's
evaluation period.
The developed learned pick quality measure ranks various pick alternatives in
real-time and prioritizes the most promising ones for execution. The pick
success predictor aims to estimate from prior experience the success
probability of a desired pick by the deployed industrial robotic arms in
cluttered scenes containing deformable and rigid objects with partially known
properties. It is a shallow machine learning model, which allows us to evaluate
which features are most important for the prediction. An online pick ranker
leverages the learned success predictor to prioritize the most promising picks
for the robotic arm, which are then assessed for collision avoidance. This
learned ranking process is demonstrated to overcome the limitations of, and
outperform, manually engineered and heuristic alternatives.
To the best of the authors' knowledge, this paper presents the first
large-scale deployment of learned pick quality estimation methods in a real
production system.
Comment: Robotics: Science and Systems (RSS 2023) conference, July 10-14, 2023, Daegu, Republic of Korea
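The ranking idea itself is simple to sketch: a shallow model scores candidate picks from hand-designed features, and the highest-scoring candidates proceed to collision checking. The features, data, and model choice below are illustrative assumptions, not the production system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Toy per-candidate features: [seal score, surface flatness, clutter density]
    X = rng.random((500, 3))
    y = (0.6*X[:, 0] + 0.3*X[:, 1] - 0.4*X[:, 2]
         + 0.1*rng.standard_normal(500) > 0.3).astype(int)

    model = LogisticRegression().fit(X, y)           # shallow, inspectable
    print("feature weights:", model.coef_.round(2))  # which features matter

    candidates = rng.random((8, 3))
    ranked = np.argsort(-model.predict_proba(candidates)[:, 1])
    print("pick order (most promising first):", ranked)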