8 research outputs found
One-Shot Learning for Semantic Segmentation
Low-shot learning methods for image classification support learning from
sparse data. We extend these techniques to support dense semantic image
segmentation. Specifically, we train a network that, given a small set of
annotated images, produces parameters for a Fully Convolutional Network (FCN).
We use this FCN to perform dense pixel-level prediction on a test image for the
new semantic class. Our architecture shows a 25% relative meanIoU improvement
compared to the best baseline methods for one-shot segmentation on unseen
classes in the PASCAL VOC 2012 dataset and is at least 3 times faster.
Comment: To appear in the proceedings of the British Machine Vision Conference
(BMVC) 2017. The code is available at https://github.com/lzzcd001/OSLS
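A minimal NumPy sketch of the idea in this abstract, under stated assumptions: the learned parameter-producing network is replaced by a simple masked-average-pooling stand-in, and all function and variable names here are hypothetical. The support image's features and annotation yield the weights of a linear pixel classifier, which is then applied densely (as a 1x1 convolution) to the query features.

```python
import numpy as np

def one_shot_segment(support_feat, support_mask, query_feat, bias=0.0):
    """Hypothetical sketch: derive 1x1-classifier parameters from one
    annotated support image, then densely classify a query feature map.

    support_feat: (C, H, W) features of the annotated support image
    support_mask: (H, W) binary mask of the new class
    query_feat:   (C, H, W) features of the test image
    """
    # "Parameter generation": masked average pooling gives a weight
    # vector w for a linear pixel classifier -- a simplified stand-in
    # for the parameter-producing network described in the abstract.
    w = (support_feat * support_mask).sum(axis=(1, 2)) / support_mask.sum()
    # Dense pixel-level prediction: apply w as a 1x1 convolution.
    scores = np.tensordot(w, query_feat, axes=([0], [0])) + bias
    return scores > 0  # predicted binary mask for the new class
```

This captures only the two-branch structure (conditioning branch produces parameters, segmentation branch consumes them), not the actual architecture or training procedure of the paper.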
Probabilistic Human Action Prediction and Wait-sensitive Planning for Responsive Human-robot Collaboration
© 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
A novel representation for the human component of multi-step,
human-robot collaborative activity is presented. The goal of the system is to
predict in a probabilistic manner when the human will perform different
subtasks that may require robot assistance. The representation is a graphical
model where the start and end of each subtask is explicitly represented as a
probabilistic variable conditioned upon prior intervals. This formulation
allows the inclusion of uncertain perceptual detections as evidence to drive
the predictions. Next, given a cost function that describes the penalty for
different wait times, we develop a planning algorithm which selects
robot-actions that minimize the expected cost based upon the distribution
over predicted human-action timings. We demonstrate the approach in assembly
tasks where the robot must provide the right part at the right time depending
upon the choices made by the human operator during the assembly.
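The planning step described above can be sketched in a few lines, assuming a discrete distribution over when the human will need a part and an asymmetric cost that penalizes human waiting more than robot waiting; the names and the specific cost shape below are illustrative, not the paper's.

```python
def best_delivery_time(p_need, candidate_times, cost):
    """Hypothetical sketch of wait-sensitive planning: pick the robot
    delivery time minimizing expected wait cost under a predicted
    distribution over when the human will need the part.

    p_need:          p_need[t] = P(human needs the part at step t)
    candidate_times: robot delivery times to evaluate
    cost:            cost(lateness), lateness > 0 means the human waits,
                     lateness < 0 means the part arrived early
    """
    def expected_cost(t_robot):
        # Expectation of the wait cost over the predicted need time.
        return sum(p * cost(t_robot - t_need)
                   for t_need, p in enumerate(p_need))
    return min(candidate_times, key=expected_cost)
```

For example, with human waiting costed 5x robot waiting, the planner delivers slightly early when the need time is uncertain, which is the qualitative behavior the abstract describes.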
Modeling structured activity to support human-robot collaboration in the presence of task and sensor uncertainty
© 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Presented at the IROS Workshop on Cognitive Robotics Systems, held in conjunction with the IEEE/RSJ International Conference on Intelligent Robots and Systems, November 3-8, 2013, at Tokyo Big Sight, Japan.
A representation for structured activities is developed that allows a robot to probabilistically infer which task actions a human is currently performing and to predict which future actions will be executed and when they will occur. The goal is to enable a robot to anticipate collaborative actions in the presence of uncertain sensing and task ambiguity. The system can represent multi-path tasks where the task variations may contain partially ordered actions or even optional actions that may be skipped altogether. The task is represented by an AND-OR tree structure from which a probabilistic graphical model is constructed. Inference methods for that model are derived that support a planning and execution system for the robot that attempts to minimize a cost function based upon
expected human idle time. We demonstrate the theory in both simulation and actual human-robot performance of a two-way-branch assembly task. In particular, we show that the inference model can robustly anticipate the actions of the human even in the presence of unreliable or noisy detections because of its integration of all its sensing information along with knowledge of task structure.
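A toy illustration of the AND-OR task representation mentioned above, under stated assumptions: "and" nodes execute their children in order, "or" nodes choose exactly one child, and optional leaves may be skipped. This is a simplification (the paper also handles partial ordering and attaches a probabilistic graphical model), and all names are hypothetical.

```python
def sequences(node):
    """Enumerate the action sequences admitted by a toy AND-OR task tree.

    Node forms: ("leaf", name, optional), ("or", children),
    ("and", children).
    """
    kind = node[0]
    if kind == "leaf":
        _, name, optional = node
        # An optional action may also be skipped (empty sequence).
        return [[name]] + ([[]] if optional else [])
    if kind == "or":
        # Exactly one alternative branch is taken.
        return [s for child in node[1] for s in sequences(child)]
    # "and": concatenate one sequence choice per child, in order.
    seqs = [[]]
    for child in node[1]:
        seqs = [s + t for s in seqs for t in sequences(child)]
    return seqs

# A two-way-branch assembly task like the one in the abstract:
# attach a base, then one of two branches, then optionally inspect.
task = ("and", [
    ("leaf", "base", False),
    ("or", [("leaf", "branchA", False), ("leaf", "branchB", False)]),
    ("leaf", "inspect", True),
])
```

Enumerating `sequences(task)` yields the four valid executions; in the paper's setting this tree is instead compiled into a graphical model so the robot can infer which path the human is on.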
Anticipating Human Actions for Collaboration in the Presence of Task and Sensor Uncertainty
© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Presented at the 2014 IEEE International Conference on Robotics and Automation (ICRA), 31 May-7 June 2014, Hong Kong, China. DOI: 10.1109/ICRA.2014.6907165
A representation for structured activities is developed that allows a robot to probabilistically infer which task actions a human is currently performing and to predict which future actions will be executed and when they will occur. The goal is to enable a robot to anticipate collaborative actions in the presence of uncertain sensing and task ambiguity. The system can represent multi-path tasks where the task variations may contain partially ordered actions or even optional actions
that may be skipped altogether. The task is represented by an
AND-OR tree structure from which a probabilistic graphical model is constructed. Inference methods for that model are derived that support a planning and execution system for the robot which attempts to minimize a cost function based upon expected human idle time. We demonstrate the theory in both simulation and actual human-robot performance of a two-way-branch
assembly task. In particular we show that the inference model can robustly anticipate the actions of the human even
in the presence of unreliable or noisy detections because of its
integration of all its sensing information along with knowledge of task structure.
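The robustness-to-noise claim above rests on fusing unreliable detections with a prior over the task. A minimal sketch of that fusion step, assuming a simple Bayes update over which action the human is currently performing (the paper's inference runs over a richer graphical model; all names here are hypothetical):

```python
def update_belief(belief, likelihood, detection):
    """Hypothetical sketch: fuse one unreliable detection with a prior
    belief over the human's current action via Bayes' rule.

    belief:     {action: prior probability}
    likelihood: {action: {detection: P(detection | action)}}
    """
    # Unnormalized posterior; unseen detections get a small floor so a
    # single spurious observation cannot zero out an action.
    posterior = {a: p * likelihood[a].get(detection, 1e-6)
                 for a, p in belief.items()}
    z = sum(posterior.values())
    return {a: p / z for a, p in posterior.items()}
```

Because the prior encodes task structure (which actions are even possible next), a noisy detection shifts the belief rather than dictating it, which is the behavior the abstract reports.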
Collaborative Planning for Mixed-Autonomy Lane Merging