The Immune System: the ultimate fractionated cyber-physical system
In this short vision paper we analyze the human immune system from a
computer science point of view, with the aim of understanding the architecture
and features that allow robust, effective behavior to emerge from local sensing
and actions. We then recall the notion of fractionated cyber-physical systems,
and compare and contrast it with the immune system. We conclude with some
challenges.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
Periscope: A Robotic Camera System to Support Remote Physical Collaboration
We investigate how robotic camera systems can offer new capabilities to
computer-supported cooperative work through the design, development, and
evaluation of a prototype system called Periscope. With Periscope, a local
worker completes manipulation tasks with guidance from a remote helper who
observes the workspace through a camera mounted on a semi-autonomous robotic
arm that is co-located with the worker. Our key insight is that the helper, the
worker, and the robot should all share responsibility of the camera view--an
approach we call shared camera control. Using this approach, we present a set
of modes that distribute the control of the camera between the human
collaborators and the autonomous robot depending on task needs. We demonstrate
the system's utility and the promise of shared camera control through a
preliminary study where 12 dyads collaboratively worked on assembly tasks.
Finally, we discuss design and research implications of our work for future
robotic camera systems that facilitate remote collaboration.
Comment: This is a pre-print of the article accepted for publication in PACM
HCI and will be presented at CSCW 202
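The "shared camera control" idea above, where authority over the camera view shifts among the helper, the worker, and the robot depending on task needs, can be sketched as a simple mode-based arbitration. All names and signatures here are hypothetical illustrations, not Periscope's actual API:

```python
from enum import Enum, auto

class CameraMode(Enum):
    # Illustrative mode set for shared camera control: each mode gives a
    # different party primary authority over where the camera looks.
    HELPER_DRIVEN = auto()   # remote helper steers the view directly
    WORKER_DRIVEN = auto()   # camera follows the local worker's activity
    AUTONOMOUS = auto()      # robot frames the task region on its own

def camera_target(mode, helper_cmd, worker_hands, task_region):
    # Arbitrate the view target for the active mode, falling back to the
    # robot's autonomous framing when the preferred input is unavailable.
    if mode is CameraMode.HELPER_DRIVEN and helper_cmd is not None:
        return helper_cmd
    if mode is CameraMode.WORKER_DRIVEN and worker_hands is not None:
        return worker_hands
    return task_region
```

The fallback to `task_region` reflects one plausible design choice: when the human-directed input drops out, the robot keeps the workspace framed rather than freezing the view.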
A framework of teleoperated and stereo vision guided mobile manipulation for industrial automation
Smart and flexible manufacturing requires the adoption of industrial mobile manipulators on the factory floor. The goal of autonomous mobile manipulation is the execution of complex manipulation tasks in unstructured and dynamic environments, so it is important that a mobile manipulator can detect and grasp objects quickly and accurately. In this research, we developed a stereo vision system that provides high-quality point cloud data of the object. A modified and improved iterative closest point (ICP) algorithm is applied to recognize the targeted object while largely avoiding local minima in template matching. Moreover, a stereo-vision-guided teleoperation control algorithm using virtual fixture technology is adopted to enhance the robot's teaching ability. Combining these two functions, the mobile manipulator is able to learn semi-autonomously and work autonomously. The key components and the overall system performance are tested and validated in both simulation and experiments.
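The abstract does not detail its ICP modification, but the baseline point-to-point ICP it builds on can be sketched in a few lines of NumPy: alternate nearest-neighbour matching with a closed-form SVD (Kabsch) alignment until the mean residual stops improving. This is a minimal sketch for illustration only; real point clouds would use a KD-tree instead of brute-force matching:

```python
import numpy as np

def best_fit_transform(src, dst):
    # Closed-form least-squares rigid transform (Kabsch/SVD) mapping src onto dst,
    # assuming src[i] corresponds to dst[i].
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    # Classic point-to-point ICP: match each source point to its nearest
    # target point, solve for the rigid transform, repeat until converged.
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbours (O(N*M); fine for small clouds).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        err = d[np.arange(len(cur)), idx].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
    # Recover the single composed transform from the original source.
    return best_fit_transform(src, cur)
```

Vanilla ICP like this is prone to the local minima the abstract mentions when the initial pose is far off, which is precisely what template-matching modifications try to mitigate.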
Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display
Pin-based shape displays not only give physical form to digital information, they have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display's ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output
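The paper's kinematic blocks translate vertical pin travel into other degrees of freedom. The underlying geometry can be sketched with two standard conversions; the paper does not publish these formulas, so this is an assumed idealization of such mechanisms (rack-and-pinion rotation and two-pin tilt):

```python
import math

def pinion_angle(pin_travel, pinion_radius):
    # Rack-and-pinion conversion: a pin acting as a rack that rises by
    # pin_travel turns a gear of radius pinion_radius by travel/radius radians.
    return pin_travel / pinion_radius

def differential_tilt(h_left, h_right, pin_spacing):
    # Two pins under a rigid plate: a height difference across spacing d
    # tilts the plate by atan((h_right - h_left) / d) radians.
    return math.atan2(h_right - h_left, pin_spacing)
```

For example, 10 mm of pin travel on a 5 mm pinion yields 2 rad of rotation, which is how small vertical strokes can drive visible rotary motion in a block.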
SocialAI: Benchmarking Socio-Cognitive Abilities in Deep Reinforcement Learning Agents
Building embodied autonomous agents capable of participating in social
interactions with humans is one of the main challenges in AI. Within the Deep
Reinforcement Learning (DRL) field, this objective motivated multiple works on
embodied language use. However, current approaches focus on language as a
communication tool in very simplified and non-diverse social situations: the
"naturalness" of language is reduced to the concept of high vocabulary size and
variability. In this paper, we argue that aiming towards human-level AI
requires a broader set of key social skills: 1) language use in complex and
variable social contexts; 2) beyond language, complex embodied communication in
multimodal settings within constantly evolving social worlds. We explain how
concepts from cognitive sciences could help AI to draw a roadmap towards
human-like intelligence, with a focus on its social dimensions. As a first
step, we propose to expand current research to a broader set of core social
skills. To do this, we present SocialAI, a benchmark to assess the acquisition
of social skills of DRL agents using multiple grid-world environments featuring
other (scripted) social agents. We then study the limits of a recent SOTA DRL
approach when tested on SocialAI and discuss important next steps towards
proficient social agents. Videos and code are available at
https://sites.google.com/view/socialai
Comment: Under review. This paper extends and generalizes work in
arXiv:2104.1320
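The benchmark's grid worlds feature scripted social agents alongside the learner. A minimal sketch of what such a scripted peer could look like is below; the class, behavior, and action names are hypothetical illustrations, not SocialAI's actual interface:

```python
def _sign(x):
    # Sign of an integer offset: -1, 0, or 1.
    return (x > 0) - (x < 0)

class ScriptedPeer:
    # Hypothetical scripted social agent in the spirit of SocialAI's
    # grid worlds: it points toward a goal cell when the learning agent
    # is within view, and otherwise idles. The learner must notice and
    # interpret this social signal to find the goal efficiently.
    def __init__(self, goal, view_radius=3):
        self.goal = goal
        self.view_radius = view_radius

    def act(self, own_pos, learner_pos):
        # Manhattan distance on the grid decides whether the learner is seen.
        dist = abs(own_pos[0] - learner_pos[0]) + abs(own_pos[1] - learner_pos[1])
        if dist <= self.view_radius:
            dx = self.goal[0] - own_pos[0]
            dy = self.goal[1] - own_pos[1]
            return ("point", (_sign(dx), _sign(dy)))
        return ("idle", None)
```

Scripting the peer keeps its behavior fixed and interpretable, so any change in task performance can be attributed to the DRL agent's acquired social skills rather than co-adaptation.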