Mechanical design of small-size humanoid robot TWNHR-3
In this paper, a mechanical structure with 26 DOFs (degrees of freedom) is proposed so that an implemented small-size humanoid robot named TWNHR-3 is able to accomplish human-like walking motion. The height and weight of the implemented robot are 46 cm and 3.1 kg, respectively. There are 2 DOFs on the head, 2 DOFs on the trunk, 4 DOFs on each arm, and 7 DOFs on each leg. Some basic walking experiments of TWNHR-3 are presented to illustrate that the proposed mechanical structure lets the robot move forward, turn, and slip effectively. (Conference: 2007-11-05 to 2007-11-08, Taipei, Taiwan.)
The emergence of care robotics - A patent and publication analysis
Care robots are a means to support elderly people affected by physical or mental handicaps in remaining as autonomous as possible or regaining already lost autonomy (e.g. climbing stairs). They also support caregivers working with handicapped people. We review the emergence of care robotics and, in particular, offer answers to two research questions: Which organizations and individuals in which countries have been and are active in research and development? How has research and development emerged with regard to activity focus, intensity levels and cooperation?
The analysis rests on PATSTAT patent and ISI Web of Science publication data. Bibliographic and network analyses are conducted on country, organization (i.e. universities and firms) and individual levels. We find that care robotics research and development activities have increased constantly since the late 1970s. Today Japanese universities and firms are the most active players, while in the early stages US and European organizations pioneered care robotics research. Starting from six disjoint small networks, several highly interconnected care robotics research networks have evolved. However, most cooperation clusters are still found within the same country; only a few international hubs have emerged. Among them are two Japanese organizations (ATR, AIST) and Carnegie Mellon University, US. This is the accepted manuscript; the final version is available from Elsevier at http://www.sciencedirect.com/science/article/pii/S004016251400275
Autonomous behaviour in tangible user interfaces as a design factor
PhD Thesis. This thesis critically explores the design space of autonomous and actuated artefacts, considering how autonomous behaviours in interactive technologies might shape and influence users' interactions and behaviours.
Since the invention of gearing and clockwork, mechanical devices have been built that fascinate and intrigue people through their mechanical actuation. There seems to be something magical about moving devices, which draws our attention and piques our interest. Progress in the development of computational hardware is allowing increasingly complex commercial products to reach broad consumer markets. New technologies emerge very fast, ranging from personal devices with strong computational power to diverse user interfaces, like multi-touch surfaces or gestural input devices. Electronic systems are becoming smaller and smarter as they combine sensing, control and actuation. From this, new opportunities arise for integrating more sensors and technology into physical objects.
These trends raise some specific questions around the impacts smarter systems might have
on people and interaction: how do people perceive smart systems that are tangible and what
implications does this perception have for user interface design? Which design opportunities are
opened up through smart systems? Humans have a tendency to attribute life-like qualities to inanimate objects, which evokes social behaviour towards technology. It might be possible to build user interfaces that utilise such behaviours to motivate people towards frequent use, or even to motivate them to build relationships in which the users care for their devices. The aim of such interfaces is not to increase efficiency, but to be more engaging to interact with and to excite people to bond with these tangible objects.
This thesis sets out to explore autonomous behaviours in physical interfaces. More specifically, I
am interested in the factors that make a user interpret an interface as autonomous. Through a
review of literature concerned with animated objects, autonomous technology and robots, I have
mapped out a design space exploring the factors that are important in developing autonomous
interfaces. Building on this and utilising workshops conducted with other researchers, I have
developed a framework that identifies key elements for the design of Tangible Autonomous
Interfaces (TAIs). To validate the dimensions of this framework and to further unpack the
impacts on users of interacting with autonomous interfaces, I have adopted a 'research through design' approach. I have iteratively designed and realised a series of autonomous, interactive
prototypes, which demonstrate the potential of such interfaces to establish themselves as social
entities. Through two deeper case studies, consisting of an actuated helium balloon and desktop
lamp, I provide insights into how autonomy could be implemented into Tangible User Interfaces.
My studies revealed that through their autonomous behaviour (guided by the framework) these
devices established themselves, in interaction, as social entities. They furthermore turned out to
be acceptable, especially if people were able to find a purpose for them in their lives. This thesis closes with a discussion of the findings and provides specific implications for the design of autonomous behaviour in interfaces.
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
A Posture Sequence Learning System for an Anthropomorphic Robotic Hand
The paper presents a cognitive architecture for posture learning in an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interactions with the human operator.
Humanoid Robots
For many years, humans have been trying in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not totally satisfactory. However, with increasing technological advances based on theoretical and experimental research, humankind has managed, to some extent, to copy or imitate some systems of the human body. This research not only intends to create humanoid robots, a great part of them constituting autonomous systems, but also to offer deeper knowledge of the systems that form the human body, aiming at possible applications in rehabilitation technology for human beings, and bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, among other areas. This book presents a series of research works inspired by this ideal, carried out by various researchers worldwide, seeking to analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision and locomotion.
Universal Event and Motion Editor for Robots' Theatre
Most work on the motion of mobile robots aims to generate plans for avoiding obstacles or performing meaningful, useful actions. In modern robot theatres and entertainment robots, the motions of the robot are scripted, so the performance or behavior of the robot is always the same. In this work we propose a new approach to robot motion generation: we want our robot to behave more like real people. People do not move in a mechanical way like robots. When a human executes some motion repeatedly, the instances are similar to one another but always slightly, or not so slightly, different. We want to reproduce this property based on our new concept of a probabilistic regular expression, a method for describing sets of interrelated, similar actions instead of single actions. Our goal is not only to create motions for humanoid robots that look more natural and less mechanical, but also to program robots that combine basic movements from a certain library in many different, partially random ways. While the basic motions are created ahead of time, their combinations are specified in our new language. Although our method currently handles only motions and does not take inputs from sensors into account, in the future the language can be extended to input/output sequences, so that the robot will be able to adapt its motion in different ways to sets of sequences of input stimuli. The inputs will come from sensors, possibly attached to the limbs of controlling humans from whom the patterns of motion will be acquired.
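The abstract sketches the idea of a probabilistic regular expression without giving its formal syntax. As a rough illustration only, assuming a simple expression tree with sequencing, weighted choice, and probabilistic repetition (all class and motion names here are hypothetical, not taken from the paper), repeated sampling yields similar but non-identical motion sequences:

```python
import random

# Hypothetical sketch of a probabilistic regular expression over a
# library of basic motions; not the paper's actual formalism.

class Motion:
    """Leaf: a single named motion from the basic-motion library."""
    def __init__(self, name):
        self.name = name
    def sample(self, rng):
        return [self.name]

class Seq:
    """Concatenation: sample each part in order."""
    def __init__(self, *parts):
        self.parts = parts
    def sample(self, rng):
        return [m for p in self.parts for m in p.sample(rng)]

class Choice:
    """Weighted alternative: pick one branch according to its probability."""
    def __init__(self, *weighted):  # weighted = (prob, expr) pairs
        self.weighted = weighted
    def sample(self, rng):
        probs, exprs = zip(*self.weighted)
        return rng.choices(exprs, weights=probs, k=1)[0].sample(rng)

class Star:
    """Probabilistic repetition: after each round, repeat with probability p."""
    def __init__(self, expr, p):
        self.expr, self.p = expr, p
    def sample(self, rng):
        out = []
        while rng.random() < self.p:
            out.extend(self.expr.sample(rng))
        return out

# A greeting gesture: wave, then nod once or twice, then possibly bow.
gesture = Seq(
    Motion("wave"),
    Choice((0.7, Motion("nod")), (0.3, Seq(Motion("nod"), Motion("nod")))),
    Star(Motion("bow"), p=0.4),
)

rng = random.Random()
print(gesture.sample(rng))  # a concrete, randomly varied motion sequence
```

Each call to `sample` expands the expression into one concrete performance, which matches the abstract's goal of motions that are "similar to one another but always slightly different".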
Towards gestural understanding for intelligent robots
Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012. A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make their life easier and more enjoyable. Nowadays smartphones are probably the most typical instances of such systems. Another class of systems that is receiving increasing attention is intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots.
This book deals with vision-based approaches for gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps (hand detection, hand tracking, and trajectory-based gesture recognition), a separate chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can understand such gestures only by incorporating context, e.g., which object was pointed at or manipulated.
Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for achieving gesture understanding and is addressed explicitly in a separate chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches to incorporating context for gestural understanding are reviewed. Example approaches for both context types provide a deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized quality of human-robot interaction.
The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. Promising research targeted at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last chapter, completing this book, as this research direction may be highly influential for creating future gesture understanding systems.
- …