
    A recurrent emotional CMAC neural network controller for vision-based mobile robots

    Vision-based mobile robots often suffer from highly nonlinear dynamics and strict positioning requirements, which creates a demand for more powerful nonlinear approximation in the control and monitoring of mobile robots. This paper proposes a recurrent emotional cerebellar model articulation controller (RECMAC) neural network to meet this demand. In particular, the proposed network integrates a recurrent loop and an emotional learning mechanism into a cerebellar model articulation controller (CMAC), which is implemented as the main component of the controller module of a vision-based mobile robot. Briefly, the controller module consists of a sliding surface, the RECMAC, and a compensator controller. Incorporating the recurrent structure into a sliding-mode neural network controller retains the previous states of the robot and thereby improves its dynamic mapping ability. The convergence of the proposed system is guaranteed by Lyapunov stability analysis. The proposed system was validated and evaluated in both simulation and a practical moving-target tracking task. The experiments demonstrated that the proposed system outperforms other popular neural network-based control systems, and it is thus superior at approximating the highly nonlinear dynamics involved in controlling vision-based mobile robots.
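    The control architecture summarized in this abstract (a sliding surface feeding a CMAC-type approximator plus a compensator) can be illustrated with a minimal sketch. This is not the paper's RECMAC: the tiling sizes, gains, and the adaptation rule below are illustrative assumptions, and the recurrent and emotional components are omitted.

```python
import numpy as np

class CMAC:
    """A minimal 1-D CMAC: several shifted tilings whose active cells are summed."""

    def __init__(self, n_tiles=32, n_layers=4, lo=-2.0, hi=2.0, lr=0.1):
        self.w = np.zeros((n_layers, n_tiles))  # one weight row per tiling layer
        self.n_tiles, self.n_layers = n_tiles, n_layers
        self.lo, self.hi, self.lr = lo, hi, lr

    def _cells(self, x):
        # Map the input to one active cell per layer; layers are offset tilings.
        x = np.clip(x, self.lo, self.hi)
        span = self.hi - self.lo
        return [
            min(int((x - self.lo + k * span / (self.n_tiles * self.n_layers))
                    / span * self.n_tiles), self.n_tiles - 1)
            for k in range(self.n_layers)
        ]

    def predict(self, x):
        return sum(self.w[k, c] for k, c in enumerate(self._cells(x)))

    def update(self, x, err):
        # Distribute the correction evenly over the active cells.
        for k, c in enumerate(self._cells(x)):
            self.w[k, c] += self.lr * err / self.n_layers

def control(e, e_dot, net, lam=2.0, k_comp=0.5):
    """Sliding-surface control: CMAC approximation plus a switching compensator."""
    s = e_dot + lam * e                       # sliding surface s = e' + lambda*e
    u = net.predict(s) + k_comp * np.sign(s)  # network term + compensator term
    net.update(s, s)                          # crude surface-driven adaptation (assumption)
    return u
```

    The compensator term bounds the residual approximation error of the network, which is what makes a Lyapunov-style convergence argument possible in controllers of this family.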

    Hum Factors Ergon Manuf

    This paper reviews the experiences of 63 case studies of small businesses (< 250 employees) with manufacturing automation equipment acquired through a health/safety intervention grant program. The review scope included equipment technologies classified as industrial robots (n = 17), computer numerical control (CNC) machining (n = 29), or other programmable automation systems (n = 17). Descriptions of workers' compensation (WC) claim injuries and the identified risk factors that motivated acquisition of the equipment were extracted from grant applications. Other aspects of the employer experiences, including qualitative and quantitative assessment of effects on risk factors for musculoskeletal disorders (MSD), effects on productivity, and employee acceptance of the intervention, were summarized from the case study reports. Case studies associated with a combination of large reductions in risk factors, lower cost per affected employee, and reported increases in productivity were: a CNC stone cutting system, a CNC/vertical machining system, an automated bottling system, a CNC/routing system for plastics products manufacturing, and a CNC/cutting system for vinyl/carpet. Six case studies of industrial robots reported quantitative reductions in MSD risk factors in these diverse manufacturing industries: Snack Foods; Photographic Film, Paper, Plate, and Chemical; Machine Shops; Leather Goods and Allied Products; Plastic Products; and Iron and Steel Forging. This review of health/safety intervention case studies indicates that advanced (programmable) manufacturing automation, including industrial robots, reduced workplace musculoskeletal risk factors and improved process productivity in most cases.

    Gestures in human-robot interaction

    Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer a meaning. For people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary specifies which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, i.e., the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. Building on the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained with a small number of training samples and deployed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
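    The combination of dynamic time warping with one-shot learning described above can be sketched in a few lines: classification reduces to finding the stored template with the smallest warped distance. This is an illustrative reconstruction, not the thesis implementation; the 1-D feature sequences, the absolute-difference cost, and the template gestures are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, deletion (the three DTW moves).
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# One-shot learning: a single recorded template per gesture class (hypothetical data).
templates = {"wave": [0, 1, 0, 1, 0], "point": [0, 1, 2, 3, 4]}

def classify(sample):
    """Nearest-template classification under the DTW distance."""
    return min(templates, key=lambda g: dtw_distance(sample, templates[g]))
```

    Because DTW aligns sequences of different lengths and speeds, a single template per class can absorb much of the timing variation between performances, which is what makes the one-shot setting workable.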

    Short-term human–robot interaction adaptability in real-world environments

    “This is a post-peer-review, pre-copyedit version of an article published in the International Journal of Social Robotics. The final authenticated version is available online at: http://dx.doi.org/10.1007/s12369-019-00606-y" In recent years there has been increasing interest in deploying robotic systems in public environments that can effectively interact with people. To work properly in the wild, such systems should be robust and able to deal with complex and unpredictable events that seldom happen in controlled laboratory conditions. Moreover, having to deal with untrained users adds further complexity to the problem and makes the task of defining effective interactions especially difficult. In this work, a Cognitive System that relies on planning is extended with adaptive capabilities and embedded in a Tiago robot. The result is a system able to help a person complete a predefined game by offering various degrees of assistance. The robot may decide to change the level of assistance depending on factors such as the state of the game or the user's performance at a given time. We conducted two days of experiments during a public fair, in which randomly selected users each interacted with the robot only once. We show that, despite the short-term nature of the human-robot interactions, the robot can effectively adapt its way of providing help, leading to better user performance compared to a robot not providing this degree of flexibility.

    Prospectus, March 30, 2011

    APRIL FOOLS' ARTICLES A NO-GO THIS YEAR, Parkland Offers New Hip Course, New Rules in Effect on Campus, Chuck Shepherd's News of the Weird, Album Review: Justin Bieber's My World 2.0, Parkland President to Resign, Parkland Announces New Dress Code Policy, Fine Dining to Arrive at Parkland, Cyber-Clowning for College Credit!, End of the Line for Free Wireless, Chuck Shepherd's News of the Weird, First Sims Medieval, Now Sims Halo?, New Mascot in the Works for Parkland

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. In terms of a social robot, Probo is classified as a social interface supporting non-verbal communication. Probo's social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, simulating all motions of the robot and providing visual feedback to the operator. Additionally, the model allows us to advance user-testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. The input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis, and object identification. These stimuli influence the attention and homeostatic systems, which define the robot's point of attention, current emotional state, and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator.
    All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is then smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.

    Write a Book IQP

    2050: The settlement on Mars has been cut off from Earth for nearly 5 years. In spite of their efforts to conserve what little food, water, and oxygen they still have, they are running out of time... The Desperates back on Earth have mastered Darwinian survival, while the STEM-Heads have pursued a more discreet evasion of Death since the Collapse of 2045. Yet all of them dream of escaping from their overheated, overpopulated Hell called Home. As the mission to clean up after First Mars leads a small STEM-Head band towards Kennedy Space Center, rumors of a distant paradise reach Desperate leaders, and, all of a sudden, all eyes are back on Mars...