
    Intuitive adaptive orientation control of assistive robots for people living with upper limb disabilities

    Robotic assistive devices enhance the autonomy of individuals living with physical disabilities in their day-to-day life. Although the first priority for such devices is safety, they must also be intuitive and efficient from an engineering point of view in order to be adopted by a broad range of users. This is especially true for assistive robotic arms, as they are used for the complex control tasks of daily living. One challenge in the control of such assistive robots is the management of the end-effector orientation, which is not always intuitive for the human operator, especially for neophytes. This paper presents a novel orientation control algorithm designed for robotic arms in the context of human-robot interaction. This work aims to make control of the robot's orientation easier and more intuitive for the user, in particular individuals living with upper limb disabilities. The performance and intuitiveness of the proposed orientation control algorithm are assessed through two experiments with 25 able-bodied subjects and shown to improve significantly on both aspects.
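    The abstract does not spell out the algorithm itself, so the sketch below only illustrates the underlying problem any such controller must solve: commanding a bounded, smooth rotation of the end-effector toward a target orientation on each control cycle. It is a generic geodesic-step scheme in Python using SciPy's rotation utilities, not the paper's method; the step size and example angles are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def reorient_step(current: Rotation, target: Rotation, max_step_rad: float) -> Rotation:
    """Move the end-effector orientation toward `target` by at most
    `max_step_rad` radians along the geodesic (shortest rotation path)."""
    relative = target * current.inv()              # rotation still to perform
    angle = np.linalg.norm(relative.as_rotvec())   # remaining angle in radians
    if angle <= max_step_rad:
        return target                              # close enough: snap to target
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([current, target]))
    return slerp([max_step_rad / angle])[0]

# Example: rotate from identity toward a 90-degree yaw in 0.1 rad increments.
cur, goal = Rotation.identity(), Rotation.from_euler("z", 90, degrees=True)
for _ in range(20):
    cur = reorient_step(cur, goal, max_step_rad=0.1)
print(cur.as_euler("zyx", degrees=True))
```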

    More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

    For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
    Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RA-L). Website: https://sites.google.com/view/more-than-a-feeling
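    The abstract describes the control loop explicitly: sample candidate grasp adjustments, score each with the learned outcome model, and execute the most promising one. Below is a minimal Python sketch of that greedy selection step; the `predict_success` interface and the action bounds are hypothetical stand-ins for the paper's network and action space.

```python
import numpy as np

def select_regrasp(predict_success, rgb, tactile, num_candidates=64, rng=None):
    """Sample candidate grasp adjustments, score each with a learned
    outcome model, and return the highest-scoring action.

    `predict_success(rgb, tactile, action) -> float in [0, 1]` stands in
    for an action-conditional visuo-tactile network. Actions here are
    (dx, dy, dtheta, dforce) perturbations of the current grasp.
    """
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(
        low=[-0.02, -0.02, -0.3, -1.0],    # meters, meters, radians, force units
        high=[0.02, 0.02, 0.3, 1.0],
        size=(num_candidates, 4),
    )
    scores = [predict_success(rgb, tactile, a) for a in candidates]
    return candidates[int(np.argmax(scores))]

# Dummy predictor for illustration: prefers small, gentle adjustments.
dummy = lambda rgb, tac, a: float(np.exp(-np.sum(a ** 2)))
print(select_regrasp(dummy, rgb=None, tactile=None))
```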

    Wall Climbing Robot

    This report discusses the preliminary research done and basic understanding of the chosen topic, Wall Climbing Robots. The objective of this project is to develop a robot manipulator capable of ascending a glass wall to perform cleaning duties. The operation of the wall climbing robot in this project is based on a pneumatic concept, that is, the suction of air, so that the robot can move along the vertical glass. The project requires a program using a Programmable Logic Controller (PLC) as the controller for its movement. The PLC ladder program executes a sequence automatically according to the predefined sequence of robot motion.
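    The report's actual ladder program is not reproduced in the abstract; the Python sketch below only illustrates the general idea of a PLC-style step sequencer driving a two-pad suction gait. The step names, pad layout, and timing are invented for the example.

```python
import time

# Hypothetical climbing gait for a two-pad suction robot.
SEQUENCE = [
    ("suction_on", "pad_b"),
    ("suction_off", "pad_a"),
    ("extend_cylinder", None),     # slide the body upward along the glass
    ("suction_on", "pad_a"),
    ("suction_off", "pad_b"),
    ("retract_cylinder", None),    # pull the trailing pad up
]

def run_cycle(io, dwell_s=0.5):
    """Step through one climbing cycle, mimicking how a PLC scan advances
    a predefined motion sequence. `io(command, target)` would drive the
    pneumatic valves; here it is an injected callable."""
    for command, target in SEQUENCE:
        io(command, target)
        time.sleep(dwell_s)        # stand-in for PLC timer rungs

# Example: print commands instead of actuating valves.
run_cycle(lambda cmd, tgt: print(cmd, tgt or ""))
```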

    Design of a Multi-Mode Hybrid Micro-Gripper for Surface Mount Technology Component Assembly

    In the last few decades, industrial sectors such as smart manufacturing and aerospace have developed rapidly, contributing to the increased production of more complex electronic boards based on SMT (Surface Mount Technology). The assembly phases in manufacturing these electronic products require technological solutions able to deal with many heterogeneous products and components. Small-batch production and pre-production are often executed manually or with semi-automated stations. The commercial automated machines currently available offer high performance, but they are highly rigid. Therefore, a great effort is needed to obtain machines and devices with improved reconfigurability and flexibility, minimizing set-up time and handling the high heterogeneity of components. These high-level objectives can be achieved in different ways. Indeed, a work station can be seen as a set of devices able to interact and cooperate to perform a specific task, so the reconfigurability of a work station can be achieved through reconfigurable and flexible devices and their hardware and software integration and control. For this reason, significant efforts should be focused on the conception and development of innovative devices that cope with the continuous downscaling and increasing variety of the products in this growing field. In this context, this paper presents the design and development of a multi-mode hybrid micro-gripper designed to manipulate and assemble a wide range of micro- and meso-scale SMT components with different dimensions and properties. It exploits two different handling technologies: vacuum and friction.
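    As a toy illustration of what "multi-mode" arbitration between the two handling technologies could look like, the Python sketch below chooses vacuum or friction handling from simple component attributes. The decision rule and thresholds are invented; the paper's actual selection criteria are not given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Component:
    width_mm: float
    flat_top: bool   # vacuum needs a sealable flat surface

def choose_mode(c: Component) -> str:
    """Illustrative mode arbitration for a hybrid gripper: vacuum for
    components with a flat, sealable top surface large enough to cover
    the nozzle; friction (mechanical) gripping otherwise."""
    if c.flat_top and c.width_mm >= 1.0:   # threshold is made up
        return "vacuum"
    return "friction"

print(choose_mode(Component(width_mm=5.0, flat_top=True)))   # vacuum
print(choose_mode(Component(width_mm=0.4, flat_top=False)))  # friction
```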

    Numerical fluid dynamics simulation for drones’ chemical detection

    The risk associated with chemical, biological, radiological, nuclear, and explosive (CBRNe) threats has grown over the last two decades as a result of easier access to hazardous materials and agents, potentially increasing the chance of dangerous events. Consequently, early detection of a threat following a CBRNe event is a mandatory requirement for the safety and security of the human operators involved in managing the emergency. Drones are nowadays one of the most advanced and versatile tools available, and they have been used successfully in many different application fields. Drones equipped with inexpensive and selective detectors could both improve the early detection of threats and help human operators avoid dangerous situations. To maximize a drone's capability of detecting dangerous volatile substances, fluid dynamics numerical simulations can be used to determine the optimal configuration of the detectors positioned on the drone. This study serves as a first step in investigating how the fluid dynamics of the drone propeller flow and the different sensor positions on board affect the conditioning and acquisition of data. This approach may lead to optimizing the position of the detectors on the drone based not only on the specific technology of the sensor but also on the type of chemical agent dispersed in the environment, eventually allowing the definition of a technological solution that enhances the detection process and ensures the safety and security of first responders.
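    A real study of this kind would sample a CFD solver's flow and concentration fields at candidate sensor locations; the Python sketch below only conveys the selection logic, using a toy Gaussian concentration field and a crude downwash penalty. The rotor layout, field parameters, and penalty factor are all assumptions.

```python
import numpy as np

def concentration(pos, source=np.zeros(3), sigma=1.0):
    """Toy Gaussian concentration field standing in for a CFD solution."""
    return np.exp(-np.sum((pos - source) ** 2) / (2 * sigma ** 2))

ROTORS_XY = [(0.2, 0.2), (-0.2, 0.2), (0.2, -0.2), (-0.2, -0.2)]

def downwash_penalty(pos, radius=0.12):
    """Penalize positions directly under a rotor disk, where propeller
    flow dilutes the sampled air (quadrotor layout is illustrative)."""
    under = any(np.hypot(pos[0] - x, pos[1] - y) < radius for x, y in ROTORS_XY)
    return 0.3 if under and pos[2] < 0 else 1.0

# Score a few candidate mounting points (x, y, z relative to drone center).
candidates = [np.array(p) for p in [(0, 0, 0.1), (0.2, 0.2, -0.1), (0, 0.35, 0.0)]]
scores = [concentration(p) * downwash_penalty(p) for p in candidates]
print("best sensor position:", candidates[int(np.argmax(scores))])
```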

    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They are programming robotic behavior based on how we humans "program" behavior in, or train, each other.
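    "Learning from demonstration" covers a family of techniques; as a minimal, generic illustration (not the authors' method), the sketch below records a demonstrated joint trajectory, fits a smooth model, and replays it on a new time base. Real systems such as dynamic movement primitives add attractor dynamics and generalization on top of this recording-and-replay idea.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic "demonstration": one joint angle sampled over normalized time.
t_demo = np.linspace(0.0, 1.0, 50)
q_demo = np.sin(2 * np.pi * t_demo)

# Fit a smooth trajectory model to the demonstration.
model = CubicSpline(t_demo, q_demo)

# Replay on a denser time base (e.g., at the controller's servo rate).
t_replay = np.linspace(0.0, 1.0, 200)
q_replay = model(t_replay)
print(q_replay[:5])
```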

    Building Embodied Conversational Agents:Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
    When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird produces to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, the airplane has made it possible for humans to cover long distances in a fast and smooth manner that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, yet nonetheless be very beneficial and impactful for human beings.

    This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the same underlying question: how can parts of human behavior be captured such that computers can use them to become more human-like? Each study differs in method, perspective and specific questions, but all are aimed at gaining insights and directions that help push forward the development of human-like computer behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.