
    Mechatronics versus Robotics

    In Bolton's definition, mechatronics is the integration of electronics, control engineering, and mechanical engineering, which recognizes the fundamental role of control in joining electronics and mechanics. A robot is commonly considered a typical mechatronic system, integrating software, control, electronics, and mechanical design in a synergistic manner. Robotics can be considered a part of mechatronics: all robots are mechatronic systems, but not all mechatronic systems are robots. Advanced robots usually plan their actions by combining an assigned functional task with knowledge about the environment in which they operate. In a simplified view, advanced robots can be defined as mechatronic devices governed by a smart brain placed at a higher hierarchical level. Actuators are building blocks of any mechatronic system. Such systems, however, have a huge application span, ranging from low-cost consumer products to high-end, high-precision industrial manufacturing equipment.

    Hybrid motion planning approach for robot dexterous hands

    This paper presents a manipulation planning approach for robot hands that enables the generation of finger trajectories. The planner is based on a hybrid approach that combines discrete and continuous kinematics within a fully discrete transition system. One of the main contributions of this work is the representation of the universe of different submodel combinations as states in a discrete transition system. The geometry of the manipulated object is taken into account, and the system composed of the object and the hand is modeled as a set of closed kinematic chains. The methodology enables the synthesis of complex manipulation trajectories in which one or more fingers change their contact condition with the object. Contact condition changes include rolling contact, sliding contact, contact loss, and contact establishment. Tests were carried out on a three-finger manipulation task, both in computer simulation and on an experimental setup.
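    As a rough, non-authoritative sketch of the planner's central idea, the Python fragment below enumerates combinations of per-finger contact submodels as states of a discrete transition system, with transitions that change one finger's contact condition at a time; the three contact modes and all names are illustrative assumptions, not taken from the paper.

    # Illustrative discrete transition system over per-finger contact
    # submodels; modes and names are assumptions, not from the paper.
    from itertools import product

    CONTACT_MODES = ("rolling", "sliding", "no_contact")

    def enumerate_states(num_fingers):
        # Each discrete state is one combination of per-finger submodels.
        return list(product(CONTACT_MODES, repeat=num_fingers))

    def transitions(state):
        # One finger changes its contact condition per transition, covering
        # contact loss/establishment and rolling/sliding switches.
        for i, mode in enumerate(state):
            for new_mode in CONTACT_MODES:
                if new_mode != mode:
                    yield state[:i] + (new_mode,) + state[i + 1:]

    states = enumerate_states(3)   # a three-finger task, as in the paper
    print(len(states))             # 27 combined submodel states
    print(next(transitions(states[0])))

    A continuous finger-trajectory segment would then be planned within each discrete state, with the transition system sequencing the segments.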

    Programming Robots for Activities of Everyday Life

    Text-based programming remains a challenge to novice programmers in all programming domains, including robotics. The use of robots is gaining considerable traction in several domains, since robots are capable of assisting humans in repetitive and hazardous tasks. In the near future, robots will be used in tasks of everyday life in homes, hotels, airports, museums, etc. However, robotic missions have so far been either predefined or programmed using low-level APIs, making mission specification task-specific and error-prone. To harness the full potential of robots, it must be possible to define missions for specific application domains as needed. The specification of missions for robotic applications should be performed in easy-to-use, accessible ways while at the same time being accurate and unambiguous. Simplicity and flexibility in programming such robots are important, since end-users come from diverse domains and do not necessarily have sufficient programming knowledge.

    The main objective of this licentiate thesis is to empirically understand the state of the art in languages and tools used by end-users for specifying robot missions. The findings will form the basis for interventions in developing future languages for end-user robot programming.

    During the empirical study, DSLs for robot mission specification were analyzed through published literature, their websites, user manuals, and sample missions, and by using the languages to specify missions for supported robots. After extracting data from 30 environments, 133 features were identified. A feature matrix mapping the features to the environments was developed, together with a feature model for robotic mission specification DSLs.

    Our results show that most end-user-facing environments exist in the education domain for teaching novice programmers and STEM subjects. Most of the visual languages are developed using the Blockly and Scratch libraries. The end-user domain abstraction needs more work, since most of the visual environments abstract robotic and programming-language concepts but not end-user concepts. In future work, it is important to focus on the development of reusable libraries for end-user concepts, and further to explore how end-user-facing environments can be adapted for novice programmers to learn general programming skills and robot programming in low-resource settings in developing countries such as Uganda.
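    To make the feature-matrix idea concrete, here is a minimal Python sketch mapping environments to supported features; the environment names and features are invented placeholders (the actual study covered 30 environments and 133 features).

    # Hypothetical feature matrix: environments (rows) vs. features (columns).
    environments = {
        "EnvA": {"visual_blocks", "simulation", "event_handling"},
        "EnvB": {"visual_blocks", "text_fallback"},
        "EnvC": {"simulation", "event_handling", "multi_robot"},
    }
    features = sorted(set().union(*environments.values()))

    # Boolean matrix: does environment e support feature f?
    matrix = {e: [f in fs for f in features] for e, fs in environments.items()}

    for e, row in matrix.items():
        print(e, [features[i] for i, v in enumerate(row) if v])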

    Conceptual Design Evaluation of Mechatronic Systems

    The conceptual design phase has been defined in many different phrasings, but all of them lead to the same conclusion: it is of the highest importance in the design process, because many crucial decisions concerning the progress of the design must be taken with little to no information and knowledge about the design object. This implies very high uncertainty about the effects these decisions will have later on. During the conceptual design of a mechatronic system, the system to be designed is modeled, and several solutions (alternatives) to the design problem are generated and evaluated so that the one that best fits the design specifications and requirements is chosen. The purpose of this chapter is to present some of the most widely used methods of system modeling, mainly through hierarchical representations of subsystems, and also to present a method for the generation and evaluation of design alternatives.
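    As one concrete, deliberately simple instance of evaluating design alternatives, the sketch below scores each alternative as a weighted sum over criteria; the criteria, weights, and scores are invented for illustration and are not the chapter's method.

    # Weighted-sum evaluation of conceptual design alternatives (illustrative).
    weights = {"cost": 0.3, "reliability": 0.4, "performance": 0.3}

    alternatives = {
        "concept_A": {"cost": 7, "reliability": 8, "performance": 6},
        "concept_B": {"cost": 9, "reliability": 6, "performance": 7},
    }

    def score(alt):
        # Aggregate criterion scores with the (assumed) importance weights.
        return sum(weights[c] * s for c, s in alt.items())

    ranking = sorted(alternatives, key=lambda n: score(alternatives[n]), reverse=True)
    print(ranking[0], round(score(alternatives[ranking[0]]), 2))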

    Enhanced online programming for industrial robots

    The use of robots and automation in the industrial sector is expected to grow, driven by the ongoing need for lower costs and enhanced productivity. The manufacturing industry continues to seek ways of realizing enhanced production, and the programming of articulated production robots has been identified as a major area for improvement. However, realizing this increase in automation requires capable programming and control technologies. Many industries employ offline programming, which operates within a manually controlled and specific work environment. This is especially true in the high-volume automotive industry, particularly in high-speed assembly and component handling. For small-batch manufacturing and small to medium-sized enterprises, online programming continues to play an important role, but the complexity of programming remains a major obstacle to automation using industrial robots. Scenarios that rely on manual data input based on real-world obstructions require entire production systems to cease for significant periods while data is manipulated, leading to financial losses. Simulation tools generate discrete portions of the total robot trajectories while requiring manual input to link paths associated with different activities. Human input is also required to correct inaccuracies and errors resulting from unknowns and falsehoods in the environment.

    This study developed a new supported online robot programming approach, implemented as a robot control program. By applying online and offline programming together with appropriate manual robot control techniques, disadvantages such as manual pre-processing times and production downtimes have been either reduced or completely eliminated. The industrial requirements were evaluated with respect to modern manufacturing practice. A cell-based Voronoi generation algorithm within a probabilistic world model has been introduced, together with a trajectory planner and an appropriate human-machine interface; automated workspace analysis techniques and trajectory smoothing are used to accomplish this. The resulting robot programs are comparable to manually programmed ones, and results for a Mitsubishi RV-2AJ five-axis industrial robot are presented. The new robot control program treats the working production environment as a single, complete workspace. Non-productive time is still required but, unlike in previously reported approaches, it is incurred automatically and in a timely manner; the actual cell-learning time is minimal.
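    To illustrate what a cell-based Voronoi extraction over a probabilistic world model might look like, here is a small Python/NumPy sketch; the grid size, occupancy threshold, and tolerance are assumptions, and the study's actual algorithm is not reproduced here.

    # Approximate grid Voronoi: free cells roughly equidistant from the two
    # obstacles form the maximally clear corridor between them.
    import numpy as np

    grid = np.zeros((20, 20))            # P(cell occupied)
    grid[4, 4] = grid[15, 12] = 0.9      # two confident point obstacles

    obstacles = np.argwhere(grid > 0.5)  # occupancy threshold is an assumption

    def obstacle_distances(cell):
        return np.linalg.norm(obstacles - cell, axis=1)

    voronoi = [
        (i, j)
        for i in range(20) for j in range(20)
        if grid[i, j] <= 0.5
        and np.ptp(obstacle_distances(np.array([i, j]))) < 0.7
    ]
    print(len(voronoi), "corridor cells")

    A trajectory planner can then bias paths toward such corridor cells to keep clearance from uncertain obstacles.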

    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot, and a KUKA iiwa robot to verify the effectiveness of the proposed design.

    During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) in each arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via forward and inverse kinematics. In addition, to improve tracking performance, a Kalman filter is employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using a vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator's arm movements; the armbands help to recognize and calculate the precise velocity of the operator's arm motion. Additionally, a neural-network-based adaptive controller is designed and implemented on the Baxter robot to validate the teleoperation.

    Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR. Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used to preprocess the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory that minimizes position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively.

    Finally, an optimized DMP is added to the teaching interface. A character-recombination technique based on DMP segmentation, driven by verbal commands, has also been developed and incorporated on the Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it through the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. DMP is chosen for modelling and overall movement control, and GMM is used to generate multiple patterns after the teaching process. Next, we employ the GMR algorithm to reduce position errors in 3D space once a synthesized trajectory has been generated. The Baxter robot, remotely controlled over the User Datagram Protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by having the Baxter robot perform a writing task of drawing characters, even though the robot has been taught to write only one character.
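    For readers unfamiliar with DMPs, the following minimal Python sketch integrates a single one-dimensional discrete DMP; the gains are textbook-style placeholders and the forcing term is a zero stub, whereas in the work above it is learned from demonstrations via GMM/GMR.

    # Minimal 1-D discrete Dynamic Movement Primitive (illustrative values).
    alpha, beta, alpha_s, tau = 25.0, 25.0 / 4.0, 3.0, 1.0
    x, v, s, g, dt = 0.0, 0.0, 1.0, 1.0, 0.001

    def forcing(s):
        # Zero stub; the learned term would shape the path toward the goal.
        return 0.0

    for _ in range(2000):                    # 2 s of simulated motion
        dv = (alpha * (beta * (g - x) - v) + forcing(s)) / tau
        v += dv * dt
        x += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt       # canonical phase decays 1 -> 0

    print(round(x, 3))                       # converges to the goal g = 1.0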

    Manipulation Framework for Compliant Humanoid COMAN: Application to a Valve Turning Task

    With the purpose of achieving a desired interaction performance for our compliant humanoid robot (COMAN), in this paper we propose a semi-autonomous control framework and evaluate it experimentally in a valve-turning setup. The control structure consists of various modules and interfaces to identify the valve, position the robot in front of it, and perform the manipulation. The manipulation module implements four motion primitives (Reach, Grasp, Rotate, and Disengage) and realizes the corresponding desired impedance profile for each phase of the task. To establish a stable and compliant contact between the valve and the robot hands, while generating sufficient rotational torque to overcome the valve's friction, Rotate incorporates a novel dual-arm impedance control technique to plan and realize a task-appropriate impedance profile. Results of the proposed control framework are first evaluated in simulation studies using Gazebo. Subsequent experimental results highlight the efficiency of the proposed impedance planning and control in generating the interaction forces required to accomplish the task.
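    As a hedged illustration of phase-dependent impedance, the sketch below applies a Cartesian law F = K (x_d - x) + D (xd_d - xd) with a stiffness chosen per motion primitive; the numeric values and the critical-damping rule are assumptions, not COMAN's actual gains.

    # Phase-dependent Cartesian impedance sketch (values are illustrative).
    import numpy as np

    STIFFNESS = {"Reach": 800.0, "Grasp": 300.0, "Rotate": 1200.0, "Disengage": 500.0}

    def impedance_force(phase, x_des, x, xd_des, xd):
        # F = K (x_des - x) + D (xd_des - xd), applied per axis.
        k = STIFFNESS[phase]
        d = 2.0 * np.sqrt(k)   # unit-mass critical damping (assumption)
        return k * (x_des - x) + d * (xd_des - xd)

    # E.g. stiff tracking while rotating the valve against friction:
    print(impedance_force("Rotate",
                          np.array([0.40, 0.0, 0.30]), np.array([0.39, 0.0, 0.31]),
                          np.zeros(3), np.zeros(3)))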

    Computer Aided Drafting Virtual Reality Interface

    Computer Aided Drafting (CAD) is pervasive in engineering fields today. It has become indispensable for planning, creating, visualizing, troubleshooting, collaborating on, and communicating designs before they exist in physical form. From the beginning, CAD was designed to be used by means of a mouse, keyboard, and monitor. Along the way, other, more specialized interface devices were created specifically for CAD to allow easier and more intuitive navigation within 3D space, but they were at best stopgap solutions. Virtual Reality (VR) allows users to navigate and interact with digital 3D objects and environments the same way they would in the real world, which makes VR a natural CAD interface. With VR as an interface for CAD software, creating becomes more intuitive and visualizing second nature. For this project, a prototype VR CAD program was created using Unreal Engine for the HTC Vive and compared against traditional WIMP (windows, icons, menus, pointer) CAD programs on the time it takes to learn each program, the time to create similar models, and user impressions of each program, specifically the intuitiveness of the user interface and of model manipulation. FreeCAD, SolidWorks, and Blender were the three traditional-interface modeling programs chosen for comparison because of their widespread use in 3D printing, industry, and gaming, respectively. During the course of the project, two VR modeling programs were released, Google Blocks and MakeVR Pro; because they are of a similar type to the prototype software created in Unreal Engine, they were included in the comparison. The comparison showed that the VR CAD programs were faster to learn, faster for creating models, and more intuitive to use than the traditional-interface CAD programs.