
    A Novel Energy-Efficient Hexapod Robot Design using a Rotary Encoder-Embedded Weight-Bearing Wheel

    The direct proportionality between the joint actuator rated torque of conventional hexapod robots and the payload mass makes them unsuitable for applications that require energy efficiency. In this paper, we propose a novel hexapod robot design that incorporates a rotary encoder-embedded weight-bearing wheel to relax the stringent limitations on the choice of the robot's joint actuator torques and battery capacity. The results of the prototype implementation showed that our design inherits from wheeled robots the merits of easy linear distance measurement via the embedded rotary encoder, low actuator torque, and high payload capability.
    Keywords: Hexapod robots; Joint actuator; Wheeled robots; Weight-bearing wheel; Rotary encoder; Actuator torque
    DOI: 10.7176/CTI/8-0
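The "easy linear distance measurement via the embedded rotary encoder" claimed in the abstract reduces to converting accumulated encoder ticks into rolled distance. A minimal sketch, where the encoder resolution and wheel radius are illustrative values, not taken from the paper:

```python
import math

def encoder_distance(ticks, ticks_per_rev, wheel_radius_m):
    """Linear distance rolled by the weight-bearing wheel,
    inferred from accumulated rotary-encoder ticks."""
    revolutions = ticks / ticks_per_rev
    return revolutions * 2.0 * math.pi * wheel_radius_m

# Half a revolution of a 5 cm wheel on a 1024-tick-per-rev encoder:
d = encoder_distance(512, 1024, 0.05)  # ~0.157 m
```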

    Visual Guided Approach-to-Grasp for Humanoid Robots

    Vision-based control for robots has been an active area of research for more than 30 years, and significant progress in both theory and application has been reported (Hutchinson et al., 1996; Kragic & Christensen, 2002; Chaumette & Hutchinson, 2006). Vision is a very important non-contact measurement method for robots, especially in the field of humanoid…

    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we propose a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognising human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly.
    For example, if individuals are engaged in conversation, the robot should realise it is best not to disturb them or, if an individual is receptive to the robot's interaction, it may approach that person. Finally, if the user is moving in the environment, it can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
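The fusion of multiple visual cues in a Bayesian framework can be illustrated with a naive-Bayes combination of independent cue observations (face orientation, hand gesture, proximity). The cue names and likelihood values below are hypothetical, chosen for illustration rather than taken from the thesis:

```python
def fuse_cues(prior, likelihoods):
    """Naive-Bayes fusion: posterior probability that a person is
    addressing the robot, given independent binary cue observations.
    likelihoods: list of (P(cue | interacting), P(cue | not interacting))."""
    p_yes, p_no = prior, 1.0 - prior
    for l_yes, l_no in likelihoods:
        p_yes *= l_yes
        p_no *= l_no
    return p_yes / (p_yes + p_no)

# Face toward robot, raised hand, close range (illustrative likelihoods):
posterior = fuse_cues(0.2, [(0.9, 0.3), (0.8, 0.2), (0.7, 0.5)])
```

Contextual feedback, as described above, would correspond to updating the prior and likelihood tables as the scene evolves.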

    Design, Development, and Evaluation of a Teleoperated Master-Slave Surgical System for Breast Biopsy under Continuous MRI Guidance

    The goal of this project is to design and develop a teleoperated master-slave surgical system that can assist the physician in performing breast biopsy with a magnetic resonance imaging (MRI) compatible robotic system. MRI provides superior soft-tissue contrast compared to other imaging modalities such as computed tomography or ultrasound and is used for both diagnostic and therapeutic procedures. The strong magnetic field and the limited space inside the MRI bore, however, restrict direct means of breast biopsy while performing real-time imaging. Therefore, current breast biopsy procedures employ a blind targeting approach based on magnetic resonance (MR) images obtained a priori. Due to possible involuntary patient motion or inaccurate insertion through the registration grid, such an approach can lead to tool-tip positioning errors, affecting diagnostic accuracy and leading to a long and painful process if repeated procedures are required. Hence, it is desirable to develop the aforementioned teleoperation system to take advantage of real-time MR imaging and avoid multiple biopsy needle insertions, improving procedure accuracy as well as reducing sampling errors. The design, implementation, and evaluation of the teleoperation system are presented in this dissertation. An MRI-compatible slave robot is implemented, which consists of a 1 degree of freedom (DOF) needle driver, a 3-DOF parallel mechanism, and a 2-DOF X-Y stage. This slave robot is actuated with pneumatic cylinders through long transmission lines, except for the 1-DOF needle driver, which is actuated with a piezo motor. Pneumatic actuation through long transmission lines is then investigated using proportional pressure valves, and controllers based on sliding mode control are presented. A dedicated master robot is also developed, and the kinematic map between the master and the slave robot is established.
    The two robots are integrated into a teleoperation system, and a graphical user interface is developed to provide visual feedback to the physician. MRI experiments show that the slave robot is MRI-compatible, and ex vivo tests show over an 85% success rate in targeting with the MRI-compatible robotic system. The success of in vivo animal experiments further confirms the potential of developing the proposed robotic system for clinical applications.
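The sliding-mode controllers mentioned for the pneumatic transmission lines follow a standard structure: drive a sliding variable s = ė + λe to zero with a switching term, smoothed by a boundary layer to limit chattering. A generic sketch of such a control law; the gains λ, K, and Φ are illustrative, not the dissertation's tuned values:

```python
import math

def smc(e, e_dot, lam=5.0, K=2.0, phi=0.05):
    """Sliding-mode control law with a tanh boundary layer:
    s = e_dot + lam * e,  u = -K * tanh(s / phi).
    e is the tracking error, e_dot its time derivative; tanh replaces
    the discontinuous sign(s) term to reduce chattering."""
    s = e_dot + lam * e
    return -K * math.tanh(s / phi)
```

In the pneumatic setting, u would command the proportional pressure valves; the long transmission lines add unmodeled dynamics that the switching term is meant to dominate.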

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them an ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
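The probabilistic cost function described above sums squared Mahalanobis norms of visual reprojection residuals and inertial residuals over the keyframe window. A toy evaluation of such a cost; the residual vectors and information matrices below are made up for illustration and do not reproduce the paper's exact formulation:

```python
import numpy as np

def vio_cost(reproj, reproj_info, inertial, inertial_info):
    """J = sum_i r_i^T W_i r_i  (landmark reprojection terms)
         + sum_j s_j^T V_j s_j  (inertial terms),
    where W_i, V_j are information (inverse-covariance) matrices."""
    J = 0.0
    for r, W in zip(reproj, reproj_info):
        J += float(r @ W @ r)
    for s, V in zip(inertial, inertial_info):
        J += float(s @ V @ s)
    return J

# One 2-D reprojection residual and one 1-D inertial residual:
J = vio_cost([np.array([1.0, 0.0])], [np.eye(2)],
             [np.array([0.5])], [np.array([[4.0]])])  # 1.0 + 1.0 = 2.0
```

A nonlinear solver would minimize J over poses, velocities, biases, and landmarks inside the bounded keyframe window, with marginalization folding older states into a prior.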

    Robot Egomotion from the Deformation of Active Contours

    Traditional sources of information for image-based computer vision algorithms have been points, lines, corners and, more recently, SIFT features (Lowe, 2004), which at present seem to represent the state of the art in feature definition. Alternatively, the present work explores the possibility of using tracked contours as informative features, especially in applications no…

    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    In recent years, numerous applications of science and engineering for the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems based on fractional calculus have been realized. The extra fractional order derivative terms make it possible to optimize the performance of these systems. The review presented in this paper focuses on the control problems of UAVs and UGVs that have been addressed by fractional order techniques over the last decade.
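The fractional order derivative terms at the heart of such controllers are commonly discretized with the Grünwald–Letnikov definition. A minimal sketch of that approximation (the truncation to the available sample history is the usual "short-memory" simplification, not a detail from this survey):

```python
def gl_derivative(x, alpha, h):
    """Gruenwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal x (oldest sample first), with sample step h:
        D^alpha x(t) ~= h^(-alpha) * sum_k w_k * x(t - k*h),
    where w_k = (-1)^k * C(alpha, k), built recursively."""
    acc, w = 0.0, 1.0
    for k in range(len(x)):
        acc += w * x[len(x) - 1 - k]
        w *= (k - alpha) / (k + 1)  # w_{k+1} from w_k
    return acc / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, 0, ...) and the formula reduces to the ordinary backward difference; a fractional-order PI^λD^μ controller applies such a term to the tracking error.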

    A path planning and path-following control framework for a general 2-trailer with a car-like tractor

    Maneuvering a general 2-trailer with a car-like tractor in backward motion is a task that requires significant skill to master and is unarguably one of the most complicated tasks a truck driver has to perform. This paper presents a path planning and path-following control solution that can be used to automatically plan and execute difficult parking and obstacle avoidance maneuvers by combining backward and forward motion. A lattice-based path planning framework is developed to generate kinematically feasible and collision-free paths, and a path-following controller is designed to stabilize the lateral and angular path-following error states during path execution. To estimate the vehicle states needed for control, a nonlinear observer is developed that utilizes only information from sensors mounted on the car-like tractor, making the system independent of additional trailer sensors. The proposed path planning and path-following control framework is implemented on a full-scale test vehicle, and results from simulations and real-world experiments are presented.
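The kinematic feasibility that the lattice planner must respect comes from the trailer kinematics. A simplified sketch of one integration step, reduced to a car-like tractor pulling a single on-axle trailer (the general 2-trailer adds an off-axle dolly, omitted here); the wheelbase and trailer length are illustrative, not the test vehicle's:

```python
import math

def step(state, v, delta, L=4.6, d=8.0, dt=0.05):
    """One explicit-Euler step of a car-like tractor with one on-axle
    trailer. state = (x, y, theta_tractor, theta_trailer); v is the
    longitudinal speed (v < 0 means reversing), delta the steering angle,
    L the tractor wheelbase, d the trailer length."""
    x, y, th0, th1 = state
    return (x + dt * v * math.cos(th0),
            y + dt * v * math.sin(th0),
            th0 + dt * v * math.tan(delta) / L,     # tractor heading rate
            th1 + dt * v * math.sin(th0 - th1) / d)  # trailer chases hitch
```

In backward motion (v < 0) the trailer-angle dynamics become unstable, which is why a stabilizing path-following controller and a state observer are needed on top of the planner.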