Design and development of the sEMG-based exoskeleton strength enhancer for the legs
This paper reviews different exoskeleton designs and presents a working prototype of a surface electromyography (sEMG) controlled exoskeleton to enhance the strength of the lower leg. The Computer Aided Design (CAD) model of the exoskeleton is designed, 3D printed with respect to the golden ratio of human anthropometry, and tested structurally. The exoskeleton control system is designed on the National Instruments LabVIEW platform and embedded in myRIO. Surface EMG (sEMG) sensors and flex sensors are used coherently to create different state filters for the EMG, human body posture and control of the mechanical exoskeleton actuation. The myRIO is used to process the sEMG signals and send control signals to the exoskeleton. Thus, the complete exoskeleton system uses sEMG as the primary sensor and the flex sensor as a secondary sensor, while the whole control system is designed in LabVIEW. Finite element analysis (FEA) simulations and tests show that the exoskeleton is suitable for an average human weight of 62 kg plus excess force with different reactive spring forces. However, due to the mechanical properties of the exoskeleton actuator, an additional lift is required to provide the rapid reactive impulse force needed for biomechanical movements such as squatting up. Finally, with the increasing availability of such assistive devices on the market, important ethical, social and legal issues have also emerged; these are also discussed in this paper
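The two-sensor control scheme described in this abstract can be illustrated with a minimal sketch. The paper's implementation lives in LabVIEW on myRIO; the Python below, including the envelope window, thresholds and knee angle, is a hypothetical stand-in showing only the general idea of gating actuation on an sEMG envelope plus a flex-sensor posture reading:

```python
import numpy as np

def emg_envelope(emg, window=50):
    """Full-wave rectify the sEMG signal and smooth it with a
    moving-average window to obtain an amplitude envelope."""
    rectified = np.abs(emg - np.mean(emg))  # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def actuation_state(envelope_value, flex_angle,
                    emg_threshold=0.2, squat_angle=60.0):
    """Combine the primary (sEMG) and secondary (flex) sensor readings
    into a simple actuation state. Threshold values are illustrative,
    not taken from the paper."""
    if envelope_value > emg_threshold and flex_angle > squat_angle:
        return "ASSIST"  # muscle active while the knee is bent: drive actuator
    return "IDLE"
```

In a real system the envelope would be computed over a streaming buffer and the thresholds calibrated per user; the sketch only shows how the two sensor channels can be fused into a discrete actuation state.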
What makes a social robot good at interacting with humans?
This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans, and proposes potential answers to several important questions around the future design of these robots. The specific questions explored in this paper are: “Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?”; “Do social robots need to have animated faces for humans to interact well with them?”; “Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?” and “Do social robots need to have the capability to make physical gestures for humans to interact well with them?”. This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and also briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans, as well as ethical/moral concerns, have also been discussed
ROS based autonomous control of a humanoid robot
This paper presents an artificial neural network-based control architecture allowing autonomous mobile robot indoor navigation by emulating the cognition process of a human brain when navigating in an unknown environment. The proposed architecture is based on a simultaneous top-down and bottom-up approach, which combines the a priori knowledge of the environment gathered from a previously examined floor plan with the visual information acquired in real time. Thus, in order to take the right decision during navigation, the robot is able to process both sets of information, compare them in real time and react accordingly. The architecture is composed of two modules: a) A deliberative module, corresponding to the processing chain in charge of extracting a sequence of navigation signs expected to be found in the environment, generating an optimal path plan to reach the goal, and computing and memorizing the sequence of signs [1]. The path planning stage allowing the computation of the sign sequence is based on a neural implementation of the resistive grid. b) A reactive module, integrating the said sequence information in order to use it to control online navigation and learn sensory-motor associations. It follows a perception-action mechanism that constantly evolves because of the dynamic interaction between the robot and its environment. It is composed of three layers: one layer using a cognitive mechanism and the other two using a reflex mechanism. Experimental results obtained from the physical implementation of the architecture in an indoor environment show the feasibility of this approach
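The resistive-grid path planner mentioned in the deliberative module can be sketched under simplifying assumptions. The paper describes a neural implementation; the sketch below substitutes a plain Jacobi relaxation of a potential field over a small occupancy grid, which is the standard resistive-grid analogy (the grid size, iteration count and goal position are illustrative):

```python
import numpy as np

def resistive_grid_potential(grid, goal, iterations=500):
    """Relax a potential field over a free-space grid (1 = free, 0 = obstacle),
    analogous to voltages settling in a resistive grid. The goal cell is
    clamped to a fixed potential; ascending the field from any free cell
    then yields an obstacle-avoiding path toward the goal."""
    v = np.zeros_like(grid, dtype=float)
    for _ in range(iterations):
        v[goal] = 1.0  # goal node clamped to the source potential
        # average of the four neighbours: one Jacobi step of Laplace's equation
        v = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                    np.roll(v, 1, 1) + np.roll(v, -1, 1))
        v *= grid  # obstacle cells stay at zero potential
    v[goal] = 1.0
    return v

def next_step(v, cell):
    """Greedy path following: move to the neighbour with the highest potential."""
    r, c = cell
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return max(neighbours, key=lambda rc: v[rc])
```

Keeping the grid border marked as obstacle cells prevents the wrap-around of `np.roll` from leaking potential across edges; a harmonic field like this has no local maxima in free space, so the greedy step never gets stuck away from the goal.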
Using robot operating system (ROS) and single board computer to control bioloid robot motion
This paper presents a research study on the adaptation of a novel technique for placing a programmable component over the structural component of a Robotis Bioloid humanoid robot. Assimilating intelligence plays an important role in the field of robotics, enabling a computer to model or replicate some of the intelligent behaviors of human beings with minimal human intervention. As part of this effort, this paper revises the Bioloid robot structure so that the robotic movement can be controlled via a BeagleBone Black (BBB) single board computer and the Robot Operating System (ROS). Using ROS as the development framework in conjunction with the main BBB controller that integrates robotic functions is an important aspect of this research, and is a first-of-its-kind approach. A full ROS computation graph has been developed, along with an API usable by high-level software through ROS services. The human-like body structure of the Bioloid robot and the BeagleBone Black running ROS, along with the intelligent components, are used to make the robot walk efficiently
Robot Operating System (ROS) Controlled Anthropomorphic Robot Hand
This paper presents a new design of a dexterous robot hand by incorporating human hand factors. The robotic hand is a Robot Operating System (ROS) controlled standalone unit that can perform key tasks and work independently. Hardware such as actuators, electronics, sensors and pulleys is embedded within or on the hand itself. A Raspberry Pi, a single board computer which runs ROS and is used to control the hand movements as well as process the sensor signals, is placed outside of the hand. It supports peripheral devices such as a screen display, keyboard and mouse. The hand prototype is designed in SolidWorks and 3D printed/built using aluminum sheet. The prototype is similar to the human hand in terms of shape and possesses key functionalities and abilities of the human hand, especially imitating its key movements and being as dexterous as possible whilst keeping a low cost. Other important factors considered while prototyping the model were that the hand should be reliable, have a durable construction, and should be built using widely available off-the-shelf components and open-source software. Though the prototype hand only has 6 degrees-of-freedom (DOF) compared to the 22 DOF of the human hand, it is able to perform most grasps effectively. The proposed model will allow other researchers to build similar robotic hands and perform specialized research
Correcting Projection Effects in CMEs Using GCS-Based Large Statistics of Multi-Viewpoint Observations
This study addresses the limitations of single-viewpoint observations of Coronal Mass Ejections (CMEs) by presenting results from a 3D catalog of 360 CMEs during solar cycle 24, fitted using the Graduated Cylindrical Shell (GCS) model. The data set combines 326 previously analyzed CMEs and 34 newly examined events, categorized by their source regions into active region (AR) eruptions, active prominence (AP) eruptions, and prominence eruptions (PE). Estimates of errors are made using a bootstrapping approach. The findings highlight that the average 3D speed of CMEs is ∼1.3 times greater than the 2D speed. PE CMEs tend to be slow, with an average speed of 432 km s⁻¹. AR and AP speeds are higher, at 723 and 813 km s⁻¹, respectively, with the latter having fewer slow CMEs. The distinctive behavior of AP CMEs is attributed to factors like overlying magnetic field distribution or geometric complexities leading to less accurate GCS fits. A linear fit of projected speed to width gives a gradient of ∼2 km s⁻¹ deg⁻¹, which increases to 5 km s⁻¹ deg⁻¹ when the GCS-fitted ‘true’ parameters are used. Notably, AR CMEs exhibit a high gradient of 7 km s⁻¹ deg⁻¹, while AP CMEs show a gradient of 4 km s⁻¹ deg⁻¹. PE CMEs, however, lack a significant speed-width relationship. We show that fitting multi-viewpoint CME images to a geometrical model such as GCS is important to study the statistical properties of CMEs, and can lead to a deeper insight into CME behavior that is essential for improving future space weather forecasting
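The two statistical tools named in this abstract, bootstrap error estimation and a linear speed-width fit, can be sketched as follows. The speeds and widths below are synthetic stand-ins generated with a ~5 km s⁻¹ deg⁻¹ gradient, not values from the catalog:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_error(samples, n_resamples=2000):
    """Estimate the mean speed and its standard error by resampling
    the sample set with replacement (a basic bootstrap)."""
    means = [rng.choice(samples, size=len(samples), replace=True).mean()
             for _ in range(n_resamples)]
    return np.mean(means), np.std(means)

# Synthetic stand-ins for catalog values: width in degrees, speed in km/s,
# built with a gradient of ~5 km/s per degree plus Gaussian scatter.
widths = rng.uniform(20.0, 120.0, size=300)
speeds = 5.0 * widths + 100.0 + rng.normal(0.0, 80.0, size=300)

gradient, intercept = np.polyfit(widths, speeds, 1)  # km/s per degree
mean_speed, speed_err = bootstrap_mean_error(speeds)
```

On synthetic data the fit recovers the gradient that was put in; on the real catalog the same machinery yields the per-category gradients and bootstrap errors quoted above.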