
    Human-Robot Collaboration as a new paradigm in circular economy for WEEE management

    E-waste is a priority waste stream, as identified by the European Commission, due to fast technological change and the eagerness of consumers to acquire new products. The value chain for Waste Electrical and Electronic Equipment (WEEE) faces several challenges: EU directives setting collection targets for 2019–2022; the cost of disassembly processes, which depends heavily on the applied technology and the type of discarded device; and the sale of the recovered components and/or raw materials, whose market prices vary according to variables uncontrolled at the global level. This paper presents a human-robot collaboration for a recycling process in which tasks are opportunistically assigned to either a human or a robot depending on the condition of the discarded electronic device. This solution offers important advantages: tedious and dangerous tasks are assigned to robots, whereas higher value-added tasks are allocated to humans, thus preserving jobs and increasing job satisfaction. Furthermore, first results from a prototype show greater productivity and a profitable projected investment.
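    The opportunistic allocation idea in this abstract can be illustrated with a minimal sketch. All task names, the condition labels, and the routing rule below are hypothetical stand-ins, not taken from the paper itself:

```python
def assign_task(task: str, device_condition: str) -> str:
    """Route a disassembly task to 'robot' or 'human' based on the task type
    and the condition of the discarded device (illustrative rule only)."""
    # Repetitive or hazardous steps go to the robot regardless of condition.
    tedious_or_dangerous = {"unscrew_casing", "remove_battery", "cut_cables"}
    if task in tedious_or_dangerous:
        return "robot"
    # Damaged devices are unpredictable, so judgement-heavy steps
    # (e.g. recovering valuable components) are kept with a human.
    if device_condition == "damaged":
        return "human"
    # Intact devices follow the standard automated sequence.
    return "robot"
```

    In a real cell, such a rule would be driven by an inspection step that classifies the incoming device before dispatching work orders.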

    Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload

    Powered wheelchair users often struggle to drive safely and effectively, and in more critical cases can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists the user as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at system performance but, perhaps more importantly, characterise user performance in an experiment that combines eye-tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when assisted by the collaborative controller, not only did they drive more safely, but they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain–machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
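    A common way to realise the multiple-hypothesis intention prediction this abstract describes is a recursive Bayesian update over a set of candidate goals. The sketch below is a generic illustration under that assumption; the goal names, the likelihood model, and the parameter `beta` are all illustrative and not taken from the paper:

```python
import math

def update_goal_beliefs(beliefs, position, goals, heading, beta=2.0):
    """One Bayesian update step over candidate goals.

    beliefs:  dict goal-name -> prior probability
    position: (x, y) current wheelchair position
    goals:    dict goal-name -> (x, y) goal location
    heading:  (x, y) unit vector of the current driving direction
    Goals aligned with the current heading become more likely.
    """
    new = {}
    for g, p in beliefs.items():
        gx, gy = goals[g]
        dx, dy = gx - position[0], gy - position[1]
        dist = math.hypot(dx, dy) or 1e-9
        # Likelihood grows with the cosine between the heading and the
        # direction towards the goal (heading is assumed unit-length).
        cos = (dx * heading[0] + dy * heading[1]) / dist
        new[g] = p * math.exp(beta * cos)
    total = sum(new.values())
    return {g: v / total for g, v in new.items()}
```

    Repeating this update at each control cycle concentrates probability mass on the goal most consistent with the driver's recent motion, which a shared controller can then steer towards.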

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    In this paper, we aim at improving human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as soon as possible to avoid accidents and injuries. From this perspective, we propose a novel human-robot interface capable of anticipating the user's intention during reaching movements on a work bench, in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. Motion intention prediction and motion direction prediction levels were developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) was trained on IMU and EMG data, following an evidence-accumulation approach, to predict the reaching direction. Novel dynamic stopping criteria are proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The outputs of the two predictors are used as external inputs to a Finite State Machine (FSM) that controls the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% within 160.0 ± 80.0 ms of movement onset.
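    The evidence-accumulation idea with a dynamic stopping criterion can be sketched as follows. Here `class_loglik` stands in for per-class log-density functions (in practice, e.g. the score of a per-class GMM on each IMU/EMG frame); the function names and the margin-based stopping rule are illustrative assumptions, not the paper's exact method:

```python
def accumulate_evidence(frames, class_loglik, threshold=5.0):
    """Accumulate per-frame log-likelihoods for each reaching direction.

    frames:       iterable of feature frames (e.g. fused IMU/EMG features)
    class_loglik: dict class-name -> callable(frame) returning a log-likelihood
    threshold:    decision margin in nats between the leader and runner-up

    Stops as soon as one class leads the runner-up by `threshold`,
    trading accuracy against early anticipation.
    """
    totals = {c: 0.0 for c in class_loglik}
    for t, frame in enumerate(frames, start=1):
        for c, loglik in class_loglik.items():
            totals[c] += loglik(frame)
        ranked = sorted(totals.values(), reverse=True)
        if ranked[0] - ranked[1] >= threshold:
            # Early decision: the leading class is confident enough.
            return max(totals, key=totals.get), t
    # Fall back to the best class once all frames are consumed.
    return max(totals, key=totals.get), len(frames)
```

    Raising `threshold` delays the decision but reduces misclassifications, which is the trade-off the dynamic stopping criteria in the abstract are designed to tune.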