    Vision-Based Machine Learning in Robot Soccer

    Robots need to perceive their environment in order to interact with it properly. In the RoboCup Soccer Middle Size League (MSL) this perception happens primarily through cameras mounted on the robots. Machine Learning can be used to extract relevant features from camera imagery. The real-time analysis of camera data is a challenge for both traditional and Machine Learning algorithms, since all computations in the MSL have to be performed on the robot itself. This contribution shows that it is possible to process camera imagery in real time using Machine Learning. It does so by presenting the current state of Machine Learning in the MSL and by providing two examples that won the Scientific and Technical Challenges at RoboCup 2021. Both examples focus on the semantic detection of objects and humans in imagery. The Scientific Challenge winner shows how YOLOv5 can be used for object detection in the MSL; the Technical Challenge winner demonstrates how OpenPose can be used to improve interaction between robots and humans in soccer. This contributes towards RoboCup's goal of robots that can beat the human soccer world champions by 2050.
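
    As a rough illustration of the object-detection pipeline the Scientific Challenge entry describes, the sketch below runs a pretrained YOLOv5 model on a single camera frame via PyTorch Hub. The model variant (yolov5s), the image path and the confidence threshold are illustrative assumptions rather than details from the paper; an MSL team would typically train its own weights for balls, robots and goals.

        # Minimal YOLOv5 inference sketch (assumption: generic pretrained
        # weights, not the MSL-specific model described in the paper).
        import torch

        # Load a small pretrained YOLOv5 model from PyTorch Hub.
        model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
        model.conf = 0.4  # confidence threshold (illustrative value)

        # Run detection on one camera frame (path is a placeholder).
        results = model("frame.jpg")

        # Each detection row: x1, y1, x2, y2, confidence, class index.
        for *box, conf, cls in results.xyxy[0].tolist():
            print(f"{model.names[int(cls)]}: conf={conf:.2f}, box={box}")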
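
    The Technical Challenge entry's human-robot interaction builds on body keypoints from OpenPose. The sketch below shows a minimal, assumed use of the pyopenpose Python bindings to extract keypoints from one frame; the model folder and image path are placeholders, and the wrapper API differs slightly between OpenPose releases.

        # Minimal OpenPose keypoint-extraction sketch (assumption: the
        # pyopenpose bindings are built and importable; API varies by release).
        import cv2
        import pyopenpose as op

        # Configure and start the OpenPose wrapper (model path is a placeholder).
        params = {"model_folder": "openpose/models/"}
        wrapper = op.WrapperPython()
        wrapper.configure(params)
        wrapper.start()

        # Read one camera frame (placeholder path) and run pose estimation.
        frame = cv2.imread("frame.jpg")
        datum = op.Datum()
        datum.cvInputData = frame
        wrapper.emplaceAndPop(op.VectorDatum([datum]))

        # poseKeypoints has shape (people, 25, 3) -> [x, y, confidence]
        # per keypoint with the default BODY_25 model.
        if datum.poseKeypoints is not None:
            print("Detected people:", datum.poseKeypoints.shape[0])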
