
    An AI-based object detection approach for robotic competitions

    Artificial intelligence has been introduced into many applications, notably vision-based systems that perform object detection. This paper presents an object localization system intended for autonomous mobile robots at robotics competitions, with the goal of allowing robots to accomplish their tasks more efficiently. Object detection is performed with a camera and the YOLOv4-Tiny detection model. An algorithm was developed that uses the detector output to estimate an object's location, distance, and orientation based on the pinhole camera model and trigonometric modelling, so the system can also be used in smart object identification procedures. Practical tests and results are presented in which objects were located consistently, with errors between 0.16 and 3.8 cm, leading to the conclusion that the object localization system is adequate for autonomous mobile robots.

    The authors are grateful to the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020). The project that gave rise to these results received the support of a fellowship from the "la Caixa" Foundation (ID 100010434). The fellowship code is LCF/BQ/DI20/11780028. João Braun is a PhD student at the Faculty of Engineering, University of Porto (FEUP), supervised by Prof. Paulo Costa.
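    To illustrate the kind of pinhole-camera and trigonometric estimation the abstract describes, the sketch below converts a detector bounding box into a distance and bearing. The camera intrinsics (FX, FY, CX, CY), the known object height, and the helper name locate_object are all assumptions for illustration; the paper's actual calibration and algorithm are not given in the abstract.

```python
import math

# Hypothetical camera intrinsics for a 640x480 image and an assumed object size;
# real values would come from camera calibration and the target specification.
FX, FY = 615.0, 615.0        # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0        # principal point (assumed)
OBJECT_HEIGHT_M = 0.10       # known real-world height of the object (assumed)

def locate_object(bbox):
    """Estimate distance and bearing of a detected object in the camera frame.

    bbox: (x_min, y_min, x_max, y_max) in pixels, e.g. from a YOLOv4-Tiny detector.
    Returns (distance_m, bearing_rad, lateral_offset_m).
    """
    x_min, y_min, x_max, y_max = bbox
    pixel_height = max(y_max - y_min, 1e-6)
    u_center = (x_min + x_max) / 2.0

    # Pinhole model: pixel_height = FY * OBJECT_HEIGHT_M / distance
    distance = FY * OBJECT_HEIGHT_M / pixel_height

    # Bearing from the horizontal offset of the box centre (trigonometric model)
    bearing = math.atan2(u_center - CX, FX)

    # Lateral offset of the object relative to the optical axis
    lateral_offset = distance * math.sin(bearing)
    return distance, bearing, lateral_offset

if __name__ == "__main__":
    # Example: a box 80 px tall, centred 100 px to the right of the principal point
    d, theta, y = locate_object((370, 200, 470, 280))
    print(f"distance ≈ {d:.2f} m, bearing ≈ {math.degrees(theta):.1f} deg, offset ≈ {y:.2f} m")
```

    A real implementation would additionally transform the camera-frame estimate into the robot frame using the known camera mounting pose.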

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased computer vision researchers' interest in the First Person perspective. The field is now attracting attention and investment from companies aiming to develop commercial devices with First Person Vision recording capabilities, and an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among other aspects, the most commonly used features, methods, challenges, and opportunities within the field.

    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction