2 research outputs found

    Real-time Coordinate Estimation for Self-Localization of the Humanoid Robot Soccer BarelangFC

    In practice, a humanoid soccer robot team consists of more than three robots playing on the field. All the robots are expected to play soccer as humans do, i.e., seeking, chasing, dribbling, and kicking the ball. To carry out these behaviours, a real-time localization system is required so that each robot knows not only its own position but also the positions of the other robots and of the objects in the field environment. However, for real-time operation and given the robots' limited computational resources, a method with fast computation and low memory usage is needed. Therefore, this paper presents a real-time localization method based on odometry and Monte Carlo Localization (MCL). To verify the performance of this method, experiments were carried out in a real-time application. The experimental results show that the proposed method is able to estimate the X and Y coordinates of each robot's position on the field while playing soccer.
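
    A minimal sketch of how an odometry-driven Monte Carlo Localization loop of this kind is commonly implemented is given below. The field size, landmark positions, particle count, and noise parameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of odometry-driven Monte Carlo Localization (MCL) on a 2D field.
# Field dimensions, landmarks, and noise levels are illustrative assumptions.
import numpy as np

FIELD_X, FIELD_Y = 9.0, 6.0                                   # assumed field size (m)
LANDMARKS = np.array([[0.0, 0.0], [9.0, 0.0],
                      [0.0, 6.0], [9.0, 6.0]])                # assumed corner landmarks

def init_particles(n=500):
    """Uniformly scatter particles (x, y, theta) over the field."""
    p = np.empty((n, 3))
    p[:, 0] = np.random.uniform(0.0, FIELD_X, n)
    p[:, 1] = np.random.uniform(0.0, FIELD_Y, n)
    p[:, 2] = np.random.uniform(-np.pi, np.pi, n)
    return p

def motion_update(p, d_forward, d_theta, noise=(0.02, 0.05)):
    """Propagate every particle with the odometry increment plus Gaussian noise."""
    n = len(p)
    p[:, 2] += d_theta + np.random.normal(0.0, noise[1], n)
    step = d_forward + np.random.normal(0.0, noise[0], n)
    p[:, 0] += step * np.cos(p[:, 2])
    p[:, 1] += step * np.sin(p[:, 2])
    return p

def measurement_update(p, measured_ranges, sigma=0.3):
    """Weight particles by how well predicted landmark ranges match measured ones."""
    diffs = np.linalg.norm(p[:, None, :2] - LANDMARKS[None], axis=2) - measured_ranges
    w = np.exp(-0.5 * np.sum((diffs / sigma) ** 2, axis=1)) + 1e-300
    return w / w.sum()

def resample(p, w):
    """Draw particles in proportion to their weights."""
    idx = np.random.choice(len(p), size=len(p), p=w)
    return p[idx].copy()

# One filter cycle: odometry step, landmark ranges, then the (x, y) estimate.
particles = init_particles()
particles = motion_update(particles, d_forward=0.10, d_theta=0.05)
weights = measurement_update(particles, np.array([4.5, 5.0, 4.8, 5.2]))
particles = resample(particles, weights)
print("estimated (x, y):", particles[:, 0].mean(), particles[:, 1].mean())
```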

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. The richness of visual data makes it possible to build a complete description of the environment, gathering both geometric and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data can be processed either in its entirety (dense approaches) or as a reduced set obtained through feature extraction (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems.

    First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., to detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted as geometric primitives in order to implement a visual servoing control scheme that realises the desired navigation behaviours. This controller relies on the visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered.

    Vision-based estimation methods are also relevant in other contexts. In the field of surgical robotics, obtaining reliable data about unmeasurable quantities is both highly valuable and critical. This manuscript presents a Kalman-based observer that estimates the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract the relevant geometric information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and use under ideal conditions for testing and validation.

    The Kalman-based observers mentioned above are classical passive estimators, whose inputs are, in principle, arbitrary; they offer no means to actively adapt the input trajectories so as to optimize specific requirements on the estimation performance. To this end, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. The approach can be applied to any robotic platform and has been validated on a manipulator arm equipped with a monocular camera.
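
    As an illustration of the Kalman-based estimation mentioned above, the sketch below shows a generic linear predict/update cycle for a 3D tool position. It assumes a random-walk motion model and direct position measurements, which is a deliberate simplification of the projected image measurements used in the manuscript; all noise parameters and the example needle position are hypothetical.

```python
# Simplified sketch of a linear Kalman-style observer for a 3D tool position.
# Random-walk motion model and direct position measurements are assumptions;
# the original observer works on projected endoscopic image measurements.
import numpy as np

A = np.eye(3)            # motion model: x_{k+1} = x_k + w_k (assumed random walk)
H = np.eye(3)            # measurement model: z_k = x_k + v_k (assumed direct)
Q = 1e-4 * np.eye(3)     # assumed process noise covariance
R = 1e-2 * np.eye(3)     # assumed measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle of the observer given a new measurement z."""
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Feed a few noisy measurements of a hypothetical fixed needle position.
x, P = np.zeros(3), np.eye(3)
true_pos = np.array([0.10, -0.05, 0.20])
for _ in range(20):
    z = true_pos + np.random.normal(0.0, 0.1, 3)
    x, P = kalman_step(x, P, z)
print("estimated position:", x)
```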