
    OBJECT DETECTION SYSTEM UNDER OCCLUDED CONDITIONS THROUGH HUMAN-COMPUTER INTERACTION USING THE VENN DIAGRAM METHOD BASED ON IMAGE PROCESSING

    Detecting an object under occluded conditions through human-computer interaction requires a system that works optimally, effectively and efficiently. When objects are occluded, the system must be able to distinguish one object from another: because the objects block each other, the visible object appears imperfect in shape and size. A computer struggles to recognize the shape and size of an occluded object, unlike a human, who has prior knowledge of a great many complete objects. Even when an object is occluded, a human still recognizes it as a complete object in both shape and size, whereas a computer has no prior knowledge of an object and recognizes it only as it appears. A system is therefore needed that can detect objects that are occluded or part of a group. This research focuses on detecting objects that are inside a group or occluded by other objects, using the Venn diagram method with four conditions: the reference object as the anchor (dominant position and color), the condition where the object's color is more dominant, the condition where the object's position is more dominant, and other cases handled by the Venn diagram method. The system was then compared against the questions posed by 25 respondents on 30 sample images. The graphs for each sample image show that respondents needed an average of 4.46 questions per sample and object condition to detect an object, while the system needed an average of only 2.57 questions. Based on these results, the proposed system proves more effective and efficient at detecting objects that are occluded or inside a group.
Keywords: object detection, Venn diagram method, reference object, human-computer interaction, occluded condition, effective and efficient
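
    The question-driven detection described in the abstract can be pictured as repeated set intersection: each answered question keeps only the candidates consistent with the answer, which is the Venn-diagram view of the search. The object names and attributes below are purely illustrative, not taken from the paper.

```python
# Hypothetical sketch: narrowing object candidates by set intersection,
# in the spirit of the Venn-diagram questioning described in the abstract.
# Objects and their attributes are illustrative, not from the paper.

candidates = {
    "cup":  {"color": "red",  "left_of_reference": True},
    "ball": {"color": "red",  "left_of_reference": False},
    "book": {"color": "blue", "left_of_reference": True},
}

def ask(candidates, attribute, value):
    """Each question keeps only the intersection of the current
    candidate set with the set of objects matching the answer."""
    return {name: attrs for name, attrs in candidates.items()
            if attrs[attribute] == value}

remaining = ask(candidates, "color", "red")            # cup, ball remain
remaining = ask(remaining, "left_of_reference", True)  # only cup remains
print(sorted(remaining))
```

    Fewer questions are needed when each attribute splits the remaining candidates well, which matches the abstract's finding that a reference object (position and dominant color) reduces the average question count.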

    The Reference Object as a Simplification Anchor for Object Detection under Occluded Conditions with the Support Vector Machine Method

    An object in an occluded condition is very difficult to recognize, because the full shape of the object is not completely visible. For robot applications this is of course a serious problem: an imperfectly visible object leads to errors in the robot's decision-making. On the other hand, a crowd of too many objects in the robot's work environment makes it difficult for the robot to act on the object the user wants. A robot recognizes an object easily when the object is in a limited environment with few other objects, so that its working focus is more optimal. With too many objects, the probability of identifying the target object decreases, because many other objects are also candidates for the target. An effort is therefore needed to simplify the robot's work environment so that its focus can be sharper, by dividing an overly crowded environment into several parts. One benchmark for dividing the environment is a reference object. This research focuses on the role of reference objects in simplifying an environment that is too complicated for the robot to work in. The reference object is determined by the robot itself from object features with predefined parameters. The key point of this research is how a robot can be as intelligent as a human in deciding which reference-object criteria to use. For cluster determination of the reference object, the support vector machine method guides the robot in determining object classes and features. The desired end goal is a system that is simple from the robot's point of view, where simple means at most three objects.
Keywords: object detection, support vector machine, reference object
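
    The reference-object selection above amounts to a binary classification over object features. The paper uses a support vector machine; to keep this sketch dependency-free, a simple perceptron stands in for the linear classifier, and the features (size, color saturation) and their values are hypothetical.

```python
# Minimal stand-in for the SVM-based reference-object selection described
# above. A perceptron learns a linear decision boundary of the same kind
# an SVM would; features and labels here are invented for illustration.

def train_linear(samples, labels, epochs=20, lr=0.1):
    """Perceptron training: returns weights w and bias b such that
    sign(w.x + b) separates the two classes (if linearly separable)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):   # y in {-1, +1}
            if y * (w[0]*x[0] + w[1]*x[1] + b) <= 0:
                w[0] += lr * y * x[0]       # mistake: nudge the boundary
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

# label +1: suitable as a reference object, -1: not suitable
samples = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = [1, 1, -1, -1]
w, b = train_linear(samples, labels)

def is_reference(x):
    return w[0]*x[0] + w[1]*x[1] + b > 0

print(is_reference((0.85, 0.9)))  # a large, saturated object
```

    A real SVM additionally maximizes the margin between the classes, which is what gives the robot a principled choice among many possible separating boundaries.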

    Semantics-based platform for context-aware and personalized robot interaction in the internet of robotic things

    Robots are moving from well-controlled lab environments to the real world, where an increasing number of environments have been transformed into smart sensorized IoT spaces. Users will expect these robots to adapt to their preferences and needs, even more so for social robots that engage in personal interactions. In this paper, we present declarative ontological models and a middleware platform for building services that generate interaction tasks for social robots in smart IoT environments. The platform implements a modular, data-driven workflow that allows developers of interaction services to determine the appropriate time, content and style of human-robot interaction tasks by reasoning on semantically enriched IoT sensor data. The platform also abstracts the complexities of scheduling, planning and execution of these tasks, and can automatically adjust parameters to the personal profile and current context. We present motivational scenarios in three environments: a smart home, a smart office and a smart nursing home; detail the interfaces and execution paths in our platform; and present a proof-of-concept implementation. (C) 2018 Elsevier Inc. All rights reserved.

    The 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies

    This publication comprises the papers presented at the 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland, on May 9-11, 1995. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed

    Automatic motion of manipulator using sampling based motion planning algorithms - application in service robotics

    The thesis presents new approaches for autonomous motion execution of a robotic arm. Computing this motion is called motion planning and requires computing the robot arm's path. The text covers the calculation of the path, and several algorithms have therefore been implemented and tested in several real scenarios. The work focuses on sampling-based planners, meaning that the path is created by connecting randomly generated points in free space. The algorithms fall into three categories: those working in configuration space (C-Space, the set of all possible joint angles of a robotic arm), mixed approaches using both Cartesian space and C-Space, and those using only Cartesian space. Although Cartesian space seems more appropriate due to its lower dimensionality, this work illustrates that C-Space planners can achieve comparable or better results. Initially an enhanced approach for efficient collision detection in C-Space, used by the planners, is presented. Afterwards an N-dimensional cuboid region, denoted Rq, is defined. Rq confines the C-Space so that sampling is done close to a selected cell, called the center cell. The approach is enhanced by decomposing the Cartesian space into cells. A cell is selected if it (a) is closer to the target position and (b) lies inside the constraints. Inverse kinematics (IK) is applied to calculate a center configuration later used by Rq. The proposed CellBiRRT combines all these features. Next, mixed approaches that require neither a goal configuration nor an analytic solution of the IK are presented; Rq regions and cells are also integrated into these approaches. A Cartesian sampling-based planner using quaternions for linear interpolation is also proposed and tested. The common feature of the algorithms so far is feasibility, which normally comes at the expense of optimality.
    An additional part of this work therefore deals with the optimality of the path. An enhanced version of CellBiRRT, called CellBiRRT*, is developed and promises to compute shorter paths in a reasonable time. An online method using both CellBiRRT and CellBiRRT* is proposed, in which the robot arm's path is improved and recalculated even when sudden changes in the environment are detected. Benchmarking against state-of-the-art algorithms shows the good performance of the proposed approaches, which makes them suitable for real-time applications. Several applications are described: manipulative skills, an approach for semi-autonomous control of the robot arm, and a motion planning library. The library provides the interface needed for easy use and further development of the motion planning algorithms, and serves as the component connecting manipulative-skill design with the motion of a robotic arm.
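
    The core sampling-based idea above can be sketched in a toy 2-D RRT: the tree grows by steering from the nearest existing node toward a random sample, keeping only collision-free points. The thesis plans in the arm's C-Space and with far richer collision checking; here a flat plane with one circular obstacle stands in, and all parameters are illustrative.

```python
# Toy 2-D RRT illustrating sampling-based planning: a path is built by
# connecting randomly sampled points in free space. Edge collision
# checks are omitted for brevity; real planners check them too.
import math, random

random.seed(0)

OBSTACLE = ((5.0, 5.0), 1.5)          # (center, radius)
START, GOAL = (1.0, 1.0), (9.0, 9.0)
STEP, GOAL_TOL = 0.5, 0.5

def collision_free(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r

def steer(a, b):
    """Move from a toward b by at most STEP."""
    d = math.hypot(b[0] - a[0], b[1] - a[1])
    if d <= STEP:
        return b
    return (a[0] + STEP * (b[0] - a[0]) / d,
            a[1] + STEP * (b[1] - a[1]) / d)

def rrt(max_iters=5000):
    tree = {START: None}              # node -> parent
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(tree, key=lambda n: math.hypot(n[0] - sample[0],
                                                     n[1] - sample[1]))
        new = steer(nearest, sample)
        if not collision_free(new):
            continue
        tree[new] = nearest
        if math.hypot(new[0] - GOAL[0], new[1] - GOAL[1]) < GOAL_TOL:
            path, node = [], new
            while node is not None:   # walk parents back to START
                path.append(node)
                node = tree[node]
            return path[::-1]
    return None                       # no path found within the budget

path = rrt()
```

    The Rq idea described in the abstract would replace the uniform `random.uniform` sampling with sampling confined to a cuboid region around a promising center cell, biasing growth toward the goal.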

    Manipulation Planning for Forceful Human-Robot-Collaboration

    This thesis addresses the problem of manipulation planning for forceful human-robot collaboration. Particularly, the focus is on the scenario where a human applies a sequence of changing external forces through forceful operations (e.g. cutting a circular piece off a board) on an object that is grasped by a cooperative robot. We present a range of planners that 1) enable the robot to stabilize and position the object under the human-applied forces by exploiting supports from both the object-robot and object-environment contacts; 2) improve task efficiency by minimizing the configuration and grasp changes required by the changing external forces; 3) improve human comfort during the forceful interaction by optimizing the defined comfort criteria. We first focus on the instance of using only robotic grasps, where the robot is supposed to grasp/regrasp the object multiple times to keep it stable under the changing external forces. We introduce a planner that can generate an efficient manipulation plan by intelligently deciding when the robot should change its grasp on the object as the human applies the forces, and choosing subsequent grasps such that they minimize the number of regrasps required in the long term. The planner searches for such an efficient plan by first finding a minimal sequence of grasp configurations that are able to keep the object stable under the changing forces, and then generating connecting trajectories to switch between the planned configurations, i.e. planning regrasps. We perform the search for such a grasp (configuration) sequence by sampling stable configurations for the external forces, building an operation graph using these stable configurations and then searching the operation graph to minimize the number of regrasps.
We solve the problem of bimanual regrasp planning under the assumption of no support surface, enabling the robot to regrasp an object in the air by finding intermediate configurations at which both the bimanual and unimanual grasps can hold the object stable under gravity. We present a variety of experiments to show the performance of our planner, particularly in minimizing the number of regrasps for forceful manipulation tasks and planning stable regrasps. We then explore the problem of using both the object-environment contacts and object-robot contacts, which enlarges the set of stable configurations and thus boosts the robot’s capability in stabilizing the object under external forces. We present a planner that can intelligently exploit the environment’s and robot’s stabilization capabilities within a unified planning framework to search for a minimal number of stable contact configurations. A big computational bottleneck in this planner is due to the static stability analysis of a large number of candidate configurations. We introduce a containment relation between different contact configurations, to efficiently prune the stability checking process. We present a set of real-robot and simulated experiments illustrating the effectiveness of the proposed framework. We present a detailed analysis of the proposed containment relationship, particularly in improving the planning efficiency. We present a planning algorithm to further improve the cooperative robot behaviour concerning human comfort during the forceful human-robot interaction. Particularly, we are interested in empowering the robot with the capability of grasping and positioning the object not only to ensure the object stability against the human applied forces, but also to improve human experience and comfort during the interaction. We address human comfort as the muscular activation level required to apply a desired external force, together with the human spatial perception, i.e. 
the so-called peripersonal-space comfort during the interaction. We propose to maximize both comfort metrics to optimize the robot and object configuration such that the human can apply a forceful operation comfortably. We present a set of human-robot drilling and cutting experiments which verify the efficiency of the proposed metrics in improving the overall comfort and HRI experience, without compromising the force stability. In addition to the above planning work, we present a conic formulation to approximate the distribution of a forceful operation in the wrench space with a polyhedral cone, which enables the planner to efficiently assess the stability of a system configuration even in the presence of force uncertainties that are inherent in the human-applied forceful operations. We also develop a graphical user interface with which human users can easily specify various forceful tasks, i.e. sequences of forceful operations on selected objects, in an interactive manner. The user interface ties together human task specification, on-demand manipulation planning and robot-assisted fabrication. We present a set of human-robot experiments using the interface demonstrating the feasibility of our system. In short, in this thesis we present a series of planners for object manipulation under changing external forces. We show that object contacts with the robot and the environment enable the robot to manipulate an object under external forces, while making the most of the object contacts has the potential to eliminate redundant changes during manipulation, e.g. regrasps, and thus improve task efficiency and smoothness. We also show the necessity of optimizing human comfort in planning for forceful human-robot manipulation tasks. We believe the work presented here can be a key component in a human-robot collaboration framework.
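
    The regrasp-minimization idea above can be sketched in a heavily simplified form: given, for each forceful operation in sequence, the set of grasps that would keep the object stable, choose grasps so that changes are as rare as possible. A greedy sweep that keeps the running intersection of stable sets achieves this for a 1-D sequence of operations; the grasp names and stable sets below are invented for illustration, and the thesis's operation graph handles far more general cases.

```python
# Hypothetical sketch of minimizing regrasps over a sequence of forceful
# operations. stable_sets[i] is the set of grasps stable for operation i;
# a regrasp is inserted only when no grasp covers the whole current run.

def plan_grasps(stable_sets):
    """Return a list of (grasp, operations_covered) segments; the
    number of segments minus one is the number of regrasps."""
    plan, current, ops = [], None, []
    for i, s in enumerate(stable_sets):
        if current is None:
            current, ops = set(s), [i]
        elif current & s:              # some grasp still works: keep it
            current &= s
            ops.append(i)
        else:                          # no common grasp: regrasp here
            plan.append((sorted(current)[0], ops))
            current, ops = set(s), [i]
    plan.append((sorted(current)[0], ops))
    return plan

ops = [{"g1", "g2"}, {"g2"}, {"g2", "g3"}, {"g3"}]
plan = plan_grasps(ops)
print(plan)  # grasp g2 covers the first three operations, then g3
```

    The full planner must additionally verify static stability of each candidate configuration, which is the bottleneck the thesis prunes with its containment relation.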

    On realistic target coverage by autonomous drones

    Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. To achieve the full potential of such drones, it is necessary to develop new enhanced formulations of both common and emerging sensing scenarios. Namely, several fundamental challenges in visual sensing are yet to be solved including (1) fitting sizable targets in camera frames; (2) positioning cameras at effective viewpoints matching target poses; and (3) accounting for occlusion by elements in the environment, including other targets. In this article, we introduce Argus, an autonomous system that utilizes drones to collect target information incrementally through a two-tier architecture. To tackle the stated challenges, Argus employs a novel geometric model that captures both target shapes and coverage constraints. Recognizing drones as the scarcest resource, Argus aims to minimize the number of drones required to cover a set of targets. We prove this problem is NP-hard, and even hard to approximate, before deriving a best-possible approximation algorithm along with a competitive sampling heuristic which runs up to 100× faster according to large-scale simulations. To test Argus in action, we demonstrate and analyze its performance on a prototype implementation. Finally, we present a number of extensions to accommodate more application requirements and highlight some open problems
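
    Minimizing the number of drones needed to cover all targets is a set-cover-style problem, which is why the paper can show it is NP-hard and must settle for a best-possible approximation. The classical greedy heuristic, picking at each step the viewpoint that covers the most still-uncovered targets, is a standard stand-in for this class of problem; the viewpoints and visibility sets below are hypothetical, not from the paper.

```python
# Greedy set cover as a stand-in for the drone-coverage problem above:
# cover every target using as few drone viewpoints as possible.

def greedy_cover(targets, viewpoints):
    """viewpoints: dict name -> set of targets visible from it.
    Repeatedly pick the viewpoint covering the most uncovered targets."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(viewpoints, key=lambda v: len(viewpoints[v] & uncovered))
        if not viewpoints[best] & uncovered:
            raise ValueError("some targets are not coverable")
        chosen.append(best)
        uncovered -= viewpoints[best]
    return chosen

targets = {"t1", "t2", "t3", "t4"}
viewpoints = {
    "v1": {"t1", "t2", "t3"},
    "v2": {"t3", "t4"},
    "v3": {"t4"},
}
cover = greedy_cover(targets, viewpoints)
print(cover)
```

    Greedy set cover gives a logarithmic approximation factor in general; Argus's geometric model of target shapes and camera poses is what allows the paper to derive a best-possible bound for its specific setting.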

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers