5 research outputs found

    Probabilistic Model of Robot Action in Physical Interaction with a Human

    This doctoral thesis develops a probabilistic model through which a robot makes decisions about its actions during physical interaction with a human. By classifying tactile stimuli on the basis of a capacitive sensor, force, and spatial position, the elements and meaning of the interaction are discerned. To give the model a degree of autonomy and the ability to move through space, the research also addresses the problem of spatial motion. A multi-criteria interpretation of the workspace is defined, distinguishing between objects in the environment, the human, goals, the robot itself, and the robot's trajectories. The interaction model is formulated as a sequence of actions that the robot executes, ultimately resulting in robot action. The model's probability variables are defined from the interaction with the human. Learned patterns represent long-term knowledge on the basis of which robot action is shaped in accordance with the current state of the environment. Through temporal discounting, recent events are assigned a significantly larger influence factor, while events further in the past receive a much smaller one. Experiments were carried out under laboratory conditions on a real system consisting of a robot arm with integrated torque sensors and a control unit, a computer, and an "artificial skin" capable of detecting human touch and the immediate proximity of, primarily, biological material. The experiments established the limits of applying autonomous robot action.
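
    The abstract's temporal weighting (recent events weighted much more heavily than older ones) can be pictured as exponential decay. A minimal sketch in Python, assuming a half-life parameterization; the function name, event representation, and half_life value are illustrative assumptions, not details from the thesis:

        import math
        import time

        def recency_weights(event_times, now, half_life=60.0):
            # Exponentially discount events by age: an event half_life
            # seconds old gets half the weight of one happening now.
            # half_life (seconds) is an assumed tuning parameter.
            decay = math.log(2) / half_life
            return [math.exp(-decay * (now - t)) for t in event_times]

        # Example: interaction events observed 5 s, 120 s, and 600 s ago.
        now = time.time()
        print(recency_weights([now - 5, now - 120, now - 600], now))
        # -> approximately [0.944, 0.250, 0.001]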

    Graph-based Trajectory Planning through Programming by Demonstration

    Autonomous robots are becoming increasingly commonplace in industry, space exploration, and even domestic applications. These diverse fields share the need for robots to perform increasingly complex motion behaviors when interacting with the world. As the robots' tasks become more varied and sophisticated, though, the challenge of programming them becomes more difficult and domain-specific. Robotics experts without domain knowledge may not be well suited to communicating task-specific goals and constraints to the robot, while domain experts may not possess the skills for programming robots through conventional means. Ideally, any person capable of demonstrating the necessary skill should be able to instruct the robot to do so. In this thesis, we examine the use of demonstration to program, or, more aptly, to teach a robot to perform precise motion tasks. Programming by Demonstration (PbD) offers an expressive means of teaching while remaining accessible to domain experts who may be novices in robotics. This learning paradigm relies on human demonstrations to build a model of a motion task. The thesis develops an algorithm for learning from examples that produces trajectories that are collision-free and that preserve non-geometric constraints such as end-effector orientation, without requiring special training for the teacher or a model of the environment. The approach can learn precise motions even when the required precision is on the same order of magnitude as the noise in the demonstrations, and it is robust to the occasional errors in strategy and jitter in movement inherent in imperfect human demonstrations. The approach begins with the construction of a neighbor graph, which determines the correspondences between multiple imperfect demonstrations. This graph permits the robot to plan novel trajectories that safely and smoothly generalize the teacher's behavior. Finally, like any good learner, a robot should assess its knowledge and ask questions about any detected deficiencies: the learner presented here detects regions of the task in which the demonstrations appear ambiguous or insufficient and requests additional information from the teacher. The algorithm is demonstrated in example domains with a 7-degree-of-freedom manipulator, and user trials are presented.
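
    As a rough illustration of the neighbor-graph idea, the sketch below connects consecutive waypoints within each demonstration and links waypoints from different demonstrations that lie close together, yielding the correspondences from which novel trajectories could be planned. The data layout and the distance threshold are assumptions for illustration, not details from the thesis:

        import numpy as np

        def build_neighbor_graph(demos, radius=0.05):
            # demos: list of (T_i x D) arrays of waypoints.
            # Nodes are (demo index, timestep) pairs; edges join
            # sequential waypoints within a demo and nearby waypoints
            # across demos. radius is an assumed tuning parameter.
            nodes = [(d, t) for d, demo in enumerate(demos)
                     for t in range(len(demo))]
            edges = []
            for i, (di, ti) in enumerate(nodes):
                for dj, tj in nodes[i + 1:]:
                    if di == dj and tj == ti + 1:
                        edges.append(((di, ti), (dj, tj)))  # sequence edge
                    elif di != dj and np.linalg.norm(
                            demos[di][ti] - demos[dj][tj]) < radius:
                        edges.append(((di, ti), (dj, tj)))  # correspondence edge
            return nodes, edges

        # Example: two noisy demonstrations of the same straight-line motion.
        rng = np.random.default_rng(0)
        line = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 20)
        demos = [line + rng.normal(scale=0.01, size=line.shape) for _ in range(2)]
        nodes, edges = build_neighbor_graph(demos)
        print(len(nodes), "nodes,", len(edges), "edges")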
