
    Markerless visual servoing on unknown objects for humanoid robot platforms

    Full text link
    To precisely reach for an object with a humanoid robot, it is of central importance to have good knowledge of both the end-effector pose and the object's pose and shape. In this work we propose a framework for markerless visual servoing on unknown objects, which is divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo control commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories and sub-pixel precision.
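
    To make step IV more concrete, the sketch below shows the classical image-based visual servoing law v = -λ L⁺ (s - s*) for point features. It is a minimal Python/NumPy illustration, not the authors' implementation; the feature coordinates, depths and gain are assumed placeholders.

```python
# Minimal sketch of a classical image-based visual servoing (IBVS) step.
# NOT the authors' exact formulation; features, depths and gain are illustrative.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features toward the desired ones."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error      # v = -gain * L^+ * (s - s*)

# Example: four tracked points slightly offset from their desired locations.
current = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
desired = [(0.12, 0.08), (-0.08, 0.12), (-0.12, -0.08), (0.08, -0.12)]
depths = [0.5, 0.5, 0.5, 0.5]                     # assumed point depths in metres
print(ibvs_velocity(current, desired, depths))
```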

    3-D Hand Pose Estimation from Kinect's Point Cloud Using Appearance Matching

    Full text link
    We present a novel appearance-based approach for pose estimation of a human hand using the point clouds provided by the low-cost Microsoft Kinect sensor. Both the free-hand case, in which the hand is isolated from the surrounding environment, and the hand-object case, in which the different types of interactions are classified, have been considered. The hand-object case is clearly the more challenging task, since multiple tracks have to be handled. The approach proposed here belongs to the class of partial pose estimation, where the estimated pose in a frame is used for the initialization of the next one. The pose estimate is obtained by applying a modified version of the Iterative Closest Point (ICP) algorithm to synthetic models, yielding the rigid transformation that aligns each model with the input data. The proposed framework uses a "pure" point cloud as provided by the Kinect sensor, without any other information such as RGB values or normal vector components. For this reason, the proposed method can also be applied to data obtained from other types of depth sensors or RGB-D cameras.
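
    As a rough illustration of the alignment step, the following sketch implements standard point-to-point ICP with an SVD-based rigid update; the authors' modified ICP differs in its details, and the model and cloud are assumed here to be plain Nx3 NumPy arrays.

```python
# Minimal point-to-point ICP sketch (standard algorithm, not the authors' variant).
# Correspondences come from nearest neighbours; the rigid update is the closed-form
# SVD (Kabsch/Umeyama) solution.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Rotation R and translation t minimizing ||R @ src.T + t - dst.T||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(model, cloud, iterations=30):
    """Align a synthetic hand model (Nx3) to a Kinect point cloud (Mx3)."""
    tree = cKDTree(cloud)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = model.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)                  # closest cloud point per model point
        R, t = best_rigid_transform(current, cloud[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```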

    Articulated Object Tracking from Visual Sensory Data for Robotic Manipulation

    Get PDF
    In order for a robot to manipulate an articulated object, it needs to know its state (i.e. its pose); that is to say, where and in which configuration it is. The result of the object's state estimation is provided as feedback to the controller to compute appropriate robot motion and achieve the desired manipulation outcome. This is the main topic of this thesis, where articulated object state estimation is solved using visual feedback. Vision-based servoing is implemented in a Quadratic Programming task-space control framework to enable a humanoid robot to perform articulated object manipulation. We thoroughly develop our methodology for vision-based articulated object state estimation on these bases. We demonstrate its efficiency by assessing it in several real experiments involving the HRP-4 humanoid robot. We also propose to combine machine learning and edge extraction techniques to achieve markerless, real-time and robust visual feedback for articulated object manipulation.
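
    As a minimal illustration of how a visual-servoing task can be cast in a Quadratic Programming control framework, the sketch below solves a velocity-level QP, min ||J q̇ − v_task||² subject to joint-velocity bounds, using a bounded least-squares solver. The Jacobian, task velocity and limits are illustrative assumptions, not HRP-4 quantities, and a full QP controller would include many more tasks and constraints.

```python
# Velocity-level task-space control as a box-constrained QP (illustrative only).
import numpy as np
from scipy.optimize import lsq_linear

def qp_joint_velocities(J, v_task, qdot_max):
    """Solve min ||J qdot - v_task||^2 subject to |qdot_i| <= qdot_max_i."""
    result = lsq_linear(J, v_task, bounds=(-qdot_max, qdot_max))
    return result.x

# Toy 6-DoF arm: random 6x6 Jacobian, desired end-effector twist from visual servoing.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))
v_task = np.array([0.05, 0.0, -0.02, 0.0, 0.0, 0.1])   # m/s and rad/s (assumed)
qdot_max = np.full(6, 0.5)                              # rad/s limits (assumed)
print(qp_joint_velocities(J, v_task, qdot_max))
```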

    A fast and robust hand-driven 3D mouse

    Get PDF
    The development of new interaction paradigms requires natural interaction. This means that people should be able to interact with technology using the same models they use in everyday real life, that is, through gestures, expressions and voice. Following this idea, in this paper we propose a non-intrusive vision-based tracking system able to capture hand motion and simple hand gestures. The proposed device allows the hand to be used as a "natural" 3D mouse, where the forefinger tip or the palm centre identifies a 3D marker and hand gestures simulate the mouse buttons. The approach is based on a monoscopic tracking algorithm which is computationally fast and robust against noise and cluttered backgrounds. Two image streams are processed in parallel, exploiting multi-core architectures, and their results are combined to obtain a constrained stereoscopic problem. The system has been implemented and thoroughly tested in an experimental environment where the 3D hand mouse has been used to interact with objects in a virtual reality application. We also provide results on the performance of the tracker, which demonstrate the precision and robustness of the proposed system.
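
    A minimal sketch of the stereo combination step: once the fingertip has been located independently in the two image streams, its 3D position can be recovered by linear (DLT) triangulation. The projection matrices and image coordinates below are made-up placeholders, not the calibration of the actual system.

```python
# Linear (DLT) triangulation of a single fingertip seen by two calibrated cameras.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """3D point from two 3x4 projection matrices and two normalized image points."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean

# Illustrative cameras: identity intrinsics, 0.1 m horizontal baseline.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
print(triangulate(P_left, P_right, (0.20, 0.05), (0.15, 0.05)))  # -> [0.4, 0.1, 2.0]
```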

    Registration Combining Wide and Narrow Baseline Feature Tracking Techniques for Markerless AR Systems

    Get PDF
    Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. Registration is one of the most difficult problems currently limiting the usability of AR systems. In this paper, we propose a novel natural-feature-tracking-based registration method for AR applications. The proposed method has the following advantages: (1) it is simple and efficient, as no man-made markers are needed for either indoor or outdoor AR applications; moreover, it can work with arbitrary geometric shapes, including planar, near-planar and non-planar structures, which greatly enhances the usability of AR systems; (2) thanks to the reduced-SIFT-based augmented optical flow tracker, the virtual scene can still be augmented on the specified areas even under occlusion and large changes in viewpoint during the entire process; (3) it is easy to use, because the adaptive classification-tree-based matching strategy gives fast and accurate initialization, even when the initial camera view differs from the reference image to a large degree. Experimental evaluations validate the performance of the proposed method for online pose tracking and augmentation.
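
    For a planar target, natural-feature registration of this kind can be illustrated with off-the-shelf SIFT matching and a RANSAC homography in OpenCV. This is only a generic sketch with placeholder file names; it is not the paper's reduced-SIFT augmented optical-flow tracker or classification-tree initialization.

```python
# Generic planar natural-feature registration: SIFT matching + RANSAC homography.
import cv2
import numpy as np

reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(reference, None)
kp_frm, des_frm = sift.detectAndCompute(frame, None)

# Lowe's ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_ref, des_frm, k=2)
        if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # H maps reference-image coordinates into the current frame, so virtual
    # content anchored to the reference plane can be warped accordingly.
    print(H)
```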

    Vision-Based Three Dimensional Hand Interaction In Markerless Augmented Reality Environment

    Get PDF
    The advent of augmented reality (AR) enables virtual objects to be superimposed on the real world and provides a new way to interact with virtual objects. An AR system requires an indicator to determine how the virtual objects are aligned with the real world. The indicator must first be obtained in order to use a particular AR system, and it may be inconvenient to have the indicator within reach at all times. The human hand, which is part of the human body, can be a solution for this. The hand is also a promising tool for interaction with virtual objects in an AR environment. This thesis presents a markerless augmented reality system which utilizes an outstretched hand for registration of virtual objects in the real environment and enables users to have three-dimensional (3D) interaction with the augmented virtual objects. To employ the hand for registration and interaction in AR, the hand postures and gestures that the user performs have to be recognized.
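
    One common way to recognize a simple open-hand posture from a binary skin mask is to count convexity defects of the hand contour. The sketch below illustrates that generic heuristic in OpenCV; it is not the method developed in the thesis, and the input mask file and defect-depth threshold are assumptions.

```python
# Heuristic open-hand detection: count deep convexity defects (finger valleys).
import cv2
import numpy as np

def count_extended_fingers(mask):
    """Estimate the number of extended fingers from a binary hand mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep defects roughly correspond to the valleys between extended fingers;
    # depth is fixed-point (value/256 pixels), threshold chosen heuristically.
    deep = sum(1 for s, e, f, depth in defects[:, 0] if depth > 10000)
    return min(deep + 1, 5)

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
print(count_extended_fingers(mask))
```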

    Essentials of Augmented Reality Software Development under Android Platform

    Get PDF
    Augmented Reality (AR) is an emerging technology. Besides entertainment, AR is also used in medicine, the military, engineering and other major fields of enterprise and government. Regardless of the application area, development teams usually aim to achieve the best possible performance and visual results in the AR software they provide. The core technology behind a particular piece of AR software depends heavily on the resources available to the team. This means that organizations with large resources can afford to implement AR software solutions using cutting-edge technologies built by their own engineering units, whereas ordinary companies are usually limited in time, staff and budget, which forces them to use existing market solutions: toolkits. From this perspective, this thesis focuses on providing the basics of working with AR toolkits. In order to succeed in building AR software, particular toolkits are selected to be reviewed, tested and compared. Moreover, during the investigation some essentials of AR development on the Android platform are also studied.
