173 research outputs found

    Design and Development of Sensor Integrated Robotic Hand

    Most automated systems that use robots as agents employ only a few sensors, according to need. However, there are situations where the tasks carried out by the end-effector, or by the robot hand, require multiple sensors. To make the best use of these sensors and behave autonomously, the hand requires a set of appropriate sensors integrated in a proper manner. The present research work aims at developing a sensor-integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly, and then act as appropriate. The development process involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually, and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using an NI PXIe-1082. A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline for possible modification, in order to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity. This work also involves the design of a two-fingered gripper, made in such a way that it can carry out the desired tasks and accommodate all the sensors within its fold. The developed sensor-integrated hand has been put to work and its performance has been tested. This hand can be very useful for part-assembly work in industry for parts of any shape, within a limit on part size. The broad aim is to design, model, simulate and develop an advanced robotic hand. Sensors for pick-up contact, pressure, force, torque, position, and surface profile/shape, using suitable sensing elements, are to be introduced into the robot hand.
The human hand is a complex structure with a large number of degrees of freedom and multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand having motion capabilities and constraints similar to the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved. The objective of the proposed work is to design, simulate and develop a sensor-integrated robotic hand. Its potential applications lie in industrial environments and in the healthcare field. The industrial applications include electronic assembly tasks, lighter inspection tasks, etc.; applications in healthcare could be in the areas of rehabilitation and assistive techniques. The work also aims to establish the requirements of the robotic hand for the target application areas, and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of the motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored. Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library, and a LabVIEW 2013 DAQ system as applicable, validated theoretically, and finally implemented to develop an intelligent robotic hand.
The multiple smart sensors are installed on a standard six-degree-of-freedom KAWASAKI RS06L articulated industrial manipulator with a two-finger pneumatic SCHUNK robotic hand (or the designed prototype), and the robot control programs are integrated in a manner that allows easy application of grasping in industrial pick-and-place operations where the characteristics of the object vary or are unknown. The effectiveness of the recommended structure is demonstrated by experiments involving calibration of the sensors and the manipulator. The dissertation concludes with a summary of the contributions and the scope for further work.
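The read-assimilate-act loop described above can be sketched roughly as follows. This is only an illustration of the idea, not the thesis implementation: the sensor names, thresholds, and the simple tightening rule are all assumptions.

```python
# Illustrative sketch of a sensor-integrated grasp loop: fuse several
# gripper sensor readings into a grasp state, then adjust the grip force.
# Sensor names, units and thresholds are hypothetical.

def fuse_readings(readings):
    """Combine heterogeneous sensor readings into a grasp-state estimate."""
    contact = readings["force_N"] > 0.5 or readings["pressure_kPa"] > 2.0
    slipping = contact and abs(readings["torque_Nm"]) > 0.05
    return {"contact": contact, "slipping": slipping}

def grip_command(state, current_force):
    """Close until contact, then tighten only while slip is detected."""
    if not state["contact"]:
        return current_force + 0.1   # keep closing
    if state["slipping"]:
        return current_force + 0.05  # tighten a little
    return current_force             # hold

state = fuse_readings({"force_N": 0.8, "pressure_kPa": 1.0, "torque_Nm": 0.08})
cmd = grip_command(state, current_force=1.0)
```

In a real integration the readings would come from calibrated DAQ channels and the fusion rule would be replaced by the thesis's integrated model.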

    Artificial Intelligence as a Substitute for Human Creativity

    Creativity has always been perceived as a human trait; even though its exact neural mechanisms remain unknown, it has long been the subject of research and debate. The recent development of AI technologies and increased interest in AI have led to many projects capable of performing tasks previously regarded as impossible without human creativity. Music composition, the visual arts, literature, and science are areas in which these technologies have started to both help and replace the creative human, making the question of whether AI can be creative and capable of creation more relevant than ever. This review aims to provide an extensive perspective on several state-of-the-art AI technologies and applications currently being implemented in areas of interest closely correlated with human creativity, as well as the economic impact the development of such technologies might have on those domains.

    Human Assisted Humanoid Robot Painter

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2012.
Currently, research on humanoid robots is one of the most exciting and popular topics. Much research has been carried out on humanoid robots, such as creating mobile robots able to coexist and collaborate with humans and to perform tasks that humans cannot. Another attractive research area is creating artist robots that are able to perform art. In this research, a NAO H25 human-assisted humanoid robot painter is described.
The aim of this study is to reproduce the whole painting process, by a humanoid robot with a vision system, with the assistance of a human. To create a robot painter, solutions to some key problems are developed. The key problems regarding vision are obtaining an image of the environment containing the objects that the robot will paint, 2D object segmentation, extracting the objects from the environment, color perception, reducing the number of colors to the optimum set that can be used by the robot, and determining the orientation of brush strokes. The key problems regarding the NAO humanoid robot are standing in front of the painting table, grasping the paintbrush, moving the paintbrush down to the start point, drawing a line, and moving the paintbrush up from the end point. The painting process is divided into three phases: obtaining the image of the environment that the objects are in, extracting the objects to paint, and painting by the robot in interaction with its assistant (e.g., the robot says to the human: "Could you give me the blue color?"). The NAO H25 humanoid robot has two CMOS (complementary metal-oxide-semiconductor) digital cameras on its head, and it acquires the image of the environment via its camera. After taking the image of the environment, the foreground and background pixels are segmented to extract the objects from the environment; graph-based object segmentation is used for this purpose. Once this algorithm is applied to the image, all colors are determined and can be perceived by the robot. The robot must then analyze the color distribution of the object to determine the appropriate color segments for the optimum set of colors it can use. This is a color-reduction problem, for which K-means clustering is used. When color reduction is finished, the orientation used to guide brush strokes is computed in order to imitate the human painting style; the image gradient information is used to calculate this orientation.
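The color-reduction step can be illustrated with a minimal pure-Python K-means sketch. The pixel values and k below are toy assumptions; the actual system would cluster the full camera image.

```python
# Minimal K-means color reduction: group RGB pixels into k clusters and
# keep the cluster centers as the reduced palette. Toy data, naive init.
import math

def kmeans_colors(pixels, k, iters=20):
    centers = pixels[:k]  # naive init: first k pixels
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center (Euclidean in RGB)
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [
            tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# two obvious color groups: near-red and near-blue pixels
pixels = [(250, 10, 10), (240, 5, 5), (10, 10, 250), (5, 0, 240)]
palette = kmeans_colors(pixels, k=2)
```

The resulting palette (one reddish and one bluish center here) is what the robot would match against its limited set of available paints.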
After computing the orientation, NAO tries to find an appropriate area in each color segment derived from the second phase (extracting the objects from the environment). If the robot finds such an area, it calls its assistant and asks for the color that it will paint with. The assistant gives the color case to it. NAO checks the color: if it is correct, it thanks its assistant; if it is incorrect, it asks for the right color. When the robot gets the right color, it starts drawing: it moves its arm to the start position, pushes the paintbrush down, moves its arm to the end position, and pulls the brush up. In this system, it is important to paint in a stable direction. The grasp becomes unstable when the brush moves from the near side to the far side, but remains stable when the brush moves from left to right or from the far side to the near side. The robot then looks for a new area to paint. Once all of the colors in one level are painted, the next level starts with a thinner brush; the number of levels corresponds to the number of brushes used for painting. Webots is a development environment used to model, program and simulate mobile robots. The painting process was simulated with NAO in a generic environment in Webots, containing a chair, a can on the chair, and two balls of different sizes in front of the chair. The humanoid robot painted these objects on a canvas; in the simulator, as the robot moved its arm and hand, the simulated picture was gradually filled with colors. In the real environment, a table, a cardboard sheet used as a canvas placed on the table, a pot for the robot to paint, and gouache paints were used. Currently, the robot paints on a table top, using its arm like a plotter in an x-y plane and moving its upper torso with the arm to reach different locations on the table.
Once the system is moved onto the real robot, some technical challenges arise, primarily due to the requirement that NAO use its articulated arms, which have limited ranges of motion, to paint on a fixed surface (the canvas). The NAO H25 humanoid robot also has no force or tactile sensors on its fingers; because of this constraint, the paintbrush is fixed in the robot's hand so it can be manipulated while painting.

    Compliant multi-fingered adaptive robotic gripper

    Passively compliant underactuated mechanisms are one way to obtain a finger that can accommodate any irregularly shaped or delicate grasped object. The purpose of the underactuation is to use the power of a single actuator to drive the opening and closing motion of the gripper. A fully compliant mechanism has multiple degrees of freedom and can be considered an underactuated mechanism. This paper presents the design of an adaptive underactuated compliant multi-fingered gripper with distributed compliance. The optimal topology of the finger structure was obtained by an iterative optimization procedure. It was shown that, for real robotic applications, multi-fingered grippers with three or more fingers are more suitable for stable and safe grasping.
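The underactuation principle (one actuator driving every joint, with the finger shape settling wherever contact stops each joint) can be sketched with a simple tendon-and-pulley analogue. The radii and stop angles are illustrative assumptions; the paper's gripper achieves the same behaviour through a compliant mechanism rather than tendons.

```python
# Sketch of underactuation: a single tendon tension produces a torque at
# every joint (tau_i = T * r_i), and each joint flexes until its own
# contact stops it, so the finger wraps around the object passively.

def joint_torques(tension, pulley_radii):
    """One input force -> a torque at every joint."""
    return [tension * r for r in pulley_radii]

def adapted_pose(contact_angles, free_angle=1.2):
    """Each joint stops at its contact angle, or at the free travel limit."""
    return [min(free_angle, a) for a in contact_angles]

taus = joint_torques(10.0, [0.010, 0.007, 0.005])  # N*m at three joints
pose = adapted_pose([0.4, 1.5, 1.5])               # proximal joint hits first
```

Note how the distal joints keep closing after the proximal joint is blocked, which is exactly the shape-adaptation property the abstract describes.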

    An overview of artificial intelligence and robotics. Volume 2: Robotics

    This report provides an overview of the rapidly changing field of robotics. It incorporates definitions of the various types of robots, a summary of the basic concepts used in each of the many technical areas, a review of the state of the art, and statistics on robot manufacture and usage. Particular attention is paid to the status of robot development and to the organizations involved, their activities, and their funding.

    Robust and Accurate Hand Motion Tracking for Human-Machine Interaction

    Doctoral dissertation -- Department of Mechanical and Aerospace Engineering, College of Engineering, Seoul National University, August 2021. Advisor: Dongjun Lee. Hand-based interfaces are promising for realizing intuitive, natural and accurate human-machine interaction (HMI), as the human hand is the main source of dexterity in our daily activities. The thesis therefore begins with a human-perception study on the detection threshold of visuo-proprioceptive conflict (i.e., the allowable tracking error) with and without cutaneous haptic feedback, and suggests a tracking-error specification for realistic and fluid hand-based HMI. The thesis then proposes a novel wearable hand-tracking module which, to be compatible with cutaneous haptic devices that emit magnetic noise, opportunistically employs heterogeneous sensors (an IMU/compass module and a soft sensor) reflecting the anatomical properties of the human hand, and which is suitable for a specific application (finger-based interaction with fingertip haptic devices). This hand-tracking module, however, loses tracking when interacting with, or near, electrical machines or ferromagnetic materials. The thesis therefore presents its main contribution, a novel visual-inertial skeleton tracking (VIST) framework that can provide accurate and robust hand (and finger) motion tracking even in many challenging real-world scenarios and environments in which state-of-the-art technologies are known to fail due to their respective fundamental limitations (e.g., severe occlusion for tracking purely with vision sensors; electromagnetic interference for tracking purely with IMUs (inertial measurement units) and compasses; and mechanical contact for tracking purely with soft sensors).
The proposed VIST framework comprises a sensor glove with multiple IMUs and passive visual markers, a head-mounted stereo camera, and a tightly-coupled filtering-based visual-inertial fusion algorithm that estimates the hand/finger motion and auto-calibrates hand/glove-related kinematic parameters simultaneously while taking the anatomical constraints of the hand into account. The VIST framework exhibits good tracking accuracy and robustness, affordable material cost, light hardware and software weight, and enough ruggedness and durability even to permit washing. Quantitative and qualitative experiments validate the advantages and properties of the VIST framework, clearly demonstrating its potential for real-world applications.
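The predict-with-IMU / correct-with-vision idea behind such a fusion filter can be illustrated with a scalar Kalman filter on a single joint angle. The noise values and the one-dimensional state are toy assumptions, far simpler than the thesis's tightly-coupled EKF over the full hand skeleton.

```python
# Scalar sketch of visual-inertial fusion: integrate the gyro rate in the
# prediction step, then correct the drifting estimate with a visual marker
# measurement. q (process noise) and r (measurement noise) are made up.

def ekf_step(x, P, gyro_rate, z_vision, dt=0.01, q=0.01, r=0.05):
    # predict: integrate the gyro, inflate the variance
    x_pred = x + gyro_rate * dt
    P_pred = P + q
    # correct: blend in the visual measurement via the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z_vision - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for _ in range(50):  # a drifting gyro estimate, a steady marker at 0.3 rad
    x, P = ekf_step(x, P, gyro_rate=0.2, z_vision=0.3)
```

The estimate settles near the visual measurement while the gyro keeps it smooth between corrections, which is the complementary behaviour the abstract relies on when it drops the magnetometer.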

    Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

    Humanoid robot painter: Visual perception and high-level planning

    This paper presents the visual perception used in high-level manipulator planning for a robot to reproduce the procedure involved in human painting. First, we apply a technique for 2D object segmentation that considers region similarity as an objective function and edges as a constraint, with an artificial-intelligence criterion function. The system can segment images more effectively than most existing methods, even if the foreground is very similar to the background. Second, we propose a novel color-perception model that shows similarity to human perception and outperforms many existing color-reduction algorithms. Third, we propose a novel global orientation-map perception using a radial basis function. Finally, we use the derived model, along with the brush's position- and force-sensing, to produce a visual-feedback drawing. Experiments show that our system can generate good paintings, including portraits.
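A radial-basis-function orientation map of the kind described above can be sketched as follows: the stroke direction at any canvas point is a Gaussian-weighted blend of a few known edge orientations. The sample points and kernel width are made-up values, and blending unit vectors rather than raw angles is one common way to handle angle wrap-around (not necessarily the paper's formulation).

```python
# Sketch of an RBF orientation map: interpolate a stroke direction at an
# arbitrary point from sparse (position, angle) samples using Gaussian
# radial-basis weights. Sample data is illustrative.
import math

def rbf_orientation(p, samples, sigma=30.0):
    """samples: [((x, y), angle_rad), ...] -> interpolated angle at p."""
    wx = wy = 0.0
    for (sx, sy), ang in samples:
        w = math.exp(-((p[0] - sx) ** 2 + (p[1] - sy) ** 2) / (2 * sigma ** 2))
        wx += w * math.cos(ang)   # blend unit vectors, not raw angles,
        wy += w * math.sin(ang)   # so 0 and 2*pi do not cancel out
    return math.atan2(wy, wx)

samples = [((0, 0), 0.0), ((100, 0), math.pi / 2)]
a = rbf_orientation((10, 0), samples)   # near the horizontal sample
b = rbf_orientation((90, 0), samples)   # near the vertical sample
```

Between the two samples the interpolated direction rotates smoothly from horizontal to vertical, which is the behaviour a painter-style stroke field needs.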

    GelSight360: An Omnidirectional Camera-Based Tactile Sensor for Dexterous Robotic Manipulation

    Camera-based tactile sensors have shown great promise in enhancing a robot's ability to perform a variety of dexterous manipulation tasks, thanks to the high-resolution tactile data and 3D depth-map reconstructions they can provide. Unfortunately, many of these tactile sensors use a flat sensing surface, sense on only one side of the sensor's body, or have a bulky form factor, making it difficult to integrate them with a variety of robotic grippers. Of the camera-based sensors that do have all-around, curved sensing surfaces, many cannot provide 3D depth maps, and those that do often require optical designs specific to a particular sensor geometry. In this work, we introduce GelSight360, a fingertip-like, omnidirectional, camera-based tactile sensor capable of producing depth maps of objects deforming the sensor's surface. In addition, we introduce a novel cross-LED lighting scheme that can be implemented in all-around sensor geometries of different shapes and sizes, allowing the sensor to be easily reconfigured and attached to grippers with varying degrees of freedom. With this work, we enable roboticists to quickly and easily customize high-resolution tactile sensors to fit their robotic systems' needs.
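The depth-map idea behind camera-based tactile sensors can be illustrated in one dimension: per-pixel surface gradients (here a made-up slope profile, standing in for slopes recovered from shading) are integrated into a height profile. This is only a sketch of the generic gradient-integration step, not the GelSight360 reconstruction pipeline.

```python
# 1D sketch of depth from gradients: cumulatively sum surface slopes
# (as would be recovered from shading under known lighting) into a
# height/depth profile of the indenting object. Slope data is made up.

def integrate_slopes(slopes, dx=1.0):
    """Cumulatively sum surface slopes into a height profile."""
    depth = [0.0]
    for s in slopes:
        depth.append(depth[-1] + s * dx)
    return depth

# a symmetric bump: the surface rises, flattens, then falls back
profile = integrate_slopes([0.1, 0.2, 0.0, -0.2, -0.1])
```

A 2D sensor does the analogous thing with a gradient field and a Poisson-type integration, but the principle is the same.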

    Custom Autonomous Robotic Painter

    Today humans outperform robots in problem solving, adaptability, and creativity. The goal of this project was to bridge the gap between robotic and human capabilities through the development of an autonomous painting robot. The custom design of the mechanics, electronics, and software allowed for a versatile solution. Image-decomposition techniques were used to break input images down into feature areas that were then reconstructed by the robot, and vision feedback was performed during the painting process to apply corrections to the artwork dynamically. Understanding the motions undertaken by painters and replicating them in a robotic platform can revolutionize the art form, contribute to the scientific advancement of robotic capabilities, and reduce the workload needed to construct paintings.
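The vision-feedback correction step can be sketched as a cell-by-cell comparison between the target image and a photo of the painted canvas, queueing repaints where the color error is large. The grids, colors, and error threshold below are assumptions for illustration, not the project's actual pipeline.

```python
# Sketch of vision feedback during painting: compare target vs. canvas
# colors per cell and return the cells that still need correction.
# Grids, RGB values and the threshold are hypothetical.

def cells_to_correct(target, canvas, threshold=60):
    """Return (row, col) cells whose color differs too much from the target."""
    fix = []
    for r, (trow, crow) in enumerate(zip(target, canvas)):
        for c, (t, p) in enumerate(zip(trow, crow)):
            err = sum(abs(a - b) for a, b in zip(t, p))  # L1 color distance
            if err > threshold:
                fix.append((r, c))
    return fix

target = [[(255, 0, 0), (0, 0, 255)]]
canvas = [[(250, 5, 0), (200, 200, 0)]]  # second cell painted wrong
todo = cells_to_correct(target, canvas)
```

In the real system the "canvas" grid would come from a camera observing the artwork mid-process, closing the loop the abstract describes.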