
    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier. AR complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, though it is not free of human-factors and other constraints. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical, biological, and human-body applications. The third and final section contains a number of new and useful applications in daily living and learning.

    TRECVID 2004 - an overview


    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with. Firstly, the separation of robot and human workspaces imposes an additional financial burden; secondly, re-programming costs are significant when products change, especially in Small and Medium-sized Enterprises (SMEs). There is therefore a significant need to reduce the programming effort required for robots to perform various tasks while sharing the same space with a human operator. To this end, the robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use various senses, such as vision, smell, and taste, to perform tasks. One sense that plays a significant role in human activity is 'touch' or 'force'. For example, holding a cup of tea, or making fine adjustments while inserting a key, requires haptic information to complete the task successfully. In all these examples, force and torque data are crucial, and this information implicitly conveys data about contact force, object stiffness, and more. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to handle force perception and then learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks with LfD, task-relevant features must first be extracted from the force information; knowledge must then be extracted and encoded from those features so that the captured skill can be reproduced in a new scenario. In this thesis, these elements of LfD were achieved using different approaches depending on the demonstrated task, and four robotics problems were addressed within the LfD framework. The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second was recognition of the Contact State (CS) during assembly tasks; to tackle this, a symbolic approach was proposed in which the force/torque signals of a demonstrated assembly task are encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task based on LfD; here, an ensemble machine-learning approach was proposed to capture the skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation of the parts; a new learning approach based on an Artificial Potential Field (APF) was developed to learn a Peg-in-Hole (PiH) assembly task comprising no-contact and contact phases. To sum up, this thesis focuses on data-driven approaches to learning force-based tasks in an industrial context. Different machine-learning approaches were implemented, developed, and evaluated in different scenarios, and their performance was compared with approaches based on mathematical modelling.
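
    As an illustration of the symbolic contact-state encoding described above, the following is a minimal sketch assuming a quantile-based (SAX-style) quantization of a single force channel into a symbol string that a sequence classifier could consume. The function name, symbol count, and synthetic trace are hypothetical and are not taken from the thesis.

        # Sketch of the idea behind symbolic contact-state encoding:
        # discretize a force/torque channel into a symbol sequence.
        # Illustrative only; names and thresholds are hypothetical.
        import numpy as np

        def encode_symbols(signal: np.ndarray, n_symbols: int = 5) -> str:
            """Map a 1-D force/torque signal to quantile-based symbols."""
            # z-normalize so the bin edges are scale-invariant
            z = (signal - signal.mean()) / (signal.std() + 1e-9)
            # quantile breakpoints split the value range into n_symbols bins
            edges = np.quantile(z, np.linspace(0, 1, n_symbols + 1)[1:-1])
            bins = np.digitize(z, edges)  # values in 0..n_symbols-1
            return "".join(chr(ord("a") + b) for b in bins)

        # Demo: synthetic insertion force (free motion -> contact -> seated)
        t = np.linspace(0, 1, 200)
        force_z = np.concatenate([np.zeros(80), 5 * t[:80], np.full(40, 4.0)])
        force_z += 0.1 * np.random.randn(force_z.size)
        print(encode_symbols(force_z)[:40])  # symbol sequence for a classifier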

    TOWARD INTELLIGENT WELDING BY BUILDING ITS DIGITAL TWIN

    To meet increasing requirements for individualization, efficiency, and quality in production, traditional manufacturing processes are evolving into smart manufacturing with support from information-technology advancements including cyber-physical systems (CPS), the Internet of Things (IoT), big industrial data, and artificial intelligence (AI). The prerequisite for integration with these advanced information technologies is digitalizing manufacturing processes so that they can be analyzed, controlled, and made to interact with other digitalized components. The digital twin was developed as a general framework for doing so by building digital replicas of physical entities. This work takes welding manufacturing as the case study to accelerate its transition to intelligent welding by building its digital twin, and contributes to digital-twin research in two aspects: (1) increasing information analysis and reasoning ability by integrating deep learning; (2) enhancing the human user's operative ability over physical welding manufacturing via digital twins by integrating human-robot interaction (HRI). Firstly, a digital twin of pulsed gas tungsten arc welding (GTAW-P) is developed by integrating deep learning to provide strong feature-extraction and analysis ability. In this system, direct information including weld-pool images, arc images, welding current, and arc voltage is collected by cameras and arc sensors. Indirect information determining welding quality, i.e., weld-joint top-side bead width (TSBW) and back-side bead width (BSBW), is computed by a traditional image-processing method and a deep convolutional neural network (CNN), respectively. Based on that, the weld-joint geometrical size is controlled to meet quality requirements under various welding conditions. Meanwhile, the developed digital twin is visualized through a graphical user interface (GUI) that gives human users effective and intuitive perception of the physical welding processes. Secondly, to enhance the human operative ability over the physical welding processes via digital twins, HRI is integrated using virtual reality (VR) as an interface that transmits information bidirectionally, i.e., sending human commands to welding robots and visualizing the digital twin to human users. Six welders, skilled and unskilled, tested this system by completing the same welding job; they demonstrated different operation patterns and produced different welding qualities. To differentiate their skill levels (skilled or unskilled) from their demonstrated operations, a data-driven approach, FFT-PCA-SVM, combining fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM), was developed and achieved 94.44% classification accuracy. The robot can also act as an assistant that helps human welders complete welding tasks by recognizing and executing their intended welding operations; this is done by a human-intention-recognition algorithm based on a hidden Markov model (HMM), and welding experiments show that the developed robot-assisted welding helps improve welding quality. To further exploit the robot's advantages, i.e., movement accuracy and stability, the robot's role is upgraded from assistant to collaborator, completing a subtask independently, namely torch weaving and automatic seam tracking in weaving GTAW. The other subtask, moving the welding torch along the weld seam, is completed by the human user, who can adjust the travel speed to control the heat input and ensure good welding quality. In this way, the advantages of humans (intelligence) and robots (accuracy and stability) are combined under a human-robot collaboration framework. The developed digital twin for welding manufacturing helps promote next-generation intelligent welding and, with small modifications, can easily be applied to other similar manufacturing processes including painting, spraying, and additive manufacturing.
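
    The FFT-PCA-SVM skill classifier described above maps naturally onto a short machine-learning pipeline. The following is a hedged sketch using scikit-learn under assumed settings (window length, number of components, kernel); the synthetic "weaving" signals are purely illustrative and do not reproduce the reported 94.44% accuracy.

        # Sketch of an FFT-PCA-SVM skill classifier: FFT magnitudes as
        # features, PCA for dimensionality reduction, SVM for the
        # skilled/unskilled decision. Settings are assumptions, not the
        # paper's reported configuration.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import FunctionTransformer, StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC

        def fft_magnitude(X):
            # X: (n_windows, n_samples) of motion signals; one-sided spectrum
            return np.abs(np.fft.rfft(X, axis=1))

        clf = make_pipeline(
            FunctionTransformer(fft_magnitude),
            StandardScaler(),
            PCA(n_components=10),  # assumed; keep dominant spectral modes
            SVC(kernel="rbf"),     # assumed kernel
        )

        # Toy data: "skilled" = steady weaving frequency, "unskilled" = jittery
        rng = np.random.default_rng(0)
        t = np.linspace(0, 2, 256)
        skilled = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal((40, 256))
        unskilled = (np.sin(2 * np.pi * (2.0 + rng.standard_normal((40, 1))) * t)
                     + 0.3 * rng.standard_normal((40, 256)))
        X = np.vstack([skilled, unskilled])
        y = np.array([1] * 40 + [0] * 40)
        clf.fit(X, y)
        print(clf.score(X, y))  # training accuracy on the toy data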

    Analysis and enhancement of interpersonal coordination using inertial measurement unit solutions

    Today's mobile communication technologies have increased verbal and text-based communication with other humans, social robots, and intelligent virtual assistants. On the other hand, these technologies reduce face-to-face communication. This social issue is critical because decreasing direct interaction may cause difficulty in reading social and environmental cues, thereby impeding the development of overall social skills. Recently, scientists have studied the importance of nonverbal interpersonal activities to social skills by measuring human behavioral and neurophysiological patterns. These interdisciplinary approaches are in line with the European Union research project "Socializing sensorimotor contingencies" (socSMCs), which aims to improve the capability of social robots and to deal properly with autism spectrum disorder (ASD). Modelling and benchmarking the social behavior of healthy humans is therefore fundamental to establishing a foundation for research on the emergence and enhancement of interpersonal coordination. In this research project, two different experimental settings were categorized depending on the interactants' distance: distal and proximal settings, in which the structure of the engaged cognitive systems changes and the level of socSMCs differs. As part of the socSMCs project, this dissertation adopted this spatial framework. Additionally, single-sensor solutions were developed to reduce the cost and effort of measuring human behavior, recognizing social behaviors, and enhancing interpersonal coordination. First, algorithms using a head-worn inertial measurement unit (H-IMU) were developed to measure human kinematics as a baseline for social behaviors. The results confirmed that the H-IMU can measure individual gait parameters by analyzing head kinematics alone. Secondly, as a distal sensorimotor contingency, interpersonal rapport was considered with respect to a dynamic structure of three interacting components: positivity, mutual attentiveness, and coordination. The H-IMUs monitored social behavioral events based on the kinematics of head orientation and oscillation during walking and talking, which contribute to estimating the level of rapport. Finally, in a new collaborative task with the proposed IMU-based tablet application, the results verified the effects of different forms of auditory-motor feedback on the enhancement of interpersonal coordination in a proximal setting. This dissertation has an intensely interdisciplinary character: technological developments in sensor and software engineering had to be applied to, or devised for, issues in direct relation to predefined behavioral-science questions in two different settings (distal and proximal). The given frame served as a reference in the development of the methods and settings in this dissertation. The proposed IMU-based solutions are also promising for various future applications thanks to the wide availability of wearable devices with IMUs.
    European Commission/HORIZON2020-FETPROACT-2014/641321/E
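
    As a compact illustration of recovering gait parameters from head kinematics alone, the following sketch detects step events, and hence cadence, in the vertical acceleration of a head-worn IMU. The sampling rate, filter band, and peak threshold are assumed values, not the dissertation's algorithm.

        # Sketch: estimate cadence (steps/min) from vertical head
        # acceleration of a head-worn IMU. All parameters are assumptions.
        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        FS = 100.0  # Hz, assumed IMU sampling rate

        def cadence_from_head_acc(acc_z: np.ndarray) -> float:
            """Return steps per minute from vertical head acceleration."""
            # band-pass around typical step frequencies (~0.5-3 Hz) to
            # suppress gravity offset and high-frequency jitter
            b, a = butter(2, [0.5 / (FS / 2), 3.0 / (FS / 2)], btype="band")
            filtered = filtfilt(b, a, acc_z)
            # each step produces one dominant vertical oscillation peak
            peaks, _ = find_peaks(filtered, height=0.2, distance=int(0.3 * FS))
            duration_min = len(acc_z) / FS / 60.0
            return len(peaks) / duration_min

        # Demo: synthetic walking at ~1.8 steps/s
        t = np.arange(0, 10, 1 / FS)
        acc = np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.randn(t.size)
        print(round(cadence_from_head_acc(acc)))  # ~108 steps/min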

    Human-robot interaction for assistive robotics

    This dissertation presents an in-depth study of human-robot interaction (HRI) with application to assistive robotics. The studies cover dexterous in-hand manipulation and assistive robots for sit-to-stand (STS) assistance, along with human intention estimation. Chapter 1 explicitly discusses the background and issues of HRI. Chapter 2 reviews the recent state of the art in HRI, including physical human-robot interaction, robotic STS assistance, dexterous in-hand manipulation, and human intention estimation. Chapter 3 describes the various models and control algorithms in detail. Chapter 4 introduces the research equipment. Chapter 5 presents innovative theories and implementations of HRI in assistive robotics, including a general methodology for robotic assistance from the human perspective, novel hardware design, robotic STS assistance, human intention estimation, and control.