
    Games and Brain-Computer Interfaces: The State of the Art

    BCI gaming is a very young field; most games are proof-of-concepts. Work that compares BCIs in game environments with traditional BCIs indicates no negative effects, or even a positive effect of the rich visual environments on performance. The low transfer rate of current BCIs poses a problem for game control; this is often solved by changing the goal of the game. Multi-modal input with BCI is a promising solution, as is assigning more meaningful functionality to BCI control.

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field in the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    Collision Awareness Using Vibrotactile Arrays

    What is often missing from many virtual worlds is a physical sense of the confinement and constraint of the virtual environment. To address this issue, we present a method for providing localized cutaneous vibratory feedback to the user’s right arm. We created a sleeve of tactors linked to a real-time human model that activates when the corresponding body area collides with an object. The hypothesis is that vibrotactile feedback to body areas provides the wearer sufficient guidance to ascertain the existence and physical realism of access paths and body configurations. The results of human subject experiments clearly show that the use of full-arm vibrotactile feedback improves performance over purely visual feedback in navigating the virtual environment. These results validate the empirical performance of this concept.
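As a concrete illustration of the tactor-activation logic described above (not the paper's actual implementation), the sketch below models body segments and obstacles as bounding spheres and reports which tactor's segment is in contact; the geometry, tactor names, and sphere sizes are all invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    x: float
    y: float
    z: float
    r: float

def collides(a, b):
    """True when two bounding spheres overlap."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z)) <= a.r + b.r

def active_tactors(arm_segments, obstacles):
    """Return IDs of tactors whose body segment touches any obstacle."""
    return {name for name, seg in arm_segments.items()
            if any(collides(seg, obs) for obs in obstacles)}

# Hypothetical three-tactor sleeve (positions in meters along the arm).
sleeve = {
    "wrist":   Sphere(0.0, 0.0, 0.0, 0.05),
    "forearm": Sphere(0.0, 0.2, 0.0, 0.06),
    "elbow":   Sphere(0.0, 0.4, 0.0, 0.05),
}
wall = [Sphere(0.0, 0.42, 0.0, 0.05)]
print(active_tactors(sleeve, wall))  # → {'elbow'}
```

In a real system the sphere poses would be updated each frame by the real-time human model, and the returned set would switch vibration motors on or off.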

    Motion Planning in Artificial and Natural Vector Fields

    This dissertation advances the field of autonomous vehicle motion planning in various challenging environments, ranging from flows and planetary atmospheres to cluttered real-world scenarios. By addressing the challenge of navigating environmental flows, this work introduces the Flow-Aware Fast Marching Tree algorithm (FlowFMT*). This algorithm optimizes motion planning for unmanned vehicles, such as UAVs and AUVs, navigating in three-dimensional static flows. By considering reachability constraints caused by vehicle and flow dynamics, flow-aware neighborhood sets are found and used to reduce the number of calls to the cost function. The method computes feasible and optimal trajectories from start to goal in challenging environments that may contain obstacles or prohibited regions (e.g., no-fly zones). The method is extended to generate a vector field-based policy that optimally guides the vehicle to a given goal. Numerical comparisons with state-of-the-art control solvers demonstrate the method's simplicity and accuracy. In this dissertation, the proposed sampling-based approach is used to compute trajectories for an autonomous semi-buoyant solar-powered airship in the challenging Venusian atmosphere, which is characterized by super-rotation winds. A cost function that incorporates the energetic balance of the airship is proposed to find energy-efficient trajectories. This cost function combines the main forces acting on the vehicle: weight, buoyancy, aerodynamic lift and drag, and thrust. The FlowFMT* method is also extended to consider the possibility of battery depletion due to thrust or battery charging due to solar energy and tested in this Venus atmosphere scenario. Simulations showcase how the airship selects high-altitude paths to minimize energy consumption and maximize battery recharge. They also show the airship sinking and drifting with the wind at the altitudes where it is fully buoyant.
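To make the energetic-balance idea concrete, here is a minimal sketch of a per-edge energy cost in the spirit described above. The static force balance, the gravity constant, and all numbers are illustrative assumptions, not the dissertation's actual cost function.

```python
G_VENUS = 8.87  # approximate Venus surface gravity, m/s^2

def edge_cost(mass, buoyancy, lift, drag, speed, solar_power, dt):
    """Energy (J) drawn from the battery while traversing one trajectory edge.

    Thrust is assumed to balance drag plus any weight not covered by
    buoyancy and lift; solar input offsets thrust power. A negative
    cost means the battery charges along the edge.
    """
    weight = mass * G_VENUS
    deficit = max(0.0, weight - buoyancy - lift)  # vertical force to make up
    thrust = drag + deficit                       # crude static force balance
    return (thrust * speed - solar_power) * dt

# Fully buoyant airship drifting with the wind: no thrust, battery recharges.
print(edge_cost(mass=100.0, buoyancy=900.0, lift=0.0, drag=0.0,
                speed=1.0, solar_power=50.0, dt=10.0))  # → -500.0
```

Summing such per-edge costs over candidate trajectories is what lets a sampling-based planner prefer high-altitude recharging paths or fully buoyant drifting, as in the simulations described above.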
For terrestrial applications, this dissertation finally introduces the Sensor-Space Lattice (SSLAT) motion planner, a real-time obstacle avoidance algorithm for autonomous vehicles and mobile robots equipped with planar range finders. This planner uses a lattice to tessellate the area covered by the sensor and to rapidly compute collision-free paths in the robot surroundings by optimizing a cost function. The cost function guides the vehicle to follow an artificial vector field that encodes the desired vehicle path. This planner is evaluated in challenging, cluttered static environments, such as warehouses and forests, and in the presence of moving obstacles, in both simulations and real experiments. Our results show that our algorithm performs collision checking and path planning faster than baseline methods. Since the method admits sequential and parallel implementations, we also compare the two versions of SSLAT and show that the parallel implementation, whose run-time is independent of the number and shape of obstacles in the environment, provides a significant speedup thanks to its independent collision checks.
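A toy version of the lattice idea can clarify it: tessellate the sensor's field of view into candidate headings, discard those blocked by range-finder returns, and score the rest against the guiding vector field. The ray count, clearance angle, and scoring rule below are invented for illustration and are not SSLAT's actual lattice or cost function.

```python
import math

def ang_diff(a, b):
    """Smallest absolute angular difference between two bearings (rad)."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def pick_heading(field_dir, obstacle_bearings, n_rays=16, clearance=0.3):
    """Choose the free lattice ray best aligned with the vector field.

    field_dir: desired heading (rad) from the guiding vector field.
    obstacle_bearings: bearings (rad) of detected obstacles.
    A ray is free when every obstacle is more than `clearance` rad away.
    Returns None when all rays are blocked.
    """
    rays = [2 * math.pi * k / n_rays for k in range(n_rays)]
    free = [r for r in rays
            if all(ang_diff(r, o) > clearance for o in obstacle_bearings)]
    return min(free, key=lambda r: ang_diff(r, field_dir), default=None)

# Obstacle dead ahead: the planner sidesteps to an adjacent free ray.
print(pick_heading(0.0, [0.0]))
```

Each ray's collision check is independent of the others, which is what makes this kind of lattice evaluation parallelize naturally, echoing the parallel SSLAT implementation discussed above.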

    Wearable haptic systems for the fingertip and the hand: taxonomy, review and perspectives

    In the last decade, we have witnessed a drastic change in the form factor of audio and vision technologies, from heavy and grounded machines to lightweight devices that naturally fit our bodies. However, only recently have haptic systems started to be designed with wearability in mind. The wearability of haptic systems enables novel forms of communication, cooperation, and integration between humans and machines. Wearable haptic interfaces are capable of communicating with their human wearers during interaction with the environment they share, in a natural and yet private way. This paper presents a taxonomy and review of wearable haptic systems for the fingertip and the hand, focusing on those systems directly addressing wearability challenges. The paper also discusses the main technological and design challenges for the development of wearable haptic interfaces, and it reports on the future perspectives of the field. Finally, the paper includes two tables summarizing the characteristics and features of the most representative wearable haptic systems for the fingertip and the hand.

    Advances in Robot Navigation

    Robot navigation includes different interrelated activities such as perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot to select the next direction to go; mapping - the construction of a spatial representation using the sensory information perceived; localization - the strategy to estimate the robot's position within the spatial map; path planning - the strategy to find a path towards a goal location, optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the abovementioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation draw inspiration from nature, and diverse applications are described in the context of an important field of study: social robotics.

    A Physics-Simulation-Based Humanoid Control Method for Generating Natural Human Locomotion

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: Jehee Lee.
    Controlling artificial humanoids to generate realistic human locomotion has been considered an important problem in computer graphics and robotics. However, it is known to be very difficult because of the underactuated dynamics of locomotion and the complexity of the human body structure to be imitated and simulated. In this thesis, we present controllers for physically simulated humanoids that exhibit a rich set of human-like, resilient locomotion skills. Our approach exploits observable and measurable data from real humans to overcome the difficulties of the problem.
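At the lowest level, this kind of motion-data-driven control reduces to tracking a reference pose at each joint. A minimal single-joint sketch of proportional-derivative tracking follows; the gains, inertia, and integration scheme are illustrative, not the thesis's controller.

```python
def pd_torque(q_ref, q, dq, kp=300.0, kd=30.0):
    """Joint torque from a simple proportional-derivative tracking rule."""
    return kp * (q_ref - q) - kd * dq

def simulate(q_ref, steps=2000, dt=0.001, inertia=1.0):
    """Euler-integrate one joint under PD tracking toward q_ref."""
    q, dq = 0.0, 0.0
    for _ in range(steps):
        ddq = pd_torque(q_ref, q, dq) / inertia
        dq += ddq * dt
        q += dq * dt
    return q

print(round(simulate(1.0), 3))  # settles near the 1.0 rad reference
```

The thesis's key insight, per the abstract, is that continuously modulating the reference itself is what allows such a simple tracking layer to reproduce captured motion robustly.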
More specifically, our approach uses human motion data collected by motion capture systems and reconstructs measured physical and physiological properties of the human body. We propose a data-driven algorithm that controls torque-actuated biped models to walk with a wide range of locomotion skills. Our algorithm exploits the inherent robustness of captured locomotion to realize human-like locomotion control. Concretely, it takes a reference motion and generates a set of joint torques that produce a human-like walking simulation. The key idea is to continuously modulate the reference motion so that even a simple tracking controller can reproduce it. This framework allows existing data-driven techniques such as motion blending, motion warping, and motion graphs to facilitate biped control. We also present a locomotion control system that drives detailed, musculotendon-actuated models of the human body to create more human-like simulated locomotion. The simulated humanoids are based on measured properties of a human body and contain up to 120 muscles. Our algorithm computes the optimal coordination of muscle activations and actively modulates the reference motion to faithfully reproduce it or to adapt it to new conditions. Our scalable algorithm can control various types of musculoskeletal humanoids while seeking harmonious coordination of many muscles and maintaining balance.
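The muscle-coordination step can be phrased as a small constrained least-squares problem: find activations in [0, 1] whose generated torques match a target while penalizing effort. The projected-gradient sketch below is a stand-in for the quadratic-programming formulation the dissertation describes; the matrix, weights, and iteration counts are illustrative.

```python
def muscle_activations(R, tau, w=1e-3, iters=5000, lr=0.01):
    """Minimize ||R a - tau||^2 + w ||a||^2 subject to 0 <= a <= 1.

    R: joints-by-muscles matrix mapping activations to joint torques
    (a crude stand-in for moment arms times muscle forces).
    Solved by projected gradient descent from zero activation.
    """
    m, n = len(R), len(R[0])
    a = [0.0] * n
    for _ in range(iters):
        # Residual r = R a - tau
        r = [sum(R[i][j] * a[j] for j in range(n)) - tau[i] for i in range(m)]
        # Gradient 2 R^T r + 2 w a, then project back onto the box [0, 1]^n
        a = [min(1.0, max(0.0,
                 a[j] - lr * (2 * sum(R[i][j] * r[i] for i in range(m)) + 2 * w * a[j])))
             for j in range(n)]
    return a

# One joint driven by an antagonistic flexor/extensor pair: only the flexor fires.
print([round(v, 2) for v in muscle_activations([[1.0, -1.0]], [0.5])])  # → [0.5, 0.0]
```

In the actual system such an optimization would sit inside the physics loop, with the torque-mapping matrix derived from musculotendon routing and the bounds reflecting physiological activation limits.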
We demonstrate the strength of our approach with examples that allow simulated humanoids to walk and run in various styles; adapt to changes of models (e.g., muscle weakness, tightness, joint dislocation), environments (e.g., external pushes), and goals (e.g., pain reduction and efficiency maximization); and perform more challenging locomotion tasks such as turning, spinning, and walking while steering direction interactively.

    Rover and Telerobotics Technology Program

    The Jet Propulsion Laboratory's (JPL's) Rover and Telerobotics Technology Program, sponsored by the National Aeronautics and Space Administration (NASA), responds to opportunities presented by NASA space missions and systems, and seeds commercial applications of the emerging robotics technology. The scope of the JPL Rover and Telerobotics Technology Program comprises three major segments of activity: NASA robotic systems for planetary exploration, robotic technology and terrestrial spin-offs, and technology for non-NASA sponsors. Significant technical achievements have been reached in each of these areas, including complete telerobotic system prototypes that have been built and tested in realistic scenarios relevant to prospective users. In addition, the program has conducted complementary basic research and created innovative technology and terrestrial applications, as well as enabled a variety of commercial spin-offs.

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Recent advancements in indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, preliminary implementations of such systems use only visual interfaces, meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This underscores the need for developing and evaluating non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research empirically evaluated several approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clockface angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicate that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants learned the targets visually. We again compared spatial updating performance across these modes and found no significant performance differences between them.
These results indicate that 3D audio interfaces can be developed on sensor-rich, off-the-shelf smartphones without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning by blindfolded participants using an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective ratings. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio driven by head tracking versus hand tracking. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
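The non-perceptual spatial-language mode can be illustrated with the clockface convention the abstract mentions. The angular convention below (degrees clockwise from straight ahead) is an assumption made for the example.

```python
def clockface(azimuth_deg):
    """Describe a bearing as a clock hour (12 = straight ahead, 3 = right)."""
    hour = round((azimuth_deg % 360) / 30) % 12
    return 12 if hour == 0 else hour

print(clockface(0), clockface(90), clockface(180))  # → 12 3 6
```

A perceptual interface would instead render the target directly with spatialized audio, bypassing this verbal encoding, which is consistent with the experiments finding perceptual modes faster and better rated.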