99 research outputs found
Airborne Infrared Target Tracking with the Nintendo Wii Remote Sensor
Intelligence, surveillance, and reconnaissance unmanned aircraft systems (UAS) are the most common variety of UAS in use today and provide invaluable capabilities to both the military and civil services. Keeping the sensors centered on a point of interest for an extended period of time is a demanding task requiring the full attention and cooperation of the UAS pilot and sensor operator. There is great interest in developing technologies which allow an operator to designate a target and have the aircraft automatically maneuver and track it without further operator intervention. Presently, the barriers to entry for developing these technologies are high: expertise in aircraft dynamics and control as well as in real-time motion video analysis is required, and the cost of the systems required to flight test these technologies is prohibitive. However, if the research intent is purely to develop a vehicle maneuvering controller, then it is possible to obviate the video analysis problem entirely. This research presents a solution to the target tracking problem which reliably provides automatic target detection and tracking with low expense and computational overhead by making use of the infrared sensor from a Nintendo Wii Remote controller.
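Because the Wii Remote's IR sensor reports bright-spot (blob) centroids directly rather than raw imagery, a tracking loop can operate in pixel space with very little computation. The Python sketch below illustrates the idea; the resolution and field-of-view figures are commonly reported approximations for the Wiimote sensor, and the proportional steering law is an illustrative assumption, not the controller developed in this work.

```python
# Assumed Wiimote IR camera parameters (commonly reported approximations).
SENSOR_W, SENSOR_H = 1024, 768          # reported blob resolution
FOV_X_DEG, FOV_Y_DEG = 33.0, 23.0       # approximate field of view

def blob_to_angles(px, py):
    """Map an IR blob's pixel position to angular offsets (deg) from boresight."""
    az = (px - SENSOR_W / 2) / SENSOR_W * FOV_X_DEG    # + = target right of center
    el = (SENSOR_H / 2 - py) / SENSOR_H * FOV_Y_DEG    # + = target above center
    return az, el

def steering_command(px, py, gain=0.5):
    """Illustrative proportional cue that drives the blob toward image center."""
    az, el = blob_to_angles(px, py)
    return gain * az, gain * el
```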
The development of near field probing systems for EMC near field visualization and EMI source localization
The objectives of this research are to visualize the frequency-dependent electromagnetic field distribution for electromagnetic compatibility (EMC) applications and to reconstruct the radiating sources on complex-shaped electronic systems. This is achieved by combining near field probing with a system for automatically recording the probe position and orientation. Due to the complexity of the shape of the electronic systems of interest, and to utilize the expertise of the user, the probe is moved manually rather than robotically. Concurrently, the local near field is recorded, associated with the probe location, and displayed in near real time on the captured 3D geometry: as a field strength map for EMC applications and, for source reconstruction, as a reconstructed image showing the far field radiating sources. --Abstract, page iii
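To illustrate how position-tagged probe measurements can become a field strength map, the following Python sketch bins manually acquired samples onto a 2D grid, keeping the strongest reading per cell. The grid extent, resolution, and max-hold policy are assumptions for illustration, not the system's actual display pipeline.

```python
import numpy as np

def build_field_map(positions, amplitudes, extent=(0.0, 0.3, 0.0, 0.3), n=64):
    """positions: (N, 2) tracked probe coordinates in meters; amplitudes: (N,) readings."""
    x0, x1, y0, y1 = extent
    grid = np.full((n, n), -np.inf)      # -inf marks cells not yet visited
    ix = np.clip(((positions[:, 0] - x0) / (x1 - x0) * n).astype(int), 0, n - 1)
    iy = np.clip(((positions[:, 1] - y0) / (y1 - y0) * n).astype(int), 0, n - 1)
    for i, j, a in zip(iy, ix, amplitudes):
        grid[i, j] = max(grid[i, j], a)  # max-hold: keep strongest reading per cell
    return grid
```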
Virtual reality environment for immersive visits and interaction
Master's in Computer and Telematics Engineering. As a solution for immersive virtual visits to museums, we propose an extension to pSIVE, a previously developed platform for configuring immersive virtual environments, keeping all of its functionality for creating virtual environments and associating content (PDF, videos, text) while also enabling gesture-based interaction and navigation. To that end, we propose one-to-one navigation using skeleton tracking with a Kinect calibrated to the real-world space in which the user stands, as well as gesture-based interaction methods. To validate the proposed navigation and interaction methods, a comparative study between gesture-based and button-based interaction and navigation was carried out. With the results of that study in mind, we developed new interaction methods with selection via gaze direction. The developed application was tested in a real scenario, as part of an art installation at the museum of the city of Aveiro, where visitors could navigate a virtual room of the museum and manipulate objects to create their own exhibition.
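As an illustration of selection via gaze direction, the Python sketch below picks the object whose direction lies closest to the view direction within a fixed angular cone. The 5-degree cone is an assumed threshold, and a dwell timer for confirming the selection (not shown) would typically follow.

```python
import numpy as np

def gaze_pick(head_pos, view_dir, object_positions, cone_deg=5.0):
    """Return the index of the object nearest the gaze ray within the cone, or None."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    best, best_angle = None, np.radians(cone_deg)
    for idx, p in enumerate(object_positions):
        to_obj = p - head_pos
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.arccos(np.clip(np.dot(view_dir, to_obj), -1.0, 1.0))
        if angle < best_angle:           # closest to the center of gaze wins
            best, best_angle = idx, angle
    return best
```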
Intuitive Teleoperation of an Intelligent Robotic System Using Low-Cost 6-DOF Motion Capture
There is currently a wide variety of six degree-of-freedom (6-DOF) motion capture technologies available. However, these systems tend to be prohibitively expensive. A software system was developed to provide 6-DOF motion capture using the Nintendo Wii remote's (wiimote) sensors, an infrared beacon, and a novel hierarchical linear-quaternion Kalman filter. The software is made freely available, and the hardware costs less than one hundred dollars. Using this motion capture software, a robotic control system was developed to teleoperate a 6-DOF robotic manipulator via the operator's natural hand movements.
The teleoperation system requires calibration of the wiimote's infrared camera to obtain an estimate of the wiimote's 6-DOF pose. However, since raw images from the wiimote's infrared camera are not available, a novel camera-calibration method was developed to obtain the camera's intrinsic parameters, which are used to compute a low-accuracy estimate of the 6-DOF pose. By fusing this low-accuracy pose estimate with accelerometer and gyroscope measurements, an accurate estimate of the 6-DOF pose is obtained for teleoperation.
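A minimal sketch of the fusion step, assuming quaternion orientation and gyroscope rate measurements: the gyro propagates the orientation at high rate, and each low-rate camera pose estimate pulls the result back toward it. This is a simplified complementary-filter stand-in, not the hierarchical linear-quaternion Kalman filter developed in the thesis.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def fuse(q, gyro_rad_s, dt, q_cam=None, alpha=0.02):
    """One filter step: gyro propagation plus optional camera correction."""
    dq = 0.5 * quat_mul(q, np.array([0.0, *gyro_rad_s]))  # qdot = 0.5 * q * omega
    q = q + dq * dt
    q /= np.linalg.norm(q)
    if q_cam is not None:                    # low-rate camera fix available
        q = (1 - alpha) * q + alpha * q_cam  # linear blend, valid for small angles
        q /= np.linalg.norm(q)
    return q
```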
Preliminary testing suggests that the motion capture system has an accuracy of better than a millimetre in position and better than one degree in attitude. Furthermore, whole-system tests demonstrate that the teleoperation system is capable of controlling the end effector of a robotic manipulator to match the pose of the wiimote. Since this system can provide 6-DOF motion capture at a fraction of the cost of traditional methods, it has wide applicability in the field of robotics and as a 6-DOF human input device for controlling 3D virtual computer environments.
Towards Naturalistic Interfaces of Virtual Reality Systems
Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intent of human motions to provide input to VR systems. Thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR.
This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion.
For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments.
In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach. The first assessed its offline classification performance while the second estimated the latency of the algorithm to recognize head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented.
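For context, a plain per-class HMM classifier (the standard, non-cascaded scheme) can be sketched with the hmmlearn library: one Gaussian HMM is trained per gesture on motion-feature sequences, and a new sequence is assigned to the model with the highest log-likelihood. The feature choice and model sizes below are illustrative assumptions, not the dissertation's cascaded design.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def train_models(sequences_by_gesture, n_states=4):
    """sequences_by_gesture: gesture name -> list of (T_i, d) feature arrays,
    e.g. per-frame head yaw/pitch angular velocities."""
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.concatenate(seqs)              # hmmlearn expects stacked frames
        lengths = [len(s) for s in seqs]      # plus per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        m.fit(X, lengths)
        models[gesture] = m
    return models

def classify(models, seq):
    """Label a new (T, d) sequence by maximum log-likelihood across models."""
    return max(models, key=lambda g: models[g].score(seq))
```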
For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and the usability of the head gesture interface, the hand gesture interface and a conventional gamepad interface for answering Yes/No questions in VR.
Overall, the dissertation has two main contributions toward improving the naturalism of interaction in VR systems. Firstly, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Secondly, the results of the user studies of the presented VR interfaces serve as guidelines for VR researchers and engineers designing future VR systems.
Multimodal interface for an intelligent wheelchair
Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201
Posture and visuomotor performance in children: the development of a novel measurement system
The aim of this thesis was to develop and test a platform capable of measuring the developmental trajectory of postural stability and fine motor control. Moreover, the thesis set out to explore the interdependence of these motor processes through synchronous measurement of postural and fine-motor control. The thesis introduces an objective fine-motor measure sensitive enough to detect gender differences in children. This system was developed further to incorporate measures of postural sway, providing objective measures of postural performance capable of detecting age-dependent, task-based manipulations of postural stability.
Further development of the platform to incorporate low-cost consumer products addressed the cost barrier to large-scale measurement of posture. This meant that accurate, synchronous and objective measurement of postural control and fine-motor control could take place outside of the laboratory environment. The developed system was deployed in schools, allowing an investigation into the effect of seating on postural control. The results indicated that (a) seating attenuates the differences in postural control normally observed as a function of age, and (b) postural control is modulated by task demands. Finally, the relationship between postural control and fine-motor control was investigated; an interdependent functional relationship was found between manual control and postural stability development.
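By way of illustration, two standard sway summary measures can be computed from a centre-of-pressure (COP) trace as in the Python sketch below. The specific metrics used by the thesis and the COP source (for example, a low-cost balance board) are assumptions here.

```python
import numpy as np

def sway_metrics(cop_xy, fs=100.0):
    """cop_xy: (N, 2) COP samples in cm; fs: sampling rate in Hz."""
    cop = cop_xy - cop_xy.mean(axis=0)                     # centre the trace
    steps = np.diff(cop, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))    # total excursion, cm
    rms_radius = np.sqrt(np.mean(np.sum(cop**2, axis=1)))  # RMS radial displacement, cm
    mean_velocity = path_length / (len(cop) / fs)          # cm/s
    return {"path_length_cm": path_length,
            "rms_radius_cm": rms_radius,
            "mean_velocity_cm_s": mean_velocity}
```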
Interactive ubiquitous displays based on steerable projection
The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed in order to enable the presentation of visual content in the physical environment. In this dissertation, we introduce the concepts of the Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of different interaction methods for manipulating this Display Continuum and the Virtual Displays placed on it.
Synthesis and Editing of Human Motion from a Small Number of User Inputs
Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Jehee Lee. An ideal 3D character animation system can easily synthesize and edit human motion and also provide an efficient user interface for an animator. However, despite advances in animation systems, building effective systems for synthesizing and editing realistic human motion remains a difficult problem. In the case of a single character, the human body is a significantly complex structure, consisting of hundreds of degrees of freedom. An animator must manually adjust many joints of the human body from user inputs. In a crowd scene, many individuals in the crowd have to respond to user inputs when an animator wants a given crowd to fit a new environment. The main goal of this thesis is to improve interactions between a user and an animation system.
As 3D character animation systems are usually driven by low-dimensional inputs, there is no direct way for a user to generate a high-dimensional character animation. To address this problem, we propose a data-driven mapping model built from motion data obtained from a full-body motion capture system, crowd simulation, and a data-driven motion synthesis algorithm. With this mapping model in hand, we can transform low-dimensional user inputs into character animation, because the motion data help to infer the missing parts of the system inputs. Because motion capture data are rich in detail and convey the realism of human movement, it is easier to generate realistic character animation with them than without.
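A minimal sketch of the mapping idea, assuming a database of paired low-dimensional control features and full-body poses: an online local model finds the nearest control examples and fits a local affine map to predict the pose. This generic k-nearest-neighbour stand-in only gestures at the approach; the thesis's actual system uses kernel CCA-based regression.

```python
import numpy as np

def predict_pose(x, controls, poses, k=20):
    """x: (d,) current sensor reading; controls: (N, d) database control features;
    poses: (N, D) corresponding full-body poses."""
    idx = np.argsort(np.linalg.norm(controls - x, axis=1))[:k]  # k nearest examples
    Xk = np.hstack([controls[idx], np.ones((k, 1))])            # local affine model
    W, *_ = np.linalg.lstsq(Xk, poses[idx], rcond=None)
    return np.append(x, 1.0) @ W                                # predicted (D,) pose
```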
To demonstrate the generality and strengths of our approach, we developed two animation systems that allow the user to synthesize a single character animation in real time and to edit crowd animation interactively via low-dimensional user inputs. The first system entails controlling a virtual avatar using a small set of three-dimensional (3D) motion sensors. The second system manipulates large-scale crowd animation consisting of hundreds of characters with a small number of user constraints. Examples show that our system is much less laborious and time-consuming than previous animation systems, and thus is much more suitable for generating the desired character animation.
Contents
Abstract
Table of Contents
List of Figures
1 Introduction
1.1 Motivation
1.2 Approach
1.3 Thesis Overview
2 Background
2.1 Performance Animation
2.1.1 Performance-based Interfaces for Character Animation
2.1.2 Statistical Models for Motion Synthesis
2.1.3 Retrieval of Motion Capture Data
2.2 Crowd Animation
2.2.1 Crowd Simulation
2.2.2 Motion Editing
2.2.3 Geometry Deformation
3 Realtime Performance Animation Using Sparse 3D Motion Sensors
3.1 Overview
3.2 System Overview
3.3 Sensor Data and Calibration
3.4 Motion Synthesis
3.4.1 Online Local Model
3.4.2 Kernel CCA-based Regression
3.4.3 Motion Post-processing
3.5 Experimental Results
3.6 Discussion
4 Interactive Manipulation of Large-Scale Crowd Animation
4.1 Overview
4.2 Crowd Model
4.3 Cage-based Interface
4.3.1 Cage Construction
4.3.2 Cage Representation
4.4 Editing Crowd Animation
4.4.1 Spatial Manipulation
4.4.2 Temporal Manipulation
4.5 Collision Avoidance
4.6 Experimental Results
4.7 Discussion
5 Conclusion
Bibliography
- …