12 research outputs found

    System, Apparatus and Method for Pedal Control

    An apparatus, method, and system for controlling motion in six degrees of freedom is described. The apparatus includes a support structure, a first pedal, and a second pedal. A first set of three independent articulating mechanisms is operatively connected to the support structure and the first pedal; in combination, these mechanisms enable motion of the first pedal along three control axes corresponding to three discrete degrees of freedom. A second set of three independent articulating mechanisms, operatively connected to the second pedal, likewise enables motion along three control axes corresponding to a discrete second set of three degrees of freedom. The apparatus may also include first and second sensors configured to detect the motion of the first and second pedals.
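A minimal sketch of the two-pedal arrangement the abstract describes: each pedal contributes three sensed control axes, and the two readings combine into one six-degree-of-freedom command. The axis names and the `combine` helper are illustrative assumptions, not terms from the patent text.

```python
from dataclasses import dataclass

@dataclass
class PedalReading:
    """Hypothetical 3-axis reading from one pedal's sensor."""
    pitch: float  # rotation about the lateral axis
    roll: float   # rotation about the longitudinal axis
    heave: float  # vertical translation

def combine(first: PedalReading, second: PedalReading) -> tuple:
    """Merge the two 3-axis pedal readings into a single 6-DOF command vector."""
    return (first.pitch, first.roll, first.heave,
            second.pitch, second.roll, second.heave)
```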

    Projecting physical objects into a virtual space using the Kinect and Oculus Rift

    Master's Project (M.S.), University of Alaska Fairbanks, 2015. Virtualized Reality as a field of research has been growing over the last couple of decades. Initially, implementing a virtualized reality system required large camera arrays, expensive equipment, and custom software. With the release of the Kinect and the Oculus Rift development kits, however, the average person now has the potential to acquire the hardware and software needed to implement such a system. This project explores the possibility of using the Kinect and the Oculus Rift together to display geometry based on real-world objects in a virtual environment.
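The core step in projecting real objects into a virtual scene is lifting each Kinect-style depth pixel into a 3D camera-space point. A minimal pinhole back-projection sketch follows; the intrinsic values (focal lengths `FX`, `FY` and principal point `CX`, `CY`) are illustrative assumptions for a 640x480 depth image, not the project's actual calibration.

```python
# Assumed pinhole intrinsics for a 640x480 depth image (illustrative values).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def backproject(u: float, v: float, depth_m: float) -> tuple:
    """Back-project pixel (u, v) with depth z (meters) to camera-space (X, Y, Z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

Applying this to every valid depth pixel yields a point cloud that can be meshed or rendered directly in the headset's view.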

    Natural Hand Gestures Recognition System for Intelligent HCI: A Survey

    Abstract: Gesture recognition is the task of recognizing meaningful expressions of human motion involving the hands, arms, face, head, and/or body. Hand gestures are of particular importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. This paper surveys recent gesture recognition approaches, with particular emphasis on hand gestures. A review of static hand posture methods is presented, along with the tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering. Challenges and future research directions are also highlighted.
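Among the tools the survey lists, hidden Markov models score how well an observed gesture (here, a sequence of discrete symbols) fits a trained model. A minimal sketch of the standard HMM forward algorithm follows; the two-state model parameters are toy values, not taken from any surveyed system.

```python
def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Return P(obs | model) by summing over all hidden state paths (forward algorithm)."""
    n_states = len(start_p)
    # initialize with the first observation
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # inductively extend by one observation at a time
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans_p[s][t] for s in range(n_states)) * emit_p[t][o]
                 for t in range(n_states)]
    return sum(alpha)

# toy 2-state model: state 0 tends to emit symbol 0, state 1 tends to emit symbol 1
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
```

In a recognizer, one such model is trained per gesture class and the class with the highest likelihood wins.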

    Robotic riding mechanism for segway personal transporter.

    Wong, Sheung Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (leaves 63-64). Abstracts in English and Chinese.
    Contents: Chapter 1, Introduction (1.1 Segway Personal Transporter (PT); 1.2 Existing research using the Segway Robotic Mobility Platform (RMP); 1.3 The ICSL Segway Rider; 1.4 Thesis outline). Chapter 2, ICSL Segway Rider (2.1 Design concept; 2.2 Design overview; 2.3 Actuating components; 2.4 Electronic and sensing components; 2.5 Software development of the Segway Rider; 2.6 Chapter summary). Chapter 3, The grand challenge (3.1 Objective; 3.2 Experiment; 3.3 Running lane tracking by computer vision: 3.3.1 Color space conversion, 3.3.2 Apply binary threshold, 3.3.3 Edge detection, 3.3.4 Hough transform, 3.3.5 Line analysis; 3.4 Chapter summary). Chapter 4, Stand and stay (4.1 Introduction; 4.2 Box matching method; 4.3 Image processing steps; 4.4 Experiment; 4.5 Chapter summary). Chapter 5, Conclusion and future works (5.1 Contributions; 5.2 Future works). Bibliography.

    The Underpinnings of Workload in Unmanned Vehicle Systems

    This paper identifies and characterizes factors that contribute to operator workload in unmanned vehicle systems. Our objective is to provide a basis for developing models of workload for use in the design and operation of complex human-machine systems. In 1986, Hart developed a foundational conceptual model of workload, which formed the basis for arguably the most widely used workload measurement technique, the NASA Task Load Index. Since that time, however, there have been many advances in models and factor identification, as well as in workload control measures. Additionally, there is a need to further inventory and describe the factors that contribute to human workload in light of technological advances, including automation and autonomy. Thus, we propose a conceptual framework for the workload construct and present a taxonomy of factors that can contribute to operator workload. These factors, referred to as workload drivers, are associated with a variety of system elements, including the environment, task, equipment, and operator. In addition, we discuss how workload moderators, such as automation and interface design, can be manipulated in order to influence operator workload. We contend that workload drivers, workload moderators, and the interactions among drivers and moderators all need to be accounted for when building complex human-machine systems.
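The driver/moderator structure described above can be captured in a small data model: drivers are grouped by system element and contribute load, while moderators adjust the effective total. All names, values, and the multiplicative combination rule are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    """A workload driver tied to one system element (illustrative model)."""
    name: str
    element: str   # one of: "environment", "task", "equipment", "operator"
    load: float    # notional contribution to workload

@dataclass
class Moderator:
    """A workload moderator such as automation or interface design."""
    name: str
    factor: float  # < 1.0 reduces effective workload, > 1.0 increases it

def effective_workload(drivers, moderators):
    """Sum driver loads, then apply each moderator multiplicatively (an assumed rule)."""
    total = sum(d.load for d in drivers)
    for m in moderators:
        total *= m.factor
    return total
```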

    Affordable avatar control system for personal robots

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009. Includes bibliographical references (p. 76-79). Social robots (personal robots) emphasize individualized social interaction and communication with people. To maximize the communication capacity of a personal robot, designers make it more anthropomorphic (or zoomorphic), and people tend to interact more naturally with such robots. However, adopting anthropomorphism (or zoomorphism) in social robots makes the morphology of a robot more complex; thus, it becomes harder to control robots with existing interfaces. The Huggable is a robotic teddy bear platform developed by the Personal Robots Group at the MIT Media Lab, with specific applications in healthcare, elderly care, education, and family communication. It is important that a user can successfully convey meaningful context in a dialogue via the robot's puppeteering interface. I investigate relevant technologies for developing a robotic puppetry system for a zoomorphic personal robot and develop three different puppeteering interfaces to control the robot: a website interface, a wearable interface, and a sympathetic interface. The wearable interface was examined through a performance test, and the web interface was examined through a user study. by Jun Ki Lee. S.M.

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of the fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for analyzing people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, meanwhile, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
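The two-branch stacked LSTM idea from the second module can be sketched in pure Python: one branch reads 2D joint positions, the other joint displacements, and their final hidden states are concatenated and passed to a softmax classifier. Dimensions, initialization, and the linear classifier are illustrative choices; the thesis's actual networks were trained models, not the random weights used here.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """A minimal LSTM cell (input, forget, candidate, output gates; no biases)."""
    def __init__(self, in_dim, hid_dim, rng):
        n = in_dim + hid_dim
        self.W = [[[rng.uniform(-0.1, 0.1) for _ in range(n)]
                   for _ in range(hid_dim)] for _ in range(4)]
        self.hid = hid_dim

    def step(self, x, h, c):
        z = x + h  # concatenate input and previous hidden state
        pre = [[sum(w * v for w, v in zip(self.W[g][j], z))
                for j in range(self.hid)] for g in range(4)]
        i = [sigmoid(v) for v in pre[0]]
        f = [sigmoid(v) for v in pre[1]]
        g = [math.tanh(v) for v in pre[2]]
        o = [sigmoid(v) for v in pre[3]]
        c = [f[j] * c[j] + i[j] * g[j] for j in range(self.hid)]
        h = [o[j] * math.tanh(c[j]) for j in range(self.hid)]
        return h, c

def run_stack(layers, seq):
    """Run a sequence through stacked LSTM layers; return the last hidden state."""
    for cell in layers:
        h, c, out = [0.0] * cell.hid, [0.0] * cell.hid, []
        for x in seq:
            h, c = cell.step(x, h, c)
            out.append(h)
        seq = out  # the next layer consumes this layer's output sequence
    return seq[-1]

def softmax(logits):
    m = max(logits)
    e = [math.exp(v - m) for v in logits]
    s = sum(e)
    return [v / s for v in e]

def classify(positions, motions, n_classes=4, seed=0):
    """Two branches (positions, motions), concatenated features, softmax over classes."""
    rng = random.Random(seed)
    in_dim, hid = len(positions[0]), 8
    pos_stack = [LSTMCell(in_dim, hid, rng), LSTMCell(hid, hid, rng)]
    mot_stack = [LSTMCell(in_dim, hid, rng), LSTMCell(hid, hid, rng)]
    feat = run_stack(pos_stack, positions) + run_stack(mot_stack, motions)
    W = [[rng.uniform(-0.1, 0.1) for _ in feat] for _ in range(n_classes)]
    return softmax([sum(w * v for w, v in zip(row, feat)) for row in W])
```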

    Building a semi-autonomous sociable robot platform for robust interpersonal telecommunication

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 73-74). This thesis presents the design of a software platform for the Huggable project. The Huggable is a new kind of robotic companion being developed at the MIT Media Lab for health care, education, entertainment, and social communication applications. This work focuses on the social communication application as it pertains to using a semi-autonomous robotic avatar in a remote environment. The software platform consists of an extensible and robust distributed software system that connects a remote human puppeteer to the Huggable robot via the Internet. The paper discusses design decisions made in building the software platform and describes the technologies created for the social communication application. An informal trial of the system reveals how the system's puppeteering interface can be improved, and pinpoints where performance enhancements are needed for this particular application. by Robert Lopez Toscano. M.Eng.

    Telepresent Locomotion and Haptic Interaction in Extended Remote Environments

    The wide-area telepresence system presented here allows realistic exploration and manipulation in extended remote environments. An immersive interface places the human user in the position of a mobile teleoperator. Wide-area locomotion through natural walking is enabled by a substantially extended Motion Compression technique. For haptic interaction, a specially developed haptic interface is presented.
