
    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices, the mouse and the keyboard, which limits the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand and manipulate its shape (deform it) with the other, so the 3D object can be changed easily and intuitively through interactive two-handed manipulation. This research investigates the creation and manipulation of free-form geometries through interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Then, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces, comparing two bi-manual techniques with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
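    As an illustration of the bimanual interaction described above, the sketch below routes events from two hypothetical input devices to separate roles: the non-dominant hand orients the model while the dominant hand deforms it. The device names, event format, and gains are assumptions for illustration; the thesis's actual devices and CAD integration are not specified in this abstract.

```python
# Minimal sketch of a bimanual input dispatcher (hypothetical device events;
# no specific CAD system or driver API is implied by the abstract).
from dataclasses import dataclass

@dataclass
class DeviceEvent:
    device_id: str   # e.g. "left_puck" or "right_stylus" (assumed names)
    dx: float        # horizontal displacement since last event
    dy: float        # vertical displacement since last event

class BimanualModelController:
    """Routes one device to orientation and the other to deformation."""

    def __init__(self):
        self.rotation = [0.0, 0.0]      # accumulated yaw/pitch, degrees
        self.deformation = [0.0, 0.0]   # accumulated surface offsets

    def handle(self, event: DeviceEvent) -> None:
        if event.device_id == "left_puck":
            # Non-dominant hand: orient the object.
            self.rotation[0] += event.dx * 0.5
            self.rotation[1] += event.dy * 0.5
        elif event.device_id == "right_stylus":
            # Dominant hand: deform the shape under the cursor.
            self.deformation[0] += event.dx * 0.1
            self.deformation[1] += event.dy * 0.1

controller = BimanualModelController()
controller.handle(DeviceEvent("left_puck", dx=12.0, dy=-4.0))   # rotate
controller.handle(DeviceEvent("right_stylus", dx=3.0, dy=1.5))  # deform
print(controller.rotation, controller.deformation)
```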

    Human operator performance of remotely controlled tasks: Teleoperator research conducted at NASA's George C. Marshall Space Flight Center

    The capabilities within the teleoperator laboratories to perform remote and teleoperated investigations for a wide variety of applications are described. Three major teleoperator issues are addressed: the human operator, the remote control and effector subsystems, and the human/machine system performance results for specific teleoperated tasks.

    Ground Robotic Hand Applications for the Space Program study (GRASP)

    This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.

    Gesture Based Control of Semi-Autonomous Vehicles

    The objective of this investigation is to explore the use of hand gestures to control semi-autonomous vehicles, such as quadcopters, using realistic, physics-based simulations. This involves identifying natural gestures to control basic functions of a vehicle, such as maneuvering and onboard equipment operation, and building simulations in the Unity game engine to investigate preferred use of those gestures. In addition to creating a realistic operating experience, human factors associated with limitations on physical hand motion and information management are also considered in the simulation development process. Testing with external participants using a recreational quadcopter simulation built in Unity was conducted to assess the suitability of the simulation and preferences between a joystick approach and the gesture-based approach. Initial feedback indicated that the simulation represented the actual vehicle performance well and that the joystick was preferred over the gesture-based approach. Improvements to the gesture-based control are documented as additional features are added to the simulation, such as basic maneuver training and additional vehicle positioning information, to help the user learn the gesture-based interface, along with active control concepts that interpret and apply vehicle forces and torques. Tests were also conducted with an actual ground vehicle to investigate whether knowledge and skill from the simulated environment transfer to a real-life scenario. To assess this, an immersive virtual reality (VR) simulation was built in Unity as a training environment for learning to control a remote-control car using gestures. This was then followed by control of the actual ground vehicle. Observations and participant feedback indicated that range of hand movement and hand positions transferred well to the actual demonstration. This showed that the VR simulation environment provides a suitable learning experience and an environment from which to assess human performance, thus also validating the observations from earlier tests. Overall, results indicate that the gesture-based approach holds promise given the emergence of new technology, but additional work needs to be pursued. This includes algorithms to process gesture data into more stable and precise vehicle commands, and training environments to familiarize users with this new interface concept.
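    A minimal sketch of the kind of active control mapping mentioned above: a tracked hand pose is converted into thrust and torque commands, with a dead zone for stability. The HandPose fields, gains, and dead-zone width are assumptions for illustration, not the thesis's actual gesture set or Unity implementation.

```python
# Illustrative mapping from a tracked hand pose to quadcopter commands
# (names and gains are hypothetical; the actual gesture processing in the
# thesis is not specified in the abstract).
from dataclasses import dataclass

@dataclass
class HandPose:
    roll: float    # hand tilt left/right, radians
    pitch: float   # hand tilt forward/back, radians
    height: float  # hand height above a neutral plane, meters

@dataclass
class VehicleCommand:
    thrust: float        # normalized 0..1
    roll_torque: float
    pitch_torque: float

def gesture_to_command(pose: HandPose,
                       k_torque: float = 0.8,
                       k_thrust: float = 2.0) -> VehicleCommand:
    """Proportional mapping with a small dead zone to reject hand jitter."""
    def dead_zone(x: float, width: float = 0.05) -> float:
        return 0.0 if abs(x) < width else x

    thrust = max(0.0, min(1.0, 0.5 + k_thrust * dead_zone(pose.height)))
    return VehicleCommand(
        thrust=thrust,
        roll_torque=k_torque * dead_zone(pose.roll),
        pitch_torque=k_torque * dead_zone(pose.pitch),
    )

print(gesture_to_command(HandPose(roll=0.2, pitch=-0.1, height=0.1)))
```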

    The Mole: a pressure-sensitive mouse

    The traditional mouse enables the positioning of a cursor in a 2D plane, as well as interaction with binary elements within that plane (e.g., buttons, links, icons). While this basic functionality is sufficient for interacting with every modern computing environment, it makes little use of the human hand's ability to perform complex multi-directional movements. Devices developed to capture these multi-directional capabilities typically lack the familiar form and function of the mouse. This thesis details the design and development of a pressure-sensitive device called The Mole. The Mole retains the familiar form and function of the mouse while passively measuring the magnitude of normal hand force (i.e., downward force normal to the 2D operating surface). The measurement of this force lends itself to the development of novel interactions, far beyond what is possible with a typical mouse. This thesis demonstrates two such interactions: the positioning of a cursor in 3D space, and the simultaneous manipulation of cursor position and graphic tool parameters.
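    A minimal sketch of the 3D-cursor interaction described above, assuming a simple linear force-to-depth mapping: planar mouse motion drives x and y while the measured normal force drives z. The calibration constants are illustrative, not taken from the thesis.

```python
# Sketch of a force-to-depth mapping for a pressure-sensitive mouse
# (rest/max force and depth range are assumed calibration values).
def force_to_depth(force_newtons: float,
                   rest_force: float = 1.0,
                   max_force: float = 8.0,
                   max_depth: float = 100.0) -> float:
    """Map downward hand force above the resting level to cursor depth."""
    pressed = max(0.0, force_newtons - rest_force)
    span = max_force - rest_force
    return min(max_depth, max_depth * pressed / span)

def mole_to_cursor(x: float, y: float, force_newtons: float):
    """Combine planar position with force-derived depth into a 3D cursor."""
    return (x, y, force_to_depth(force_newtons))

print(mole_to_cursor(320.0, 240.0, 4.5))  # mid-range press -> mid depth
```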

    Interactive Training System for Medical Ultrasound

    Ultrasound is an effective imaging modality because it is safe, unobtrusive and portable. However, it is also very operator-dependent, and significant skill is required to capture quality images and properly detect abnormalities. Training is an important part of ultrasound, but the limited availability of training courses is a significant hindrance to ultrasound being adopted in additional settings. The goal of this work was to design and implement an interactive training system to help train and evaluate sonographers. The Interactive Training System for Medical Ultrasound is an inexpensive, software-based training system in which the trainee scans a lifelike manikin with a sham transducer containing a 6-degree-of-freedom tracking sensor. The observed ultrasound image is generated from a pre-stored 3D image volume and is controlled interactively by the sham transducer's position and orientation. Based on the selected 3D volume, the manikin may represent normal anatomy, exhibit a specific trauma or present a given physical condition. The training system provides a realistic scanning experience through an interactive real-time display with adjustable image parameters such as scan depth, gain, and time gain compensation. A representative hardware interface has been developed, including a lifelike manikin and convincing sham transducers, along with a touch-screen user interface. Methods of capturing 3D ultrasound image volumes and stitching together multiple volumes have been evaluated. System performance was analyzed and an initial clinical evaluation was performed. This thesis presents a complete prototype training system with advanced simulation and learning-assessment features. The ultrasound training system can provide cost-effective and convenient training of physicians and sonographers. This system is an innovative approach to training and a powerful tool for teaching sonographers to recognize a wide variety of medical conditions.
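    The core rendering step described above, generating the displayed image from a pre-stored volume and the tracked transducer pose, could look roughly like the following toy version, which resamples a planar slice with nearest-neighbor lookups. The volume layout, axes, and sampling scheme are assumptions; the actual system's scan conversion and interpolation are not described in the abstract.

```python
# Toy resampling of a 2D image from a stored 3D volume along a pose
# (nearest-neighbor sampling for brevity; axes and units are assumed).
import numpy as np

def slice_volume(volume: np.ndarray,
                 origin: np.ndarray,   # transducer position, voxel coords
                 u_axis: np.ndarray,   # unit vector across the image
                 v_axis: np.ndarray,   # unit vector down the image (depth)
                 width: int, depth: int) -> np.ndarray:
    """Sample a depth x width planar slice from the volume."""
    image = np.zeros((depth, width), dtype=volume.dtype)
    for row in range(depth):
        for col in range(width):
            p = origin + (col - width / 2) * u_axis + row * v_axis
            i, j, k = np.round(p).astype(int)
            if (0 <= i < volume.shape[0] and
                    0 <= j < volume.shape[1] and
                    0 <= k < volume.shape[2]):
                image[row, col] = volume[i, j, k]
    return image

vol = np.random.rand(64, 64, 64)          # stand-in for a stored volume
img = slice_volume(vol,
                   origin=np.array([32.0, 32.0, 4.0]),
                   u_axis=np.array([1.0, 0.0, 0.0]),
                   v_axis=np.array([0.0, 0.0, 1.0]),
                   width=48, depth=40)
print(img.shape)  # (40, 48)
```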

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Degree conferral date: 2000-03-29; Degree category: Doctorate by coursework; Degree type: Doctor of Engineering; Degree registry number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Applications for robotics in the shoe manufacturing industry


    Goal Based Human Swarm Interaction for Collaborative Transport

    Human-swarm interaction is an important milestone for the introduction of swarm-intelligence-based solutions into real application scenarios. One of the main hurdles towards this goal is the creation of suitable interfaces for humans to convey the correct intent to multiple robots. As the size of the swarm increases, the complexity of dealing with explicit commands for individual robots becomes intractable, making it a great challenge for the developer or the operator to drive the robots to finish even the most basic tasks. In our work, we consider a different approach, in which humans specify only the desired goal rather than issuing the individual commands necessary to accomplish the task. We explore this approach in a collaborative transport scenario, where the user chooses the target position of an object and a group of robots moves it by adapting to the environment. The main outcome of this thesis is the design and integration of a collaborative transport behavior for swarm robots with an augmented reality human interface. We implemented an augmented reality (AR) application in which a virtual object is displayed overlaid on a detected target object. Users can manipulate the virtual object to generate the goal configuration for the object. The designed centralized controller translates the goal position to the robots and synchronizes their state transitions. The whole system is tested on Khepera IV robots through the integration of the Vicon motion-capture system and the ARGoS simulator.
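    A hedged sketch of the centralized coordination described above: the controller broadcasts one goal pose chosen in the AR interface and advances a shared state machine only when every robot reports ready. The phase names and reporting interface are illustrative, not the thesis's actual ARGoS/Khepera IV implementation.

```python
# Sketch of a centralized goal broadcaster with synchronized transitions
# (Phase names and the report_ready protocol are assumed for illustration).
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()   # move to grasp points around the object
    TRANSPORT = auto()  # move the object toward the goal pose
    DONE = auto()

class CentralController:
    def __init__(self, robot_ids):
        self.phase = Phase.APPROACH
        self.ready = {rid: False for rid in robot_ids}
        self.goal_pose = None  # (x, y, heading) chosen in the AR interface

    def set_goal(self, pose):
        self.goal_pose = pose  # broadcast once; robots adapt locally

    def report_ready(self, robot_id):
        self.ready[robot_id] = True
        if all(self.ready.values()):
            self._advance()

    def _advance(self):
        order = [Phase.APPROACH, Phase.TRANSPORT, Phase.DONE]
        i = order.index(self.phase)
        if i + 1 < len(order):
            self.phase = order[i + 1]
            self.ready = {rid: False for rid in self.ready}

ctrl = CentralController(["khepera_1", "khepera_2", "khepera_3"])
ctrl.set_goal((2.0, 1.5, 0.0))
for rid in ["khepera_1", "khepera_2", "khepera_3"]:
    ctrl.report_ready(rid)
print(ctrl.phase)  # Phase.TRANSPORT once all robots are ready
```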