473 research outputs found
Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface
Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which imposes a limit on the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same applies to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand and manipulate its shape (deform it) with the other. Accordingly, the 3D object can be changed easily and intuitively through interactive manipulation with both hands. This research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Then, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
How force perception changes in different refresh rate conditions
In this work we consider the role of different refresh rates of the force-feedback physics engine in haptic environments such as robotic surgery and virtual-reality surgical training systems. Two experimental force-feedback tasks are evaluated in a virtual environment. Experiment I is a passive contact task, in which the hand-grip is held while waiting for the force-feedback perception produced by contact with virtual objects. Experiment II is an active contact task, in which a tool is moved in one direction until contact with a pliable object is perceived. Different stiffnesses and refresh rates are factorially manipulated. To evaluate differences between the two tasks, we measure latency time inside the wall, penetration depth, and maximum force exerted against the object surface. The overall result of these experiments shows improved sensitivity in almost all variables considered at refresh rates of 500 and 1,000 Hz compared with 250 Hz, but no improvement between 500 and 1,000 Hz.
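The abstract's setup suggests a penalty-based rendering loop in which the engine samples the tool position at the refresh rate and pushes back with a force proportional to penetration. A minimal sketch of why a coarser refresh rate degrades contact rendering (the model, names, and parameters are illustrative, not the study's):

```python
def wall_contact(stiffness, refresh_hz, speed=0.05, mass=0.1, duration=0.1):
    """Simulate a tool moving into a virtual wall at `speed` m/s and return
    the maximum penetration depth (m) the engine allows.

    Penalty rendering: while inside the wall (x > 0), the engine applies
    F = -stiffness * penetration, updated once per refresh cycle.  The
    coarser the cycle, the more the explicit integration overshoots, so
    penetration (and thus perceived "softness") grows at low refresh rates.
    """
    dt = 1.0 / refresh_hz
    x, v = 0.0, speed                       # wall surface sits at x = 0
    max_pen = 0.0
    for _ in range(int(round(duration * refresh_hz))):
        force = -stiffness * max(x, 0.0)    # penalty force while inside (N)
        x += v * dt                         # forward Euler: old velocity,
        v += (force / mass) * dt            # then old force
        max_pen = max(max_pen, x)
    return max_pen

# A 250 Hz loop lets the tool sink visibly deeper than a 1000 Hz loop:
deep = wall_contact(stiffness=500.0, refresh_hz=250)
shallow = wall_contact(stiffness=500.0, refresh_hz=1000)
assert deep > shallow > 0.0
```

This is only a one-degree-of-freedom caricature, but it reproduces the qualitative effect the experiments measure: penetration depth and peak force depend on how often the physics engine reacts.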
Feeling for failure: Haptic force perception of soft tissue constraints in a simulated minimally invasive surgery task
In minimally invasive surgery (MIS), the ability to accurately interpret haptic information and apply appropriate force magnitudes to soft tissue is critical for minimizing bodily trauma. Force perception in MIS is a dynamic process in which the surgeon's administration of force onto tissue yields useful perceptual information that guides further haptic interaction. It is hypothesized that the compliant nature of soft tissue during force application provides biomechanical information denoting tissue failure. Specifically, the perceptual relationship between applied force and material deformation rate specifies the distance remaining until structural capacity fails, or Distance-to-Break (DTB). Two experiments explored the higher-order relationship of DTB in MIS using novice and surgeon observers. Findings revealed that observers could reliably perceive DTB in simulated biological tissues, and that surgeons performed better than novices. Further, sensitivity to DTB can be improved through calibration feedback training. Implications for optimizing training in MIS are discussed.
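One way to make the DTB idea concrete is a toy tissue model (our construction, not the paper's) whose force-deformation curve softens toward failure, so that the local force/deformation slope directly encodes the remaining distance to structural failure:

```python
# Toy soft-tissue spring: F(d) = k * d * (1 - d / d_break) rises, peaks,
# and then loses load-bearing capacity.  Structural capacity peaks where
# dF/dd = 0, i.e. at d = d_break / 2, so the local slope
# dF/dd = k * (1 - 2*d/d_break) encodes distance-to-break (DTB).
# All names and parameter values here are illustrative assumptions.

def force(d, k=200.0, d_break=0.02):
    """Reaction force (N) at deformation d (m) for the toy tissue."""
    return k * d * (1.0 - d / d_break)

def distance_to_break(d, k=200.0, d_break=0.02, eps=1e-6):
    """Estimate DTB from the locally probed force/deformation slope,
    as an observer pressing on the tissue might:
    slope = k*(1 - 2d/d_break)  ->  DTB = slope * d_break / (2k)."""
    slope = (force(d + eps, k, d_break) - force(d, k, d_break)) / eps
    return slope * d_break / (2.0 * k)

# The true remaining distance at deformation d is d_break/2 - d, and the
# slope-based estimate recovers it:
d = 0.005
assert abs(distance_to_break(d) - (0.02 / 2 - d)) < 1e-6
```

The point of the sketch is only that, under a softening force law, force and deformation rate together carry enough information to specify proximity to failure, which is the perceptual relationship the experiments test.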
Preliminary Study on Haptics of Textile Surfaces via Digital Visual Cues
Humans perceive the world through various sensory impressions, including the five senses. Not only is the number of different stimuli in everyday life increasing, but so is the effort of assessing which information is urgent and which is irrelevant. Online, however, it is not possible for the customer to physically perceive and assess the haptics of a product. This paper focuses on the questions of whether it is possible for humans to perceive and identify surface properties without using their sense of touch, and whether humans can judge and classify the haptics of textile materials through purely visual perception via digital channels.
Haptic object recognition using a multi-fingered dextrous hand
The use of a dextrous, multifingered hand for high-level object recognition tasks is considered. The paradigm is model-based recognition in which the objects are modeled and recovered as superquadrics, which are shown to have a number of important attributes that make them well suited for such a task. Experiments have been performed to recover the shape of objects using sparse contact-point data from the hand, with promising results. The authors also propose an approach to using tactile data in conjunction with the dextrous hand to build a library of grasping and exploration primitives that can be used in recognizing and grasping more complex multipart objects.
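What makes superquadrics convenient for fitting sparse contact points is their closed-form inside-outside function; a minimal sketch, following the common Barr parameterization (the fitting step itself is omitted):

```python
# Superquadric inside-outside function (Barr's formulation):
#   F(x,y,z) = ( |x/a1|^(2/e2) + |y/a2|^(2/e2) )^(e2/e1) + |z/a3|^(2/e1)
# F < 1 inside the shape, F = 1 on the surface, F > 1 outside -- which is
# why sparse contact points can be fit by minimizing, e.g., sum (F(p)-1)^2.

def superquadric_F(p, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    """Evaluate the inside-outside function at point p = (x, y, z)
    for size parameters a = (a1, a2, a3) and shape exponents e = (e1, e2)."""
    x, y, z = p
    a1, a2, a3 = a
    e1, e2 = e
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# With e1 = e2 = 1 the superquadric is an ellipsoid (here: the unit sphere):
assert abs(superquadric_F((1.0, 0.0, 0.0)) - 1.0) < 1e-9   # on the surface
assert superquadric_F((0.2, 0.2, 0.2)) < 1.0               # inside
assert superquadric_F((2.0, 0.0, 0.0)) > 1.0               # outside
```

Varying e1 and e2 sweeps the family from ellipsoids through box-like and pinched shapes, which is the expressive range the abstract alludes to.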
Digital sculpture : conceptually motivated sculptural models through the application of three-dimensional computer-aided design and additive fabrication technologies
Thesis (D. Tech.) - Central University of Technology, Free State, 200
deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch
We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally we motivate our device through a number of sample applications
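Three-phase structured light recovers a wrapped per-pixel phase, and hence depth after unwrapping and calibration, from three fringe images shifted by 120 degrees. A standard per-pixel phase recovery, not deForm's exact pipeline:

```python
import math

def three_phase(i1, i2, i3):
    """Recover the wrapped phase at one pixel from three fringe intensities
    I_k = A + B*cos(phi + shift_k), with shifts of -120, 0, +120 degrees.
    The ambient term A and modulation B cancel out of the ratio."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Round-trip check with a synthetic pixel (ambient A, modulation B, phase phi):
A, B, phi = 0.5, 0.4, 1.2
i1 = A + B * math.cos(phi - 2.0 * math.pi / 3.0)
i2 = A + B * math.cos(phi)
i3 = A + B * math.cos(phi + 2.0 * math.pi / 3.0)
assert abs(three_phase(i1, i2, i3) - phi) < 1e-9
```

Because ambient light and surface albedo cancel, the same three captures also yield the grey-scale texture the abstract mentions (B per pixel), alongside the height map from the phase.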
Acquisition and Interpretation of 3-D Sensor Data from Touch
Acquisition of 3-D scene information has focused on either passive 2-D imaging methods (stereopsis, structure from motion, etc.) or 3-D range sensing methods (structured lighting, laser scanning, etc.). Little work has been done on using active touch sensing with a multi-fingered robotic hand to acquire scene descriptions, even though it is a well-developed human capability. Touch sensing differs from more passive sensing modalities such as vision in a number of ways. First, a multi-fingered robotic hand with touch sensors can probe, move, and change its environment; this imposes a level of control on the sensing that typically makes it more difficult than with traditional passive sensors, for which active control is not an issue. Secondly, touch sensing generates far less data than vision methods; this is especially intriguing in light of psychological evidence showing that humans can recover shape and a number of other object attributes very reliably using touch alone. Future robotic systems will need to use dextrous robotic hands for tasks such as grasping, manipulation, assembly, inspection, and object recognition. This paper describes our use of touch sensing as part of a larger system we are building for 3-D shape recovery and object recognition using touch and vision methods. It focuses on three exploratory procedures we have built to acquire and interpret sparse 3-D touch data: grasping by containment, planar surface exploration, and surface contour exploration. Experimental results for each of these procedures are presented.
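As an illustration of how sparse touch data can be interpreted, planar surface exploration reduces to fitting a plane to the contact points the fingers report. A minimal least-squares sketch (our construction, not the paper's algorithm):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to sparse contact points,
    solving the 3x3 normal equations by Cramer's rule.  Suitable for the
    handful of contacts a planar-exploration procedure collects (assumes
    the surface is not vertical in this frame)."""
    sxx = sxy = syy = sx = sy = sz = sxz = syz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; sz += z
        sxz += x * z; syz += y * z
    n = float(len(points))
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    sol = []
    for i in range(3):             # Cramer's rule: swap column i for rhs
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = rhs[r]
        sol.append(det3(Mi) / d)
    return tuple(sol)              # (a, b, c)

# Five contact points lying on z = 2x + y + 1 recover the plane exactly:
a, b, c = fit_plane([(0, 0, 1), (1, 0, 3), (0, 1, 2), (1, 1, 4), (2, 1, 6)])
assert abs(a - 2.0) < 1e-9 and abs(b - 1.0) < 1e-9 and abs(c - 1.0) < 1e-9
```

With noisy contacts the same normal equations give the best-fit plane, and the residuals indicate whether the "planar surface" hypothesis for that patch should be rejected.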