    Simple Kinesthetic Haptics for Object Recognition

    Object recognition is an essential capability when performing various tasks. Humans naturally use visual and tactile perception, separately or together, to extract object class and properties. Typical approaches for robots, however, require complex visual systems or multiple high-density tactile sensors, which can be highly expensive. In addition, they usually require the collection of a large dataset from real objects through direct interaction. In this paper, we propose a kinesthetic-based object recognition method that can be performed with any multi-fingered robotic hand whose kinematics is known. The method does not require tactile sensors and is based on observing grasps of the objects. We utilize a unique and frame-invariant parameterization of grasps to learn instances of object shapes. To train a classifier, training data is generated rapidly and entirely through a computational process, without interaction with real objects. We then propose and compare two iterative algorithms that can incorporate any trained classifier. The classifiers and algorithms are independent of any particular robot hand and can therefore be applied to various hands. We show in experiments that the algorithms achieve accurate classification with only a few grasps. Furthermore, we show that the object recognition approach scales to objects of various sizes. Similarly, a global classifier is trained to identify general geometries (e.g., an ellipsoid or a box) rather than particular ones and is demonstrated on a large set of objects. Full-scale experiments and analysis are provided to demonstrate the performance of the method.
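    The iterative scheme can be pictured as a probabilistic fusion of per-grasp predictions. The sketch below is a minimal illustration of that idea, not the authors' exact algorithm: `classifier` is any model with a `predict_proba` method trained on computationally generated grasp data, and `sample_grasp` and `parameterize_grasp` are hypothetical stand-ins for executing a grasp and computing the frame-invariant grasp features.

```python
import numpy as np

def recognize_object(sample_grasp, parameterize_grasp, classifier,
                     n_classes, max_grasps=10, confidence=0.95):
    """Fuse successive per-grasp classifications with a naive Bayes update."""
    log_posterior = np.full(n_classes, -np.log(n_classes))   # uniform prior
    for _ in range(max_grasps):
        grasp = sample_grasp()                      # execute or simulate one grasp
        features = parameterize_grasp(grasp)        # frame-invariant grasp encoding
        probs = classifier.predict_proba([features])[0]
        log_posterior += np.log(np.clip(probs, 1e-9, 1.0))
        log_posterior -= np.logaddexp.reduce(log_posterior)  # renormalize in log space
        if np.exp(log_posterior.max()) >= confidence:        # stop once confident enough
            break
    return int(np.argmax(log_posterior))
```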

    Learning Haptic-based Object Pose Estimation for In-hand Manipulation Control with Underactuated Robotic Hands

    Unlike traditional robotic hands, underactuated compliant hands are challenging to model due to inherent uncertainties. Consequently, pose estimation of a grasped object is usually performed based on visual perception. However, visual perception of the hand and object can be limited in occluded or partly occluded environments. In this paper, we explore the use of haptics, i.e., kinesthetic and tactile sensing, for pose estimation and in-hand manipulation with underactuated hands. Such a haptic approach mitigates the limitations of occluded environments where a line of sight is not always available. We place an emphasis on identifying a feature-state representation of the system that does not include vision and can be obtained with simple, low-cost hardware. For tactile sensing, therefore, we propose a low-cost, flexible sensor that is mostly 3D printed along with the fingertip and can provide implicit contact information. Taking a two-finger underactuated hand as a test case, we analyze the contribution of kinesthetic and tactile features, along with various regression models, to the accuracy of the predictions. Furthermore, we propose a Model Predictive Control (MPC) approach that utilizes the pose estimates to manipulate objects to desired states based solely on haptics. We have conducted a series of experiments that validate the ability to estimate the poses of various objects with different geometry, stiffness, and texture, and that demonstrate manipulation to goals in the workspace with relatively high accuracy.
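    To make the pipeline concrete, the following is a minimal sketch, under simplifying assumptions, of haptics-only pose estimation feeding a random-shooting MPC step. `pose_model` (haptic features to object pose) and `dynamics` (pose and action to next pose) are hypothetical learned regressors with a scikit-learn-style `predict` method, not the models used in the paper.

```python
import numpy as np

def mpc_step(haptic_features, goal_pose, pose_model, dynamics,
             horizon=5, n_candidates=64, action_dim=2, rng=None):
    """Pick the first action of the best randomly sampled action sequence."""
    rng = rng or np.random.default_rng()
    pose = pose_model.predict([haptic_features])[0]            # estimated (x, y, theta)
    candidates = rng.uniform(-1.0, 1.0, (n_candidates, horizon, action_dim))
    best_action, best_cost = None, np.inf
    for seq in candidates:
        p, cost = pose, 0.0
        for a in seq:                                          # roll out the learned dynamics
            p = dynamics.predict([np.concatenate([p, a])])[0]
            cost += np.linalg.norm(p - goal_pose)              # distance to the goal pose
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]              # apply only the first action
    return best_action
```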

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
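    The spatial manipulation itself is a simple radial shift. The sketch below illustrates it under assumed display parameters (viewing distance and pixel density are not reported above): each rectangle is moved toward or away from the central fixation point along its spoke by one degree of visual angle.

```python
import math, random

def shift_along_spoke(x, y, fixation, shift_deg=1.0,
                      viewing_distance_cm=57.0, pixels_per_cm=38.0):
    """Shift a point radially by +/- shift_deg of visual angle (assumed geometry)."""
    # At ~57 cm viewing distance, one degree of visual angle spans roughly 1 cm on screen.
    shift_px = math.tan(math.radians(shift_deg)) * viewing_distance_cm * pixels_per_cm
    shift_px *= random.choice([-1.0, 1.0])        # shift inward or outward at random
    dx, dy = x - fixation[0], y - fixation[1]
    r = math.hypot(dx, dy) or 1.0                 # avoid division by zero at fixation
    return x + shift_px * dx / r, y + shift_px * dy / r
```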

    A STUDY TOWARDS DEVELOPMENT OF AN AUTOMATED HAPTIC USER INTERFACE (AHUI) FOR INDIVIDUALS WHO ARE BLIND OR VISUALLY IMPAIRED

    An increasing amount of the information content used in school, work, and everyday living is presented in graphical form, creating accessibility challenges for individuals who are blind or visually impaired, especially in dynamic environments such as the internet. Refreshable haptic displays that interact with computers can be used to access such information tactually. The main focus of this study was the development of specialized computer applications that allow users to actively compensate for the inherent limitations of haptics, as compared to vision, when exploring visual diagrams, which we hypothesized would improve the usability of such devices. An intuitive zooming algorithm, capable of automatically detecting significantly different zoom levels, providing auditory feedback, preventing information from being cropped, and preventing zooming in on areas where no features are present, was developed to compensate for the lower spatial resolution of haptics and was found to significantly improve participant performance. Another application, allowing users to perform dynamic simplifications of the diagram to compensate for the serial nature of processing 2D geometric information through touch, was also tested and found to significantly improve participant performance. For both applications, participants liked the user interface and, as expected, found it more usable. In addition, we investigated methods for effectively presenting different visual features, as well as overlapping features, present in visual diagrams. Three methods using several combinations of tactile and auditory modalities were tested. We found that performance improves significantly with the overlapping method when it uses different modalities. For the tactile-only methods developed for deafblind individuals, the toggle method was, surprisingly, preferred over the overlapping method.
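    Two of the safeguards described for the zooming algorithm, keeping the viewport inside the diagram so content is not cropped away and refusing to zoom into regions that contain no features, can be sketched as follows. This is an assumed, minimal formulation, not the study's implementation; `features` is a hypothetical list of (x, y) feature coordinates in diagram space.

```python
def zoom_viewport(center, zoom, diagram_w, diagram_h, features, min_features=1):
    """Return a clamped viewport (x0, y0, w, h), or None if the view would be empty."""
    vw, vh = diagram_w / zoom, diagram_h / zoom           # viewport size at this zoom level
    # Clamp the viewport so it never extends past the diagram edges.
    x0 = min(max(center[0] - vw / 2, 0.0), diagram_w - vw)
    y0 = min(max(center[1] - vh / 2, 0.0), diagram_h - vh)
    visible = [(x, y) for x, y in features
               if x0 <= x <= x0 + vw and y0 <= y <= y0 + vh]
    if len(visible) < min_features:
        return None    # caller keeps the previous view and can play an auditory warning
    return x0, y0, vw, vh
```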

    Making Graphical Information Accessible Without Vision Using Touch-based Devices

    Accessing graphical material such as graphs, figures, maps, and images is a major challenge for blind and visually impaired people. The traditional approaches that have addressed this issue have been plagued by various shortcomings (such as unintuitive sensory translation rules, prohibitive costs, and limited portability), all hindering progress in reaching blind and visually impaired users. This thesis addresses aspects of these shortcomings by designing and experimentally evaluating an intuitive approach, called a vibro-audio interface, for non-visual access to graphical material. The approach is based on commercially available touch-based devices (such as smartphones and tablets), where hand and finger movements over the display provide position and orientation cues by synchronously triggering vibration patterns, speech output, and auditory cues whenever an on-screen visual element is touched. Three human behavioral studies (Exp 1, 2, and 3) assessed the usability of the vibro-audio interface by investigating whether its use leads to the development of an accurate spatial representation of the graphical information being conveyed. Results demonstrated the efficacy of the interface and, importantly, showed that performance was functionally equivalent to that found using traditional hardcopy tactile graphics, the gold standard of non-visual graphical learning. One limitation of this approach is the limited screen real estate of commercial touch-screen devices, which means that large and deep-format graphics (e.g., maps) will not fit within the screen. Panning and zooming are the traditional techniques for dealing with this challenge, but performing these operations without vision (i.e., using touch) raises several challenges relating both to the cognitive constraints of the user and to the technological constraints of the interface. To address these issues, two human behavioral experiments were conducted that assessed the influence of panning (Exp 4) and zooming (Exp 5) operations on non-visual learning of graphical material and the related human factors. Results from Experiments 4 and 5 indicated that incorporating panning and zooming operations enhances the non-visual learning process and leads to the development of more accurate spatial representations. Together, this thesis demonstrates that the proposed vibro-audio interface is a viable multimodal solution for presenting dynamic graphical information to blind and visually impaired persons and for supporting the development of accurate spatial representations of otherwise inaccessible graphical materials.
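    The core interaction described above, hit-testing the finger position against on-screen elements and triggering synchronized vibration and speech, can be sketched as below. The element layout, `vibrate`, and `speak` are hypothetical stand-ins for the actual screen content and platform APIs (e.g., a device vibrator and a text-to-speech engine), not the interface implemented in the thesis.

```python
class VibroAudioInterface:
    """Minimal sketch: elements are dicts with 'rect' (x0, y0, x1, y1), 'pattern', 'label'."""

    def __init__(self, elements, vibrate, speak):
        self.elements, self.vibrate, self.speak = elements, vibrate, speak
        self.current = None                        # element currently under the finger

    def on_touch(self, x, y):
        hit = next((e for e in self.elements
                    if e["rect"][0] <= x <= e["rect"][2]
                    and e["rect"][1] <= y <= e["rect"][3]), None)
        if hit is not self.current:                # trigger feedback only on entering an element
            self.current = hit
            if hit is not None:
                self.vibrate(hit["pattern"])       # element-specific vibration pattern
                self.speak(hit["label"])           # synchronous speech / auditory cue
        return hit
```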

    Recognition of Haptic Interaction Patterns in Dyadic Joint Object Manipulation

    The development of robots that can physically cooperate with humans has attracted interest in recent decades. This effort requires a deep understanding of the intrinsic properties of interaction. Up to now, many researchers have focused on inferring human intent in terms of intermediate or terminal goals in physical tasks. However, an autonomous robot working side by side with people additionally needs in-depth information about the underlying haptic interaction patterns typically encountered during human-human cooperation. To our knowledge, no study has yet focused on characterizing such detailed information. In this sense, this work is a pioneering effort to gain a deeper understanding of interaction patterns involving two or more humans in a physical task. We present a labeled human-human interaction dataset that captures the interaction of two humans who collaboratively transport an object in a haptics-enabled virtual environment. In light of the information gained by studying this dataset, we propose that the actions of cooperating partners can be examined under three interaction types: in any cooperative task, the interacting humans either 1) work in harmony, 2) cope with conflicts, or 3) remain passive during interaction. In line with this conception, we present a taxonomy of human interaction patterns and then propose five feature sets, comprising force-, velocity-, and power-related information, for the classification of these patterns. Our evaluation shows that, using a multi-class support vector machine (SVM) classifier, we achieve a correct classification rate of 86 percent for the identification of interaction patterns, an accuracy obtained by fusing a set of the most informative features selected with the Minimum Redundancy Maximum Relevance (mRMR) method.
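    A classification pipeline in the spirit of the one described above could be sketched with scikit-learn as follows. mRMR is not part of scikit-learn, so a mutual-information ranking (`SelectKBest`) stands in for the paper's feature selection; `X` would hold the force-, velocity-, and power-related features and `y` the interaction-pattern labels.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def interaction_pattern_classifier(k_features=10):
    """Scale features, keep the k most informative, and classify with a multi-class SVM."""
    return make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=k_features),      # stand-in for mRMR selection
        SVC(kernel="rbf", decision_function_shape="ovr"),    # one-vs-rest multi-class SVM
    )

# Example usage, given feature matrix X and pattern labels y:
# scores = cross_val_score(interaction_pattern_classifier(), X, y, cv=5)
```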