
    Design method for an anthropomorphic hand able to gesture and grasp

    This paper presents a numerical method to conceive and design the kinematic model of an anthropomorphic robotic hand used for gesturing and grasping. In the literature, there are few numerical methods for the finger placement of human-inspired robotic hands. In particular, there are no numerical methods for thumb placement that aim to improve hand dexterity and grasping capabilities while keeping the design close to the human hand. Whereas existing models are usually the result of successive parameter adjustments, the proposed method determines the finger placements by means of empirical tests. Moreover, a surgery test and a workspace analysis of the whole hand are used to find the best thumb position and orientation according to the hand's kinematics and structure. The result is validated in simulation, where it is checked that the hand is well balanced and meets our constraints and needs. The presented method provides a numerical tool that allows easy computation of finger and thumb geometries and base placements for a human-like dexterous robotic hand.
    Comment: IEEE International Conference on Robotics and Automation (ICRA), May 2015, Seattle, United States.
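    Below is a minimal sketch, in Python, of the kind of workspace analysis the abstract describes: scoring candidate thumb base placements by how much of the thumb's reachable workspace overlaps the fingers' workspace. All geometry here (link lengths, joint limits, candidate poses) is hypothetical, not taken from the paper.

```python
# Hypothetical workspace analysis for thumb base placement: Monte Carlo
# sampling of thumb-tip positions, scored by overlap with a finger region.
import numpy as np

rng = np.random.default_rng(0)

def thumb_workspace(base, base_yaw, n=5000):
    """Monte Carlo cloud of thumb-tip positions for a 2-link planar thumb."""
    l1, l2 = 0.05, 0.04                      # hypothetical link lengths (m)
    q1 = rng.uniform(-0.5, 1.5, n)           # hypothetical joint limits (rad)
    q2 = rng.uniform(0.0, 1.6, n)
    a1 = base_yaw + q1
    a2 = a1 + q2
    x = base[0] + l1 * np.cos(a1) + l2 * np.cos(a2)
    y = base[1] + l1 * np.sin(a1) + l2 * np.sin(a2)
    return np.column_stack([x, y])

def overlap_score(points, finger_center, radius=0.03):
    """Fraction of thumb-tip samples landing inside the finger workspace."""
    d = np.linalg.norm(points - finger_center, axis=1)
    return np.mean(d < radius)

candidates = [((0.0, 0.0), yaw) for yaw in np.linspace(0.0, np.pi / 2, 10)]
finger_center = np.array([0.03, 0.08])       # hypothetical fingertip region

best = max(candidates, key=lambda c: overlap_score(thumb_workspace(*c), finger_center))
print("best thumb base yaw (rad):", best[1])
```

    A real design pass would score full 3D workspaces against grasp and gesture constraints, but the selection loop has this same shape.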

    Neuromotor Control of the Hand During Smartphone Manipulation

    The primary focus of this dissertation was to understand the motor control strategy used by our neuromuscular system for the multi-layered motor tasks involved in smartphone manipulation. To understand this strategy, we recorded the kinematics and multi-muscle activation patterns of the right limb during smartphone manipulation, including grasping with/without tapping, movement conditions (MCOND), and arm heights. In the first study (chapter 2), we examined the neuromuscular control strategy of the upper limb during grasping with/without tapping on a smartphone by evaluating muscle-activation patterns of the upper limb under different movement conditions (MCOND). There was a change in muscle activity across MCOND and segments. We concluded that our neuromuscular system generates a motor strategy that allows smartphone manipulation involving grasping and tapping while maintaining MCOND, by generating continuous and distinct multi-muscle activation patterns in the upper limb muscles. In the second study (chapter 3), we examined the muscle activity of the upper limb when the smartphone was manipulated at two arm heights, shoulder and abdomen, to understand the influence of arm height on the neuromuscular control strategy of the upper limb. Some muscles showed a significant effect for the abdomen height (ABD), while others showed a significant effect for the shoulder height (SHD). We concluded that the motor control strategy was influenced by arm height, as there were changes in the shoulder and elbow joint angles along with the muscular activity of the upper limb. Further, the shoulder position helped hold the head upright, while the abdomen position reduced the moment arm and moment, and ultimately the muscle loading, compared to the shoulder. Overall, our neuromuscular system generates motor commands by activating multi-muscle patterns in the upper limb that depend on task demands such as grasping with/without tapping, MCOND, and arm height. Similarly, our neuromuscular system does not appear to increase muscle activation when MCOND and arm height are combined; instead, it uses a simple control strategy that selects the appropriate muscles and activates them based on the levels of MCOND and arm height.
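    As context for the multi-muscle activation patterns analyzed here, the following is a hedged sketch of a standard surface-EMG processing pipeline (band-pass filter, rectify, low-pass envelope). The sampling rate and cutoff frequencies are common textbook values, not the dissertation's actual parameters.

```python
# Standard EMG envelope extraction: band-pass, rectify, low-pass per muscle.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sampling rate (Hz)

def emg_envelope(raw, fs=FS):
    b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
    bandpassed = filtfilt(b, a, raw)
    rectified = np.abs(bandpassed)
    b, a = butter(4, 6 / (fs / 2), btype="low")   # 6 Hz envelope cutoff
    return filtfilt(b, a, rectified)

# raw_emg: (n_muscles, n_samples), one row per upper-limb muscle (placeholder)
raw_emg = np.random.randn(8, 5000)
envelopes = np.array([emg_envelope(ch) for ch in raw_emg])
print(envelopes.shape)  # (8, 5000): one activation time course per muscle
```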

    Get a grip: Analysis of muscle activity and perceived comfort in using stylus grips

    The design of handwriting instruments has been based primarily on touch, feel, aesthetics, and muscle exertion. Previous studies make it clear that different pen characteristics have to be considered along with hand-instrument interaction in the design of writing instruments, including pens designed for touch screens and computer-based writing surfaces. Hence, this study focuses primarily on evaluating the impact of grip style on user comfort and the muscle activity associated with handgrip while using a stylus pen. Surface EMG measures were taken near the adductor pollicis, flexor digitorum, and extensor indicis of eight participants while they performed writing, drawing, and point-and-click tasks on a tablet using a standard stylus and two grip options. Participants were also timed and surveyed on their comfort level for each trial. Results indicate that participants overall felt that using a grip was more comfortable than using the stylus alone. The claw grip was the preferred choice for writing and drawing, and the crossover grip was preferred for pointing and clicking. There was a reduction in the muscle activity of the extensor indicis with the claw or crossover grip for the drawing and point-and-click tasks. The reduced muscle activity and the perceived comfort show the claw grip to be a viable option for improving comfort when writing or drawing on a touchscreen device.
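    A hedged sketch of how grip conditions are typically compared in studies like this one: each muscle's EMG envelope is normalized to a maximum voluntary contraction (MVC) and averaged over the task window. The muscle names match the study, but the data and MVC values below are placeholders.

```python
# Compare grip conditions as mean %MVC per muscle (all values synthetic).
import numpy as np

muscles = ["adductor pollicis", "flexor digitorum", "extensor indicis"]

def percent_mvc(envelope, mvc_peak):
    """Express an EMG envelope as a percentage of the muscle's MVC peak."""
    return 100.0 * envelope / mvc_peak

rng = np.random.default_rng(1)
for grip in ["stylus alone", "claw grip"]:
    mean_activity = {
        m: percent_mvc(rng.uniform(0.05, 0.4, 2000), mvc_peak=1.0).mean()
        for m in muscles
    }
    print(grip, {m: round(v, 1) for m, v in mean_activity.items()})
```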

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, enabling applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, and gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes, but not the entire face, are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the state of the art.
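    The visibility analysis implied here can be approximated with stock detectors. The sketch below uses OpenCV Haar cascades to count photos with a detectable face or eyes; the dataset path is hypothetical, and the authors evaluated a state-of-the-art detector rather than these cascades.

```python
# Count the fraction of photos in which a face or eyes are detectable.
import cv2
import glob

face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

face_hits = eye_hits = total = 0
for path in glob.glob("photos/*.jpg"):       # hypothetical dataset location
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    total += 1
    if len(face_det.detectMultiScale(gray, 1.1, 5)) > 0:
        face_hits += 1
    if len(eye_det.detectMultiScale(gray, 1.1, 5)) > 0:
        eye_hits += 1

if total:
    print(f"full face visible: {100 * face_hits / total:.1f}% of {total} photos")
    print(f"eyes visible:      {100 * eye_hits / total:.1f}% of {total} photos")
```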

    Understanding grip shifts: how form factors impact hand movements on mobile phones

    In this paper we present an investigation into how hand usage is affected by different mobile phone form factors. Our initial (qualitative) study explored how users interact with various mobile phone types (touchscreen, physical keyboard, and stylus). The analysis of the videos revealed that each type of mobile phone affords specific handgrips, and that users shift these grips, and consequently the tilt and rotation of the phone, depending on the context of interaction. To further investigate these tilt and rotation effects, we conducted a controlled quantitative study in which we varied the size of the phone and the type of grip (symmetric bimanual; asymmetric bimanual with finger; asymmetric bimanual with thumb; single-handed) to better understand how they affect tilt and rotation during a dual pointing task. The results showed that the size of the phone has an effect: the distance needed to reach action items affects the phone's tilt and rotation. Additionally, we found that the amount of tilt, rotation, and reach required corresponded with participants' grip preferences. We conclude by discussing design lessons for mobile UI and proposing design guidelines and applications based on these insights.
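    For readers reproducing such tilt and rotation measures, here is a minimal sketch of deriving device pitch and roll from accelerometer samples; the axis conventions and the logged values are assumptions, since the paper does not publish its code.

```python
# Derive device tilt (pitch/roll) relative to gravity from accelerometer data.
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (degrees) of the device relative to gravity."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# One accelerometer sample per grip condition (made-up values, in g).
samples = {
    "symmetric bimanual": (0.05, 0.40, 0.91),
    "single handed": (0.22, 0.55, 0.80),
}
for grip, (ax, ay, az) in samples.items():
    pitch, roll = tilt_angles(ax, ay, az)
    print(f"{grip}: pitch={pitch:.1f} deg, roll={roll:.1f} deg")
```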

    EthoHand: A dexterous robotic hand with ball-joint thumb enables complex in-hand object manipulation

    Our dexterous hand is a fundamental human feature that distinguishes us from other animals by enabling us to go beyond grasping to sophisticated in-hand object manipulation. Our aim was to design a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom while being under-actuated by seven motors, and that can replicate human hand movements in a naturalistic manner, including in-hand object manipulation. We therefore focused on developing a novel thumb and palm articulation that facilitates in-hand object manipulation while avoiding mechanical design complexity. Our key innovation is the use of a tendon-driven ball joint as the basis for an articulated thumb. This innovation enables our under-actuated hand to perform complex in-hand object manipulation, such as passing a ball between the fingers or even writing text messages on a smartphone with the thumb's end-point while holding the phone in the palm of the same hand. We then compare the dexterity of our robotic hand to other designs in prosthetics and robotics, and to the human hand, using simulated and physical kinematic data, demonstrating that the dexterity of our novel articulation exceeds previous designs by a factor of two. Our approach achieves naturalistic movement of the human hand without requiring translation in the hand joints, and enables teleoperation of complex tasks, such as single (robot) handed messaging on a smartphone, without the need for haptic feedback. Our simple, under-actuated design outperforms current state-of-the-art robotic and prosthetic hands on abilities ranging from grasping to activities of daily living that involve complex in-hand object manipulation.
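    To illustrate why a ball-joint thumb simplifies articulation, here is a small sketch in which a single rotation orients the whole thumb ray and the tip position follows directly. The link length, joint centre, and angles are hypothetical, not EthoHand's actual geometry.

```python
# One ball-joint rotation covers the whole cone of thumb opposition poses.
import numpy as np
from scipy.spatial.transform import Rotation as R

THUMB_LENGTH = 0.07                   # assumed thumb ray length (m)
BASE = np.array([0.02, -0.03, 0.0])   # assumed ball-joint centre on the palm

def thumb_tip(yaw, pitch, roll):
    """Thumb tip position for one ball-joint orientation (radians)."""
    rot = R.from_euler("zyx", [yaw, pitch, roll])
    return BASE + rot.apply([THUMB_LENGTH, 0.0, 0.0])

# Sweep opposition angles: a single joint reorients the entire thumb.
for yaw in np.linspace(0, np.pi / 2, 4):
    print(np.round(thumb_tip(yaw, 0.3, 0.0), 3))
```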

    LensLeech: On-Lens Interaction for Arbitrary Camera Devices

    Cameras provide a vast amount of information at high rates and are part of many specialized or general-purpose devices. This versatility makes them suitable for many interaction scenarios, yet they are constrained by geometry and require objects to keep a minimum distance for focusing. We present the LensLeech, a soft silicone cylinder that can be placed directly on or above lenses. The clear body itself acts as a lens to focus a marker pattern from its surface into the camera it sits on. This allows us to detect rotation, translation, and deformation-based gestures such as pressing or squeezing the soft silicone. We discuss design requirements, describe fabrication processes, and report on the limitations of such on-lens widgets. To demonstrate the versatility of LensLeeches, we built prototypes showing application examples for wearable cameras, smartphones, and interchangeable-lens cameras, extending existing devices with both optical input and output for new functionality.
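    A reduced sketch of the on-lens sensing idea: detect the dots of the marker pattern in the camera frame and track their centroid to recover translation. The real LensLeech pipeline also recovers rotation and deformation; the detector parameters and file names here are illustrative assumptions.

```python
# Track the marker-dot centroid between frames to recover translation gestures.
import cv2
import numpy as np

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 20          # assumed dot size in pixels
detector = cv2.SimpleBlobDetector_create(params)

def pattern_centroid(frame_gray):
    if frame_gray is None:
        return None
    keypoints = detector.detect(frame_gray)
    if not keypoints:
        return None
    pts = np.array([kp.pt for kp in keypoints])
    return pts.mean(axis=0)

prev = pattern_centroid(cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE))
curr = pattern_centroid(cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE))
if prev is not None and curr is not None:
    print("pattern moved by", curr - prev, "pixels")
```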

    Embodied Interactions for Spatial Design Ideation: Symbolic, Geometric, and Tangible Approaches

    Computer interfaces are evolving from mere aids for number crunching into active partners in creative processes such as art and design. This is, to a great extent, the result of the mass availability of new interaction technology such as depth sensing, sensor integration in mobile devices, and increasing computational power. We are now witnessing the emergence of a maker culture that can take art and design beyond the purview of enterprises and professionals such as trained engineers and artists. Materializing this transformation is not trivial; everyone has ideas, but only a select few can bring them to reality. The challenge lies in recognizing human actions and interpreting them as design intent.

    GRAB: A Dataset of Whole-Body Human Grasping of Objects

    Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This yields detailed 3D meshes over time, from which we compute contact between the body and the object. This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application: we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de.
    Comment: ECCV 2020.
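    A minimal sketch of the contact computation described above: body vertices within a distance threshold of the object are marked as in contact. The 2 mm threshold and the random vertex arrays are placeholders; the actual meshes come from the dataset at https://grab.is.tue.mpg.de.

```python
# Mark body vertices within a threshold distance of the object as "in contact".
import numpy as np
from scipy.spatial import cKDTree

CONTACT_THRESH = 0.002   # assumed contact threshold (2 mm)

body_verts = np.random.rand(10000, 3)    # placeholder body mesh vertices
object_verts = np.random.rand(2000, 3)   # placeholder object mesh vertices

tree = cKDTree(object_verts)
dists, _ = tree.query(body_verts)        # nearest object vertex per body vertex
contact_mask = dists < CONTACT_THRESH
print(f"{contact_mask.sum()} body vertices in contact")
```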