
    Get a Grip: Evaluating Grip Gestures for VR Input Using a Lightweight Pen

    The use of Virtual Reality (VR) in applications such as data analysis, artistic creation, and clinical settings requires high-precision input. However, the current design of handheld controllers, where wrist rotation is the primary input approach, does not exploit the human fingers' capability for dexterous movement in high-precision pointing and selection. To address this issue, we investigated the characteristics and potential of using a pen as a VR input device. We conducted two studies. The first examined which pen grip allowed the largest range of motion; we found that a tripod grip at the rear end of the shaft met this criterion. The second study investigated target selection via 'poking' and ray-casting, where we found the pen grip outperformed traditional wrist-based input in both cases. Finally, we demonstrate potential applications enabled by VR pen input and grip postures.

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation. Depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
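    The mode concept described above can be illustrated with a minimal sketch; the mode names and actions below are hypothetical examples, not taken from the thesis:

```python
# Minimal sketch of Raskin's mode concept: the same touch input
# produces different results depending on the interface's current mode.
# Mode names and actions are illustrative, not from the thesis.

class Canvas:
    def __init__(self):
        self.mode = "draw"          # current mode
        self.log = []               # record of interpreted actions

    def switch_mode(self, mode):
        """Mode switching: the transition from one mode to another."""
        self.mode = mode

    def touch(self, x, y):
        """A single touch is interpreted according to the current mode."""
        if self.mode == "draw":
            self.log.append(("line_to", x, y))
        elif self.mode == "pan":
            self.log.append(("pan_by", x, y))
        elif self.mode == "select":
            self.log.append(("select_at", x, y))

canvas = Canvas()
canvas.touch(10, 20)            # interpreted as drawing
canvas.switch_mode("pan")
canvas.touch(10, 20)            # same input, different result
print(canvas.log)               # [('line_to', 10, 20), ('pan_by', 10, 20)]
```

    The mode-switching cost studied in the thesis would correspond to the time a user spends triggering `switch_mode` between two `touch` actions.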

    Ipsi- and contralateral corticospinal influences in uni- and bimanual movements in humans

    There are both contra- (c) and ipsilateral (i) corticospinal (CS) projections to motoneurons (MNs). There is evidence that cCS influences on wrist MNs are modulated by wrist position and cutaneous afferents. We therefore aimed to test whether these findings also hold for iCS influences. Using transcranial magnetic stimulation applied over the right primary motor cortex, we first compared iCS influences on wrist flexor MNs at actively maintained flexion and extension wrist positions in one unimanual and two bimanual tasks in right-handed subjects (n=23). We further compared iCS influences in five bimanual holding tasks in which subjects had to hold a smooth or coarse block between their hands, with or without its weight being supported, in the flexion position (n=21). In one task, a weight was added to the unsupported smooth block to increase the required load forces. A position-dependent modulation of the short-latency motor evoked potential (iMEP) was observed, but only in the bimanual task in which the two hands interacted through a block (p=0.01). A texture-dependent modulation was present regardless of weight support, and the smooth block was associated with larger iMEPs than the coarse block (p=0.001). Hence, iCS influences on MNs were modulated only in bimanual tasks and depended on how the two hands interacted. Furthermore, cutaneous afferents modulated facilitatory iCS influences and may thus participate in the scaling and maintenance of grip forces. It is concluded that the left and right hemispheres cooperate in bimanual tasks involving holding an object between the hands, with possible participation of mono- and poly-synaptic projections to MNs, including transcallosal ones. The possible involvement of spinal and transcortical stretch and cutaneous reflexes in bimanual object-holding tasks is discussed based on the notion that indirect, referent control underlies motor actions. These results may be essential for understanding the role of interhemispheric interactions in healthy and neurological subjects.
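    The motor evoked potential amplitudes compared above are conventionally measured peak-to-peak within a post-stimulus window; a minimal sketch, in which the window bounds and sampling rate are illustrative assumptions rather than values from this study:

```python
# Sketch of peak-to-peak MEP amplitude measurement in a fixed
# post-stimulus window. The 20-50 ms window and 5 kHz sampling
# rate are illustrative assumptions, not values from the thesis.

def mep_amplitude(emg, fs_hz=5000, win_ms=(20, 50)):
    """Peak-to-peak EMG amplitude in a post-stimulus window.
    emg: samples aligned so that index 0 is the TMS pulse."""
    start = int(win_ms[0] * fs_hz / 1000)
    stop = int(win_ms[1] * fs_hz / 1000)
    window = emg[start:stop]
    return max(window) - min(window)

# Toy trace: flat baseline with a biphasic deflection inside the window.
trace = [0.0] * 300
trace[120] = 0.8    # positive peak (24 ms at 5 kHz)
trace[140] = -0.5   # negative peak (28 ms)
print(mep_amplitude(trace))
```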

    How do humans mediate with the external physical world? From perception to control of articulated objects

    Many actions in our daily life involve operating articulated tools. Despite the ubiquity of articulated objects in daily life, the human ability to perceive the properties of, and to control, articulated objects has been scarcely studied. Articulated objects are composed of links connected by revolute or prismatic joints; moving one part of the linkage results in the movement of the others. Reaching a position with the tip of a tool requires adapting the motor commands to the changed position of the end-effector, unlike reaching the same position with the hand. The dynamic properties of moving articulated bodies are complex and configuration-dependent. For instance, apparent mass, a quantity that measures the dynamic interaction of the articulated object, varies as a function of changes in configuration. An actuated articulated system can generate a static but position-dependent force field with constant torques about its joints. There is evidence that internal models are involved in the perception and control of tools. In the present work, we investigate several aspects of the perception and control of articulated objects and address two questions. First, how do people perceive kinematic and dynamic properties in haptic interaction with articulated objects? Second, what effect does seeing the tool have on the planning and execution of reaching movements with a complex tool, and does a visual representation of the mechanism's structure help in the reaching movement, and how? To address these questions, 3D-printed physical articulated objects and robotic systems were designed and developed for the psychophysical studies. The present work comprises three studies on different aspects of the perception and control of articulated objects.
    The first study used haptic size discrimination tasks with three different types of objects, namely wooden boxes, an actuated apparatus with two movable flat surfaces, and large pliers, in unimanual, bimanual grounded, and bimanual free conditions. We found that bimanual integration occurred in particular in the free manipulation of objects. The second study was on visuo-motor reaching with complex tools. We found that seeing the mechanism of the tool, even briefly at the beginning of the trial, improved reaching performance. The last study was about force perception: the evidence showed that people could use the force field at the end-effector to infer the torque about the joints generated by the articulated system.
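    The kinematic coupling described above, where the end-effector position depends nonlinearly on the joint configuration, can be sketched for a planar two-link linkage; the link lengths are illustrative assumptions:

```python
import math

# Forward kinematics of a planar two-link articulated object: the tip
# position depends nonlinearly on the joint angles, which is why reaching
# with an articulated tool differs from reaching with the hand.
# Link lengths are illustrative assumptions.

def end_effector(theta1, theta2, l1=0.3, l2=0.2):
    """Tip position of a two-link planar chain (angles in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along x: tip at l1 + l2.
print(end_effector(0.0, 0.0))        # (0.5, 0.0)

# Elbow bent 90 degrees: the same first joint angle gives a different tip.
x, y = end_effector(0.0, math.pi / 2)
print(round(x, 3), round(y, 3))      # 0.3 0.2
```

    The configuration dependence visible here is also what makes quantities such as apparent mass vary as the linkage moves.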

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that, in many commercial design tools, require menus and tool palettes: techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.

    Extending touch with eye gaze input

    Direct touch manipulation with displays has become one of the primary means by which people interact with computers. Exploration of new interaction methods that work in unity with the standard direct manipulation paradigm will benefit the many users of such an input paradigm. In many instances of direct interaction, both the eyes and hands play an integral role in accomplishing the user's interaction goals. The eyes visually select objects, and the hands physically manipulate them. In principle this process includes a two-step selection of the same object: users first look at the target, and then move their hand to it for the actual selection. This thesis explores human-computer interactions where the principle of direct touch input is fundamentally changed through the use of eye-tracking technology. The change we investigate is a general reduction to a one-step selection process. The need to select using the hands can be eliminated by utilising eye-tracking to enable users to select an object of interest using their eyes only, by simply looking at it. Users then employ their hands for manipulation of the selected object; however, they can manipulate it from anywhere, as the selection is rendered independent of the hands. When a spatial offset exists between the hands and the object, the user's manual input is indirect. This allows users to manipulate any object they see from any manual input position. This fundamental change can have a substantial effect on the many human-computer interactions that involve user input through direct manipulation, such as temporary touchscreen interactions. However, it is unclear if, when, and how it can become beneficial to users of such an interaction method. To approach these questions, our research on this topic is guided by the following two propositions.
    The first proposition is that gaze input can transform a direct input modality such as touch into an indirect modality, and with it provide new and powerful interaction capabilities. We develop this proposition in the context of our investigation of integrated gaze interactions within direct manipulation user interfaces. We first consider eye gaze for generic multi-touch displays, introducing Gaze-Touch as a technique based on a division of labour: gaze selects, touch manipulates. We investigate this technique with a design space analysis, prototyping of application examples, and an informal user evaluation. The proposition is further developed by an exploration of hybrid eye and hand input with a stylus, for precise, cursor-based indirect control; with bimanual input, to rapidly issue input from two hands to gaze-selected objects; with tablets, where Gaze-Touch enables one-handed interaction across the whole screen with the same hand that holds the device; and with free-hand gestures in virtual reality, to interact at a distance with any viewed object in the virtual scene. Overall, we demonstrate that using eye gaze to enable indirect input yields many interaction benefits, such as whole-screen reachability, occlusion-free manipulation, high-precision cursor input, and low physical effort. Integration of eye gaze with manual input raises new questions about how it can complement, instead of replace, the direct interactions users are familiar with. This is important to allow users the choice between direct and indirect input, as each affords distinct pros and cons for the usability of human-computer interfaces. These two input forms are normally considered separately from each other, but here we investigate interactions that combine them within the same interface. In this context, the second proposition is that gaze and touch input enable new and seamless ways of combining direct and indirect forms of interaction.
    We develop this proposition by considering multiple interaction tasks that a user usually performs in sequence or simultaneously. First, we introduce a method that enables users to switch between both input forms by implicitly exploiting visual attention during manual input. Direct input is active when the user looks at the input; otherwise, the user manipulates the object they look at indirectly. A design application for typical drawing and vector-graphics tasks was prototyped to illustrate and explore this principle. The application contributes many example use cases where direct drawing activities are complemented with indirect menu actions, precise cursor input, and seamless context switching at a glance. We further develop the proposition by investigating simultaneous direct and indirect input through bimanual interaction, where each input form is assigned to one hand. We present an empirical study with an in-depth analysis of using indirect navigation with one hand and direct pen drawing with the other. We extend this input constellation to tablet devices by designing compound techniques for use in a more naturalistic setting, where one hand holds the device. The interactions show that many typical tablet scenarios, such as browsing, map navigation, homescreen selection, or image galleries, can be enhanced by exploiting eye gaze.
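    The division of labour underlying Gaze-Touch, gaze selects and touch manipulates, can be sketched as an input-routing step; the object model and event shapes here are hypothetical, not the thesis's implementation:

```python
# Sketch of the Gaze-Touch division of labour: gaze performs the hit
# test that selects an object, and subsequent touch movement manipulates
# the selected object from any manual input position (indirect input).
# The object model and event shapes are hypothetical.

def hit_test(objects, gaze_x, gaze_y):
    """Return the object whose bounds contain the gaze point, if any."""
    for obj in objects:
        x, y, w, h = obj["bounds"]
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return obj
    return None

def on_touch_drag(objects, gaze, drag):
    """One-step selection: look at a target, then drag from anywhere."""
    target = hit_test(objects, *gaze)      # gaze selects
    if target is not None:
        dx, dy = drag                      # touch manipulates (indirectly)
        x, y, w, h = target["bounds"]
        target["bounds"] = (x + dx, y + dy, w, h)
    return target

shapes = [{"name": "circle", "bounds": (100, 100, 50, 50)}]
moved = on_touch_drag(shapes, gaze=(120, 130), drag=(15, -10))
print(moved["bounds"])    # (115, 90, 50, 50)
```

    Because the drag coordinates never need to overlap the target's bounds, the manipulation is occlusion-free and reaches the whole screen.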

    Investigating New Forms of Single-handed Physical Phone Interaction with Finger Dexterity

    With phones becoming more powerful and such an essential part of our lives, manufacturers are creating new device forms and interactions to better support even more diverse functions. A common goal is to enable a larger input space and expand the input vocabulary using new physical phone interactions beyond touchscreen input. This thesis explores how utilizing our hand and finger dexterity can expand physical phone interactions. To understand how we can physically manipulate a phone using the fine motor skills of the fingers, we identify and evaluate single-handed "dexterous gestures". Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis), and flip (pitch axis), with a formative survey showing that all except flip have been performed by users for various reasons. A controlled experiment examines the speed, behaviour, and preference of the manipulations in the form of dexterous gestures, considering two directions and two movement magnitudes. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds that increased practice and familiarity improve the speed and comfort of dexterous gestures. Having confirmed that users can loosen their grip and perform gestures with finger dexterity, we investigate the performance of one-handed touch input on the side of a mobile phone. An experiment examines grip change and subjective preference when reaching for side targets using different fingers. Two follow-up experiments examine taps and flicks using the thumb and index finger in a new two-dimensional input space. We simulate a side-touch sensor with a combination of capacitive sensing and motion tracking to distinguish touches on the lower, middle, or upper edges. We further focus on physical phone interaction with a new phone form factor by exploring and evaluating single-handed folding interactions suitable for "modern flip phones": smartphones with a bendable full-screen touch display.
    Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch, in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble current flip phones, but with a modified spring system to enable folding in both directions. A study investigates performance and preference for 30 fold gestures, revealing which are most promising. Overall, our exploration shows that users can loosen their grip to physically interact with phones in new ways, and that these interactions could be practically integrated into daily phone applications.
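    A heuristic recognizer for spin, rotate, and flip, as mentioned above, could classify by the dominant rotation axis; this sketch assumes the thesis's axis mapping (yaw for spin, roll for rotate, pitch for flip) and an illustrative trigger threshold:

```python
# Sketch of a heuristic dexterous-gesture recognizer: classify by the
# axis with the largest accumulated rotation, following the mapping of
# spin (yaw), rotate (roll), and flip (pitch). The 90-degree trigger
# threshold is an illustrative assumption, not a value from the thesis.

def classify(yaw_deg, pitch_deg, roll_deg, threshold=90.0):
    """Return the gesture whose axis dominates, or None below threshold."""
    axes = {"spin": abs(yaw_deg), "flip": abs(pitch_deg), "rotate": abs(roll_deg)}
    gesture = max(axes, key=axes.get)
    return gesture if axes[gesture] >= threshold else None

print(classify(170.0, 12.0, 8.0))    # spin (yaw-dominant)
print(classify(5.0, 160.0, 20.0))    # flip (pitch-dominant)
print(classify(10.0, 15.0, 20.0))    # None (below threshold)
```

    In practice the accumulated angles would come from integrating gyroscope readings over the gesture window.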

    Ergonomic Design Guidelines for Non-flexible, Foldable, and Rollable Mobile Devices

    Smartphones are mobile devices used daily by people of almost all ages, so improving these devices from an ergonomic perspective can benefit many people. Similarly, future mobile devices with new displays must be designed from an ergonomic perspective. The purpose of this thesis was to develop ergonomic design guidelines for current non-flexible smartphones as well as future flexible display devices, considering perceived grip comfort, user preference, attractive design, and/or muscle activity. This thesis consists of six studies. The first two studies are on current smartphones with non-flexible displays, and the remaining four are on future mobile devices with flexible (foldable and rollable) displays. Study 1 examined the effects of task (neutral, comfortable, maximum, vertical, and horizontal strokes), phone width (60 and 90 mm), and hand length (small, medium, and large) on grasp, index finger reach zone, discomfort, and muscle activation for smartphone rear interaction. Ninety individuals participated in this study. The grasp was classified into two groups for rear interaction usage. The recommended zone for rear interaction was 8.8–10.1 cm from the bottom and 0.3–2.0 cm to the right of the vertical center line. Horizontal (vertical) strokes deviated from the horizontal axis in the range −10.8° to −13.5° (81.6° to 88.4°). Maximum strokes appeared to be excessive, as they caused 43.8% greater discomfort than neutral strokes did. A 90 mm width also appeared to be excessive, as it resulted in a 12.3% increase in discomfort relative to the 60 mm width. The small-hand group reported 11.9–18.2% higher discomfort ratings, and the percentage of maximum voluntary exertion of the flexor digitorum superficialis was 6.4% higher.
    Study 2 aimed to identify ergonomic forms of non-flexible smartphones by investigating the effects of hand length, four major smartphone dimensions (height, width, thickness, and edge roundness), and mass on one-handed grip comfort and design attractiveness. Seventy-two individuals participated. Study 2 was conducted in three stages. Stage 1 determined the ranges of the four smartphone dimensions suitable for grip comfort. Stage 2 investigated the effects of width and thickness (determined to have the greatest influence) on grip comfort and design attractiveness. Stage 3 investigated the effect of mass on grip comfort and design attractiveness. Phone width was found to significantly influence grip comfort and design attractiveness, and dimensions of 140 × 65 (or 70) × 8 × 2.5 mm (height × width × thickness × edge roundness) provided higher one-handed grip comfort and design attractiveness. The selected dimensions were fitted with a mass of 122 g and compared within a range of 106–137 g. Study 3 examined ergonomic forms for mobile foldable display devices in terms of folding/unfolding comfort and preference. Sixty individuals participated. Study 3 was conducted in two stages. In stage 1, suitable screen sizes for five tasks (messaging, calling, texting, web searching, and gaming) were determined. In stage 2, the most preferred folding methods among 14 different bi-folding and tri-folding methods were determined. The device dimension of 140H × 60W was preferred for calling, whereas 140H × 130W was preferred for web searching and gaming. The most preferred tri-fold concept (140H × 198W) utilized Z-shaped screen folding. A trade-off was observed between screen protection and easy screen access. Study 4 examined the effects of gripping condition, device thickness, and hand length on bimanual grip comfort when using mobile devices with a rollable display.
    Thirty individuals evaluated three rollable display device prototypes (2, 6, and 10 mm right-side thickness) using three distinct gripping conditions (unrestricted, restricted, and pulp pinch grips). Rollable display devices should have at least a 20 mm side bezel width and 10 mm thickness to ensure high grip comfort for bilateral screen pulling. Grip comfort increased as the device thickness increased. Relative to device thickness, gripping condition greatly influenced bimanual grip comfort. Study 5 examined the effects of device height (70, 140, and 210 mm), task (web searching, video watching, and e-mail composing), and hand length (small, medium, and large hand groups) on various UX elements associated with using rollable display devices. Thirty individuals participated. Six UX elements (preferred screen width, preferred screen aspect ratio, user satisfaction, grip comfort, portability, design attractiveness, and gripping method) were assessed. Among device height, task, and hand length, device height was the most influential on the UX elements. The 95th percentile preferred screen width of the three prototypes (device heights of 210, 140, and 70 mm) was 311.1, 206.2, and 100.0 mm, respectively. The larger the hand length, the wider the preferred screen width. A device (screen) height of 140 (120) mm with a 206.2 mm wide screen improved the overall user experience. Study 6 examined the effects of gender (15 males and 15 females), device thickness (2T, 6T, and 10T), and pulling duration (0.5 s, 1.0 s, and 1.5 s) on preferred and acceptable pulling forces, muscle activities, and perceived comfort of the upper limbs associated with unrolling rollable displays. Thirty individuals evaluated three rollable display prototypes by laterally pulling each prototype for three different durations. Preferred and acceptable pulling forces of the upper limbs were measured, and the corresponding muscle activation and perceived comfort were obtained.
    Pulling duration largely accounted for the %MVC of the posterior deltoid (PD), flexor carpi radialis (FCR), and extensor carpi radialis (ECR), whereas gender largely accounted for perceived comfort. In consideration of perceived comfort, a device thickness of 2T to 6T was recommended for both genders. The %MVC of the PD, FCR, and ECR of the female group was 1.4–2.4 times as high as that of the male group. The perceived comfort of the male group was 1.1–1.3 times higher than that of the female group. Overall, 6T was the best thickness. Users preferred a shorter pulling duration with a higher level of muscle activation over a longer pulling duration with a lower level of muscle activation to unroll the rollable screen. This work suggested ergonomic design guidelines for non-flexible smartphones and flexible mobile devices. Through these guidelines, basic dimensions and concepts for current and future mobile devices can be specified. In future studies, it is necessary to consider the intangible UX for future mobile devices by investigating the GUI based on the PUI proposed in this study.

    Factors influencing haptic shape perception

    The purpose was to determine the contribution of several factors (design of the task, angle orientation, head position, and gaze) to the ability of subjects to perceive differences in two-dimensional (2-D) shape using haptic touch. Two series of experiments (n=12 each) were carried out. In all cases the angles were explored with the index finger of the outstretched arm. The first experiment showed that the mean threshold for 2-D angle discrimination was significantly higher, 7.4°, than for 2-D angle categorization, 3.9°. This result extended previous work by showing that the difference is present in the same subjects tested under identical conditions (knowledge of results, visual test conditions, angle orientation). The results also showed that angle categorization did not vary as a function of the orientation of the angles in space (oblique, upright). Given that the angles presented were all distributed around 90°, and that this may be a special case as in vision, this finding needs to be extended to different ranges of angles.
    The higher threshold with angle discrimination likely reflects the increased cognitive demands of this task, which required subjects to temporarily store a mental representation of the first angle scanned and to compare it to the second scanned angle. The second experiment followed up on observations that categorization thresholds are modified by gaze direction but not head position when the unseen angles are explored in an eccentric position, 60° to the right of midline. This experiment tested the hypothesis that the increased threshold when gaze was directed to the far right might reflect an action of spatial attention. Subjects explored angles located to the right of midline, systematically varying the direction of gaze (away from or towards the angles) along with angle location (30° and 60° to the right). Categorization thresholds showed no change across the conditions tested, although bias (point of subjective equality) was changed (a shift to lower angle values). Since our testing with far-right gaze (away) had no effect on threshold, we suggest that the key factor contributing to the increased threshold seen previously (head forward/gaze right) must have been that particular combination of head/gaze/angles and not spatial attention.
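    The threshold and bias (point of subjective equality, PSE) reported above are conventionally read off a psychometric function fitted to the categorization responses; a minimal sketch using a logistic form, with illustrative parameter values:

```python
import math

# Sketch of how a categorization threshold and bias (PSE) are derived
# from a logistic psychometric function fitted to "larger than 90
# degrees" responses. The parameter values below are illustrative.

def psychometric(angle, pse, slope):
    """Probability of responding 'larger' to a given angle."""
    return 1.0 / (1.0 + math.exp(-(angle - pse) / slope))

def pse_and_threshold(pse, slope):
    """PSE is the 50% point; threshold is half the 25%-75% spread."""
    p25 = pse + slope * math.log(0.25 / 0.75)
    p75 = pse + slope * math.log(0.75 / 0.25)
    return pse, (p75 - p25) / 2.0

# At the PSE the two responses are equally likely.
print(psychometric(88.0, 88.0, 3.5))   # 0.5

# A PSE below 90 degrees corresponds to the bias shift reported above.
bias, threshold = pse_and_threshold(pse=88.0, slope=3.5)
print(round(bias, 1), round(threshold, 2))
```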

    Motor Control of the Thumb-Index System in a Healthy Population

    The thumb and index finger are involved in many daily tasks, so it is understandable how injuries and musculoskeletal, rheumatologic, and neurological diseases can affect hand function and cause severe disability. The evaluation of motor control deficits of the thumb-index system is necessary to identify impairments and to propose specific therapeutic or surgical interventions. Pinch maximal voluntary contraction is the most investigated parameter and is a valid estimator of general hand function. However, the thumb and index finger are rarely used at maximal contraction; they are usually used in precision pinches at low submaximal forces exerted for a short-to-long time. For this reason, other parameters must be investigated. In this dissertation, a multiparametric evaluation of the thumb-index system is proposed. The battery of tests consisted of the maximal voluntary contraction (MVC) of the pinch grip (TP, tip pinch, and PP, palmar pinch) and of the opposite movement (E, extension of thumb and index), endurance (SC, sustained contraction), the accuracy and precision of pinch force in a pinch-and-release task (DC, dynamic contraction), and force coordination between hands in a bimanual simultaneous task (BSC, bimanual strength coordination). The tasks were measured with a measurement system consisting of two pinch gauges connected to a PC; visual feedback was displayed on a monitor through the graphical user interface of ad-hoc developed software. To be usable in the clinical context, it is important to check the reliability of the tasks, and collecting data in healthy samples permits, on the one hand, analysis of how values change as a function of anthropometric variables, hand dominance, and dexterity, and, on the other hand, the definition of reference values against which to compare pathological populations.
    Therefore, this dissertation was conducted through test-retest reliability studies and cross-sectional studies to establish normative data for the PP, TP, and E MVCs, SC, DC, and BSC in the Italian population. All the tasks proved reliable and consistent: MVC and SC showed high reliability, while the reliability of DC and BSC was lower but clinically acceptable. Strength, analysed through the PP, TP, and E MVCs, declined in line with the normal process of aging, which also involves changes in muscle fibers and the reduction of daily activities in older adults. In relative terms, E-MVC showed the highest strength loss in those over 75 years of age. SC showed similar values in all age groups, whereas the variables of DC and BSC showed a large age-related decline. Women performed better than men only in SC; in MVC, DC, and BSC, men excelled. A hand dominance effect emerged only in the TP and PP MVCs. Correlations between tasks were very low to low, suggesting that the tasks measured different constructs. This Ph.D. project proposed novel tasks to evaluate pinch motor control, which were shown to be reliable in healthy people, and obtained their normative data, representing a useful aid in the clinical field. The results are a starting point for future studies to highlight impairments of the thumb-index system in different neurological and musculoskeletal disorders and to guide rehabilitation and therapeutic intervention.
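    Simple versions of the metrics behind these tasks, MVC as the peak recorded force and dynamic-contraction accuracy and precision as error around a target force, can be sketched as follows; the sample data and the exact metric definitions are illustrative assumptions, not the dissertation's protocol:

```python
from statistics import mean, stdev

# Sketch of simple metrics for the pinch tasks described above:
# MVC as peak force, and dynamic-contraction accuracy/precision as
# mean error and variability around a target force. The sample data
# and exact metric definitions are illustrative assumptions.

def mvc(forces_n):
    """Maximal voluntary contraction: peak of the recorded force."""
    return max(forces_n)

def tracking_metrics(forces_n, target_n):
    """Accuracy (mean signed error) and precision (error variability)."""
    errors = [f - target_n for f in forces_n]
    return mean(errors), stdev(errors)

pinch_trace = [0.0, 12.5, 48.0, 61.2, 59.8, 30.1]   # N, toy MVC trial
print(mvc(pinch_trace))                              # 61.2

dc_trace = [9.6, 10.3, 10.1, 9.8, 10.2]              # N, target 10 N
acc, prec = tracking_metrics(dc_trace, 10.0)
print(round(acc, 2), round(prec, 2))
```

    Submaximal targets for SC or DC trials would then be expressed as a percentage of the subject's MVC.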
    • 
