7 research outputs found

    Historique et méthodologie de la nouvelle disposition de clavier AZERTY [History and methodology of the new AZERTY keyboard layout]

    Working document used in the drafting of Annex H, "History and methodology", of the AFNOR Z 71-300 standard, "Dispositions de clavier bureautique français" (French office keyboard layouts).

    AZERTY amélioré: computational design on a national scale

    France is the first country in the world to adopt a keyboard standard informed by computational methods, improving the performance, ergonomics, and intuitiveness of the keyboard while enabling input of many more characters. We describe a human-centric approach, developed jointly with stakeholders, to using computational methods in the decision process not only to solve a well-defined problem but also to understand the design requirements, inform subjective views, and communicate the outcomes. To be more broadly useful, research must develop computational methods that can be used in a participatory and inclusive fashion, respecting the different needs and roles of stakeholders.

    Of keyboards and beyond - optimization in human-computer interaction

    In this thesis, we present optimization frameworks in the area of Human-Computer Interaction. First, we discuss keyboard layout problems with a special focus on a project we participated in, which aimed at designing the new French keyboard standard. The special nature of this national-scale project and its optimization ingredients are discussed in detail; we specifically highlight our algorithmic contribution to this project. Exploiting the special structure of this design problem, we propose an optimization framework that efficiently computes keyboard layouts and provides very good optimality guarantees in the form of tight lower bounds. The optimized layout that we showed to be nearly optimal was the basis of the new French keyboard standard, recently published in the National Assembly in Paris. Moreover, we propose a relaxation for the quadratic assignment problem (a generalization of keyboard layout problems) that is based on semidefinite programming. In a branch-and-bound framework, this relaxation achieves competitive results compared to commonly used linear programming relaxations for this problem. Finally, we introduce a modeling language for mixed integer programs that especially focuses on the challenges and features that appear in participatory optimization problems similar to the French keyboard design process.
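    The core objective in this line of work weighs how often character pairs are typed against how costly the corresponding key-to-key movements are. The toy sketch below (not the thesis code; all frequencies, coordinates, and slot names are invented) evaluates such a quadratic-assignment-style cost for candidate layouts and brute-forces a tiny instance, which is exactly what becomes infeasible at realistic sizes and motivates the solver-based methods and lower bounds described above.

```python
# Toy sketch of a quadratic-assignment-style keyboard layout objective:
# layout cost = sum over character pairs of bigram frequency * key-to-key distance.
# All values below are made up for illustration.

import itertools
import math

# Hypothetical key slots with (x, y) coordinates on a keyboard grid.
key_positions = {
    "slot0": (0.0, 0.0),
    "slot1": (1.0, 0.0),
    "slot2": (2.0, 0.0),
    "slot3": (0.0, 1.0),
}

# Hypothetical bigram frequencies for four characters.
bigram_freq = {
    ("e", "n"): 0.30,
    ("n", "e"): 0.25,
    ("e", "z"): 0.05,
    ("z", "e"): 0.04,
}

def layout_cost(layout):
    """Sum over character pairs of bigram frequency times distance between assigned keys."""
    return sum(
        freq * math.dist(key_positions[layout[a]], key_positions[layout[b]])
        for (a, b), freq in bigram_freq.items()
    )

# Brute-force search over all assignments -- only feasible for tiny instances,
# since the number of layouts grows as n!.
chars = ["e", "n", "z", "q"]
best = min(
    (dict(zip(chars, perm)) for perm in itertools.permutations(key_positions)),
    key=layout_cost,
)
print(best, layout_cost(best))
```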

    Creating and augmenting keyboards for extended reality with the Keyboard Augmentation Toolkit

    This article discusses the Keyboard Augmentation Toolkit (KAT), which supports the creation of virtual keyboards that can be used both for standalone input (e.g., for mid-air text entry) and to augment physically tracked keyboards/surfaces in mixed reality. In a user study, we first examine the impact and pitfalls of visualising shortcuts on a tracked physical keyboard, exploring the utility of virtual per-keycap displays. Supported by this and other recent developments in XR keyboard research, we then describe the design, development, and evaluation-by-demonstration of KAT. KAT simplifies the creation of virtual keyboards (optionally bound to a tracked physical keyboard) that support enhanced display (2D/3D per-key content that conforms to the virtual key bounds); enhanced interactivity (extensible per-key states such as tap, dwell, touch, and swipe); flexible keyboard mappings that can encapsulate groups of interaction and display elements (e.g., enabling application-dependent interactions); and flexible layouts (allowing the virtual keyboard to merge with and augment a physical keyboard, or to switch to an alternate layout, e.g., mid-air, based on need). Through these features, KAT will assist researchers in the prototyping, creation, and replication of XR keyboard experiences, fundamentally altering the keyboard's form and function.
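    As a rough illustration of the kind of per-key description such a toolkit implies, the sketch below models keys with display content and extensible interaction states. This is not KAT's actual API; all class and method names are hypothetical.

```python
# Illustrative sketch only -- not KAT's API. One way to model a virtual keyboard
# with per-key display content and per-key interaction states (tap, dwell, ...).

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Key:
    label: str                      # 2D/3D content shown on the keycap
    bounds: tuple                   # (x, y, width, height) in keyboard space
    handlers: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def on(self, state: str, handler: Callable[[], None]) -> None:
        """Register a handler for an interaction state such as 'tap' or 'dwell'."""
        self.handlers[state] = handler

    def trigger(self, state: str) -> None:
        if state in self.handlers:
            self.handlers[state]()

@dataclass
class VirtualKeyboard:
    name: str
    keys: List[Key] = field(default_factory=list)
    bound_to_physical: bool = False  # whether the layout tracks a physical keyboard

# Usage: a single key that types 's' on tap and shows a shortcut hint on dwell.
save_key = Key(label="S", bounds=(0, 0, 1, 1))
save_key.on("tap", lambda: print("type 's'"))
save_key.on("dwell", lambda: print("show 'Ctrl+S: Save' overlay"))
keyboard = VirtualKeyboard(name="demo", keys=[save_key])
keyboard.keys[0].trigger("dwell")
```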

    Designing Intra-Hand Input for Wearable Devices

    Department of Biomedical Engineering (Human Factors Engineering)
    Current trends toward the miniaturization of digital technology have enabled the development of versatile smart wearable devices. Powered by capable processors and equipped with advanced sensors, this novel device category can substantially impact application areas as diverse as education, health care, and entertainment. However, despite their increasing sophistication and potential, input techniques for wearable devices are still relatively immature and often fail to reflect key practical constraints in this design space. For example, on-device touch surfaces, such as the temple touchpad of Google Glass, are typically small and out of sight, thus limiting their expressivity. Furthermore, input techniques designed specifically for Head-Mounted Displays (HMDs), such as free-hand (e.g., Microsoft Hololens) or dedicated controller (e.g., Oculus VR) tracking, exhibit low levels of social acceptability (e.g., large-scale hand gestures are arguably unsuited for use in public settings) and tend to cause fatigue (e.g., gorilla arm) in long-term use. Such factors limit their real-world applicability. In addition to these difficulties, typical wearable use scenarios feature various situational impairments, such as encumbered use (e.g., having one hand busy), mobile use (e.g., while walking), and eyes-free use (e.g., while responding to real-world stimuli). These considerations are weakly catered for by the design of current wearable input systems. This dissertation seeks to address these problems by exploring the design space of intra-hand input, which refers to small-scale actions made within a single hand. In particular, through a hand-mounted sensing system, intra-hand input can span diverse input surfaces, from between fingers (e.g., fingers-to-thumb and thumb-to-fingers inputs) to body surfaces (e.g., hand-to-face inputs). I identify several advantages of this form of hand input, as follows. First, the hand's high dexterity can enable comfortable, quick, accurate, and expressive inputs of various types (e.g., tap, flick, or swipe touches) at multiple locations (e.g., on each of the five fingers or other body surfaces). In addition, many viable forms of these input movements are small in scale, promising low fatigue over long-term use and basic actions that are discrete and socially acceptable. Finally, intra-hand input is inherently robust to many common situational impairments, such as use that takes place in eyes-free, public, or mobile settings. Consolidating these prospective advantages, the general claim of this dissertation is that intra-hand input is an expressive and effective modality for interaction with wearable devices such as HMDs. The dissertation seeks to demonstrate that this claim holds in a range of wearable scenarios and applications, and with measures of both objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability). Specifically, in this dissertation, I verify the general claim by demonstrating it in three separate scenarios. I begin by exploring the design space of intra-hand input by studying the specific case of touches to a set of five touch-sensitive nails. To this end, I first conduct an exploratory design process in which a large set of 144 input actions is generated, followed by two empirical studies on comfort and performance that refine this large set to 29 viable inputs.
    The results of this work indicate that nail touches are an accessible, expressive, and comfortable form of input. Based on these results, in the second scenario, I focus on text entry in a mobile setting with the same nail form-factor system. Through a comparative empirical study involving both sitting and mobile conditions, nail-based touches were confirmed to be robust to physical disturbance while mobile. A follow-up word repetition study indicated that text entry speeds of up to 33.1 WPM could be achieved when key layouts were appropriately optimized for the nail form factor. These results reveal that intra-hand inputs are suitable for complex input tasks in mobile contexts. In the third scenario, I explore an alternative form of intra-hand input that relies on small-scale hand touches to the face, viewed through the lens of social acceptability. This scenario is especially valuable for multi-wearable usage contexts, as a single hand-mounted system can enable input from a proximate distance for each device scattered around the body (e.g., hand-to-face input for smartglasses or ear-worn devices, and inter-finger input in a wristwatch posture for smartwatches). However, making an input on the face can attract unwanted, undue attention from the public. Thus, the design stage of this work involved eliciting diverse unobtrusive and socially acceptable hand-to-face actions from users; these outcomes were then refined into five design strategies that can achieve socially acceptable input in this setting. Follow-up studies on a prototype that instantiates these strategies validate their effectiveness and provide a characterization of the speed and accuracy achieved by users with each system. I argue that this spectrum of metrics, recorded over a diverse set of scenarios, supports the general claim that intra-hand inputs for wearable devices can be operated expressively and effectively, in terms of both objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability), in common wearable use scenarios such as when mobile and in public. I conclude with a discussion of the contributions of this work, the scope for further developments, and the design issues that need to be considered by researchers, designers, and developers who seek to implement these types of input. This discussion spans diverse considerations, such as suitable tracking technologies, appropriate body regions, viable input types, and effective design processes. Through this discussion, this dissertation seeks to provide practical guidance to support and accelerate further research efforts aimed at achieving real-world systems that realize the potential of intra-hand input for wearables.
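    For context on the 33.1 WPM figure, text-entry studies conventionally count five characters as one word; a minimal sketch of that standard rate computation (with made-up numbers) follows.

```python
# Standard text-entry rate metric: words per minute (WPM), where one "word" is
# defined as five characters, so WPM = (characters / 5) / minutes.
# The example values are invented.

def words_per_minute(characters_typed: int, seconds: float) -> float:
    """Five transcribed characters count as one word."""
    return (characters_typed / 5.0) / (seconds / 60.0)

# e.g. 166 characters transcribed in 60 seconds -> ~33.2 WPM
print(round(words_per_minute(166, 60.0), 1))
```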

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head-Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because of the inevitably limited space for input and output imposed by form factors specialized to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other, larger form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional interaction devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques that exploit finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large form-factor devices (e.g., touchscreens or physical keyboards) owing to their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand-tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint-angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explores the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results show specific comfort ranges across variations in fingers, finger regions, and poses, arising from the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed, based on the contact areas on the touchscreen, to enhance the expressiveness of angle-based touch input.
    In the second scenario, this thesis reveals distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys demonstrated the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigates the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examines the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations with the built-in hand-tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and the novel interaction techniques in these various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with its contributions, design considerations, and the scope of future research, for future researchers and developers to implement robust finger-based interaction systems on various types of wearable devices.
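    As a toy illustration of touch-based finger identification (not the thesis implementation), the sketch below classifies a touch by comparing simple contact features against per-finger calibration centroids; all feature values and finger classes are invented, and a real system would normalize features and use richer sensor data.

```python
# Toy nearest-centroid finger identification from two contact features:
# contact area (mm^2) and contact-ellipse aspect ratio. Values are made up.

import math

# Hypothetical per-finger centroids learned from calibration touches.
finger_centroids = {
    "thumb":  (55.0, 1.6),
    "index":  (30.0, 1.2),
    "middle": (34.0, 1.1),
}

def identify_finger(area_mm2: float, aspect_ratio: float) -> str:
    """Return the finger whose calibration centroid is closest to the observed touch."""
    # Note: in practice the features should be normalized to comparable scales.
    return min(
        finger_centroids,
        key=lambda f: math.dist(finger_centroids[f], (area_mm2, aspect_ratio)),
    )

# A large, elongated contact is classified as the thumb.
print(identify_finger(52.0, 1.5))  # -> "thumb"
```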

    Assignment Problems for Optimizing Text Input

    Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). With only 26 symbols and 26 input actions, there are already more than 10^26 distinct solutions, making it impossible to find the best one through manual design. Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find the best solution for a given task and objective. However, work in this domain has been limited mostly to the performance optimization of keyboards. This Ph.D. thesis advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. First, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed for assessing optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text-entry optimization. The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air, text input on a long piano keyboard, and, as a contribution to the official French keyboard standard, input of special characters via a physical keyboard. Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing. Finally, the dissertation discusses future work needed to solve the long-standing problem of finding the optimal layout for physical keyboards, in light of empirical evidence that prior models are insufficient to capture the diverse typing strategies people employ with modern keyboards. The thesis advances the state of the art in text-entry optimization by proposing novel objective functions that quantify the performance, ergonomics, and learnability of a text input method. The objectives presented are formulated as assignment problems, which can be solved with integer programming via standard mathematical solvers or heuristic algorithms. While the work focuses on text input, the assignment problem can be used to model other design problems in HCI (e.g., how best to assign commands to UI controls or distribute UI elements across several devices), for which the same problem formulations, optimization techniques, and even models could be applied.
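    As a minimal sketch of the assignment-problem view described above (not the thesis code), the example below places a few special characters on free key slots by solving a linear assignment problem with SciPy; character frequencies and slot costs are made up, and bigram-dependent terms, which such models can also include, would turn this into a quadratic assignment problem.

```python
# Simplified sketch: assign special characters to free key slots so that frequent
# characters get cheap (fast-to-reach) slots. With a purely linear cost
# (frequency * slot cost) this is a linear assignment problem.

import numpy as np
from scipy.optimize import linear_sum_assignment

chars = ["é", "è", "ê", "œ"]           # symbols to place (toy example)
slots = ["A", "B", "C", "D"]           # available key slots

char_freq = np.array([0.40, 0.30, 0.20, 0.10])   # hypothetical frequencies
slot_cost = np.array([1.0, 1.3, 1.8, 2.5])       # hypothetical access-time costs

# cost[i, j] = expected cost of typing character i if it is placed on slot j
cost = np.outer(char_freq, slot_cost)

row, col = linear_sum_assignment(cost)
for i, j in zip(row, col):
    print(f"{chars[i]} -> slot {slots[j]}  (cost {cost[i, j]:.2f})")
```

    With this cost structure the solver pairs the most frequent character with the cheapest slot, which matches the intuition behind frequency-based layout optimization.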