32 research outputs found

    TEKNIK ENTRI TEKS PADA JAM TANGAN PINTAR: STUDI LITERATUR (Text Entry Techniques on Smartwatches: A Literature Review)

    A smartwatch is a miniature digital computing device that, in addition to functioning as a wristwatch, can also perform intelligent computing tasks. Because of its limited screen size, a smartwatch requires specially designed input and output techniques to interact with its user, and these techniques are specific to particular kinds of applications. Text entry is needed by many applications, but because the smartwatch screen is small, the text-entry techniques commonly used on personal computers are impractical on a smartwatch. This study focuses on exploring the text-entry techniques used on smartwatches and on evaluating their performance based on the tests that have been conducted for each method, together with the resulting conclusions.
    DOI: https://doi.org/10.33005/sibc.v13i1.179

    WearPut : Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, the commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head-Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because of the inevitably limited space for input and output imposed by form factors specialized to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other large-form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large-form-factor devices (e.g., touchscreens or physical keyboards) thanks to fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand-tracking system) that detect finger motions, which enables novel interaction systems without additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint-angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices.
    This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
    In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which increases the expressiveness of touch input techniques.
    In addition, this thesis supports the general claim across a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking.
    Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations with the built-in hand-tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs.
    This thesis argues that the objective and subjective results and the novel interaction techniques in these wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, the thesis concludes with its contributions, design considerations, and the scope of future research, for future researchers and developers implementing robust finger-based interaction systems on various types of wearable devices.
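    The finger identification idea above can be made concrete with a small sketch. The thesis does not publish its classifier, so the contact-ellipse features, the calibrated per-finger centroids, and the nearest-centroid rule below are illustrative assumptions, not the actual system:

```python
# Hedged sketch of touch-based finger identification from contact geometry.
# Features and centroids are invented stand-ins, e.g. as they might be
# learned from a short per-user calibration phase.
import math

# Hypothetical per-finger centroids of (major_axis_mm, minor_axis_mm)
# touch-contact features.
CALIBRATED_CENTROIDS = {
    "thumb": (9.0, 7.5),   # broad, round contact
    "index": (6.5, 5.5),   # small, slightly elongated contact
    "middle": (7.0, 6.0),
}

def identify_finger(major_axis_mm: float, minor_axis_mm: float) -> str:
    """Return the finger whose calibrated contact profile is nearest."""
    def dist(finger: str) -> float:
        cm, cn = CALIBRATED_CENTROIDS[finger]
        return math.hypot(major_axis_mm - cm, minor_axis_mm - cn)
    return min(CALIBRATED_CENTROIDS, key=dist)

# Example: a broad, round contact is most plausibly the thumb.
print(identify_finger(8.8, 7.2))  # -> "thumb"
```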

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become a de facto standard of input for mobile devices, as they make optimal use of the limited input and output space imposed by the devices' form factor. In recent years, people who are blind and visually impaired have been increasing their usage of smartphones and touchscreens. Although basic access is available, many accessibility issues remain to be addressed before this population achieves full inclusion. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; this first study is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration motors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
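    As a concrete illustration of the kind of audio mapping the third system implies, here is a minimal sketch that pans a sound left or right according to the horizontal position of a touch. The constant-power pan law and the function name are assumptions; a full binaural renderer would use HRTFs rather than simple level panning:

```python
# Minimal sketch: map a touch's normalized x-position to left/right channel
# gains with a constant-power pan law, so a target's horizontal location is
# audible without vision. Illustrative assumption, not the published system.
import math

def pan_gains(x_norm: float) -> tuple[float, float]:
    """Constant-power stereo gains for x_norm in [0, 1] (0 = left edge)."""
    theta = x_norm * math.pi / 2              # sweep 0 .. pi/2 across the screen
    return math.cos(theta), math.sin(theta)   # (left_gain, right_gain)

left, right = pan_gains(0.25)  # touch a quarter of the way from the left edge
print(f"L={left:.2f} R={right:.2f}")  # louder in the left ear -> target is left
```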

    Accessible On-Body Interaction for People With Visual Impairments

    While mobile devices offer people with disabilities new opportunities to gain independence in everyday activities, modern touchscreen-based interfaces can present accessibility challenges for low-vision and blind users. Even with state-of-the-art screen readers, it can be difficult or time-consuming to select specific items without visual feedback. The smooth surface of the touchscreen provides little tactile feedback compared to physical button-based phones. Furthermore, in a mobile context, hand-held devices present additional accessibility issues when both of the user's hands are not available for interaction (e.g., one hand may be holding a cane or a dog leash). To improve mobile accessibility for people with visual impairments, I investigate on-body interaction, which employs the user's own skin surface as the input space. On-body interaction may offer an alternative or complementary means of mobile interaction for people with visual impairments by enabling non-visual interaction with extra tactile and proprioceptive feedback compared to a touchscreen. In addition, on-body input may free users' hands and offer efficient interaction, as it can eliminate the need to pull out or hold the device. Despite this potential, little work has investigated the accessibility of on-body interaction for people with visual impairments. Thus, I begin by identifying needs and preferences for accessible on-body interaction. From there, I evaluate user performance in target acquisition and shape drawing tasks on the hand compared to on a touchscreen. Building on these studies, I focus on the design, implementation, and evaluation of an accessible on-body interaction system for visually impaired users. The contributions of this dissertation are: (1) identification of perceived advantages and limitations of on-body input compared to a touchscreen phone; (2) empirical evidence of the performance benefits of on-body input over touchscreen input in terms of speed and accuracy; (3) implementation and evaluation of an on-body gesture recognizer using finger- and wrist-mounted sensors; and (4) design implications for accessible non-visual on-body interaction for people with visual impairments.
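    Contribution (3) centers on a gesture recognizer. The dissertation's sensor pipeline is not reproduced here; the sketch below shows only a generic template-matching core in the style of the $1 recognizer (resample, normalize, nearest template), under the assumption that drawn strokes arrive as 2D point lists:

```python
# Hedged sketch of a template-matching gesture recognizer ($1-recognizer
# style). The fixed point count and templates are illustrative assumptions.
import math

N = 32  # points per normalized stroke (assumption)

def resample(points, n=N):
    """Resample a stroke (list of (x, y)) to n evenly spaced points."""
    path = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    step, acc = path / (n - 1), 0.0
    pts, out, i = list(points), [points[0]], 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:
        out.append(pts[-1])   # pad against floating-point rounding
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to unit size."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(x for x, _ in pts) - min(x for x, _ in pts)
    h = max(y for _, y in pts) - min(y for _, y in pts)
    s = max(w, h) or 1.0
    return [(x / s, y / s) for x, y in pts]

def recognize(stroke, templates):
    """Return the template name with the smallest mean point-to-point distance."""
    probe = normalize(resample(stroke))
    def score(name):
        ref = normalize(resample(templates[name]))
        return sum(math.dist(a, b) for a, b in zip(probe, ref)) / N
    return min(templates, key=score)

templates = {"circle": [(math.cos(t / 10), math.sin(t / 10)) for t in range(63)],
             "line":   [(t / 62, t / 62) for t in range(63)]}
print(recognize([(x / 62, x / 62 + 0.01) for x in range(63)], templates))  # "line"
```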

    From wearable towards epidermal computing : soft wearable devices for rich interaction on the skin

    Human skin provides a large, always available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than human hair; they elastically deform when the body is moving; and they stretch with the user's skin.
    Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed to advance this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices.
    Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations.
    Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (Electrocardiogram), EMG (Electromyogram), and EDA (Electro-Dermal Activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates optimized device designs providing an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data.
    Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges.
    Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.
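    The size-versus-signal trade-off mentioned above can be illustrated with a toy scorer. The scoring model, the weighting, and the candidate designs below are invented stand-ins for illustration, not the thesis's anatomical-model-based optimizer:

```python
# Hedged sketch: pick a device design balancing predicted signal quality
# against skin footprint via a simple weighted objective. All numbers here
# are hypothetical.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    area_cm2: float        # device footprint on the skin
    signal_quality: float  # predicted signal acquisition quality in [0, 1]

def best_design(candidates, size_weight):
    """Pick the design maximizing quality minus a penalty on footprint."""
    max_area = max(d.area_cm2 for d in candidates)
    def score(d):
        return d.signal_quality - size_weight * (d.area_cm2 / max_area)
    return max(candidates, key=score)

candidates = [Design("compact", 4.0, 0.70),
              Design("medium", 9.0, 0.85),
              Design("large", 16.0, 0.90)]
print(best_design(candidates, size_weight=0.4).name)  # -> "medium" here
```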

    Motion-based Interaction for Head-Mounted Displays

    Recent advances in affordable sensing technologies have enabled motion-based interaction (MbI) for head-mounted displays (HMDs). Unlike traditional input devices like the mouse and keyboard, which often offer comparatively limited interaction possibilities (e.g., single-touch interaction), MbI does not have these constraints and is more natural because it more closely reflects how people do things in real life. However, several issues exist in MbI for HMDs due to the technical limitations of sensing and tracking devices, the higher degrees of freedom afforded to users, and limited research in the area owing to the rapid advancement of HMDs and tracking technologies. This thesis first outlines four core challenges in the design space of MbI for HMDs: (1) boundary awareness for hand-based interaction, (2) efficient hands-free head-based interfaces for HMDs, (3) efficient and feasible full-body interaction for general tasks with HMDs, and (4) accessible full-body interaction for applications in HMDs. It then presents investigations addressing each of these challenges. The first challenge is addressed by providing visual feedback during interaction tailored for such technologies. The second challenge is addressed by using a circular layout with a go-and-hit selection style for head-based interaction, with text entry as the scenario. In addition, this thesis explores further interaction mechanisms that leverage the affordances of these techniques; in doing so, we propose directional full-body motions as an interaction approach for performing general tasks with HMDs, as an example addressing the third challenge. The last challenge is addressed by (1) exploring the differences between performing full-body interaction for HMDs and for common displays (i.e., a TV) and (2) providing a set of design guidelines specific to current and future HMDs. The results of this thesis show that: (1) visual methods for boundary awareness can help with mid-air hand-based interaction in HMDs; (2) head-based interaction and interfaces that take advantage of MbI, such as a circular interface, can be a very efficient and low-error hands-free input method for HMDs; (3) directional full-body interaction can be a feasible and efficient interaction approach for general tasks involving HMDs; and (4) full-body interaction for applications in HMDs should be designed differently than for traditional displays. In addition to these results, this thesis provides a set of design recommendations and takeaway messages for MbI for HMDs.
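    To make the go-and-hit idea concrete, the sketch below maps head yaw/pitch to a pointer and selects a key when the pointer crosses a ring in that key's sector. The sector count, ring radius, and degrees-to-offset gain are illustrative assumptions, not values from the thesis:

```python
# Hedged sketch of go-and-hit selection on a circular layout driven by head
# orientation. All constants are hypothetical.
import math

N_KEYS = 8        # keys arranged evenly on a circle (assumption)
HIT_RADIUS = 1.0  # normalized ring radius the pointer must cross

def key_from_head_pose(yaw_deg: float, pitch_deg: float, gain: float = 0.1):
    """Map head yaw/pitch (degrees from center) to a selected key, or None."""
    x, y = yaw_deg * gain, pitch_deg * gain    # pointer offset from center
    if math.hypot(x, y) < HIT_RADIUS:
        return None                            # still inside the ring: no hit
    angle = math.atan2(y, x) % (2 * math.pi)   # direction of the head motion
    return int(angle / (2 * math.pi / N_KEYS)) # sector index = key index

print(key_from_head_pose(12.0, 1.0))  # rightward head turn -> key 0
print(key_from_head_pose(3.0, 2.0))   # small motion -> None (no selection)
```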

    Mixed Reality Interiors: Exploring Augmented Reality Immersive Space Planning Design Archetypes for the Creation of Interior Spatial Volume 3D User Interfaces

    Augmented reality is an increasingly relevant medium of interaction and media reception, as advances in user-worn and hand-held input/output technologies endow perception of the digital nested within, and reactive to, the native physical. Our interior spaces are becoming the media interface, and this emergence affords designers the opportunity to delve further into crafting an aesthetics for the medium. Beyond having virtual assets and applications in correct registration with the real-world environment, critical topics are addressed such as the compositional roles of virtual and physical design features, including their purpose, modulation, interaction potentials, and implementation in varying indoor settings. Examining and formulating methodologies for mixed reality interior 3D UI schemes, derived from the convergence of the digital media and interior design disciplines, comprises the scope of this design research endeavor. A holistic approach is investigated to produce a framework for augmented reality 3D user interface interiors through research and development of pattern language systems for the balanced blending of complementary digital and physical design elements. These foundational attributes serve in the creation, organization, and exploration of the interactive possibilities and implications of these hybrid futuristic spatial interface layouts.
    M.S., Digital Media -- Drexel University, 201

    Mixed reality entertainment with wearable computers

    Master's thesis, Master of Engineering

    Extending mobile touchscreen interaction

    Touchscreens have become a de facto interface for mobile devices, and are penetrating further beyond their core application domain of smartphones. This work presents a design space for extending touchscreen interaction, to which new solutions may be mapped. Specific touchscreen enhancements in the domains of manual input, visual output, and haptic feedback are explored, and quantitative and experiential findings are reported. Particular areas covered are unintentional interaction, screen locking, stereoscopic displays, and picoprojection. In addition, the novel interaction approaches of finger identification and onscreen physical guides are explored. The use of touchscreens in car dashboards and smart handbags is evaluated in domain-specific use cases. This work draws together solutions from the broad area of mobile touchscreen interaction. Fruitful directions for future research are identified, and information is provided for future researchers addressing those topics.
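    One theme above, unintentional interaction, can be sketched as a simple touch-rejection heuristic. The features and thresholds below are assumptions for illustration, not the detector evaluated in this work:

```python
# Hedged sketch: reject touches that look accidental based on contact size,
# duration, and proximity to the screen edge. Thresholds are hypothetical.
def is_intentional(contact_area_mm2: float, duration_ms: float,
                   dist_to_edge_mm: float) -> bool:
    """Reject very large (palm-like), very brief, or bezel-grazing contacts."""
    if contact_area_mm2 > 150.0:  # palm-sized contact while gripping
        return False
    if duration_ms < 30.0:        # fleeting graze, e.g. while pocketing
        return False
    if dist_to_edge_mm < 2.0:     # finger curling over the bezel
        return False
    return True

print(is_intentional(40.0, 120.0, 10.0))  # ordinary fingertip tap -> True
print(is_intentional(200.0, 400.0, 1.0))  # gripping palm on the edge -> False
```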