    EdgeGlass: Exploring Tapping Performance on Smart Glasses while Sitting and Walking

    Department of Human Factors Engineering
    Currently, smart glasses support touch sensing only through a front-mounted touchpad; touch input on the top, front, and bottom sides of a glasses-mounted touchpad has not yet been explored. We built a customized touch sensor (length: 5-6 cm, height: 1 cm, width: 0.5 cm) that senses on its top, front, and bottom surfaces. It uses capacitive touch sensing (MPR121 chips) with an electrode size of ~4.5 mm square, typical of modern touchscreens, and the complete hardware system comprises 48 separate touch sensors. We investigated interaction with this sensor in both sitting and walking conditions, using single-finger sequential tapping and pair-finger simultaneous tapping. Each side was divided into three equal target areas, yielding a total of 36 tapping combinations. Our quantitative results showed that, while walking, pair-finger simultaneous taps were faster and less error-prone than single-finger sequential taps, whereas, while sitting, single-finger sequential taps were slower but less error-prone than pair-finger simultaneous taps. Single-finger sequential taps were also slower but much less error-prone when sitting than when walking. Interestingly, pair-finger simultaneous taps performed similarly in both error rate and completion time across the sitting and walking conditions. Neither tapping type nor body pose affected the mental, physical, performance, or effort dimensions of workload. For temporal demand, mean time-pressure workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, but body pose did not affect temporal demand for either tapping type. For frustration, participants reported higher mean workload for single-finger sequential tapping than for pair-finger simultaneous tapping, and walking produced higher mean frustration workload than sitting. The subjective measure of overall workload showed no significant difference for either independent variable: body pose (sitting vs. walking) or tapping type (single-finger sequential vs. pair-finger simultaneous).
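    A minimal sketch of how such a multi-chip MPR121 setup could be polled is shown below (this is not the authors' code): four MPR121 controllers of 12 electrodes each give the 48 sensors, and each touched electrode is mapped to a (side, target area) pair. The I2C addresses, bus number, and electrode-to-area mapping are illustrative assumptions.

        # Polling touch status from four MPR121 capacitive touch controllers over I2C.
        from smbus2 import SMBus

        MPR121_ADDRS = (0x5A, 0x5B, 0x5C, 0x5D)   # the four standard MPR121 I2C addresses
        TOUCH_STATUS_REG = 0x00                    # 12 touch bits packed into a 16-bit word
        SIDES = ("top", "front", "bottom")

        def read_touches(bus):
            """Return (side, area_index) for every electrode currently touched."""
            touched = []
            for chip, addr in enumerate(MPR121_ADDRS):
                status = bus.read_word_data(addr, TOUCH_STATUS_REG)
                for electrode in range(12):
                    if status & (1 << electrode):
                        idx = chip * 12 + electrode      # global electrode index 0..47
                        side = SIDES[idx // 16]          # assumed: 16 electrodes per side
                        area = (idx % 16) * 3 // 16      # assumed: 3 equal target areas per side
                        touched.append((side, area))
            return touched

        with SMBus(1) as bus:                            # I2C bus 1, e.g. on a Raspberry Pi
            print(read_touches(bus))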

    Credential hardening by using touchstroke dynamics

    Today, reliance on digital devices for daily routines has shifted towards portable mobile devices. Therefore, the need for security enhancements within this platform is imminent. Numerous research works have been performed on strengthening password authentication by using keystroke dynamics biometrics, which involve computer keyboards and cellular phones as input devices. Nevertheless, experiments performed specifically on touch screen devices are relatively lacking. This paper describes a novel technique to strengthen security authentication systems on touch screen devices via a new sub-variant of behavioural biometrics called touchstroke dynamics. We capitalize on the high-resolution timing latency and the pressure information from the touch screen panel as feature data. Following this, a lightweight algorithm is introduced to calculate the similarity between feature vectors. In addition, a fusion approach is proposed to enhance the overall performance of the system to an equal error rate of 7.71% (short input) and 6.27% (long input).
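    A minimal sketch of the kind of lightweight matching and score fusion described here (not the paper's exact algorithm): a scaled Manhattan distance between a sample's timing/pressure feature vector and an enrolled template, followed by a weighted-sum fusion of the two modality scores. The feature values, weights, and acceptance threshold are illustrative assumptions.

        # Lightweight dissimilarity between a touchstroke sample and an enrolled template,
        # plus weighted score-level fusion of the timing and pressure modalities.
        import numpy as np

        def dissimilarity(sample, template_mean, template_std):
            """Scaled Manhattan distance; smaller means more similar to the template."""
            std = np.where(template_std > 0, template_std, 1e-6)
            return float(np.mean(np.abs(sample - template_mean) / std))

        def fused_score(timing_score, pressure_score, w_timing=0.6, w_pressure=0.4):
            """Weighted-sum fusion of the per-modality dissimilarity scores."""
            return w_timing * timing_score + w_pressure * pressure_score

        timing = dissimilarity(np.array([0.21, 0.35, 0.18]),      # key-to-key latencies (s)
                               np.array([0.20, 0.33, 0.19]),
                               np.array([0.03, 0.05, 0.02]))
        pressure = dissimilarity(np.array([0.62, 0.58]),          # normalized pressure values
                                 np.array([0.60, 0.55]),
                                 np.array([0.04, 0.06]))
        accept = fused_score(timing, pressure) < 1.0               # threshold is an assumption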

    Synthesis and Drop-on-Demand Deposition of Graphene Derivative Inks for Flexible Thin Film Electronics

    This dissertation presents methods for deposition and post-processing of Graphene-Carboxymethyl Cellulose (G-CMC) and Graphene Oxide (GO) aqueous functional inks using a custom drop-on-demand (DOD) printer to fabricate mechanically flexible, non-transparent and transparent thin film electronic devices. Thin films on flexible substrates find use in lightweight, low-profile, and conformable electronic devices. Such devices can include chemical sensors, flexible RFID tags, bioelectronic circuits, lightweight electronics for space systems, and transparent electrodes for optoelectronic systems. The goal of this research project is to provide simple methods for fabricating these devices using environmentally friendly and easy-to-synthesize functional inks. Therefore, two graphene-based inks are utilized: GO and a novel Carboxymethyl Cellulose (CMC) functionalized aqueous dispersion of Graphene, G-CMC. The proposed functional inks are deposited on treated substrates by DOD printing. Deposited thin films were post-processed using a muffle furnace or a pulsed laser system. Furthermore, gold-doped G-CMC films and G-Silver Nanoprism (G-AgNP) composite inks were developed to enhance film electrical properties. Inkjet-printed films on glass substrates were characterized in terms of their electrical, optical, and mechanical properties. Correlations between film thickness, optical transmittance, and conductivity were investigated. It was possible to deposit homogeneous thin films at 100 nm to 2000 nm thickness. G-CMC films exhibited good scaling of conductance, with thicker films reaching ~660 Ω/sq sheet resistance. Gold-doped and G-AgNP composite semi-transparent films exhibited enhanced conductance, with sheet resistances of ~700 Ω/sq at 35% transparency and ~374 Ω/sq at 50% transparency, respectively. Laser-assisted treatment of samples was conducted to investigate two opportunities: pulsed laser thermal treatment and pulsed laser micromachining on rigid and flexible substrates. The effect of laser parameters was investigated to establish guidelines for thin film thermal treatment and micromachining. Finally, novel flexible sensors and circuits were fabricated to demonstrate task-driven performance of the proposed materials and methods. Based on the presented work, the proposed methods and functional inks show promise for fabricating simple electronic devices on flexible and rigid substrates. It is believed that the presented advances may benefit industrial fields that require scalable and simple thin film fabrication methods.
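    As background for the reported values (this relation is standard thin-film theory rather than taken from the dissertation), sheet resistance depends on the film's resistivity and thickness, which is why thicker printed films show lower sheet resistance, and the resistance of a printed trace then scales with its number of squares:

        R_s = \frac{\rho}{t}, \qquad R = R_s \, \frac{L}{W}

    where \rho is the film resistivity, t the film thickness, and L and W the trace length and width.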

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
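    A minimal sketch of the sensing idea described above (not the dissertation's pipeline): simple acoustic and tactile features extracted around a tap on an everyday surface are combined into one feature vector and fed to an off-the-shelf classifier to identify the event, user, or surface. The feature choices and the RandomForest model are illustrative assumptions.

        # Combine acoustic (microphone) and tactile (accelerometer) features around a tap.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def tap_features(audio, accel):
            """audio: 1-D microphone snippet; accel: (N, 3) accelerometer snippet."""
            spectrum = np.abs(np.fft.rfft(audio))
            return np.array([
                audio.std(),                            # acoustic energy
                float(spectrum.argmax()),               # dominant frequency bin
                np.abs(accel).max(),                    # tactile peak amplitude
                np.linalg.norm(accel, axis=1).mean(),   # mean vibration magnitude
            ])

        # X: stacked feature vectors from labelled taps; y: surface/user/event labels.
        # clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        # clf.predict(tap_features(new_audio, new_accel).reshape(1, -1))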

    Touch Screen Theory

    Technology companies claim to connect people through touchscreens, but by conflating physical contact with emotional sentiments, they displace both the constructed aspects of these devices and the critiques by women and other oppressed individuals of how such technologies function. Technology companies and device designers correlate touchscreens and online sites with physical contact and emotional sentiments, promising unmediated experiences in which the screen falls away in favor of visceral materiality and connections. While touchscreens are key elements of most people's everyday lives, critical frameworks for understanding the embodied experiences of using them are wanting. In Touch Screen Theory, Michele White focuses on the relation between physically touching and emotionally feeling to recenter the bodies and identities that are empowered, produced, and displaced by these digital technologies and settings. Drawing on detailed cases and humanities methods, White shows how and why gender, race, and sexuality should be further analyzed in relation to touchscreen use and design. White delves into such details as how women are informed that their bodies and fingernails are not a fit for iPhones, how cellphone surfaces are correlated with skin and understood as erotic, the ways social networks use heart buttons and icons to seem to physically and emotionally connect with individuals, how online references to feminine and queer feelings are resisted by many men, and how women producers of autonomous sensory meridian response (ASMR) videos use tactile strategies and touchscreens to emotionally bond with viewers. Proposing critical methods for studying touchscreens and digital engagement, Touch Screen Theory expands a variety of research areas, including digital and internet cultures, hardware, interfaces, media and screens, and popular culture.

    Eye Gaze Tracking for Human Computer Interaction

    With a growing number of computer devices around us, and the increasing time we spend interacting with such devices, we are strongly interested in finding new interaction methods which ease the use of computers or increase interaction efficiency. Eye tracking seems to be a promising technology to achieve this goal. This thesis researches interaction methods based on eye-tracking technology. After a discussion of the limitations of the eyes regarding accuracy and speed, including a general discussion of Fitts' law, the thesis follows three different approaches on how to utilize eye tracking for computer input. The first approach researches eye gaze as a pointing device in combination with a touch sensor for multimodal input and presents a method using a touch-sensitive mouse. The second approach examines people's ability to perform gestures with the eyes for computer input and the separation of gaze gestures from natural eye movements. The third approach deals with the information inherent in the movement of the eyes and its application to assist the user. The thesis presents a usability tool for recording interaction and gaze activity. It also describes algorithms for reading detection. All approaches present results based on user studies conducted with prototypes developed for the purpose.
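    For reference (this formula is standard in HCI rather than specific to the thesis), the Shannon formulation of Fitts' law predicts the movement time MT to acquire a target of width W at distance D, with empirically fitted constants a and b:

        MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

    The limited accuracy of gaze pointing effectively increases the smallest usable W, which is one reason eye-only pointing is often combined with another modality such as touch.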

    Continuous User Authentication Using Multi-Modal Biometrics

    It is commonly acknowledged that mobile devices now form an integral part of an individual's everyday life. Modern mobile handheld devices are capable of providing a wide range of services and applications over multiple networks. With this increasing capability and accessibility, they introduce additional demands in terms of security. This thesis explores the need for authentication on mobile devices and proposes a novel mechanism to improve upon current techniques. The research begins with an intensive review of mobile technologies and the current security challenges that mobile devices face, to illustrate the imperative of authentication on mobile devices. The research then highlights the existing authentication mechanisms and their wide range of weaknesses. To this end, biometric approaches are identified as an appropriate solution and an opportunity for security to be maintained beyond point-of-entry. Indeed, by utilising behavioural biometric techniques, authentication can be performed in a continuous and transparent fashion. This research investigated three behavioural biometric techniques based on SMS texting activities and messages, looking to apply these techniques as a multi-modal biometric authentication method for mobile devices. The results showed that linguistic profiling, keystroke dynamics, and behaviour profiling can be used to discriminate users with overall Equal Error Rates (EER) of 12.8%, 20.8%, and 9.2% respectively. By using a combination of biometrics, the results showed clearly that the classification performance is better than using a single biometric technique, achieving an EER of 3.3%. Based on these findings, a novel architecture for multi-modal biometric authentication on mobile devices is proposed. The framework is able to provide robust, continuous, and transparent authentication in standalone and server-client modes regardless of mobile hardware configuration, and to continuously maintain the security status of the device. With a high security level, users are permitted to access sensitive services and data; with a low security level, users are required to re-authenticate before accessing sensitive services or data.
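    A minimal sketch of how an Equal Error Rate like those reported above can be computed (not the thesis's code): the EER is the operating point where the false accept rate equals the false reject rate. The synthetic score distributions below are illustrative assumptions.

        # Compute the EER from genuine and impostor dissimilarity scores (lower = more genuine).
        import numpy as np

        def equal_error_rate(genuine, impostor):
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            best_gap, eer = 1.0, 0.0
            for t in thresholds:
                frr = np.mean(genuine > t)       # genuine attempts rejected
                far = np.mean(impostor <= t)     # impostor attempts accepted
                if abs(far - frr) < best_gap:
                    best_gap, eer = abs(far - frr), (far + frr) / 2
            return eer

        rng = np.random.default_rng(0)
        genuine = rng.normal(0.3, 0.1, 500)      # synthetic genuine-user scores
        impostor = rng.normal(0.7, 0.1, 500)     # synthetic impostor scores
        print(equal_error_rate(genuine, impostor))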

    Taux: a system for evaluating sound feedback in navigational tasks

    This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first develops the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential to change these solutions, and examining research conducted on relevant audio information representation techniques. An evaluation system that supports the manipulation of two basic classes of audio display is then described. Based on prior work on wayfinding with audio display, research questions are developed that investigate the viability of different audio displays. These are used to generate hypotheses and develop an experiment which evaluates four variations of audio display for wayfinding. Questions are also formulated that evaluate a baseline condition that utilizes visual feedback. An experiment which tests these hypotheses on sighted users is then described. Results from the experiment suggest that spatial audio combined with spoken hints performs best among the spatial audio approaches compared. The results also suggest that muting a varying audio signal when a subject is on course does not improve performance. The system and method are then refined. A second experiment is conducted with improved displays and an improved experiment methodology. After adding blindfolds for sighted subjects and increasing the difficulty of navigation tasks by reducing the arrival radius, similar comparative results were observed. Overall, the two experiments demonstrate the viability of the prototyping tool for testing and refining multiple different audio display combinations for navigational tasks. The detailed contributions of this work and future research opportunities conclude this thesis.
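    A minimal sketch of the kind of audio-display logic evaluated here (not the Taux implementation): the user's position and heading are turned into a stereo pan value for a navigation beacon, with an optional "mute when on course" rule like the one tested in the experiments. The geometry helper and the on-course threshold are illustrative assumptions.

        # Map the bearing to the next waypoint onto a stereo pan for an audio beacon.
        import math

        def bearing_deg(lat1, lon1, lat2, lon2):
            """Initial great-circle bearing from point 1 to point 2, in degrees."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            y = math.sin(dlon) * math.cos(phi2)
            x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
            return math.degrees(math.atan2(y, x)) % 360

        def beacon_pan(user_lat, user_lon, heading_deg, wp_lat, wp_lon, mute_within_deg=15):
            """Return stereo pan in [-1, 1] (left..right), or None to mute when on course."""
            rel = (bearing_deg(user_lat, user_lon, wp_lat, wp_lon) - heading_deg + 540) % 360 - 180
            if abs(rel) < mute_within_deg:
                return None                      # on course: optionally mute the beacon
            return max(-1.0, min(1.0, rel / 90))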

    Platform Embedded Security Technology Revealed

    Computer science

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables face interaction challenges because of the inevitably limited space for input and output imposed by the specialized form factors that fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional interaction devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques by exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, finger input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) due to their fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, which enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across variations in fingers, finger regions, and poses, due to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Then, finger region-aware systems that recognize the flat and side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touch can be achieved at a total of 8 key locations with the built-in hand tracking system in a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results and novel interaction techniques in various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with thesis contributions, design considerations, and the scope of future research, for future researchers and developers to implement robust finger-based interaction systems on various types of wearable devices.
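    A minimal sketch of the flat-versus-side distinction described in the first scenario (not the thesis's classifier): the reported contact ellipse of a touch point is used to decide which finger region touched the smartwatch screen. The axis values and thresholds are illustrative assumptions; the thesis derives its regions from measured contact areas.

        # Classify a touch as finger pad ('flat') or finger side from its contact ellipse.
        def classify_touch_region(major_axis_mm, minor_axis_mm):
            area = major_axis_mm * minor_axis_mm
            elongation = major_axis_mm / max(minor_axis_mm, 1e-6)
            if area > 40 and elongation < 1.6:   # broad, roundish contact -> finger pad
                return "flat"
            return "side"                        # small or elongated contact -> finger side

        print(classify_touch_region(8.0, 6.5))   # -> 'flat'
        print(classify_touch_region(6.0, 3.0))   # -> 'side'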