66 research outputs found

    WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches

    The screen size of a smartwatch provides limited space for expressive multi-touch input, resulting in a markedly difficult and limited experience. We present WatchMI: Watch Movement Input, which enhances touch interaction on a smartwatch to support continuous pressure touch, twist, and pan gestures and their combinations. Our novel approach relies on software that analyzes, in real time, the data from a built-in Inertial Measurement Unit (IMU) in order to determine, with great accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or modification of the watch. We report the results of an evaluation of the system and demonstrate that the three proposed input interfaces are accurate, noise-resistant, easy to use, and deployable on a variety of smartwatches. We then showcase the potential of this work with seven different applications, including map navigation, an alarm clock, a music player, pan gesture recognition, text entry, a file explorer, and controlling remote devices or a game character.
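    The core idea above, inferring touch actions from IMU motion alone, can be illustrated with a minimal sketch. The thresholds, axis conventions, and gravity handling below are illustrative assumptions, not the authors' published parameters.

```python
# Hypothetical sketch of WatchMI-style gesture classification from one
# IMU sample. Axis conventions: z is normal to the watch screen.
def classify_gesture(gyro, accel, tilt_threshold=0.5, twist_threshold=0.5):
    """Map a single (gyro, accel) sample to a coarse gesture label.

    gyro  -- angular velocity (x, y, z) in rad/s
    accel -- linear acceleration (x, y, z) in m/s^2
    """
    gx, gy, gz = gyro
    ax, ay, az = accel
    # Rotation about the axis normal to the screen suggests a twist.
    if abs(gz) > twist_threshold:
        return "twist"
    # Rotation about an in-plane axis suggests tilting the watch to pan.
    if abs(gx) > tilt_threshold or abs(gy) > tilt_threshold:
        return "pan"
    # A spike along the screen normal beyond gravity suggests pressure.
    if abs(az - 9.81) > 2.0:
        return "pressure"
    return "none"
```

    A real system would operate on filtered windows of samples rather than single readings, but the axis-based decision structure carries over.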

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face interaction challenges because of the inevitably limited space for input and output imposed by form factors specialized to fit body parts. To complement the constrained interaction experience, many wearable devices still rely on larger form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to become reliable standalone products. It seeks to address the constrained interaction experience with novel interaction techniques that exploit finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large form-factor devices (e.g., touchscreens or physical keyboards) owing to their fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand-tracking system) to detect finger motions.
    This enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and highly sensitive joint-angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. The thesis demonstrates this claim with evidence from various wearable scenarios involving smartwatches and HMDs. First, it explored the comfortable range of static and dynamic angle-based touch input on smartwatch touchscreens. The results showed specific comfort ranges across fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger-region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input. In the second scenario, the thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to finger-identification systems for distinguishing two or three fingers, and two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification that increases the expressiveness of touch input techniques.
    In addition, the thesis supports the general claim in a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. An observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, the thesis examined viable locations for in-air thumb touch input on virtual targets above the palm. Fast and accurate sequential thumb touches were confirmed at a total of eight key locations using the built-in hand-tracking system of a commercial HMD, and final typing studies with a novel in-air thumb-typing system verified increases in the expressiveness of virtual target selection on HMDs. The thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim. Finally, it concludes with its contributions, design considerations, and the scope of future research, for researchers and developers implementing robust finger-based interaction systems on various types of wearable devices.
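    The contact-area idea described above, recognizing finger regions and distinguishing fingers from touch profiles, can be sketched minimally. The feature names and thresholds are assumptions for illustration, not the thesis's trained models.

```python
# Illustrative sketch of touch-profile-based finger-region and finger
# identification. All numeric thresholds are hypothetical.
def identify_finger_region(major_axis_mm, minor_axis_mm, flat_ratio=1.6):
    """Guess whether the flat pad or the side of a finger touched.

    A flat pad tends to produce a rounder contact ellipse; the side of
    the finger yields a narrower, more elongated one.
    """
    aspect = major_axis_mm / minor_axis_mm
    return "side" if aspect >= flat_ratio else "flat"

def identify_finger(contact_area_mm2, thumb_min_area=80.0, index_min_area=45.0):
    """Coarsely distinguish thumb / index / little finger by contact size."""
    if contact_area_mm2 >= thumb_min_area:
        return "thumb"
    if contact_area_mm2 >= index_min_area:
        return "index"
    return "little"
```

    In practice such thresholds would be learned per user from calibration touches rather than fixed, since contact areas vary widely across hands.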

    HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments

    Many activities of daily living such as getting dressed, preparing food, wayfinding, or shopping rely heavily on visual information, and the inability to access that information can negatively impact the quality of life for people with vision impairments. While numerous researchers have explored solutions for assisting with visual tasks that can be performed at a distance, such as identifying landmarks for navigation or recognizing people and objects, few have attempted to provide access to nearby visual information through touch. Touch is a highly attuned means of acquiring tactile and spatial information, especially for people with vision impairments. By supporting touch-based access to information, we may help users to better understand how a surface appears (e.g., document layout, clothing patterns), thereby improving the quality of life. To address this gap in research, this dissertation explores methods to augment a visually impaired user’s sense of touch with interactive, real-time computer vision to access information about the physical world. These explorations span three application areas: reading and exploring printed documents, controlling mobile devices, and identifying colors and visual textures. At the core of each application is a system called HandSight that uses wearable cameras and other sensors to detect touch events and identify surface content beneath the user’s finger. To create HandSight, we designed and implemented the physical hardware, developed signal processing and computer vision algorithms, and designed real-time feedback that enables users to interpret visual or digital content. We involve visually impaired users throughout the design and development process, conducting several user studies to assess usability and robustness and to improve our prototype designs. 
    The contributions of this dissertation include: (i) developing and iteratively refining HandSight, a novel wearable system to assist visually impaired users in their daily lives; (ii) evaluating HandSight across a diverse set of tasks, and identifying tradeoffs of a finger-worn approach in terms of physical design, algorithmic complexity and robustness, and usability; and (iii) identifying broader design implications for future wearable systems and for the fields of accessibility, computer vision, augmented and virtual reality, and human-computer interaction.
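    The touch-triggered recognition loop described above can be sketched abstractly. The sensor and recognizer interfaces here are hypothetical stand-ins, not HandSight's actual APIs.

```python
# Hedged sketch of a HandSight-style step: detect a touch event from a
# finger-mounted proximity sensor, then run recognition on the camera
# frame under the fingertip.
def handsight_step(distance_mm, frame, recognize, touch_threshold_mm=2.0):
    """Return recognized surface content when the fingertip touches down.

    distance_mm -- fingertip-to-surface distance from a proximity sensor
    frame       -- current image from the finger-worn camera
    recognize   -- callable mapping a frame to text/color/texture labels
    """
    if distance_mm <= touch_threshold_mm:
        return recognize(frame)   # e.g. OCR a word, sample a color
    return None                   # hovering: no feedback yet
```

    Separating touch detection from recognition like this lets the same loop drive all three application areas (document reading, device control, color/texture identification) by swapping the `recognize` callable.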

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
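    The combination of acoustic and tactile features for identification can be sketched as simple feature fusion followed by a nearest-centroid classifier. The classes, feature values, and centroids below are invented for illustration and are not the dissertation's models.

```python
# Minimal sketch: fuse acoustic and tactile feature vectors, then
# classify by nearest centroid. All numbers are hypothetical.
def fuse_features(acoustic, tactile):
    """Concatenate acoustic (e.g. spectral) and tactile (e.g. IMU) features."""
    return tuple(acoustic) + tuple(tactile)

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical centroids for two input events on two surfaces:
centroids = {
    "knock_on_wood":  fuse_features((0.9, 0.1), (0.2,)),
    "swipe_on_glass": fuse_features((0.2, 0.7), (0.8,)),
}
```

    The point of the fusion is that each modality alone is ambiguous (a knock and a tap may sound alike, or feel alike), while the concatenated vector separates event, user, and surface more reliably.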

    Smartwatch user interface and usability study

    Abstract. The use of wearable technology has increased at an accelerating pace in people's daily lives. Smartwatches and activity bracelets can support people's mobility and provide tools that help with daily chores. Smartwatches are designed for users of all ages, and because they have small screens and few control methods, their user interfaces must be easy for everyone to use. Manufacturers continually produce new features and applications for smartwatches. This bachelor's thesis examines the development over the last five years in the usability of smartwatches and the design of smartwatch user interfaces. It reviews the new types of smartwatch user interfaces developed in recent years and studies on smartwatch usability. The aim is to examine how usability has advanced, to analyze which interface elements make smartwatches easy to use and which cause problems for users, and to explore how usability problems have been solved and what has been learned from them.
    We review studies on smartwatch usability and introduce new innovations that support usage. Analyzing the studies, we found ways in which the smartwatch's screen, buttons, motion gestures, and other attachable accessories can be used to develop easier user interfaces. We found that 4–8 icons is the best number of elements on a smartwatch screen. However, we learned that a smartwatch does not necessarily require a display to provide the necessary information, and we found that the smartwatch's display is well suited for controlling other, larger devices. We identified the digital crown as the best mechanical input, and showed the touchscreen to be useful because the user can type on a keyboard with it. We present several prototypes that use motion gestures to improve usability, showing how small hand movements can issue commands to the watch; sound gestures and ring accessories also ease use. Analyzing the studies led to a clear conclusion about how smartwatches are used: they are suitable for viewing notifications, managing calls, playing music, and other simple tasks. Usage sessions are short and occur at frequent intervals, so user interfaces should be designed for such tasks. We also review studies of user groups and the usage problems connected to them, from which usability frameworks supporting smartwatch interface design have been derived. Suggested interface improvements include increasing icon size, clarifying menus, and increasing the touch sensitivity of screens.

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to improvements in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest growing-stem volume estimation, road management, image denoising, and touchscreens.

    Metafore mobilnih komunikacija (Metaphors of mobile communications)

    Mobile communications are a fast-developing field of information and communication technology whose exploration within the analytical framework of cognitive linguistics, based on a sample of 1005 entries, reveals the pervasive presence of metaphor, metonymy, analogy, and conceptual integration. The analysis of the sample, consisting of words and phrases related to mobile media, mobile operating systems and interface design, the terminology of mobile networking, and the slang and textisms employed by mobile gadget users, shows that these cognitive mechanisms play a key role in facilitating interaction between people and a wide range of mobile computing devices, from laptops and PDAs to mobile phones, tablets, and wearables. They are the cornerstones of comprehension that underlie the principles of graphical user interfaces and direct manipulation in computing environments. A separate sample of 660 emoticons and emoji exhibiting the potential for semantic expansion was also analyzed, in view of the significance of pictograms for text-based communication in the form of text messages and exchanges on social media sites regularly accessed via mobile devices.

    Personalized Interaction with High-Resolution Wall Displays

    An increasing openness toward more diverse interaction modalities, as well as falling hardware prices, has made very large interactive vertical displays feasible, and consequently their application in settings such as visualization, education, and meeting support has been demonstrated successfully. Their size makes wall displays inherently suited to multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will remain essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login, and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assume a single user per screen. When multiple people use one screen, this is not a feasible solution and alternatives must be found. This thesis therefore addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as the input modality) and further away (using mobile devices).
    Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking with RGB+depth cameras and ultrasound positioning of the users' mobile devices. Building on this, techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users; in the second, SleeD, we propose an arm-worn device to facilitate access to private data and personalized interface elements. Additionally, the work contributes insights into the practical implications of personalized interaction at wall displays: we present a qualitative study that analyzes interaction using the multi-user cooperative game Miners as the application case, finding awareness and occlusion issues. The final contribution is the corresponding analysis toolkit GIAnT, which visualizes users' movements, touch interactions, and gaze points when interacting with wall displays, allowing fine-grained investigation of the interactions.
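    The per-interaction identification idea above reduces, in its simplest form, to attributing each touch to the tracked user standing nearest to it. The one-dimensional wall coordinate and the distance cutoff below are simplifying assumptions for illustration.

```python
# Illustrative sketch: attribute a wall-display touch to the nearest
# tracked user (from camera or ultrasound positioning).
def attribute_touch(touch_x, user_positions, max_distance=1.5):
    """Return the id of the tracked user nearest a touch, or None.

    touch_x        -- horizontal touch coordinate on the wall (meters)
    user_positions -- {user_id: horizontal position in meters}
    max_distance   -- reject attributions beyond this distance
    """
    if not user_positions:
        return None
    user = min(user_positions, key=lambda u: abs(user_positions[u] - touch_x))
    if abs(user_positions[user] - touch_x) > max_distance:
        return None
    return user
```

    Once each touch carries a user id, the interface can personalize its response per interaction instead of per screen, which is exactly what replaces the single-user login.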

    Analyzing the sustainability of Apple's competitive advantage

    This report presents the case study "Competitive Position of Apple Inc. in 2021", which describes Apple's current overall business situation and compares it to its competitors. The case was designed to be studied at both master's and executive levels. This thesis also presents a case analysis. Following the guidelines of this Field Lab, the analysis starts by reviewing the relevant literature to understand Apple's competitive situation. Both an industry and a company analysis were then performed by applying the literature-review concepts, while also studying the disruptive innovation linked to Apple. The analysis suggests that Apple's current strategy is not sustainable in the long run, owing both to high dependence on one product and to shifts in consumers' brand perception. The analysis then focuses on Apple and the video-streaming industry. After a brief introduction to the services sector and its growth, the video-streaming industry is presented with its features, trends, and significant competitors, and a Porter's Five Forces analysis was conducted to understand the level of competitive rivalry in the industry. Finally, Apple TV+ was analyzed, focusing on its differentiation and pricing strategy. The research leads to the conclusion that Apple TV+ is trying to gain a sustainable competitive advantage through its differentiation strategy, but for the moment it seems more like an efficient marketing tool to promote Apple.