
    Augmented reality selection through smart glasses

    The smart glasses market continues to grow, opening the possibility that smart glasses will one day play as active a role in people's daily lives as smartphones already do. Several interaction methods for smart glasses have been studied, but it is not yet clear which is best suited to interacting with virtual objects. This research reviews studies of the different interaction methods for augmented reality applications, highlighting the techniques available for smart glasses and the advantages and disadvantages of each. An indoor Augmented Reality prototype implementing three different interaction methods was developed, and users' preferences and their willingness to perform each interaction method in public were studied. In addition, reaction time was measured, defined as the time between the detection of a marker and the user's interaction with it. An outdoor Augmented Reality application was also developed to understand how the challenges differ between indoor and outdoor Augmented Reality applications. The discussion shows that users feel more comfortable with an interaction method similar to one they already use. However, the solution combining two interaction methods, the smart glasses' tap function and head movement, achieved results close to those of the controller. Notably, participants had no learning phase: the reported results always refer to their first and only use of each interaction method. This suggests that the future of smart glasses interaction may be a fusion of different interaction techniques.
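    The reaction-time metric is simple to instrument once the two events are timestamped. A minimal sketch, with hypothetical event names (marker_detected, interaction) rather than the prototype's actual API:

```python
from datetime import datetime

def reaction_time_seconds(marker_detected: datetime, interaction: datetime) -> float:
    """Reaction time = elapsed time from marker detection to user interaction."""
    return (interaction - marker_detected).total_seconds()

# One simulated trial: marker recognized, user taps 1.25 s later.
detected = datetime(2021, 5, 1, 12, 0, 0)
tapped = datetime(2021, 5, 1, 12, 0, 1, 250_000)
print(reaction_time_seconds(detected, tapped))  # 1.25
```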

    User expectations for media sharing practices in open display networks

    Open Display Networks have the potential to allow many content creators to publish their media to an open-ended set of screen displays. However, this raises the issue of how to match that content to the right displays. In this study, we aim to understand how the perceived utility of particular media sharing scenarios is affected by three independent variables, specifically: (a) the locativeness of the content being shared; (b) how personal that content is; and (c) the scope in which it is being shared. To assess these effects, we composed a set of 24 media sharing scenarios embedding different treatments of our three independent variables. We then asked 100 participants to express their perception of the relevance of those scenarios. The results suggest a clear preference for scenarios where content is both local and directly related to the person who is publishing it. This is in stark contrast to the types of content commonly found on public displays, and confirms the opportunity for open display networks to serve as a new medium for self-expression. This novel understanding may inform the design of new publication paradigms that will enable people to share media across display networks. This research has received funding from FCT under the Carnegie Mellon-Portugal agreement (Project Wesp, Grant CMU-PT/SE/028/2008).
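    The 24 scenarios arise from fully crossing the three independent variables. A minimal sketch of such a factorial design; the level names below are illustrative assumptions (the abstract does not enumerate them), chosen so that 2 x 3 x 4 = 24:

```python
from itertools import product

# Illustrative levels only; the abstract does not list them.
# Chosen so that 2 * 3 * 4 = 24 scenarios, matching the study's count.
locativeness = ["non-local", "local"]
personalness = ["impersonal", "somewhat personal", "highly personal"]
scope = ["single display", "neighbourhood", "city-wide", "global"]

# Full factorial crossing of the three variables.
scenarios = list(product(locativeness, personalness, scope))
assert len(scenarios) == 24

for i, (loc, pers, sc) in enumerate(scenarios, 1):
    print(f"scenario {i:2d}: {loc}, {pers}, shared at {sc} scope")
```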

    Defining and Identifying Attention Capture Deceptive Designs in Digital Interfaces

    Many tech companies exploit psychological vulnerabilities to design digital interfaces that maximize the frequency and duration of user visits. Consequently, users often report feeling dissatisfied with time spent on such services. Prior work has developed typologies of damaging design patterns (or dark patterns) that contribute to financial and privacy harms, which has helped designers to resist these patterns and policymakers to regulate them. However, we are missing a collection of similar problematic patterns that lead to attentional harms. To close this gap, we conducted a systematic literature review for what we call 'attention capture damaging patterns' (ACDPs). We analyzed 43 papers to identify their characteristics, the psychological vulnerabilities they exploit, and their impact on digital wellbeing. We propose a definition of ACDPs and identify eleven common types, from Time Fog to Infinite Scroll. Our typology offers technologists and policymakers a common reference to advocate, design, and regulate against attentional harms.

    Studies on Multi-Device Usage Practices and Interaction Methods

    People today commonly have multiple information devices, including smartphones, tablets, computers, home media centers, and other devices. As people have many devices, situations and workflows where several devices are combined and used together to accomplish a task have become common. Groups of co-located persons may also join their information devices together for collaborative activities and experiences. While these developments towards computing with multiple devices offer many opportunities, they also create a need for interfaces and applications that support using multiple devices together.

    The overall goal of this doctoral thesis is to create new scientific knowledge to inform the design of future interfaces, applications, and technologies that better support multi-device use. The thesis belongs to the field of Human-Computer Interaction (HCI) research. It contains five empirical studies with a total of 110 participants, whose results have been reported in five original publications. The thesis generally follows the design science research methodology.

    More specifically, this thesis addresses three research questions related to multi-device use. The first question investigates how people actually use multiple information devices together in their daily lives. The results provide a rich picture of everyday multi-device use, including the most common devices and their characteristic practices of use, a categorization of patterns of multi-device use, and an analysis of the process of determining which devices to use. The second question examines the factors that influence the user experience of multi-device interaction methods. The results suggest a set of experiential factors that should be considered when designing methods for multi-device interaction. The set of factors is based on comparative studies of alternative methods for two common tasks in multi-device interaction: device binding and cross-display object movement. The third question explores a more futuristic topic of multi-device interaction methods for wearable devices, focusing on the two most popular categories of wearable devices today: smartwatches and smartglasses. The results present a categorization of actions that people would naturally perform to initiate interactions between their wearable devices, based on elicitation studies with groups of participants.

    The results of this thesis advance the scientific knowledge of multi-device use in the domain of human-computer interaction research. The results can be applied in the design of novel interfaces, applications, and technologies that involve the use of multiple information devices.

    Going Remote: Demonstration and Evaluation of Remote Technology Delivery and Usability Assessment with Older Adults

    Background: The COVID-19 pandemic necessitated “going remote” with the delivery, support, and assessment of a study intervention targeting older adults enrolled in a clinical trial. While remotely delivering and assessing technology is not new, there are few methods available in the literature that are proven to be effective with diverse populations, and none for older adults specifically. Older adults comprise a diverse population, including in terms of their experience with and access to technology, making this a challenging endeavor.
    Objective: Our objective was to remotely deliver and conduct usability testing for a mobile health (mHealth) technology intervention for older adult participants enrolled in a clinical trial of the technology. This paper describes the methodology used, its successes, and its limitations.
    Methods: We developed a conceptual model for remote operations, called the Framework for Agile and Remote Operations (FAR Ops), that combined the general requirements for spaceflight operations with Agile project management processes to quickly respond to this challenge. Using this framework, we iteratively created care packages that differed in their contents based on participant needs and were sent to study participants to deliver the study intervention (a medication management app) and assess its usability. Usability data were collected using the System Usability Scale (SUS) and a novel usability questionnaire developed to collect more in-depth data.
    Results: In the first 6 months of the project, we successfully delivered 21 care packages. We designed and deployed a minimum viable product in less than 6 weeks, generally maintained a 2-week sprint cycle, and achieved a 40% to 50% return rate for both usability assessment instruments. We hypothesize that lack of engagement due to the pandemic and our use of asynchronous communication channels contributed to the return rate of usability assessments being lower than desired. We also provide general recommendations for performing remote usability testing with diverse populations based on the results of our work, including implementing screen sharing capabilities when possible and determining participant preference for phone or email communications.
    Conclusions: The FAR Ops model allowed our team to adopt remote operations for our mHealth trial in response to interruptions from the COVID-19 pandemic. This approach can be useful for other research or practice-based projects under similar circumstances, or to improve efficiency, cost, effectiveness, and participant diversity in general. In addition to offering a replicable approach, this paper tells the often-untold story of practical challenges faced by mHealth projects and the practical strategies used to address them.
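    For reference, the SUS mentioned above is scored with a fixed, standard formula: ten items rated 1 to 5, where odd (positively worded) items contribute (rating - 1), even (negatively worded) items contribute (5 - rating), and the sum is scaled by 2.5 onto a 0 to 100 range. A minimal sketch of that standard scoring (the study's novel in-depth questionnaire is not shown here):

```python
def sus_score(ratings: list[int]) -> float:
    """Standard SUS scoring: 10 items rated 1-5, result on a 0-100 scale."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs exactly 10 ratings between 1 and 5")
    odd = sum(r - 1 for r in ratings[0::2])   # items 1,3,5,7,9 (positively worded)
    even = sum(5 - r for r in ratings[1::2])  # items 2,4,6,8,10 (negatively worded)
    return 2.5 * (odd + even)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```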

    The Aalborg Survey / Part 4 - Literature Study: Diverse Urban Spaces (DUS)


    Designing gaze-based interaction for pervasive public displays

    The last decade witnessed an increasing adoption of public interactive displays. Displays can now be seen in many public areas, such as shopping malls and train stations. There is also a growing trend towards using large public displays, especially in airports, urban areas, universities, and libraries. Meanwhile, advances in eye tracking and visual computing promise straightforward integration of eye tracking on these displays for two purposes: 1) monitoring the user's visual behavior to evaluate different aspects of the display, such as measuring the visual attention of passersby, and 2) interaction, such as allowing users to provide input, retrieve content, or transfer data using their eye movements. Gaze is particularly useful for pervasive public displays. In addition to being natural and intuitive, eye gaze can be detected from a distance, bringing interactivity to displays that are physically unreachable. Gaze reflects the user's intention and visual interests, and its subtle nature makes it well suited for public interactions where social embarrassment and privacy concerns might hinder the experience. On the downside, eye tracking technologies have traditionally been developed for desktop settings, where a user interacts from a stationary position and for a relatively long period of time. Interaction with public displays is fundamentally different and hence poses unique challenges when employing eye tracking. First, users of public displays are dynamic; they could approach the display from different directions and interact from different positions, or even while moving. This means that gaze-enabled displays should not expect users to be stationary at a specific position, but should instead adapt to the users' ever-changing position in front of the display. Second, users of public displays typically interact for short durations, often only a few seconds. This means that, contrary to desktop settings, public displays cannot afford to require users to perform a time-consuming calibration prior to interaction.
    In this publication-based dissertation, we first report on a review of the challenges of interactive public displays and discuss the potential of gaze in addressing these challenges. We then showcase the implementation and in-depth evaluation of two applications where gaze is leveraged to address core problems in today's public displays. The first presents an eye-based solution, EyePACT, that tackles the parallax effect often experienced on today's touch-based public displays. We found that EyePACT significantly improves accuracy even with varying degrees of parallax. The second is a novel multimodal system, GTmoPass, that combines gaze and touch input for secure user authentication on public displays. GTmoPass was found to be highly resilient to shoulder surfing, thermal attacks, and smudge attacks, thereby offering a secure solution to an important problem on public displays.
    The second part of the dissertation explores specific challenges of gaze-based interaction with public displays. First, we address the user positioning problem by means of active eye tracking. More specifically, we built a novel prototype, EyeScout, that dynamically moves the eye tracker based on the user's position without augmenting the user. This, in turn, allowed us to study and understand gaze-based interaction with public displays while walking, and when approaching the display from different positions. An evaluation revealed that EyeScout is well perceived by users and improves the time needed to initiate gaze interaction by 62% compared to the state of the art. Second, we propose a system, Read2Calibrate, for calibrating eye trackers implicitly while users read text on displays. We found that although text-based calibration is less accurate than traditional methods, it integrates smoothly into reading and is thereby more suitable for public displays. Finally, through our prototype system, EyeVote, we show how to allow users to select textual options on public displays via gaze without calibration. In a field deployment of EyeVote, we studied the trade-off between accuracy and selection speed when using calibration-free selection techniques. We found that users of public displays value faster interactions over accurate ones, and are willing to correct system errors in case of inaccuracies. We conclude by discussing the implications of our findings for the design of gaze-based interaction for public displays, and how our work can be adapted for domains beyond public displays, such as handheld mobile devices.
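    The parallax effect that EyePACT addresses has a simple geometric core: the touch glass and the display panel are separated by a small gap, so the point a user touches lines up with the intended target only along the user's line of sight. A minimal sketch of a gaze-aware correction under that model (an illustrative reconstruction, not the authors' implementation), intersecting the eye-to-touch ray with the display plane:

```python
import numpy as np

def corrected_target(eye: np.ndarray, touch: np.ndarray, gap: float) -> np.ndarray:
    """Intersect the eye->touch ray with the display panel plane (z == 0).

    Coordinates are in metres: x/y lie in the screen plane, z is the
    distance in front of the panel. The touch glass sits `gap` metres
    in front of the panel, so a touch point always has z == gap.
    """
    assert abs(touch[2] - gap) < 1e-9, "touch point must lie on the glass plane"
    direction = touch - eye
    if direction[2] == 0:
        raise ValueError("line of sight is parallel to the display plane")
    t = -eye[2] / direction[2]  # ray parameter where z == 0
    return eye + t * direction

# Eye 60 cm in front of the panel, touch glass 5 mm in front of it:
eye = np.array([0.10, 0.30, 0.60])
touch = np.array([0.00, 0.00, 0.005])
print(corrected_target(eye, touch, gap=0.005))  # target shifted slightly past the touch point
```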