
    Multimodal Data Fusion and Behavioral Analysis Tooling for Exploring Trust, Trust-propensity, and Phishing Victimization in Online Environments

    Online environments, including email and social media platforms, are continuously threatened by malicious content designed by attackers to install malware on unsuspecting users' devices and/or phish them into revealing sensitive data about themselves. Often slipping past technical mitigations (e.g., spam filters), attacks target the human element and seek to elicit trust as a means of achieving their nefarious ends. Victimized end-users lack the discernment, visual acuity, training, and/or experience to correctly identify the antecedents of trust that should prompt suspicion. Existing literature has explored trust, trust-propensity, and victimization, but studies lack data-capture richness, realism, and/or the ability to investigate active user interactions. This paper defines a data collection and fusion approach, alongside new open-sourced behavioral analysis tooling, that addresses all three factors to provide researchers with empirical, evidence-based insights into active end-user trust behaviors. The approach is evaluated in terms of comparative analysis, run-time performance, and fused-data accuracy.

    Evaluating Cross-Device Transitioning Experience in Seated and Moving Contexts

    Cross-platform services allow access to information across different devices in different locations and situational contexts. We observed forty-five participants completing tasks while transitioning between a laptop and a mobile phone across different contexts (seated–moving and seated–seated). Findings showed that in each test setting, users were sensitive to the same cross-platform user experience (UX) elements. However, the seated–moving settings generated more issues, for example more consistency problems. Two moving-related factors (attentiveness and manageability) also affected cross-platform UX. In addition, we found design issues associated with using mobile user interfaces (UIs) while walking. We analyzed the issues and propose a set of UX design principles for mobile UIs in moving situations, such as reduction and aesthetic simplicity. These findings suggest designing context-aware cross-platform services that take transitioning into account for enhanced mobility.

    Eye-tracking assistive technologies for individuals with amyotrophic lateral sclerosis

    Amyotrophic lateral sclerosis (ALS) is a progressive nervous system disorder that affects nerve cells in the brain and spinal cord, resulting in the loss of muscle control. For individuals with ALS whose mobility is limited to the movement of the eyes, eye-tracking-based applications can enable some basic tasks on certain digital interfaces. This paper reviews existing eye-tracking software and hardware and sketches their application as an assistive technology for coping with ALS. Eye-tracking also provides a suitable alternative for controlling game elements. Furthermore, artificial intelligence has been utilized to improve eye-tracking technology, with significant improvements in calibration and accuracy. Gaps in the literature are highlighted to offer a direction for future research.

    The Usefulness of Multi-Sensor Affect Detection on User Experience: An Application of Biometric Measurement Systems on Online Purchasing

    Traditional usability methods in Human-Computer Interaction (HCI) have been used extensively to understand the usability of products. Measurements of user experience (UX) in traditional HCI studies rely mostly on task performance and observable user interactions with the product or service, such as usability tests and contextual inquiry, and on subjective self-report data, including questionnaires and interviews. However, these studies fail to directly reflect a user's psychological involvement and, further, fail to explain the underlying cognitive processing and the related emotional arousal. Thus, capturing how users think and feel when they use a product remains a vital challenge for user experience evaluation studies. Conversely, recent research has revealed that sensor-based affect detection technologies, such as eye tracking, electroencephalography (EEG), galvanic skin response (GSR), and facial expression analysis, effectively capture affective states and physiological responses. These methods are efficient indicators of cognitive involvement and emotional arousal and constitute effective strategies for a comprehensive measurement of UX. The literature review shows that the impacts of sensor-based affect detection systems on UX evaluation fall into two groups: (1) confirmatory, validating the results obtained from traditional usability methods; and (2) complementary, enhancing the findings or providing more precise and valid evidence. Both provide comprehensive findings that uncover issues in the mental and physiological pathways and thereby enhance the design of products and services. Therefore, this dissertation claims that integrating sensor-based affect detection technologies can efficiently address the current gaps and weaknesses of traditional usability methods.
The dissertation revealed that the multi-sensor-based UX evaluation approach, using biometric tools and software, corroborated the user experience identified by traditional UX methods during an online purchasing task. The use of these systems enhanced the findings and provided more precise and valid evidence for predicting consumer purchasing preferences. Thus, their impact on the overall UX evaluation was "complementary". The dissertation also provided information about the unique contributions of each tool and recommended ways in which user experience researchers can combine sensor-based and traditional UX approaches to explain consumer purchasing preferences. Dissertation/Thesis. Doctoral Dissertation, Human Systems Engineering, 201

    Continuous Authentication for Voice Assistants

    Voice has become an increasingly popular user interaction channel, mainly owing to the ongoing trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Siri, Google Now and Cortana have become everyday fixtures, especially in scenarios where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. Nevertheless, the open nature of the voice channel makes voice assistants difficult to secure and exposed to various attacks, as demonstrated by security researchers. In this paper, we present VAuth, the first system that provides continuous and usable authentication for voice assistants. We design VAuth to fit into various widely adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches them with the speech signal received by the voice assistant's microphone. VAuth guarantees that the voice assistant executes only commands that originate from the voice of the owner. We evaluated VAuth with 18 users and 30 voice commands and found that it achieves almost perfect matching accuracy with a false positive rate below 0.1%, regardless of VAuth's position on the body and the user's language, accent or mobility. VAuth successfully thwarts practical attacks such as replay attacks, mangled voice attacks, and impersonation attacks. It also has low energy and latency overheads and is compatible with most existing voice assistants.
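    The core decision VAuth makes — accept a command only when the wearable's body-surface vibration signal tracks the microphone's speech signal — can be sketched as a correlation test. This is a simplified illustration, not the paper's actual matching algorithm: it assumes pre-aligned, equal-length signals, and the function names and the 0.8 threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def normalized_correlation(a, b):
    """Pearson-style correlation between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def is_same_speaker(vibration, microphone, threshold=0.8):
    """Accept a command only if the wearable's vibration signal
    closely tracks the speech picked up by the microphone."""
    return normalized_correlation(vibration, microphone) >= threshold

t = np.linspace(0.0, 1.0, 8000)
speech = np.sin(2 * np.pi * 120 * t)                    # toy "speech" signal
vibration = speech + 0.05 * rng.standard_normal(8000)   # same source, sensor noise

assert is_same_speaker(vibration, speech)                       # owner's command passes
assert not is_same_speaker(rng.standard_normal(8000), speech)   # injected audio fails
```

    A real system would additionally segment the signals per command and align them in time before comparing, but the accept/reject structure is the same.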

    Examining User Feedback of a Teleneuropsychological Platform in a Mixed Sample of Norwegian and Polish People in Norway

    This study explores user satisfaction and feedback on teleneuropsychological (TeleNP) testing among Norwegian and Polish participants residing in Norway, and examines potential differences in cognitive assessments between the Polish and Norwegian participants. Participants were recruited via social media posts, yielding 37 sign-ups, of whom eight completed the tests and the user satisfaction survey, a response rate of 22%. The sample included one person who requested comparison with biological males and seven with biological females. Polish participants made up 25% of the sample. Participants' ages ranged from 18 to 69 years, and 75% had completed 16 or more years of education. The TeleNP tests were administered using the Norwegian version of the Mindmore screening battery. Most tests were scored automatically; user experience was rated on a Likert scale (0–4), and qualitative feedback was collected through open-ended text questions. Overall, participants reported a positive user experience. However, Polish participants reported lower user satisfaction and more technical difficulties than the Norwegian participants. Mean z-scores for cognitive functions were somewhat below the Swedish normative values: Memory z = -0.43; Attention and Speed z = -0.49; Executive Functions z = -0.37. This study highlights the potential of teleneuropsychological testing in Norway; however, the small sample calls for caution in interpreting the results. Further research with larger and more varied samples is needed to draw more definitive conclusions. Thesis, psychology programme, PROPSY317PRPSY
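    The cognitive results above are z-scores: each raw test score standardized against a normative mean and standard deviation (here, Swedish norms). A minimal illustration with made-up numbers (the mean and SD below are hypothetical, not Mindmore's):

```python
def z_score(raw, norm_mean, norm_sd):
    """Standardize a raw test score against a normative sample:
    z = (raw - norm_mean) / norm_sd."""
    return (raw - norm_mean) / norm_sd

# Hypothetical: a raw memory score of 42 against an assumed normative
# mean of 50 and SD of 10 gives z = -0.8, i.e. 0.8 SD below the norm.
print(z_score(42, 50, 10))  # -0.8
```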

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Contemporary Data Capture and Interactive Systems (DCIS) involve various technical complexities, such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult, and these complexities must be addressed before the intended and purposeful functionalities can be attained. The technical issues are often common and similar across diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to the technical complexities of building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application's requirements. UbiITS includes all fundamental software components, techniques, system-level layer abstractions and a reference architecture, as a collection that enables the systematic construction of complex DCISs. This work details four case studies to showcase the versatility and extensibility of the UbiITS framework's functionalities and demonstrate how it was employed to successfully solve a range of technical requirements. In each case UbiITS operated as the core element of the application, and each case study is a novel system in its own domain. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and achieving precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled a functional system infrastructure in these cases, essentially opening up new lines of research in each discipline that would not have been possible without the infrastructure provided by the framework. The thesis further presents a sample implementation of the framework in device firmware, exhibiting its capability to be implemented directly on a hardware platform.
Summary metrics are also produced to establish the complexity, reusability, extensibility, implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants EP/F02553X/1, 114433 and 11394
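    One of the framework's core concerns, temporal synchronisation of multimodal streams, can be sketched as nearest-neighbour timestamp matching between two sensor streams sampled at different rates. This is a generic illustration under assumed conventions (seconds-based timestamps, a hypothetical `align_nearest` helper and tolerance), not the thesis's actual synchronisation mechanism.

```python
import bisect

def align_nearest(times_a, times_b, tolerance):
    """Pair each timestamp in stream A with the nearest timestamp in
    stream B, keeping only pairs closer than `tolerance` seconds."""
    pairs = []
    for i, t in enumerate(times_a):
        j = bisect.bisect_left(times_b, t)
        # The nearest neighbour is one of the two timestamps bracketing t.
        candidates = [c for c in (j - 1, j) if 0 <= c < len(times_b)]
        best = min(candidates, key=lambda c: abs(times_b[c] - t))
        if abs(times_b[best] - t) <= tolerance:
            pairs.append((i, best))
    return pairs

# Toy example: a 100 Hz stream aligned against a 60 Hz stream with a
# 5 ms tolerance; 100 Hz samples with no close partner are dropped.
a = [k / 100 for k in range(10)]
b = [k / 60 for k in range(6)]
print(align_nearest(a, b, 0.005))
```

    Real deployments also have to correct for clock offset and drift between devices before any such per-sample matching is meaningful.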

    Virtual Reality Applications and Development

    Virtual Reality (VR) has existed for many years; however, it has only recently gained widespread popularity and commercial use. This change comes from innovations in head-mounted displays (HMDs) and from the work of many software engineers creating quality user experiences (UX). This thesis explores four areas of VR. The first is the use of VR for virtual environments and fire simulations. The second is the use of VR for eye tracking and medical simulations. The third is multiplayer development for more immersive collaborative simulations. The fourth is the development of typing in 3D for virtual reality. Extending from this final area, the thesis describes an application and covers more practical, granular details of developing for VR with the real-time development platform Unity.

    A brain-machine interface for assistive robotic control

    Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS) – profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can give LIS patients increased autonomy. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have the combined advances in hardware, software, and methodology provided a setting in which this research can be translated from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), offer the only feasible solution for practical use at the moment, but suffer from limited communication rates and susceptibility to environmental noise. Maximizing the efficacy of each decoded intention is therefore critical. This thesis addresses the challenge of implementing a BMI intended for practical use, with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies on code-modulated visual evoked potentials (c-VEPs) to infer user intent. As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach to assessing ongoing online BMI performance is introduced. Next, an adaptive neural-network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects.
    Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot builds an internal model for motor planning and coordination from real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor-control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.
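    The template-matching idea behind c-VEP decoding can be sketched as follows. In c-VEP paradigms, each target flickers with a circularly shifted copy of one pseudorandom code, so decoding reduces to finding the shift whose template best correlates with the recorded EEG epoch. This is a generic illustration (synthetic ±1 code, Pearson correlation, four targets), not the thesis's adaptive decoder.

```python
import numpy as np

def decode_cvep(epoch, template, n_targets):
    """Template-matching c-VEP decoder: each target corresponds to a
    circular shift of one known code template; return the index of
    the shift whose template best correlates with the EEG epoch."""
    shift = len(template) // n_targets
    scores = [np.corrcoef(epoch, np.roll(template, k * shift))[0, 1]
              for k in range(n_targets)]
    return int(np.argmax(scores))

rng = np.random.default_rng(7)
template = rng.choice([-1.0, 1.0], size=240)   # stand-in for an m-sequence
# Simulated noisy EEG response to the target at shift index 2:
epoch = np.roll(template, 2 * 60) + 0.3 * rng.standard_normal(240)

print(decode_cvep(epoch, template, n_targets=4))
```

    Practical decoders first estimate the template from training EEG rather than using the stimulus code directly, and the adaptive methods in the thesis update that estimate online.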