    Biomove: Biometric user identification from human kinesiological movements for virtual reality systems

    Virtual reality (VR) has advanced rapidly and is used for many entertainment and business purposes. Secure, transparent, and non-intrusive identification mechanisms are important to facilitate users' safe participation and a secure experience. People are kinesiologically unique, having individual behavioral and movement characteristics, which can be leveraged in security-sensitive VR applications to compensate for users' inability to detect potential observational attackers in the physical world. Additionally, such a method of identification based on a user's kinesiological data is valuable in common scenarios where multiple users simultaneously participate in a VR environment. In this paper, we present a user study (n = 15) in which participants performed a series of controlled tasks requiring physical movements (such as grabbing, rotating, and dropping) that could be decomposed into unique kinesiological patterns, while we monitored and captured their hand, head, and eye-gaze data within the VR environment. We present an analysis of the data and show that they can be used as a high-confidence biometric discriminant using machine learning classification methods such as kNN or SVM, thereby adding a layer of security in terms of identification, or dynamically adapting the VR environment to users' preferences. We also performed whitebox penetration testing with 12 attackers, some of whom were physically similar to the participants. We obtained an average identification confidence value of 0.98 from the actual participants' test data after the initial study, and a trained model classification accuracy of 98.6%. Penetration testing indicated that all attackers resulted in confidence values of less than 50% (<50%), although physically similar attackers had higher confidence values. These findings can help the design and development of secure VR systems.
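
    To make the classification approach concrete, here is a minimal sketch of user identification with kNN and SVM in scikit-learn. The feature layout (fixed-length vectors of per-window hand, head, and gaze statistics) and the synthetic data are illustrative assumptions, not the authors' actual pipeline; the mean top-class probability merely stands in for the paper's identification confidence value.

        # Hypothetical sketch: biometric user identification from movement features.
        # The per-window feature vectors are synthetic stand-ins, not the paper's data.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_users, windows_per_user, n_features = 15, 40, 24

        # Synthetic per-user movement signatures: each user gets a distinct mean vector.
        X = np.vstack([rng.normal(loc=u, scale=0.8, size=(windows_per_user, n_features))
                       for u in range(n_users)])
        y = np.repeat(np.arange(n_users), windows_per_user)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, stratify=y, random_state=0)

        for clf in (KNeighborsClassifier(n_neighbors=5), SVC(probability=True)):
            clf.fit(X_train, y_train)
            acc = clf.score(X_test, y_test)
            # Mean top-class probability stands in for a per-user confidence value.
            conf = clf.predict_proba(X_test).max(axis=1).mean()
            print(type(clf).__name__, f"accuracy={acc:.3f}", f"mean confidence={conf:.3f}")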

    Applications of Affective Computing in Human-Robot Interaction: state-of-art and challenges for manufacturing

    The introduction of collaborative robots aims to make production more flexible, promoting greater interaction between humans and robots, including physical interaction. However, working closely with a robot may create stressful situations for the operator, which can negatively affect task performance. In Human-Robot Interaction (HRI), robots are expected to be socially intelligent, i.e., capable of understanding and reacting appropriately to human social and affective cues. This ability can be achieved by implementing affective computing, which concerns the development of systems able to recognize, interpret, process, and simulate human affects. Social intelligence is essential for robots to establish natural interaction with people in several contexts, including the manufacturing sector with the emergence of Industry 5.0. To take full advantage of human-robot collaboration, the robotic system should be able to perceive the psycho-emotional and mental state of the operator through different sensing modalities (e.g., facial expressions, body language, voice, or physiological signals) and to adapt its behaviour accordingly. The development of socially intelligent collaborative robots in the manufacturing sector can lead to a symbiotic human-robot collaboration, raising several research challenges that still need to be addressed. The goals of this paper are the following: (i) providing an overview of affective computing implementations in HRI; (ii) analyzing the state of the art on this topic in different application contexts (e.g., healthcare, service applications, and manufacturing); (iii) highlighting research challenges for the manufacturing sector.
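
    As a rough illustration of the sense-interpret-adapt loop described above, the sketch below maps a fused operator stress estimate to robot behaviour parameters. The fusion rule, thresholds, and behaviour parameters are hypothetical placeholders, not a method from the surveyed literature.

        # Hypothetical sketch of an affect-adaptive control loop for a
        # collaborative robot. Fusion rule, thresholds, and behaviour
        # parameters are illustrative placeholders, not a surveyed method.
        from dataclasses import dataclass

        @dataclass
        class OperatorState:
            stress: float  # 0.0 (calm) .. 1.0 (highly stressed)

        def estimate_state(facial: float, voice: float, physio: float) -> OperatorState:
            # Naive late fusion: average the normalized per-modality stress estimates.
            return OperatorState(stress=(facial + voice + physio) / 3.0)

        def adapt_robot(state: OperatorState) -> dict:
            # Map the inferred affective state to robot behaviour parameters.
            if state.stress > 0.7:
                return {"speed_scale": 0.4, "separation_m": 1.0}  # slow down, back off
            if state.stress > 0.4:
                return {"speed_scale": 0.7, "separation_m": 0.6}
            return {"speed_scale": 1.0, "separation_m": 0.3}      # normal collaboration

        # Example: a visibly stressed operator triggers a slower, more distant robot.
        print(adapt_robot(estimate_state(facial=0.8, voice=0.6, physio=0.9)))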

    Supporting Quality of Service in Scientific Workflows

    While workflow management systems have been utilized in enterprises to support businesses for almost two decades, the use of workflows in scientific environments was fairly uncommon until recently. Nowadays, scientists use workflow systems to conduct scientific experiments, simulations, and distributed computations. However, most scientific workflow management systems have not been built on existing workflow technology; rather, they have been designed and developed from scratch. Due to the lack of generality of early scientific workflow systems, many domain-specific workflow systems have been developed. Generally speaking, those domain-specific approaches lack common acceptance and tool support and offer lower robustness compared to business workflow systems. In this thesis, the use of the industry standard BPEL, a workflow language for modeling business processes, is proposed for the modeling and execution of scientific workflows. Due to the widespread use of BPEL in enterprises, a number of stable and mature software products exist. The language is expressive (Turing-complete) and not restricted to specific applications. BPEL is well suited to the modeling of scientific workflows, but existing implementations of the standard lack important features that are necessary for the execution of scientific workflows. This work presents components that extend an existing implementation of the BPEL standard and eliminate the identified weaknesses, thereby providing the technical basis for the use of BPEL in academia. The particular focus is on so-called non-functional (Quality of Service) requirements: scalability, reliability (fault tolerance), data security, and cost (of executing a workflow). From a technical perspective, the workflow system must be able to interface with the middleware systems commonly used by the scientific workflow community to allow access to heterogeneous, distributed resources (especially Grid and Cloud resources). The major components cover exactly these requirements:
    - Cloud Resource Provisioner: Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system's resource pool when the workflow system is heavily loaded.
    - Fault Tolerance Module: High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.
    - Cost-Aware, Data-Flow-Aware Scheduler: Most scientific workflow systems take only the performance and utilization of resources into account when scheduling workflow steps. The presented workflow system goes beyond that: by defining preference values for the weighting of cost against anticipated workflow execution time, workflow users may influence the resource selection process. The developed multiobjective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach (see the sketch after this abstract).
    - Security Extensions: Because it supports various encryption, signature, and authentication mechanisms (e.g., Grid Security Infrastructure), the workflow system guarantees data security in the transfer of workflow data.
    Furthermore, this work identifies the need to equip workflow developers with workflow modeling tools that can be used intuitively. This dissertation presents two modeling tools that support users with different needs. The first tool, DAVO (Domain-Adaptable Visual BPEL Orchestrator), operates at a low level of abstraction and allows users with knowledge of BPEL to use the full extent of the language. DAVO is extensible and customizable for different application domains; these features are used in the implementation of the second tool, SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background in computer science and allows for quick and intuitive development of BPEL workflows based on predefined components.
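
    The preference-weighted scheduling idea can be illustrated with a small heuristic: normalize predicted execution time and monetary cost per candidate resource, combine them with the user's weights, and pick the lowest score. The resource attributes and scoring formula below are illustrative assumptions, not the dissertation's actual multiobjective algorithm.

        # Sketch of preference-weighted resource selection, in the spirit of the
        # cost- and time-aware scheduler described above. Resource attributes and
        # the scoring formula are illustrative assumptions, not the thesis algorithm.
        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            est_runtime_s: float  # predicted execution time for the workflow step
            cost_per_hour: float  # monetary cost of using the resource

        def est_cost(r: Resource) -> float:
            return r.est_runtime_s / 3600.0 * r.cost_per_hour

        def select(resources, w_time=0.5, w_cost=0.5):
            # Normalize both objectives to [0, 1], combine them with the user's
            # preference weights, and pick the resource with the lowest score.
            max_time = max(r.est_runtime_s for r in resources)
            max_cost = max(est_cost(r) for r in resources) or 1.0
            def score(r):
                return w_time * r.est_runtime_s / max_time + w_cost * est_cost(r) / max_cost
            return min(resources, key=score)

        pool = [Resource("grid-node", 1200, 0.0),    # slow but free Grid resource
                Resource("cloud-small", 600, 0.20),
                Resource("cloud-large", 200, 1.10)]
        print(select(pool, w_time=0.8, w_cost=0.2).name)  # time-weighted: cloud-large
        print(select(pool, w_time=0.2, w_cost=0.8).name)  # cost-weighted: grid-node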

    Prediction of user action in moving-target selection tasks

    Selection of moving targets is a common task in human-computer interaction (HCI), and more specifically in virtual reality (VR). In spite of the increased number of applications involving moving-target selection, HCI and VR studies have largely focused on static-target selection. Compared to its static-target counterpart, however, moving-target selection poses special challenges, including the need to continuously and simultaneously track the target and plan to reach for it, which may be difficult depending on the user's reactiveness and the target's movement. Action prediction has proven to be the most comprehensive enhancement to address moving-target selection challenges. Current predictive techniques, however, rely heavily on continuous tracking of user actions, without considering the possibility that target-reaching actions may have a dominant pre-programmed component; this theory is known as the pre-programmed control theory. Thus, based on the pre-programmed control theory, this research explores the possibility of predicting moving-target selection prior to action execution. Specifically, three levels of action prediction are investigated: action performance, prospective action difficulty, and intention. The proposed performance models predict the movement time (MT) required to reach for a moving target in 2-D and 3-D space, and are useful for comparing users and interfaces objectively. The prospective difficulty (PD) models predict the subjective effort required to reach for a moving target, without actually executing the action, and can therefore be measured when performance cannot. Finally, the intention models predict the target that the user plans to select, and can therefore be used to facilitate the selection of the intended target. Intention prediction models are developed using decision trees and scoring functions, and evaluated in two VR studies: the first investigates undirected selection (i.e., tasks in which users are free to select an object among multiple others), and the second directed selection (i.e., the more common experimental task in which users are instructed to select a specific object). PD models for 1-D and 2-D moving-target selection tasks are developed based on Fitts' Law and evaluated in an online experiment. Finally, MT models with the same structural form as the aforementioned PD models are evaluated in a 3-D moving-target selection experiment deployed in VR. Aside from intention predictions on directed selection, all of the explored models yield relatively high accuracies: up to ~78% predicting intended targets in undirected tasks, R^2 = .97 predicting PD, and R^2 = .93 predicting MT.
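
    The abstract does not give the models' exact form, only that they share the structure of Fitts' Law. As a reference point, here is a minimal sketch that fits the classic Fitts' Law model MT = a + b * log2(D/W + 1) to synthetic selection data by least squares; the paper's moving-target MT and PD models presumably add terms for target motion.

        # Baseline Fitts' Law fit: MT = a + b * log2(D/W + 1). The paper's
        # moving-target MT/PD models share this structural form but are not
        # specified in the abstract; the data here are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        D = rng.uniform(50, 800, size=200)   # distance to target (px), synthetic
        W = rng.uniform(10, 100, size=200)   # target width (px), synthetic
        ID = np.log2(D / W + 1)              # index of difficulty (bits)
        MT = 0.12 + 0.15 * ID + rng.normal(0, 0.02, size=200)  # simulated times (s)

        # Ordinary least squares for intercept a and slope b.
        A = np.column_stack([np.ones_like(ID), ID])
        (a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)

        pred = a + b * ID
        r2 = 1 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
        print(f"a={a:.3f} s, b={b:.3f} s/bit, R^2={r2:.3f}")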

    Analysis of Visualisation and Interaction Tools

    This document provides an in-depth analysis of the visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the base of the web-based services and tools that support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, regardless of the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document also builds on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and one on visualization technologies, are also provided in this document.

    Human Machine Interfaces for Teleoperators and Virtual Environments

    In March 1990, a meeting was held on the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that gives a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.