    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments, such as computer-aided design (CAD) models or geographical information systems (GIS), is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed and handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast amount of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by first performing state representation learning, prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational autoencoder (VAE). Comment: 17 pages, 8 figures. Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECMLPKDD 2019).
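
    The state representation step can be pictured with a small sketch: a variational autoencoder is trained on recorded finger trajectories, and its latent codes then serve as the state space the agents operate in. The code below is an illustrative PyTorch reconstruction under assumed names and sizes (GestureVAE, TRAJ_LEN, LATENT_DIM), not the authors' implementation.

        # Minimal sketch (assumed names and sizes): a VAE that projects 2D finger
        # trajectories into a latent space usable as an RL state representation.
        import torch
        import torch.nn as nn

        TRAJ_LEN = 50    # assumed number of (x, y) points per gesture
        LATENT_DIM = 8   # assumed size of the latent state

        class GestureVAE(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(TRAJ_LEN * 2, 128), nn.ReLU())
                self.to_mu = nn.Linear(128, LATENT_DIM)
                self.to_logvar = nn.Linear(128, LATENT_DIM)
                self.decoder = nn.Sequential(
                    nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, TRAJ_LEN * 2))

            def forward(self, traj):  # traj: (batch, TRAJ_LEN, 2)
                h = self.encoder(traj.flatten(1))
                mu, logvar = self.to_mu(h), self.to_logvar(h)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
                return self.decoder(z), mu, logvar

        def vae_loss(recon, traj, mu, logvar):
            # Reconstruction keeps latent codes close to the collected human gestures;
            # the KL term regularises the latent space the agents act in.
            rec = nn.functional.mse_loss(recon, traj.flatten(1), reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kl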

    Security in Peer-to-Peer SIP VoIP

    VoIP (Voice over Internet Protocol) is one of the fastest growing technologies in the world. It is used by people all over the world for communication. But with the growing popularity of the Internet, security is one of the biggest concerns: it is important that intruders are not able to sniff the packets that are transmitted over the Internet through VoIP. The Session Initiation Protocol (SIP) is the most popular and commonly used VoIP protocol. Nowadays, companies like Skype are using peer-to-peer SIP VoIP for faster and better performance. Through this project I improve an existing peer-to-peer SIP VoIP protocol, SOSIMPLE P2P VoIP, by adding confidentiality to the protocol with the help of public-key cryptography.
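
    As a rough illustration of the confidentiality mechanism, the sketch below encrypts a SIP payload with the callee's public key using the Python cryptography package. It is an assumed design, not the actual SOSIMPLE extension; a real deployment would typically wrap a symmetric session key rather than encrypt each message with RSA.

        # Sketch of public-key confidentiality for a P2P SIP message (assumed design).
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        # The callee generates a key pair and publishes the public key in the P2P overlay.
        callee_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        callee_public = callee_private.public_key()

        # The caller encrypts the SIP payload so relaying overlay peers cannot read it.
        sip_message = b"INVITE sip:bob@example.org SIP/2.0"
        ciphertext = callee_public.encrypt(sip_message, oaep)

        # Only the callee, holding the private key, can recover the message.
        assert callee_private.decrypt(ciphertext, oaep) == sip_message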

    A PUF- and biometric-based lightweight hardware solution to increase security at sensor nodes

    Security is essential in sensor nodes which acquire and transmit sensitive data. However, the constraints of processing, memory and power consumption are very high in these nodes. Cryptographic algorithms based on symmetric keys are well suited to them. The drawback is that secure storage of secret keys is required. In this work, a low-cost solution is presented to obfuscate secret keys with Physically Unclonable Functions (PUFs), which exploit the hardware identity of the node. In addition, a lightweight fingerprint recognition solution is proposed, which can be implemented in low-cost sensor nodes. Since biometric data of individuals are sensitive, they are also obfuscated with PUFs. Both solutions allow authenticating the origin of the sensed data with a proposed dual-factor authentication protocol. One factor is the unique physical identity of the trusted sensor node that measures the data. The other factor is the physical presence of the legitimate individual in charge of authorizing their transmission. Experimental results are included to prove how the proposed PUF-based solution can be implemented with the SRAMs of commercial Bluetooth Low Energy (BLE) chips which belong to the communication module of the sensor node. Implementation results show how the proposed fingerprint recognition, based on the novel texture-based feature named QFingerMap16 (QFM), can be implemented fully inside a low-cost sensor node. Robustness, security and privacy issues at the proposed sensor nodes are discussed and analyzed with experimental results from PUFs and fingerprints taken from public and standard databases. Funding: Ministerio de Economía, Industria y Competitividad TEC2014-57971-R, TEC2017-83557-
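
    The key-obfuscation idea can be sketched as follows: the secret key is never stored directly; only its XOR with a device-unique PUF response (for example SRAM start-up bits) is kept, so the key can be regenerated only on the node that produced the response. In the sketch the function names are hypothetical, the PUF read-out is simulated, and real designs add error correction (helper data / fuzzy extraction) because PUF responses are noisy.

        # Sketch of PUF-based key obfuscation (simulated PUF, error correction omitted).
        import os

        def read_sram_puf_response(n_bytes: int) -> bytes:
            # Hypothetical read-out of uninitialised SRAM in the BLE chip;
            # simulated here with random bytes for illustration only.
            return os.urandom(n_bytes)

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        secret_key = os.urandom(16)                 # symmetric key used by the node
        puf_response = read_sram_puf_response(16)   # device-unique response

        # Only this obfuscated value is stored in non-volatile memory.
        obfuscated_key = xor_bytes(secret_key, puf_response)

        # Later, on the same device, the key is rebuilt from a fresh PUF read-out.
        recovered_key = xor_bytes(obfuscated_key, puf_response)
        assert recovered_key == secret_key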

    Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication

    We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smartphone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smartphone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touchscreen. The classifier achieves a median equal error rate of 0% for intra-session authentication, 2%-3% for inter-session authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multi-modal biometric authentication system. Comment: To appear in IEEE Transactions on Information Forensics & Security; download data from http://www.mariofrank.net/touchalytics
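
    A toy sketch of such a pipeline: a few behavioural features are computed per stroke, a classifier is trained on the enrolled user's strokes versus others, and the equal error rate is read off the ROC curve. The features and the synthetic data below are illustrative assumptions, not the paper's 30-feature set or its dataset.

        # Toy continuous-authentication sketch (assumed features, synthetic data).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics import roc_curve

        def stroke_features(xs, ys, ts):
            """Duration, path length and mean speed of one touchscreen stroke."""
            xs, ys, ts = map(np.asarray, (xs, ys, ts))
            dists = np.hypot(np.diff(xs), np.diff(ys))
            duration = ts[-1] - ts[0]
            return np.array([duration, dists.sum(), dists.sum() / max(duration, 1e-6)])

        def equal_error_rate(labels, scores):
            """EER: point on the ROC curve where false accepts equal false rejects."""
            fpr, tpr, _ = roc_curve(labels, scores)
            fnr = 1 - tpr
            i = np.argmin(np.abs(fpr - fnr))
            return (fpr[i] + fnr[i]) / 2

        # One example stroke: an upward scroll lasting 0.3 s.
        print(stroke_features([100, 102, 105], [800, 600, 400], [0.0, 0.15, 0.3]))

        # Synthetic feature vectors standing in for strokes of the enrolled user (1)
        # and of other users (0); scored on the training data to keep the sketch short.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3)) + np.repeat([[0.0], [1.0]], 100, axis=0)
        y = np.repeat([0, 1], 100)
        clf = SVC().fit(X, y)
        print("EER:", equal_error_rate(y, clf.decision_function(X)))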