Survey and Systematization of Secure Device Pairing
Secure Device Pairing (SDP) schemes have been developed to facilitate secure
communications among smart devices, both personal mobile devices and Internet
of Things (IoT) devices. Comparing and assessing SDP schemes is difficult because each scheme makes different assumptions about out-of-band channels and adversary models, and is driven by its particular use cases. A
conceptual model that facilitates meaningful comparison among SDP schemes is
missing. We provide such a model. In this article, we survey and analyze a wide
range of SDP schemes that are described in the literature, including a number
that have been adopted as standards. A system model and consistent terminology
for SDP schemes are built on the foundation of this survey, which are then used
to classify existing SDP schemes into a taxonomy that, for the first time,
enables their meaningful comparison and analysis. The existing SDP schemes are
analyzed using this model, revealing common systemic security weaknesses among
the surveyed SDP schemes that should become priority areas for future SDP
research, such as improving the integration of privacy requirements into the
design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting.

Comment: 34 pages, 5 figures, 3 tables, accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99)
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.
Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot
We explore new aspects of assistive living on smart human-robot interaction
(HRI) that involve automatic recognition and online validation of speech and
gestures in a natural interface, providing social features for HRI. We
introduce a whole framework and resources of a real-life scenario for elderly
subjects supported by an assistive bathing robot, addressing health and hygiene
care issues. We contribute a new dataset and a suite of tools used for data
acquisition and a state-of-the-art pipeline for multimodal learning within the
framework of the I-Support bathing robot, with emphasis on audio and RGB-D
visual streams. We consider privacy issues by evaluating the depth visual
stream along with the RGB, using Kinect sensors. The audio-gestural recognition
task on this new dataset yields up to 84.5%, while the online validation of the
I-Support system on elderly users achieves up to 84% when the two
modalities are fused together. The results are promising enough to support
further research in the area of multimodal recognition for assistive social
HRI, considering the difficulties of the specific task. Upon acceptance of the paper, part of the data will be made publicly available.
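The modality fusion described above can be sketched as a simple weighted late-fusion scheme. This is an illustration only: the weight, the command classes, and the score vectors below are invented for demonstration, and the I-Support paper's actual fusion method may differ.

```python
# Illustrative late fusion of per-class scores from two modality
# classifiers (audio and gesture). All numbers below are made up.

def fuse_scores(audio_scores, gesture_scores, audio_weight=0.5):
    """Weighted late fusion: combine per-class scores from two
    modalities and return the index of the winning class."""
    if len(audio_scores) != len(gesture_scores):
        raise ValueError("modalities must score the same classes")
    fused = [
        audio_weight * a + (1.0 - audio_weight) * g
        for a, g in zip(audio_scores, gesture_scores)
    ]
    return max(range(len(fused)), key=fused.__getitem__)

# Example: three hypothetical candidate commands.
audio = [0.2, 0.7, 0.1]    # audio model favours class 1
gesture = [0.6, 0.3, 0.1]  # gesture model favours class 0
print(fuse_scores(audio, gesture, audio_weight=0.7))  # -> 1
```

With a higher audio weight the fused decision follows the audio model; lowering the weight lets the gesture stream override it, which is the basic trade-off any late-fusion design must tune.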
Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling
In everyday life people use their mobile phones on-the-go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern and input technique on commonly used performance parameters such as error rate, accuracy and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated overall better performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Also, models identified using specific input techniques did not perform well when tested in other conditions, demonstrating the limited validity of offset models to a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique, at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data. The error rate was reduced by between 0.05% and 5.3% for landscape-based methods and between 5.3% and 11.9% for portrait-based methods.
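The offset models discussed above can be sketched as a linear regression that predicts the systematic (dx, dy) error of a tap and subtracts it. This is a minimal sketch under assumed conditions: the synthetic data, the linear form, and the coefficients below are illustrative stand-ins for the paper's walking-condition recordings and learned models.

```python
import numpy as np

# Illustrative offset model: learn the systematic (dx, dy) tap error
# as a linear function of screen position, then correct new taps.
# The synthetic offsets below stand in for real touch data.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(200, 2))          # intended (x, y)
true_offset = targets @ np.array([[0.05, 0.0],        # error grows with x
                                  [0.0, -0.03]]) + [2.0, -1.0]
taps = targets + true_offset + rng.normal(0, 0.5, size=(200, 2))

# Fit offset = [x, y, 1] @ coef by ordinary least squares.
X = np.hstack([targets, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, taps - targets, rcond=None)

def correct(tap_xy):
    """Subtract the predicted systematic offset from an observed tap."""
    predicted_offset = np.append(tap_xy, 1.0) @ coef
    return tap_xy - predicted_offset
```

The abstract's finding that static-condition models transfer poorly corresponds, in this sketch, to `coef` being specific to the data it was fitted on: a model fitted at one walking speed or input technique would learn a different `coef`.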
Wearable and mobile devices
Information and Communication Technologies, known as ICT, have undergone dramatic changes in the last 25 years. The 1980s was the decade of the Personal Computer (PC), which brought computing into the home and, in an educational setting, into the classroom. The 1990s gave us the World Wide Web (the Web), building on the infrastructure of the Internet, which has revolutionized the availability and delivery of information. In the midst of this information revolution, we are now confronted with a third wave of novel technologies (i.e., mobile and wearable computing), where computing devices are already becoming small enough that we can carry them around at all times and, in addition, they have the ability to interact with devices embedded in the environment. The development of wearable technology is perhaps a logical product of the convergence between the miniaturization of microchips (nanotechnology) and an increasing interest in pervasive computing, where mobility is the main objective. The miniaturization of computers is largely due to the decreasing size of semiconductors and switches; molecular manufacturing will allow for "not only molecular-scale switches but also nanoscale motors, pumps, pipes, machinery that could mimic skin" (Page, 2003, p. 2). This shift in the size of computers has obvious implications for human-computer interaction, introducing the next generation of interfaces. Neil Gershenfeld, the director of the Media Lab's Physics and Media Group, argues, "The world is becoming the interface. Computers as distinguishable devices will disappear as the objects themselves become the means we use to interact with both the physical and the virtual worlds" (Page, 2003, p. 3). Ultimately, this will lead to a move away from desktop user interfaces and toward mobile interfaces and pervasive computing.
Towards a Theory of Analytical Behaviour: A Model of Decision-Making in Visual Analytics
This paper introduces a descriptive model of the human-computer processes that lead to decision-making in visual analytics. A survey of nine models from the visual analytics and HCI literature is presented to account for different perspectives such as sense-making, reasoning, and low-level human-computer interactions. The survey examines the people and computers (entities) presented in the models, the divisions of labour between entities (both physical and role-based), the behaviour of both people and machines as constrained by their roles and agency, and finally the elements and processes which define the flow of data both within and between entities. The survey informs the identification of four observations that characterise analytical behaviour - defined as decision-making facilitated by visual analytics: bilateral discourse, divisions of labour, mixed-synchronicity information flows, and bounded behaviour. Based on these observations, a descriptive model is presented as a contribution towards a theory of analytical behaviour. The future intention is to apply prospect theory, an economic model of decision-making under uncertainty, to the study of analytical behaviour. It is our assertion that applying prospect theory first requires a descriptive model of the processes that facilitate decision-making in visual analytics. We conclude that it is necessary to measure the perception of risk in future work in order to apply prospect theory to the study of analytical behaviour using our proposed model.
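The prospect theory mentioned above models decision-making under uncertainty with an S-shaped value function over gains and losses. A minimal sketch of that function follows, using Tversky and Kahneman's published 1992 median parameter estimates; how the paper's authors would actually parameterize it for visual analytics is their future work, not shown here.

```python
# Prospect theory's value function (Tversky & Kahneman, 1992):
# gains are valued concavely, losses convexly and more steeply
# (loss aversion). Parameters are the 1992 median estimates.
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss-aversion coefficient

def value(x):
    """Subjective value of an outcome x relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# Loss aversion: a loss looms larger than an equal-sized gain.
print(value(100), value(-100))  # the loss has the larger magnitude
```

Measuring the "perception of risk" the authors call for would amount to estimating parameters like `LAMBDA` for analysts making decisions with a visual analytics system.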
CASAM: Collaborative Human-machine Annotation of Multimedia.
The CASAM multimedia annotation system implements a model of cooperative annotation between a human annotator and automated components. The aim is that they work asynchronously but together. The system focuses upon the areas where automated recognition and reasoning are most effective and the user is able to work in the areas where their unique skills are required. The system's reasoning is influenced by the annotations provided by the user and, similarly, the user can see the system's work and modify and, implicitly, direct it. The CASAM system interacts with the user by providing a window onto the current state of annotation, and by generating requests for information which are important for the final annotation or to constrain its reasoning. The user can modify the annotation, respond to requests and also add their own annotations. The objective is that the human annotator's time is used more effectively and that the result is an annotation that is both of higher quality and produced more quickly. This can be especially important in circumstances where the annotator has a very restricted amount of time in which to annotate the document. In this paper we describe our prototype system. We expand upon the techniques used for automatically analysing the multimedia document, for reasoning over the annotations generated and for the generation of an effective interaction with the end-user. We also present the results of evaluations undertaken with media professionals in order to validate the approach and gain feedback to drive further research.