20 research outputs found

    A human computer interactions framework for biometric user identification

    Computer-assisted functionalities and services have saturated our world, becoming such an integral part of our daily activities that we hardly notice them. In this study we focus on enhancements in Human-Computer Interaction (HCI) that can be achieved by natural user recognition embedded in the employed interaction models. Natural identification among humans is mostly based on biometric characteristics representing what-we-are (face, body outlook, voice, etc.) and how-we-behave (gait, gestures, posture, etc.). Following this observation, we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of evolving natural human-computer interfaces.

    Firmware enhancements for BYOD-aware network security

    In today’s connected world, users migrate within a complex set of networks, including, but not limited to, 3G and 4G (LTE) services provided by mobile operators, Wi-Fi hotspots in private and public places, as well as wireless and/or wired LAN access in business and home environments. Following the widely expanding Bring Your Own Device (BYOD) approach, many public and educational institutions have begun to encourage customers and students to use their own devices at all times. While this may be cost-effective in terms of decreased investments in hardware and consequently lower maintenance fees on a long-term basis, it may also involve some security risks. In particular, many users are often connected to more than one network and/or communication service provider at the same time, for example to a 3G/4G mobile network and to a Wi-Fi hotspot. In a BYOD setting, an infected device or a rogue one can turn into an unwanted gateway, causing a security breach by leaking information across networks. Aiming at investigating in greater detail the implications of BYOD for network security in private and business settings, we are building a framework for experiments with mobile routers in both home and business networks. This is a continuation of our earlier work on communications and services with enhanced security for network appliances.
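The "unwanted gateway" risk described above arises when one device is attached to two networks at once. As a minimal sketch (the device names, addresses, and inventory format are illustrative assumptions, not from the paper), such dual-homed hosts can be flagged from an interface inventory:

```python
from ipaddress import ip_interface

# Hypothetical inventory of interfaces reported by each BYOD device
# (names and addresses are illustrative only).
device_interfaces = {
    "alice-phone":  ["10.0.1.23/24"],                    # Wi-Fi only
    "bob-laptop":   ["10.0.1.42/24", "192.168.8.5/24"],  # Wi-Fi + LTE tether
    "carol-tablet": ["192.168.8.9/24"],
}

def flag_potential_gateways(inventory):
    """Flag devices attached to more than one IP network at once:
    such dual-homed hosts can leak traffic across network boundaries."""
    flagged = []
    for device, addrs in inventory.items():
        networks = {ip_interface(a).network for a in addrs}
        if len(networks) > 1:
            flagged.append(device)
    return flagged

print(flag_potential_gateways(device_interfaces))  # ['bob-laptop']
```

A firmware-level implementation, as the paper proposes, would observe this directly on the mobile router rather than relying on self-reported inventories.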

    Mobiles and wearables: owner biometrics and authentication

    We discuss the design and development of HCI models for authentication based on gait and gesture that can be supported by mobile and wearable equipment. The paper proposes to use such biometric behavioral traits for partially transparent and continuous authentication by means of behavioral patterns. © 2016 Copyright held by the owner/author(s).
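One simple way to realize continuous authentication from a behavioral trait like gait is to compare live sensor windows against an enrolled template and keep the session open while similarity stays high. The sketch below is an assumption-laden illustration (normalized cross-correlation on toy accelerometer values, with an arbitrary 0.8 threshold), not the paper's actual model:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def authenticate(template, window, threshold=0.8):
    """Accept the wearer while live gait windows stay close to the
    enrolled template; reject (and lock) when similarity drops.
    Threshold is illustrative, not taken from the paper."""
    return ncc(template, window) >= threshold

enrolled = [0.1, 0.9, 0.4, -0.2, 0.1, 0.8, 0.3, -0.1]  # enrolled gait cycle
live_ok  = [0.12, 0.88, 0.41, -0.18, 0.09, 0.82, 0.28, -0.12]
live_bad = [0.9, 0.1, -0.4, 0.2, 0.9, 0.0, -0.3, 0.4]

print(authenticate(enrolled, live_ok), authenticate(enrolled, live_bad))
```

The "partially transparent" quality comes from running this check in the background on each new window, so the user is re-verified without explicit prompts.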

    Semantic Surfaces for Business Applications

    In this paper we introduce the concept of semantic surfaces – surfaces enhanced to provide additional information beyond what is directly visible, and able to interact with the user. We consider several approaches to their implementation and, in particular, show the perspectives of the Cluster Pattern Interface (CLUSPI) technology developed and patented by one of the authors. Various business applications of semantic surfaces are outlined, illustrating the potential of the proposed concept.

    Specialized CNT-based Sensor Framework for Advanced Motion Tracking

    In this work, we discuss the design and development of an advanced framework for high-fidelity finger motion tracking based on specialized Carbon Nanotube (CNT) stretchable sensors developed at our research facilities. Earlier versions of the CNT sensors have been employed in the high-fidelity finger motion tracking Data Glove commercialized by Yamaha, Japan. The framework presented in this paper encompasses our continuing research and development of more advanced CNT-based sensors and the implementation of novel high-fidelity motion tracking products based on them. The CNT sensor production and communication framework components are considered in detail, and wireless motion tracking experiments with the developed hardware and software components integrated with the Yamaha Data Glove are reported.
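Stretchable strain sensors of this kind change electrical resistance as the finger bends, so a reading must be mapped to a joint angle. As a minimal sketch (the ADC counts and the linear two-point model are illustrative assumptions; real CNT sensors may need per-sensor, possibly nonlinear, calibration):

```python
def calibrate(adc_flat, adc_bent, angle_bent=90.0):
    """Two-point linear calibration for a stretchable strain sensor:
    readings at a flat finger (0 degrees) and a fully bent finger
    (angle_bent degrees) define the line."""
    scale = angle_bent / (adc_bent - adc_flat)

    def to_angle(adc_value):
        return (adc_value - adc_flat) * scale

    return to_angle

# Hypothetical ADC counts for one sensor channel
to_angle = calibrate(adc_flat=512, adc_bent=896)
print(round(to_angle(704), 1))  # halfway between the two points -> 45.0
```

In a multi-sensor glove, one such calibrated mapping per channel yields the finger posture vector that the tracking framework streams wirelessly.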

    Making Robotic Dogs Detect Objects That Real Dogs Recognize Naturally: A Pilot Study

    The recent advancements in artificial intelligence (AI) and deep learning have enabled smart products, such as smart toys and robotic dogs, to interact with humans more intelligently and to express emotions. As a result, such products are becoming intensively sensorized and integrate multi-modal interaction techniques to detect and infer emotions from spoken utterances, motions, pointing gestures and observed objects, and to plan their actions. However, even for predictive purposes, a practical challenge for these smart products is that deep learning algorithms typically require high computing power, especially when a multimodal method is applied. Moreover, the memory needs of deep learning models usually surpass the limits of many low-end mobile computing devices as their complexity grows. In this study, we explore the application of lightweight deep neural networks, the SqueezeDet model and the Single Shot Multi-Box Detector (SSD) model with MobileNet as the backbone, to detect objects beloved by dogs. These lightweight models are expected to be integrated into a multi-modal emotional support robotics system designed for a smart robot dog. We also outline our future research in this direction.
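Detectors such as SSD-MobileNet emit per-frame lists of (class, confidence, bounding box) candidates, which the robot then filters down to the object classes it cares about. The sketch below simulates that post-processing step; the detections, label table, target classes, and 0.5 confidence threshold are all illustrative assumptions, not outputs of the paper's models:

```python
# Post-processing sketch for a lightweight detector (e.g. SSD-MobileNet).
# Raw detections are simulated; a real model would emit
# (class_id, confidence, bounding_box) tuples for each frame.
COCO_LIKE_LABELS = {1: "person", 18: "dog", 33: "sports ball", 34: "frisbee"}
CANINE_TARGETS = {"sports ball", "frisbee"}  # objects a dog chases naturally

def filter_detections(raw, min_conf=0.5):
    """Keep only confident detections of dog-relevant object classes."""
    hits = []
    for class_id, conf, box in raw:
        label = COCO_LIKE_LABELS.get(class_id)
        if label in CANINE_TARGETS and conf >= min_conf:
            hits.append((label, conf, box))
    return hits

frame = [(33, 0.91, (40, 60, 120, 140)),  # confident ball -> kept
         (34, 0.35, (10, 10, 50, 50)),    # low-confidence frisbee -> dropped
         (1,  0.88, (0, 0, 300, 400))]    # person, not a target class
print(filter_detections(frame))
```

Keeping this filtering cheap matters on low-end hardware: the expensive part of the pipeline is the network's forward pass, so everything downstream should stay lightweight.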

    A study of children facial recognition for privacy in smart TV

    © Springer International Publishing AG 2017. Nowadays Smart TV is becoming very popular in many families. Smart TV provides computing and connectivity capabilities with access to online services, such as video on demand, online games, and even sports and healthcare activities. For example, Google Smart TV, which is based on Google Android, integrates into users’ daily physical activities through its ability to extract and access context information dependent on the surrounding environment and to react accordingly via built-in cameras and sensors. Without a viable privacy protection system in place, however, the expanding use of Smart TV can lead to privacy violations through tracking and user profiling by broadcasters and others. This becomes of particular concern when underage users, such as children who may not fully understand the concept of privacy, are involved in using Smart TV services. In this study, we consider digital imaging and ways to identify and properly tag pictures of children in order to prevent unwanted disclosure of personal information. We have conducted a preliminary experiment on the effectiveness of facial recognition technology in Smart TV, where recognition of child-face presence in feedback image streams is performed through Microsoft’s Face Application Programming Interface (API).
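A tagging pipeline of the kind described would take per-face age estimates (in the paper, from a face-analysis service such as Microsoft's Face API) and mark likely-child faces so downstream code can blur or withhold them. The sketch below stubs the estimator output; the face IDs, ages, and the age-13 cutoff are illustrative assumptions:

```python
MINOR_AGE_THRESHOLD = 13  # illustrative cutoff, not from the paper

def tag_child_faces(faces, threshold=MINOR_AGE_THRESHOLD):
    """Given (face_id, estimated_age) pairs from a face-analysis
    service, mark each face that should be treated as a child's so
    that the Smart TV can suppress or blur it before any disclosure."""
    return [(face_id, age < threshold) for face_id, age in faces]

# Stubbed estimator output for one feedback frame (illustrative values)
faces = [("face-0", 34.2), ("face-1", 8.7), ("face-2", 11.5)]
print(tag_child_faces(faces))
```

Because age estimation is probabilistic, a deployed system would likely err on the side of privacy, e.g. by raising the threshold or requiring agreement across several frames before clearing a face.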

    Communicating with Humans and Robots: A Motion Tracking Data Glove for Enhanced Support of Deafblind

    In this work, we discuss the design and development of a communication system for enhanced support of the deafblind. The system is based on an advanced motion tracking Data Glove that allows for high-fidelity determination of finger postures with consequent identification of the basic Malossi alphabet signs. A natural, easy-to-master alphabet extension that supports single-hand signing without touch surface sensing is described, and different scenarios for its use are discussed. The focus is on using the extended Malossi alphabet as a communication medium in a Data Glove-based interface for remote messaging and interactive control of mobile robots. This may be of particular interest to the deafblind community, where remote communication and robotized support and services are on the rise. The designed Data Glove-based communication interface requires minimal adjustments to the Malossi alphabet and can be mastered after a short training period. The natural interaction style supported by the Data Glove and the popularity of the Malossi alphabet among the deafblind should greatly facilitate wider adoption of the developed interface.
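Once the glove resolves finger postures into contact events, decoding Malossi-style signing reduces to a lookup from (hand position, contact type) to a letter. The table below is a deliberately tiny illustration with assumed position names; the real alphabet spans the whole hand, with roughly half the letters signed by touch and half by pinch:

```python
# Simplified sketch of Malossi-style decoding. Position names and the
# letter assignments shown here are illustrative, not the full alphabet.
MALOSSI_TABLE = {
    ("thumb_tip",  "touch"): "A",
    ("index_tip",  "touch"): "B",
    ("middle_tip", "touch"): "C",
    ("thumb_tip",  "pinch"): "O",
    ("index_tip",  "pinch"): "P",
}

def decode(events):
    """Translate a stream of glove contact events into text,
    silently skipping events the table does not recognize."""
    return "".join(MALOSSI_TABLE.get(e, "") for e in events)

msg = decode([("index_tip", "touch"), ("thumb_tip", "pinch"),
              ("index_tip", "pinch")])
print(msg)  # BOP
```

The paper's single-hand extension would enlarge this event vocabulary so that signs can be produced without a second hand acting as the touch surface.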

    An Interactive Tool for Sketch-Based Annotation

    Annotation of Web content is becoming a widespread activity by which users appropriate and enrich information available on the Net. MADCOW (Multimedia Annotation of Digital Content Over the Web) is a system for annotating HTML pages that provides a uniform interactive paradigm for making annotations on text, images and video. MADCOW has also recently included novel features for groups and group annotations with reference to ontological domains, but interaction with the MADCOW client is currently limited to the use of common input devices (e.g., keyboard, mouse), requiring a precise selection of the portions to be annotated. In this paper we present a sketch-based interface, which can be used to annotate not only content but also aspects related to the presentation of the information. While interacting with a standard Web browser, users can draw free-hand geometrical shapes (e.g., circles, rectangles, closed paths) to select the specific parts of the Web page to be annotated. The interaction mode depends on the adopted input device. For example, users interacting with touch-screen devices (e.g., smartphones, tablets) can draw shapes with their fingers, but in principle any device able to detect sketching gestures can be supported (e.g., graphic tablets, optical pens). The paper discusses interaction aspects together with an overview of the system architecture. Finally, preliminary experimental tests and some considerations on usability are also reported.
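Resolving a free-hand circling gesture into an annotation target can be done by taking the stroke's bounding box and selecting the page elements that fall inside it. This is one plausible approach under stated assumptions (element names, coordinates, and the containment rule are illustrative, not MADCOW's actual algorithm):

```python
def stroke_bbox(points):
    """Bounding box of a free-hand stroke given as (x, y) samples."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def inside(inner, outer):
    """True if box `inner` lies entirely within box `outer`."""
    ix0, iy0, ix1, iy1 = inner
    ox0, oy0, ox1, oy1 = outer
    return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1

def select_elements(stroke, elements):
    """Select the page elements whose boxes fall inside the sketched
    shape - resolving an imprecise circling gesture into targets."""
    box = stroke_bbox(stroke)
    return [name for name, ebox in elements.items() if inside(ebox, box)]

stroke = [(100, 100), (310, 95), (320, 260), (90, 250)]  # rough rectangle
elements = {"headline": (120, 110, 300, 150),
            "photo":    (400, 110, 600, 300)}
print(select_elements(stroke, elements))  # ['headline']
```

The bounding-box relaxation is what makes imprecise finger or pen strokes usable: the user need not trace an element exactly, only enclose it roughly.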