12 research outputs found

    Service Conflict Management Framework for Multi-user Inhabited Smart Home

    In this paper, we propose a service conflict management framework for detecting and resolving conflicts among multiple users who share context-aware applications within a smart home. To support a general solution to multi-user conflicts, the framework uses an ontology that describes applications and their services, an approach-determination tree that assigns an appropriate resolution strategy to each conflict, and a set of resolution strategies. Based on this ontology, the framework dynamically detects conflicts among multiple users who are using various applications that affect one another, or the same application with different preferences. An appropriate resolution method is assigned to each conflict according to the properties involved, their relationships, and the users' preferences. A detected conflict is resolved either by an automatic decision, based on priority or preferences, or by a user decision. By implementing and evaluating the framework on a smart home test-bed, we found that it dynamically detected and flexibly resolved multi-user conflicts that occurred among the services of multiple applications, as well as within a single application.
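    As a rough illustration of the pipeline the abstract describes, the Python sketch below detects preference conflicts on a shared service and picks a resolution strategy. All class names, the rule set, and the resolution logic are hypothetical stand-ins, not the framework's actual ontology or determination tree.

```python
# Hypothetical sketch of multi-user conflict detection/resolution in the
# spirit of the framework above; names and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    user: str
    application: str
    service: str          # e.g. "temperature_control"
    preference: object    # e.g. a desired temperature
    priority: int         # higher wins in priority-based resolution

def detect_conflicts(requests):
    """Group requests that target the same service with differing preferences."""
    by_service = {}
    for r in requests:
        by_service.setdefault(r.service, []).append(r)
    return [rs for rs in by_service.values()
            if len(rs) > 1 and len({str(r.preference) for r in rs}) > 1]

def choose_strategy(conflict):
    """A tiny stand-in for the approach-determination tree."""
    if len({r.priority for r in conflict}) > 1:
        return "priority"          # clear priority ordering -> automatic
    if all(isinstance(r.preference, (int, float)) for r in conflict):
        return "preference_merge"  # numeric preferences can be averaged
    return "ask_user"              # otherwise defer to a user decision

def resolve(conflict):
    strategy = choose_strategy(conflict)
    if strategy == "priority":
        return max(conflict, key=lambda r: r.priority).preference
    if strategy == "preference_merge":
        return sum(r.preference for r in conflict) / len(conflict)
    return None  # signal that the framework should prompt the users

requests = [
    ServiceRequest("alice", "climate", "temperature_control", 24, priority=2),
    ServiceRequest("bob",   "climate", "temperature_control", 20, priority=1),
]
for conflict in detect_conflicts(requests):
    print(resolve(conflict))  # -> 24 (alice has higher priority)
```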

    Cross-Device Computation Coordination for Mobile Collocated Interactions with Wearables

    Mobile devices, wearables, and Internet-of-Things devices are being crammed into ever smaller form factors with smaller batteries, yet they encounter demanding applications such as big data analysis, data mining, machine learning, augmented reality, and virtual reality. To meet such high demands in a multi-device ecology, multiple devices should communicate collectively to share computation burdens and stay energy-efficient. In this paper, we present a cross-device computation coordination method for scenarios of mobile collocated interactions with wearables. We formally define the cross-device computation coordination problem and propose a method for solving it. Lastly, we demonstrate the feasibility of our approach through experiments and exemplar cases using 12 commercial Android devices with varying computation capabilities.
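    The following minimal Python sketch illustrates one way such coordination could work: tasks are greedily mapped to the collocated device with the lowest estimated completion time. The cost model, device list, and greedy policy are assumptions for illustration, not the paper's formal problem definition or method.

```python
# Illustrative greedy task-to-device assignment across collocated devices;
# the cost model below is an assumption, not the paper's formulation.
def assign_tasks(tasks, devices):
    """Map each task (in CPU cycles) to the device with the lowest estimated
    completion time, given device speed (cycles/s) and accumulated load (s)."""
    load = {d["name"]: 0.0 for d in devices}
    plan = {}
    for task_id, cycles in sorted(tasks.items(), key=lambda t: -t[1]):
        best = min(devices, key=lambda d: load[d["name"]] + cycles / d["speed"])
        load[best["name"]] += cycles / best["speed"]
        plan[task_id] = best["name"]
    return plan, load

devices = [
    {"name": "phone",  "speed": 2.0e9},
    {"name": "watch",  "speed": 0.5e9},
    {"name": "tablet", "speed": 1.5e9},
]
tasks = {"feature_extract": 3.0e9, "inference": 2.0e9, "encode": 1.0e9}
plan, load = assign_tasks(tasks, devices)
print(plan)  # e.g. {'feature_extract': 'phone', 'inference': 'tablet', ...}
```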

    Defining Stable Touch Area Based on a Large-Screen Smart Device in 3D-Touch Interface

    Touch interfaces are essential technologies on mobile devices: they allow the user to run an application or trigger various device functions by touching the screen with a finger. Users typically grab the mobile device with one hand when using the touch interface. Because there are areas that the fingers of the gripping hand cannot reach, the user may be unable to touch a specific area of the screen accurately. This causes problems: the device does not carry out the user's desired function, and execution is delayed by the erroneous input. It is therefore necessary to distinguish the areas where the user can provide stable touch input from those where the user cannot, and to overcome the problems of the unstable touch area. Moreover, as the screen size increases, these issues become more serious because the unstable touch areas grow. In particular, an interface that receives both position and force data, such as 3D-touch, requires a stable-area definition different from that of conventional 2D-touch. In this paper, we identify and analyze the stable touch areas on a large screen where the user can perform 3D-touch inputs.
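    A minimal Python sketch of how stable and unstable regions might be labeled from recorded 3D-touch samples is given below; the grid layout, thresholds, and sample format are illustrative assumptions, not values or procedures from the paper.

```python
# Hypothetical labeling of screen grid cells as stable/unstable from
# 3D-touch samples; thresholds and grid size are illustrative assumptions.
import statistics

def stable_cells(samples, max_pos_err=20.0, max_force_sd=0.15):
    """samples: list of (cell, position_error_px, force) tuples, where `cell`
    is the (col, row) index of the targeted grid cell."""
    per_cell = {}
    for cell, pos_err, force in samples:
        per_cell.setdefault(cell, []).append((pos_err, force))
    stable = set()
    for cell, obs in per_cell.items():
        errs = [e for e, _ in obs]
        forces = [f for _, f in obs]
        force_sd = statistics.pstdev(forces) if len(forces) > 1 else 0.0
        if statistics.mean(errs) <= max_pos_err and force_sd <= max_force_sd:
            stable.add(cell)
    return stable

# Example: the thumb reaches cell (0, 5) reliably but not the far corner (3, 0).
samples = [((0, 5), 8.0, 0.50), ((0, 5), 10.0, 0.52),
           ((3, 0), 35.0, 0.30), ((3, 0), 60.0, 0.80)]
print(stable_cells(samples))  # -> {(0, 5)}
```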

    Human Motion Prediction by Combining Spatial and Temporal Information With Independent Global Orientation

    In this study, we address the challenge of 3D human motion prediction from motion capture data, which has become critical in applications such as autonomous vehicles and human-robot interaction. Previous deep learning-based methods have improved prediction accuracy but require many network parameters and do not effectively consider independent joint movements. To overcome these limitations, we propose two lightweight network structures for human motion prediction, LG-Net and LGT-Net, which focus on the individual movements of distinct human limbs and their inter-dependencies. LG-Net comprises local and global networks, while LGT-Net combines the LG-Net structure with Long Short-Term Memory (LSTM) cells to exploit temporal information. Our networks, designed to be extremely lightweight with only 0.08M and 0.5M parameters, achieve higher prediction performance than state-of-the-art methods. In addition, this study is the first to consider the root joint to improve motion prediction performance. The proposed approach demonstrates the potential for efficient and accurate human motion prediction in various applications.
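    The PyTorch sketch below illustrates the local/global idea in the abstract: a small network per limb plus one network over all joints, fused by summation. The limb split, layer sizes, and fusion choice are assumptions; this is not the authors' LG-Net implementation.

```python
# A minimal local/global network sketch for motion prediction; the five-limb
# split, layer sizes, and summation fusion are assumptions, not LG-Net itself.
import torch
import torch.nn as nn

class LocalGlobalNet(nn.Module):
    def __init__(self, limb_joints, hidden=32, frames_in=10, frames_out=10):
        super().__init__()
        self.limb_joints = limb_joints  # e.g. {"left_arm": [2, 3], ...}
        self.local_nets = nn.ModuleDict({
            limb: nn.Sequential(
                nn.Linear(len(js) * 3 * frames_in, hidden), nn.ReLU(),
                nn.Linear(hidden, len(js) * 3 * frames_out))
            for limb, js in limb_joints.items()})
        n_joints = sum(len(js) for js in limb_joints.values())
        self.global_net = nn.Sequential(
            nn.Linear(n_joints * 3 * frames_in, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints * 3 * frames_out))

    def forward(self, x):
        # x: (batch, frames_in, n_joints, 3) joint positions
        b = x.shape[0]
        local_out = torch.cat(
            [self.local_nets[limb](x[:, :, js, :].reshape(b, -1))
             for limb, js in self.limb_joints.items()], dim=1)
        return local_out + self.global_net(x.reshape(b, -1))  # fuse by sum

limbs = {"torso": [0, 1], "left_arm": [2, 3], "right_arm": [4, 5]}
net = LocalGlobalNet(limbs)
pred = net(torch.randn(8, 10, 6, 3))  # -> (8, 180): 6 joints * 3 * 10 frames
```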

    Estimating Gaze Depth Using Multi-Layer Perceptron

    In this paper, we describe a new method for determining gaze depth in a head-mounted eye tracker. Eye trackers are being incorporated into head-mounted displays (HMDs), and eye gaze is being used for interaction in Virtual and Augmented Reality. For some interaction methods, it is important to accurately measure not only the x- and y-directions of the eye gaze but especially the focal depth. Eye-tracking technology generally has high accuracy in the x- and y-directions, but not in depth. We used a binocular gaze tracker with two eye cameras, and the gaze vectors were input to an MLP neural network for training and estimation. For the performance evaluation, data were obtained from 13 people gazing at fixed points at distances from 1 m to 5 m. Classifying gaze into the fixed distances produced an average classification error of nearly 10% and an average error distance of 0.42 m. This is sufficient for some Augmented Reality applications, but more research is needed to estimate a user's gaze moving in continuous space.
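    The short Python sketch below illustrates the underlying idea, estimating fixation depth from binocular vergence with an MLP classifier. The synthetic data generator, feature layout, and network size are assumptions, not the paper's setup.

```python
# Depth-from-vergence sketch: classify fixation distance from left/right gaze
# vectors with an MLP. Data here is synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def synthetic_gaze(depth_m, n, ipd=0.065, noise=0.002):
    """Approximate binocular gaze toward a point `depth_m` ahead; the x
    component of each unit-length gaze vector encodes vergence."""
    x = ipd / 2 / depth_m  # small-angle vergence proxy
    left = np.stack([np.full(n, +x), np.zeros(n), np.ones(n)], axis=1)
    right = np.stack([np.full(n, -x), np.zeros(n), np.ones(n)], axis=1)
    return np.hstack([left, right]) + rng.normal(0, noise, (n, 6))

depths = [1, 2, 3, 4, 5]  # fixed fixation distances in meters
X = np.vstack([synthetic_gaze(d, 200) for d in depths])
y = np.repeat(depths, 200)

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
clf.fit(X, y)
print(clf.predict(synthetic_gaze(3, 1)))  # -> likely [3]
```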