Human-display interaction technology: Emerging remote interfaces for pervasive display environments
This is the author's accepted manuscript; the final published article is available from the link below. Copyright © 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
We're living in a world where information processing isn't confined to desktop computers: it's being integrated into everyday objects and activities. Pervasive computation is human centered: it permeates our physical world, helping us achieve goals and fulfill our needs with minimum effort by exploiting natural interaction styles. Remote interaction with screen displays requires a sensor-based, multimodal, touchless approach. For example, by processing user hand gestures, this paradigm removes the constraint of physical contact and permits natural interaction with tangible digital information. Such touchless interaction can be multimodal, exploiting the visual, auditory, and olfactory senses.
Funding: Ministerio de Educación y Ciencia and Amper Sistemas, SA.
Challenges in mobile multi-device ecosystems
BACKGROUND Coordinated multi-display environments, from the desktop and second screens to gigapixel display walls, are increasingly common. Personal and intimate mobile and wearable devices such as head-mounted displays, smartwatches, smartphones, and tablets are rarely part of such multi-device ecosystems. METHODS We conducted a literature review and an expert survey to identify challenges in mobile multi-device ecosystems. RESULTS We present grounded challenges relevant to the design, development, and use of mobile multi-device environments, as well as opportunities for future research. While our surveys indicated that a large number of challenges have been identified, there seems to be little agreement among experts on the importance of individual challenges. CONCLUSION By presenting the identified challenges, we contribute to a better understanding of the factors that impede the creation and use of mobile multi-device ecosystems and hope to help shape the research agenda on interacting with those systems.
Publisher PDF. Peer reviewed.
Multi-person Spatial Interaction in a Large Immersive Display Using Smartphones as Touchpads
In this paper, we present a multi-user interaction interface for a large immersive space that supports simultaneous screen interactions by combining (1) user input via personal smartphones and Bluetooth microphones, (2) spatial tracking via an overhead array of Kinect sensors, and (3) WebSocket interfaces to a webpage running on the large screen. Users are automatically and dynamically assigned personal and shared screen sub-spaces based on their tracked location with respect to the screen, and use a webpage on their personal smartphone for touchpad-type input. We report user experiments using our interaction framework that involve image selection and placement tasks, with the ultimate goal of realizing display-wall environments as viable, interactive workspaces with natural multimodal interfaces.
Comment: 8 pages with references
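The abstract above describes dynamically assigning users to personal screen sub-spaces from their tracked position in front of the wall. A minimal sketch of that idea, assuming an even left-to-right partition of the wall among tracked users (the function name and the partition policy are illustrative, not the paper's actual algorithm):

```python
def assign_subspace(user_x: float, wall_width: float, n_users: int) -> int:
    """Map a user's tracked x-position (metres from the wall's left edge)
    to the index of an evenly partitioned personal sub-space."""
    if n_users < 1 or wall_width <= 0:
        raise ValueError("need at least one user and a positive wall width")
    slot = wall_width / n_users
    index = int(user_x // slot)
    # Clamp users standing beyond either edge of the wall
    return max(0, min(index, n_users - 1))

# Example: three tracked users in front of a 6 m display wall
positions = [0.8, 2.9, 5.5]
print([assign_subspace(x, 6.0, 3) for x in positions])  # [0, 1, 2]
```

In the real system these sub-space assignments would be pushed to the smartphone webpages over the WebSocket connection as users move.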
Designing for Cross-Device Interactions
Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and in the availability of devices and data, many tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways: first, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies were conducted that show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work. These case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that combines spatial measures and video recordings to facilitate such interaction analysis of cross-device work.
Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for the effective evaluation of cross-device systems.
Login Authentication with Facial Gesture Recognition
Facial recognition has proven to be very useful and versatile, from Facebook photo tagging and Snapchat filters to modeling fluid dynamics and designing for augmented reality. However, facial recognition has only been used for user login services in conjunction with expensive and restrictive hardware, such as in smartphones like the iPhone X. This project aims to apply machine learning techniques to reliably distinguish user accounts with only common cameras, making facial recognition logins more accessible to website and software developers. To show the feasibility of this idea, we created a web API that recognizes a user's face to log them in to their account, and we will create a simple website to test the reliability of our system. In this paper, we discuss our database-centric architecture model, use cases and activity diagrams, and the technologies we used for the website, API, and machine learning algorithms. We also provide screenshots of our system, the user manual, and our future plans.
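The abstract does not specify the matching algorithm, but a common approach for camera-only face login is to reduce each face image to an embedding vector and accept a login when the probe is sufficiently similar to the enrolled embedding. A hedged sketch under that assumption (the function names, the 4-dimensional toy embeddings, and the 0.9 threshold are all illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_login(enrolled, probe, threshold=0.9):
    """Accept the login if the probe embedding is close enough to the
    embedding stored at enrolment time (threshold is an assumption)."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled  = [0.2, 0.9, 0.1, 0.4]
same_user = [0.22, 0.88, 0.12, 0.41]  # small capture-to-capture drift
impostor  = [0.9, 0.1, 0.8, 0.05]
print(verify_login(enrolled, same_user))  # True
print(verify_login(enrolled, impostor))   # False
```

In practice the embeddings would come from a trained face-recognition model, and the threshold would be tuned on held-out data to trade off false accepts against false rejects.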
Supporting Device Discovery and Spontaneous Interaction with Spatial References
The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user's device with other co-located devices. Spatial references are obtained by relative position sensing and integrated into the mobile user interface to spatially visualize the arrangement of discovered devices and to provide direct access for interaction across devices. In this paper, we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
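The spatial references described above reduce, at minimum, to a distance and bearing from the user's device to each discovered device, which can then order a spatial list or place icons on a map view. A minimal sketch, assuming relative position sensing reports (dx, dy) offsets in metres (the device names and coordinate convention are illustrative, not from the paper):

```python
import math

def spatial_reference(dx: float, dy: float):
    """Return (distance in metres, bearing in degrees) to a device,
    with bearing measured clockwise from straight ahead (0 degrees)."""
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))
    return distance, bearing

# Hypothetical co-located devices and their sensed relative offsets
devices = {"printer": (1.0, 2.0), "display": (-0.5, 1.0), "tablet": (3.0, 0.5)}

# A spatial list representation might order devices nearest-first:
by_distance = sorted(devices, key=lambda d: spatial_reference(*devices[d])[0])
print(by_distance)  # ['display', 'printer', 'tablet']
```

A map representation would instead use both distance and bearing to lay the discovered devices out around the user's own device.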