A Gesture-based Recognition System for Augmented Reality
With the geometric growth of Information Technology, conventional input devices are becoming increasingly obsolete and lacking. Experts in Human-Computer Interaction (HCI) are convinced that input devices remain the bottleneck of information acquisition, specifically when using Augmented Reality (AR) technology. Current input mechanisms cannot keep pace with the trend towards naturalness and expressivity, which allows users to perform natural gestures or operations and convert them into input. Hence, a more natural and intuitive input device is imperative; gestural input in particular has been widely perceived by HCI experts as the next major input modality. To address this gap, this project develops a prototype hand gesture recognition system based on computer vision for modeling basic human-computer interactions. The main motivation of this work is a technology that requires no outfitting of additional equipment whatsoever by the users. The gesture-based hand recognition system was implemented using the Rapid Application Development (RAD) methodology and was evaluated in terms of its usability and performance through five levels of testing: unit testing, integration testing, system testing, recognition accuracy testing, and user acceptance testing. The unit, integration, system, and user acceptance tests all produced favorable results. In conclusion, conventional input devices will continue to bottleneck this advancement in technology; a better alternative input technique should therefore be explored, in particular gesture-based input, which offers users more natural and intuitive control.
Implicit 3D Orientation Learning for 6D Object Detection from RGB Images
We propose a real-time RGB-based pipeline for object detection and 6D pose
estimation. Our novel 3D orientation estimation is based on a variant of the
Denoising Autoencoder that is trained on simulated views of a 3D model using
Domain Randomization. This so-called Augmented Autoencoder has several
advantages over existing methods: It does not require real, pose-annotated
training data, generalizes to various test sensors and inherently handles
object and view symmetries. Instead of learning an explicit mapping from input
images to object poses, it provides an implicit representation of object
orientations defined by samples in a latent space. Our pipeline achieves
state-of-the-art performance on the T-LESS dataset both in the RGB and RGB-D
domain. We also evaluate on the LineMOD dataset where we can compete with other
synthetically trained approaches. We further increase performance by correcting
3D orientation estimates to account for perspective errors when the object
deviates from the image center, and show extended results. Code available at: https://github.com/DLR-RM/AugmentedAutoencode
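The implicit orientation representation described in this abstract amounts to a codebook lookup: synthetic views of the 3D model are encoded into latent vectors, and at test time the detected crop's latent code is matched to its nearest neighbor by cosine similarity. The sketch below illustrates only that lookup step, with numpy and a placeholder encoder; the function names are assumptions, and the actual pipeline trains a convolutional Augmented Autoencoder with Domain Randomization:

```python
import numpy as np

def build_codebook(encoder, rendered_views, rotations):
    """Encode synthetic views of a 3D model into a unit-normalized latent codebook."""
    codes = np.stack([encoder(view) for view in rendered_views])
    codes /= np.linalg.norm(codes, axis=1, keepdims=True)
    return codes, rotations

def estimate_orientation(encoder, image_crop, codes, rotations):
    """Return the codebook rotation whose latent code is most cosine-similar to the crop's."""
    z = encoder(image_crop)
    z = z / np.linalg.norm(z)
    similarities = codes @ z          # cosine similarity against every codebook entry
    return rotations[int(np.argmax(similarities))]
```

Because the codebook stores sampled orientations rather than a learned image-to-pose regression, symmetric objects simply map to several equally similar codebook entries instead of forcing an ambiguous single output.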
Towards a framework for investigating tangible environments for learning
External representations have been shown to play a key role in mediating cognition. Tangible environments offer the opportunity for novel representational formats and combinations, potentially increasing representational power for supporting learning. However, we currently know little about the specific learning benefits of tangible environments, and have no established framework within which to analyse the ways that external representations work in tangible environments to support learning. Taking external representation as the central focus, this paper proposes a framework for investigating the effect of tangible technologies on interaction and cognition. Key artefact-action-representation relationships are identified and classified to form a structure for investigating the differential cognitive effects of these features. An example scenario from our current research is presented to illustrate how the framework can be used as a method for investigating the effectiveness of differential designs for supporting science learning.
Markerless assisted rehabilitation system
The project focuses on the use of modern technology to analyze human movement. This analysis is a useful aid for physicians in the rehabilitation of patients with limb injuries, and is more precise than simple visual observation of the patient. The proposed system allows markerless determination of deviations between the selected bones and joints and, as a result, does not require specialized and expensive equipment. The implemented application presents instructional animations of the exercises and verifies the correctness of their performance in real time. Equipment that meets the requirements of the project is the Microsoft Kinect, which is nowadays widely used in the medical field.
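The core measurement in such a markerless system is the angle formed at a joint by its two adjacent bones, compared against the reference posture of the instructional animation. A minimal numpy sketch of that check is shown below; it assumes 3D joint coordinates are already available from the skeleton tracker (the Kinect SDK provides these), and the function names and tolerance value are illustrative, not the project's implementation:

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle in degrees at `joint` between the bone vectors joint->parent and joint->child."""
    u = np.asarray(parent, float) - np.asarray(joint, float)
    v = np.asarray(child, float) - np.asarray(joint, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def deviation(measured_deg, reference_deg, tolerance_deg=10.0):
    """Deviation from the reference angle, and whether it exceeds the tolerance."""
    delta = abs(measured_deg - reference_deg)
    return delta, delta > tolerance_deg
```

Running this per frame against the reference angles of the current exercise step yields the real-time correctness feedback the abstract describes.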
Agrootics. A semiotic cubic model description for meaning interpretation
Semiotics has meaning models that constitute forms of observation of reality's phenomenology. From the current perspective of human reasoning, those models are insufficient interpreters of reality for society and for the technology that accompanies it. The meaning models of analysis in semiotics can be summarized as follows: Saussure's dichotomy (a binary model), the trichotomies of Peirce, Ogden-Richards and Morris (a triadic model), and Greimas' square (a tetradic model). As we inhabit a three-dimensional reality, we assume that everything can be measured and observed in terms of relative distance and extension, whether an emotion, a phenomenon, a social medium or an object. Thus, we propose an alternative approach to meaning production and interpretation through a conceptual cubic model rooted in Peirce's trichotomy. This cubic perspective, represented by the development of a perception emulator in the form of a cube, will be grounded through sensibility to notions of social and physical space.
A Framework for Psychophysiological Classification within a Cultural Heritage Context Using Interest
This article presents a psychophysiological construct of interest as a knowledge emotion and illustrates the importance of interest detection in a cultural heritage context. The objective of this work is to measure and classify psychophysiological reactivity in response to cultural heritage material presented in visual and audio form. We present a data processing and classification framework for the classification of interest. Two studies are reported, adopting a subject-dependent approach to classify psychophysiological signals using mobile physiological sensors and the support vector machine (SVM) learning algorithm. The results show that it is possible to reliably infer a state of interest from cultural heritage material using psychophysiological feature data and a machine learning approach, informing future work on the development of a real-time physiological computing system for use within an adaptive cultural heritage experience designed to adapt the provision of information to sustain the interest of the visitor.
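A subject-dependent protocol, as named in the abstract, fits and evaluates one classifier per participant rather than pooling subjects. The sketch below illustrates that shape with scikit-learn's SVC; the feature layout, function name, and the use of training accuracy in place of a proper cross-validated score are all simplifying assumptions, not the article's pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_interest_per_subject(features_by_subject, labels_by_subject):
    """Train one SVM per subject (subject-dependent protocol).

    features_by_subject: dict subject_id -> (n_trials, n_features) array of
                         psychophysiological features (e.g. heart rate, EDA statistics)
    labels_by_subject:   dict subject_id -> (n_trials,) array of 0/1 interest labels
    Returns dict subject_id -> accuracy on that subject's own trials.
    """
    scores = {}
    for subject_id, X in features_by_subject.items():
        y = labels_by_subject[subject_id]
        # Per-subject scaling matters: baseline physiology differs between people.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, y)
        scores[subject_id] = clf.score(X, y)
    return scores
```

In practice each subject's score would come from within-subject cross-validation over trials, since the point of the subject-dependent design is that the model never needs to generalize across people.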
- …