A modular and interactive OLED-based lighting system
The concept of a flexible, large-area, organic light-emitting diode (OLED) based lighting system with a modular structure and built-in intelligent light management is introduced. Such a flexible, thin, portable lighting system with discreetly integrated electronics is important for deploying the lighting system in a variety of places, such as cars and temporary expedition areas. A modular construction of an OLED lighting panel makes it possible to control each OLED cell individually. This not only enables us to counteract aging or degradation effects in the OLED cells but also allows individual OLED module brightness control to support human or ambient interaction based on integrated or centralized sensors. Moreover, integrating the driving electronics into the backplane of an OLED module improves the energy efficiency of operating large OLED panels. The thin, modular construction and individual, dynamic control are successfully demonstrated.
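The per-cell compensation idea can be sketched as follows. This is a minimal, hypothetical illustration of driving an aged cell harder so its light output matches its neighbors; the decay model and numbers are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: per-cell brightness compensation in a modular OLED
# panel. The efficiency values and compensation law are illustrative
# assumptions, not measurements or methods from the paper.

def compensated_drive(target_luminance, cell_efficiency):
    """Raise the drive level of an aged cell so its output matches the target.

    cell_efficiency: fraction of original luminous efficacy remaining (0..1].
    """
    if not 0.0 < cell_efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return target_luminance / cell_efficiency

# A 2x2 module: three fresh cells and one aged to 80% efficiency.
efficiencies = [[1.0, 1.0], [1.0, 0.8]]
target = 300.0  # desired uniform panel luminance in cd/m^2 (assumed value)

drive = [[compensated_drive(target, e) for e in row] for row in efficiencies]
# The aged cell is driven to 375 so that every cell still emits 300 cd/m^2.
```

Because each cell is individually addressable in the modular design, the same per-cell control loop could also implement the sensor-driven brightness adaptation the abstract mentions.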
Personalizing gesture recognition using hierarchical bayesian neural networks
Building robust classifiers trained on data susceptible to group or subject-specific variations is a challenging pattern recognition problem. We develop hierarchical Bayesian neural networks to capture subject-specific variations and share statistical strength across subjects. Leveraging recent work on learning Bayesian neural networks, we build fast, scalable algorithms for inferring the posterior distribution over all network weights in the hierarchy. We also develop methods for adapting our model to new subjects when a small amount of subject-specific personalization data is available. Finally, we investigate active learning algorithms for interactively labeling personalization data in resource-constrained scenarios. Focusing on the problem of gesture recognition, where inter-subject variations are commonplace, we demonstrate the effectiveness of our proposed techniques. We test our framework on three widely used gesture recognition datasets, achieving personalization performance competitive with the state-of-the-art.
http://openaccess.thecvf.com/content_cvpr_2017/html/Joshi_Personalizing_Gesture_Recognition_CVPR_2017_paper.html
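The core hierarchical idea, "sharing statistical strength across subjects", can be illustrated with a toy conjugate-normal model: each subject's parameter is shrunk toward a shared group mean, with the amount of shrinkage depending on how much personalization data that subject has. This is a simplified sketch of the general principle, not the paper's variational inference over network weights; the variance values are assumptions.

```python
# Toy sketch of partial pooling in a normal hierarchy. Subjects with little
# data borrow strength from the group; well-observed subjects keep their own
# estimates. tau2 and sigma2 are assumed known here for simplicity.

def shrink(subject_means, n_per_subject, tau2=1.0, sigma2=4.0):
    """Posterior mean of each subject's parameter under a normal hierarchy.

    tau2: between-subject (prior) variance, sigma2: within-subject noise.
    """
    group_mean = sum(subject_means) / len(subject_means)
    posteriors = []
    for m, n in zip(subject_means, n_per_subject):
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # weight on the data
        posteriors.append(w * m + (1.0 - w) * group_mean)
    return posteriors

# Subject 0 has 100 observations and keeps an estimate near its own mean (0);
# subject 1 has a single observation and is pulled strongly toward the
# group mean of 5, landing at 6.0 instead of its raw mean of 10.
posts = shrink([0.0, 10.0], [100, 1])
```

The same trade-off drives personalization with few labeled samples: a new subject's parameters start near the population prior and move toward subject-specific values as personalization data accumulates.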
Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces
Graph exploration and editing are still mostly considered independently, and
existing systems are not designed for today's interactive surfaces such as
smartphones, tablets, or tabletops. When developing a system for these modern
devices that supports both graph exploration and graph editing, it is necessary
to 1) identify which basic tasks need to be supported, 2) determine which
interactions can be used, and 3) map these tasks to interactions. This technical report
provides a list of basic interaction tasks for graph exploration and editing as
a result of an extensive system review. Moreover, different interaction
modalities of interactive surfaces are reviewed according to their interaction
vocabulary and further degrees of freedom that can be used to make interactions
distinguishable are discussed. Beyond the scope of graph exploration and
editing, we provide an approach for finding and evaluating a mapping from tasks
to interactions that is generally applicable. Thus, this work acts as a
guideline for developing a system for graph exploration and editing that is
specifically designed for interactive surfaces.
Comment: 21 pages, minor corrections (typos etc.)
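A task-to-interaction mapping like the one the report describes can be represented as a simple lookup table, with a check that every task's interaction remains distinguishable from all others. The task and gesture names below are hypothetical placeholders, not the report's actual taxonomy.

```python
# Illustrative sketch: a mapping from graph tasks to (modality, gesture)
# pairs on a touch surface, plus a distinguishability check. All names are
# hypothetical examples, not the report's taxonomy.

mapping = {
    "select node": ("touch", "tap"),
    "pan view":    ("touch", "one-finger drag"),
    "zoom view":   ("touch", "pinch"),
    "create edge": ("pen",   "stroke between nodes"),
    "delete node": ("touch", "long press + flick"),
}

def conflicting_tasks(mapping):
    """Return groups of tasks whose interactions are not distinguishable."""
    seen = {}
    for task, interaction in mapping.items():
        seen.setdefault(interaction, []).append(task)
    return [tasks for tasks in seen.values() if len(tasks) > 1]

# An empty result means every task is bound to a unique interaction.
assert conflicting_tasks(mapping) == []
```

Extra degrees of freedom (e.g. finger count, pressure, or pen vs. touch) enlarge the interaction vocabulary, which is exactly what makes such conflict-free mappings feasible on interactive surfaces.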
An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display
We present a tele-immersive system that enables people to interact with each
other in a virtual world using body gestures in addition to verbal
communication. Beyond the obvious applications, including general online
conversations and gaming, we hypothesize that our proposed system would be
particularly beneficial to education by offering rich visual contents and
interactivity. One distinct feature is the integration of egocentric pose
recognition that allows participants to use their gestures to demonstrate and
manipulate virtual objects simultaneously. This functionality enables the
instructor to effectively and efficiently explain and illustrate complex
concepts or sophisticated problems in an intuitive manner. The highly
interactive and flexible environment can capture and sustain more student
attention than the traditional classroom setting and, thus, delivers a
compelling experience to the students. Our main focus here is to investigate
possible solutions for the system design and implementation and devise
strategies for fast, efficient computation suitable for visual data processing
and network transmission. We describe the technique and experiments in detail
and provide quantitative performance results, demonstrating that our system
runs smoothly and reliably across different application scenarios. Our
preliminary results are promising and demonstrate the potential for more
compelling directions in cyberlearning.
Comment: IEEE International Symposium on Multimedia 201
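One common strategy for the fast, transmission-friendly processing of RGB-D data that the abstract alludes to is spatial downsampling of the depth frame before sending it over the network. The sketch below is a generic illustration under assumed parameters (a stride of 4), not the paper's actual pipeline.

```python
# Hedged sketch: reduce an RGB-D depth frame before network transmission by
# keeping every `stride`-th pixel. The stride of 4 is an assumed example
# value, not a parameter from the paper.

def downsample(depth, stride=4):
    """Keep every `stride`-th pixel of a depth frame (a list of rows)."""
    return [row[::stride] for row in depth[::stride]]

# A 640x480 frame shrinks to 160x120: 16x fewer values to transmit.
frame = [[float(x + y) for x in range(640)] for y in range(480)]
small = downsample(frame)
```

In practice such subsampling is typically paired with compression and interpolation on the receiving side, trading some geometric detail for the low latency an interactive telepresence session requires.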