Intimate interfaces in action: assessing the usability and subtlety of EMG-based motionless gestures
Mobile communication devices, such as mobile phones and networked personal digital assistants (PDAs), allow users to be constantly connected and to communicate anywhere and at any time, often resulting in personal and private communication taking place in public spaces. This private/public contrast can be problematic. As a remedy, we promote intimate interfaces: interfaces that allow subtle and minimal mobile interaction without disrupting the surrounding environment. In particular, motionless gestures sensed through the electromyographic (EMG) signal have been proposed as a way to allow subtle input in a mobile context. In this paper we present an expansion of the work on EMG-based motionless gestures, including (1) a novel study of their usability in a mobile context for controlling a realistic, multimodal interface and (2) a formal assessment of how noticeable they are to informed observers. Experimental results confirm that subtle gestures can be used profitably within a multimodal interface and that it is difficult for observers to guess when someone is performing a gesture, confirming the hypothesis of subtlety.
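The motionless gestures described above are isometric muscle contractions, which are commonly detected by thresholding the amplitude of the EMG signal. A minimal sketch of such a detector is shown below; the window size, threshold, and synthetic trace are illustrative assumptions, not values from the paper.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_gesture(samples, window=50, threshold=0.3):
    """Return indices of windows where an isometric contraction
    exceeds the activation threshold (illustrative parameters)."""
    hits = []
    for i in range(0, len(samples) - window + 1, window):
        if rms(samples[i:i + window]) > threshold:
            hits.append(i // window)
    return hits

# Synthetic trace: low-amplitude rest, a brief contraction burst, rest.
rest = [0.01 * ((-1) ** n) for n in range(200)]
burst = [0.8 * ((-1) ** n) for n in range(100)]
trace = rest + burst + rest
print(detect_gesture(trace))  # → [4, 5]
```

In practice the threshold would be calibrated per user, since resting EMG amplitude varies between people and electrode placements.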
MOSDEN: An Internet of Things Middleware for Resource Constrained Mobile Devices
The Internet of Things (IoT) is part of the Future Internet and will comprise many billions of Internet Connected Objects (ICOs) or 'things' that can sense, communicate, compute and potentially actuate, as well as have intelligence, multi-modal interfaces, and physical/virtual identities and attributes. Collecting data from these objects is an important task, as it allows software systems to understand the environment better. Many different hardware devices may be involved in collecting and uploading sensor data to the cloud, where complex processing can occur. Further, we cannot expect all of these objects to be connected to computers, for technical and economic reasons; we should therefore be able to use resource-constrained devices to collect data from these ICOs. On the other hand, it is critical to process the collected sensor data before sending it to the cloud, to ensure the sustainability of the infrastructure under its energy constraints. This requires moving sensor data processing tasks towards resource-constrained computational devices (e.g. mobile phones). In this paper, we propose the Mobile Sensor Data Processing Engine (MOSDEN), a plug-in-based IoT middleware for mobile devices that allows sensor data to be collected and processed without programming effort. Our architecture also supports the sensing-as-a-service model. We present evaluation results that demonstrate its suitability for real-world deployments. Our proposed middleware is built on the Android platform.
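The core idea of a plug-in-based middleware like the one described above is that sensor sources are registered as interchangeable plugins and their readings are filtered on-device before any cloud upload. The sketch below illustrates that pattern only; the class and method names are illustrative assumptions, not the actual MOSDEN API.

```python
class SensorPlugin:
    """Base class for a data-collection plugin (illustrative name,
    not the real MOSDEN interface)."""
    def read(self):
        raise NotImplementedError

class FakeTempSensor(SensorPlugin):
    """Stand-in for a real hardware sensor plugin."""
    def read(self):
        return {"type": "temperature", "value": 22.5}

class Engine:
    """Registers plugins, collects readings, and pre-processes them
    on-device (here: a simple keep/drop filter) before upload."""
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def collect(self, keep=lambda r: True):
        return [r for p in self.plugins for r in [p.read()] if keep(r)]

engine = Engine()
engine.register(FakeTempSensor())
batch = engine.collect(keep=lambda r: r["value"] > 0)
print(batch)  # → [{'type': 'temperature', 'value': 22.5}]
```

On-device filtering like this is what lets the middleware reduce upload volume and energy cost before data ever reaches the cloud.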
Real-time food intake classification and energy expenditure estimation on a mobile device
© 2015 IEEE. Assessment of food intake has a wide range of applications in public health and in the management of lifestyle-related chronic disease. In this paper, we propose a real-time food recognition platform combined with daily activity and energy expenditure estimation. In the proposed method, food recognition is based on hierarchical classification using multiple visual cues, supported by an efficient software implementation suitable for real-time execution on a mobile device. A Fisher Vector representation together with a set of linear classifiers is used to categorize food intake. Daily energy expenditure estimation is achieved using the built-in inertial motion sensors of the mobile device. The performance of the vision-based food recognition algorithm is compared to the current state of the art, showing improved accuracy and the high computational efficiency needed for real-time feedback. Detailed user studies have also been performed to demonstrate the practical value of the software environment.
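Hierarchical classification with linear classifiers, as mentioned above, typically scores a coarse category first and then a fine class within it. The sketch below shows only that two-stage dispatch; the hand-set weight matrices stand in for trained classifiers over Fisher Vector features, which are not implemented here.

```python
import numpy as np

def linear_scores(W, b, x):
    """One-vs-rest linear classifier scores: W @ x + b."""
    return W @ x + b

# Stage 1: coarse group (e.g. food vs. non-food); stage 2: fine class.
# These weights are illustrative, not learned from data.
W1, b1 = np.array([[1.0, 0.0], [-1.0, 0.0]]), np.zeros(2)
W2 = {0: (np.array([[0.0, 1.0], [0.0, -1.0]]), np.zeros(2))}

def classify(x):
    """Pick the coarse label, then refine it with that group's
    own linear classifier if one exists."""
    coarse = int(np.argmax(linear_scores(W1, b1, x)))
    if coarse in W2:
        W, b = W2[coarse]
        fine = int(np.argmax(linear_scores(W, b, x)))
        return coarse, fine
    return coarse, None

print(classify(np.array([1.0, -0.5])))  # → (0, 1)
```

The appeal for mobile deployment is that each stage is just a matrix-vector product, which is cheap enough for real-time feedback on a phone.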
Anti-social behavior detection in audio-visual surveillance systems
In this paper we propose a general-purpose framework for the detection of unusual events. The proposed system is based on the unsupervised method for unusual scene detection in webcam images introduced in [1]. We extend their algorithm to accommodate data from different modalities and introduce the concept of time-space blocks. In addition, we evaluate early and late fusion techniques for our audio-visual data features. Experimental results on 192 hours of data show that fusing audio and video data outperforms using a single modality.
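The difference between the early and late fusion evaluated above is where the modalities are combined: early fusion concatenates features before a single model scores them, while late fusion scores each modality separately and combines the decisions. A toy sketch, with made-up scoring functions and feature values (none of which come from the paper):

```python
import numpy as np

# Illustrative per-modality scorers for the unusual-event class.
def audio_score(feat):
    return float(np.clip(feat.max(), 0, 1))

def video_score(feat):
    return float(np.clip(feat.mean(), 0, 1))

audio_feat = np.array([0.9, 0.7])
video_feat = np.array([0.2, 0.4])

# Early fusion: concatenate features, score the joint vector once.
joint = np.concatenate([audio_feat, video_feat])
early = float(np.clip(joint.mean(), 0, 1))

# Late fusion: score each modality, then average the two decisions.
late = 0.5 * audio_score(audio_feat) + 0.5 * video_score(video_feat)

print(early, late)  # → 0.55 0.6
```

Late fusion makes it easy to reweight or drop a modality (useful when one sensor stream is noisy), whereas early fusion lets the model exploit cross-modal correlations in the joint feature space.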
The Fog Makes Sense: Enabling Social Sensing Services With Limited Internet Connectivity
Social sensing services use humans as sensor carriers, sensor operators and sensors themselves in order to provide situation awareness to applications. This promises a multitude of benefits to users, for example in the management of natural disasters or in community empowerment. However, current social sensing services depend on Internet connectivity, since the services are deployed on central Cloud platforms. In many circumstances, Internet connectivity is constrained, for instance when a natural disaster causes Internet outages or when people lack Internet access for economic reasons. In this paper, we propose the emerging Fog Computing infrastructure as a key enabler of social sensing services in situations of constrained Internet connectivity. To this end, we develop a generic architecture and API for Fog-enabled social sensing services. We exemplify the usage of the proposed social sensing architecture on a number of concrete use cases from two different scenarios.

Comment: Ruben Mayer, Harshit Gupta, Enrique Saurez, and Umakishore Ramachandran. 2017. The Fog Makes Sense: Enabling Social Sensing Services With Limited Internet Connectivity. In Proceedings of the 2nd International Workshop on Social Sensing (SocialSens'17), Pittsburgh, PA, USA, April 21, 2017. 6 pages.
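The essential behaviour of a Fog-enabled social sensing service, as described above, is that a nearby Fog node keeps accepting citizen reports during an Internet outage and forwards them to the Cloud once the uplink returns. A minimal sketch of that store-and-forward pattern, with class and method names that are illustrative assumptions rather than the paper's actual API:

```python
class FogNode:
    """Sketch of a Fog node for social sensing: reports are buffered
    locally and only forwarded to the Cloud when the uplink is up.
    Names are illustrative, not the paper's API."""
    def __init__(self):
        self.buffer = []   # reports held locally during an outage
        self.cloud = []    # stand-in for the central Cloud platform
        self.online = False

    def report(self, observation):
        # Always accept reports locally, even without connectivity.
        self.buffer.append(observation)
        self.sync()

    def sync(self):
        # Flush the local buffer once connectivity returns.
        if self.online and self.buffer:
            self.cloud.extend(self.buffer)
            self.buffer.clear()

node = FogNode()
node.report("road blocked at 5th Ave")    # offline: stays on the node
node.online = True
node.report("shelter open at city hall")  # online: both reach the cloud
print(len(node.cloud))  # → 2
```

Because the Fog node sits close to the users, reports can also be served back to nearby clients during the outage, which is what makes the service useful precisely when the Cloud is unreachable.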
- …