993 research outputs found
PlaceRaider: Virtual Theft in Physical Spaces with Smartphones
As smartphones become more pervasive, they are increasingly targeted by
malware. At the same time, each new generation of smartphone features
increasingly powerful onboard sensor suites. A new strain of sensor malware has
been developing that leverages these sensors to steal information from the
physical environment (e.g., researchers have recently demonstrated how malware
can listen for spoken credit card numbers through the microphone, or feel
keystroke vibrations using the accelerometer). Yet the possibilities of what
malware can see through a camera have been understudied. This paper introduces
a novel visual malware called PlaceRaider, which allows remote attackers to
engage in remote reconnaissance and what we call virtual theft. Through
completely opportunistic use of the camera on the phone and other sensors,
PlaceRaider constructs rich, three-dimensional models of indoor environments.
Remote burglars can thus download the physical space, study the environment
carefully, and steal virtual objects from the environment (such as financial
documents, information on computer monitors, and personally identifiable
information). Through two human subject studies we demonstrate the
effectiveness of using mobile devices as powerful surveillance and virtual
theft platforms, and we suggest several possible defenses against visual
malware.
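The abstract does not give implementation details, but opportunistic capture pipelines of this kind typically discard uninformative frames (blurred or low-texture images) before attempting 3D reconstruction. The sketch below is a toy version of such a filter; the function names, scoring heuristic, and threshold are our own illustrative assumptions, not PlaceRaider's actual method:

```python
def sharpness(frame):
    """Crude sharpness proxy: variance of horizontal pixel differences
    across a 2D grayscale frame (list of rows of intensity values)."""
    diffs = [row[i + 1] - row[i] for row in frame for i in range(len(row) - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def keep_frame(frame, threshold=50.0):
    """Keep only frames sharp enough to be useful for reconstruction.
    The threshold is illustrative, not a value from the paper."""
    return sharpness(frame) >= threshold

# High-contrast (sharp) vs. nearly uniform (blurry) toy frames:
sharp_frame = [[0, 255, 0, 255], [255, 0, 255, 0]]
blurry_frame = [[120, 122, 121, 123], [121, 120, 122, 121]]
```

In practice such a score (or a Laplacian-variance measure) would be computed on real camera frames to cut upload volume; the defenses the paper suggests aim to disrupt exactly this kind of silent, selective capture.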
Comparison and Characterization of Android-Based Fall Detection Systems
Falls are a foremost source of injuries and hospitalization for seniors.
The adoption of automatic fall detection mechanisms can noticeably reduce the response
time of the medical staff or caregivers when a fall takes place. Smartphones are being
increasingly proposed as wearable, cost-effective and non-intrusive systems for fall detection.
The exploitation of smartphones' potential (and in particular, the Android Operating System)
can benefit from the widespread adoption, the growing computational capabilities and the
diversity of communication interfaces and embedded sensors of these personal devices.
After reviewing the state of the art on this matter, this study develops an experimental
testbed to assess the performance of different fall detection algorithms that ground their
decisions on the analysis of the inertial data registered by the accelerometer of the
smartphone. Results obtained in a real testbed with diverse individuals indicate that the
accuracy of accelerometry-based techniques in identifying falls depends strongly on
the fall pattern. The performed tests also show the difficulty of setting detection acceleration
thresholds that achieve a good trade-off between false negatives (falls that remain
unnoticed) and false positives (conventional movements that are erroneously classified as
falls). In any case, the study of the evolution of the battery drain reveals that the extra
power consumption introduced by the Android monitoring applications cannot be neglected
when evaluating the autonomy and even the viability of fall detection systems.
Ministerio de Economía y Competitividad TEC2009-13763-C02-0
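To make the threshold trade-off described above concrete, here is a minimal sketch of the kind of rule the compared detectors build on: flag a fall when a free-fall dip in acceleration magnitude is followed shortly by an impact peak. All names, thresholds, and the window size are illustrative assumptions, not values from the study:

```python
import math

def detect_fall(samples, free_fall=0.6 * 9.81, impact=2.5 * 9.81, window=5):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2.
    Returns True when a free-fall dip (magnitude below `free_fall`) is
    followed by an impact peak (above `impact`) within `window` samples."""
    mags = [math.sqrt(ax ** 2 + ay ** 2 + az ** 2) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall and any(p > impact for p in mags[i + 1:i + 1 + window]):
            return True
    return False
```

Raising `impact` suppresses false positives from abrupt everyday movements but misses soft falls (false negatives); lowering it does the reverse, which is exactly the trade-off the study measures.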
A Light Weight Smartphone Based Human Activity Recognition System with High Accuracy
With the pervasive use of smartphones, which contain numerous sensors, data for modeling human activity is readily available. Human activity recognition is an important area of research because it can be used in context-aware applications. It has significant influence in many other research areas and applications including healthcare, assisted living, personal fitness, and entertainment. Machine learning techniques have been widely used in wearable and smartphone-based human activity recognition. Despite being an active area of research for more than a decade, most existing approaches require extensive computation to extract features, train models, and recognize activities. This study presents a computationally efficient smartphone-based human activity recognizer based on dynamical systems and chaos theory. A reconstructed phase space is formed from the accelerometer sensor data using time-delay embedding. A single accelerometer axis is used to reduce memory and computational complexity. A Gaussian mixture model is learned on the reconstructed phase space. A maximum likelihood classifier uses the Gaussian mixture model to classify ten different human activities and a baseline. One public and one collected dataset were used to validate the proposed approach. Data was collected from ten subjects; the public dataset contains data from 30 subjects. Out-of-sample experimental results show that the proposed approach is able to recognize human activities from a smartphone's one-axis raw accelerometer data. The proposed approach achieved 100% accuracy for individual models across all activities and datasets. It requires 3 to 7 times less data than existing approaches to classify activities, and 3 to 4 times less time to build the reconstructed phase space compared to time- and frequency-domain features.
A comparative evaluation is also presented, comparing the proposed approach with state-of-the-art works.
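As a concrete illustration of the time-delay embedding step described in the abstract, the sketch below builds a reconstructed phase space from a single accelerometer axis. The dimension and delay values are illustrative assumptions, not the paper's parameters:

```python
def reconstruct_phase_space(signal, dim=3, delay=2):
    """Time-delay embedding of a 1D signal: each phase-space point is
    (x[t], x[t + delay], ..., x[t + (dim - 1) * delay])."""
    n = len(signal) - (dim - 1) * delay
    return [tuple(signal[t + k * delay] for k in range(dim)) for t in range(n)]

# A one-axis accelerometer trace would be embedded like this:
points = reconstruct_phase_space([0, 1, 2, 3, 4, 5, 6, 7], dim=3, delay=2)
# Each tuple is one point in the reconstructed phase space; a Gaussian
# mixture model would then be fit to these points per activity, and a
# maximum likelihood classifier would pick the best-scoring activity.
```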
VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
We present the first real-time method to capture the full global 3D skeletal
pose of a human in a stable, temporally consistent manner using a single RGB
camera. Our method combines a new convolutional neural network (CNN) based pose
regressor with kinematic skeleton fitting. Our novel fully-convolutional pose
formulation regresses 2D and 3D joint positions jointly in real time and does
not require tightly cropped input frames. A real-time kinematic skeleton
fitting method uses the CNN output to yield temporally stable 3D global pose
reconstructions on the basis of a coherent kinematic skeleton. This makes our
approach the first monocular RGB method usable in real-time applications such
as 3D character control---thus far, the only monocular methods for such
applications employed specialized RGB-D cameras. Our method's accuracy is
quantitatively on par with the best offline 3D monocular RGB pose estimation
methods. Our results are qualitatively comparable to, and sometimes better
than, results from monocular RGB-D approaches, such as the Kinect. However, we
show that our approach is more broadly applicable than RGB-D solutions, i.e. it
works for outdoor scenes, community videos, and low quality commodity RGB
cameras.
Comment: Accepted to SIGGRAPH 2017
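VNect's kinematic skeleton fitting enforces temporal stability with a coherent bone model; as a much simpler stand-in for that idea, the sketch below applies exponential smoothing to per-frame 3D joint estimates. This filter is our own illustrative example, not the paper's fitting method:

```python
def smooth_pose(poses, alpha=0.5):
    """Exponential moving average over per-frame 3D joint positions.
    poses: list of frames, each a list of (x, y, z) joint tuples.
    alpha: weight of the current frame (illustrative default)."""
    smoothed = [poses[0]]
    for frame in poses[1:]:
        prev = smoothed[-1]
        smoothed.append([
            tuple(alpha * c + (1 - alpha) * p for c, p in zip(joint, prev_joint))
            for joint, prev_joint in zip(frame, prev)
        ])
    return smoothed
```

A full kinematic fit would additionally constrain bone lengths and joint angles, which is what lets VNect stay stable without the jitter a per-frame CNN regressor alone would produce.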
A Novel Approach to Complex Human Activity Recognition
Human activity recognition is a technology that offers automatic recognition of what a person is doing with respect to body motion and function. The main goal is to recognize a person's activity using different technologies such as cameras, motion sensors, location sensors, and time. Human activity recognition is important in many areas such as pervasive computing, artificial intelligence, human-computer interaction, health care, health outcomes, rehabilitation engineering, occupational science, and social sciences. There are numerous ubiquitous and pervasive computing systems where users' activities play an important role. Human activity carries a lot of information about the context and helps systems achieve context-awareness. In the rehabilitation area, it helps with functional diagnosis and assessing health outcomes. Human activity recognition is an important indicator of participation, quality of life, and lifestyle. There are two classes of human activities based on body motion and function. The first class, simple human activity, involves human body motion and posture, such as walking, running, and sitting. The second class, complex human activity, includes function along with simple human activity, such as cooking, reading, and watching TV. Human activity recognition is an interdisciplinary research area that has been active for more than a decade. Substantial research has been conducted to recognize human activities, but many major issues still need to be addressed. Addressing these issues would significantly improve the applications of human activity recognition in different areas. Considerable research has been conducted on simple human activity recognition, whereas little research has been carried out on complex human activity recognition.
However, there are many key aspects (recognition accuracy, computational cost, energy consumption, mobility) that need to be addressed in both areas to improve their viability. This dissertation aims to address these key aspects in both areas of human activity recognition, eventually focusing on recognition of complex activity. It also addresses indoor and outdoor localization, an important parameter along with time in complex activity recognition. This work uses accelerometer sensor data to recognize simple human activity, and uses time, location, and simple activity to recognize complex activity.
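The fusion of simple activity, location, and time into a complex-activity label can be sketched as a small context rule table. The activities, places, and hour ranges below are illustrative assumptions, not the dissertation's actual model:

```python
def infer_complex_activity(simple_activity, location, hour):
    """Toy context-fusion rule: a complex activity is a simple activity
    plus where and when it happens (hour in 0-23)."""
    if location == "kitchen" and simple_activity == "standing" and 17 <= hour <= 20:
        return "cooking"
    if location == "living_room" and simple_activity == "sitting" and hour >= 19:
        return "watching_tv"
    return "unknown"

# Same body motion, different context, different complex activity:
infer_complex_activity("standing", "kitchen", 18)      # -> "cooking"
infer_complex_activity("sitting", "living_room", 21)   # -> "watching_tv"
```

A learned model would replace these hand-written rules, but the inputs are the same: the simple-activity classifier's output plus the localization and time context the dissertation highlights.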
ZOE: A cloud-less dialog-enabled continuous sensing wearable exploiting heterogeneous computation
The wearable revolution, as a mass-market phenomenon, has finally
arrived. As a result, the question of how wearables should evolve
over the next 5 to 10 years is assuming an increasing level of societal
and commercial importance. A range of open design and
system questions are emerging, for instance: How can wearables
shift from being largely health and fitness focused to tracking a
wider range of life events? What will become the dominant methods
through which users interact with wearables and consume the
data collected? Are wearables destined to be cloud and/or smartphone
dependent for their operation?
Towards building the critical mass of understanding and experience
necessary to tackle such questions, we have designed and
implemented ZOE – a match-box sized (49g) collar- or lapel-worn
sensor that pushes the boundary of wearables in an important set of
new directions. First, ZOE aims to perform multiple deep sensor
inferences that span key aspects of everyday life (viz. personal, social
and place information) on continuously sensed data; while also
offering this data not only within conventional analytics but also
through a speech dialog system that is able to answer impromptu
casual questions from users. (Am I more stressed this week than
normal?) Crucially, and unlike other rich-sensing or dialog supporting
wearables, ZOE achieves this without cloud or smartphone
support – this has important side-effects for privacy since all user
information can remain on the device. Second, ZOE incorporates
the latest innovations in system-on-a-chip technology together with
a custom daughter-board to realize a three-tier low-power processor
hierarchy. We pair this hardware design with software techniques
that manage system latency while still allowing ZOE to remain energy
efficient (with a typical lifespan of 30 hours), despite its high
sensing workload, small form-factor, and need to remain responsive to user dialog requests.
This work was supported by Microsoft Research through its PhD
Scholarship Program. We would also like to thank the anonymous
reviewers and our shepherd, Jeremy Gummeson, for helping us improve
the paper. This is the author accepted manuscript. The final version is available from ACM at http://dl.acm.org/citation.cfm?doid=2742647.2742672