A framework for mining lifestyle profiles through multi-dimensional and high-order mobility feature clustering
Human mobility demonstrates a high degree of regularity, which facilitates
the discovery of lifestyle profiles. Existing research has yet to fully utilize
the regularities embedded in high-order features extracted from human mobility
records in such profiling. This study proposes a progressive feature extraction
strategy that mines high-order mobility features from users' moving trajectory
records from the spatial, temporal, and semantic dimensions. For these three
dimensions respectively, the extracted features are travel motifs, rhythms
decomposed by discrete Fourier transform (DFT) of mobility time series, and
place semantics vectorized by word2vec; these features are then clustered
to reveal the users' lifestyle characteristics. An experiment using a
trajectory dataset of over 500k users in Shenzhen, China, yields seven user
clusters with different lifestyle profiles that can be well interpreted by
common sense. The results suggest the possibility of fine-grained user
profiling through cross-order trajectory feature engineering and clustering.
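The temporal dimension above relies on DFT-decomposed rhythms of mobility time series. A minimal sketch of that idea, assuming an hourly visit-count series per user (the function name and feature choice are illustrative, not the paper's exact pipeline):

```python
import numpy as np

def mobility_rhythm(hourly_counts, top_k=3):
    """Extract dominant periodic components (rhythms) from an hourly
    mobility time series via the discrete Fourier transform (DFT).

    Returns the periods (in hours) of the top_k strongest non-DC
    frequency components, e.g. 24 h for a daily commuting rhythm.
    """
    x = np.asarray(hourly_counts, dtype=float)
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0)  # cycles per hour
    order = np.argsort(spectrum[1:])[::-1] + 1  # rank bins, skip index 0 (DC)
    return [1.0 / freqs[i] for i in order[:top_k]]

# Synthetic two-week series with a strong 24-hour cycle:
hours = np.arange(24 * 14)
series = 5 + 3 * np.sin(2 * np.pi * hours / 24)
periods = mobility_rhythm(series, top_k=1)  # dominant period ≈ 24 hours
```

In the paper's framework, rhythm features like these would be combined with spatial motifs and word2vec place vectors before clustering users.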
Profiling Users by Modeling Web Transactions
Users of electronic devices, e.g., laptops and smartphones, have
characteristic behaviors while surfing the Web. Profiling this behavior can
help identify the person using a given device. In this paper, we introduce a
technique to profile users based on their web transactions. We compute several
features extracted from a sequence of web transactions and use them with
one-class classification techniques to profile a user. We assess the efficacy
and speed of our method at differentiating 25 users on a dataset representing 6
months of web traffic monitoring from a small company network.
Comment: Extended technical report of an IEEE ICDCS 2017 publication.
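A minimal sketch of the one-class profiling idea: train a model on one user's transaction features only, then flag deviating sessions. The feature set and parameters here are illustrative assumptions, not the paper's, and one-class SVM stands in for whichever one-class technique the authors used:

```python
# Hedged sketch: per-user one-class profiling on hand-crafted features
# derived from web-transaction sequences (feature names are illustrative).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Each row: [mean inter-request time, mean response size, distinct hosts]
user_a = rng.normal(loc=[2.0, 40.0, 5.0], scale=[0.3, 5.0, 1.0], size=(200, 3))
user_b = rng.normal(loc=[10.0, 5.0, 30.0], scale=[0.5, 2.0, 2.0], size=(50, 3))

# Fit the profile on user A's traffic only; nu bounds the training-outlier rate
profile = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(user_a)

# +1 = consistent with the profiled user, -1 = anomalous (likely another user)
own_rate = (profile.predict(user_a) == 1).mean()
other_rate = (profile.predict(user_b) == -1).mean()
```

The one-class setup matters here because, as in the paper's scenario, only the legitimate user's traffic is available at training time.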
Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach
Many algorithms in workflow scheduling and resource provisioning rely on the
performance estimation of tasks to produce a scheduling plan. A profiler that
is capable of modeling the execution of tasks and predicting their runtime
accurately, therefore, becomes an essential part of any Workflow Management
System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS)
platforms that use clouds for deploying scientific workflows, task runtime
prediction becomes more challenging because it requires the processing of a
significant amount of data in a near real-time scenario while dealing with the
performance variability of cloud resources. Hence, relying on methods such as
profiling tasks' execution data using basic statistical description (e.g.,
mean, standard deviation) or batch offline regression techniques to estimate
the runtime may not be suitable for such environments. In this paper, we
propose an online incremental learning approach to predict the runtime of tasks
in scientific workflows in clouds. To improve the performance of the
predictions, we harness fine-grained resource monitoring data in the form of
time-series records of CPU utilization, memory usage, and I/O activities that
reflect the unique characteristics of a task's execution. We compare our
solution to a state-of-the-art approach that exploits the resource monitoring
data using a batch regression machine learning technique. In our experiments,
the proposed strategy reduces the prediction error by up to 29.89% compared
to the state-of-the-art solution.
Comment: Accepted for presentation at the main conference track of the 11th
IEEE/ACM International Conference on Utility and Cloud Computing.
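The core contrast with batch offline regression is that an online model is updated one task record at a time as data streams in. A minimal sketch of that pattern, assuming simulated task records with illustrative features (not the paper's model or feature set):

```python
# Hedged sketch: online incremental runtime prediction via partial_fit,
# updating a linear model per arriving task record instead of batch retraining.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(42)
model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)

errors = []
for step in range(2000):
    # Simulated task record: [input size (GB), CPU utilization, memory (GB)]
    x = rng.uniform(0, 1, size=(1, 3))
    runtime = 3.0 * x[0, 0] + 1.5 * x[0, 1] + 0.5 * x[0, 2] + rng.normal(0, 0.01)
    if step > 0:
        # Predict before updating, mimicking a scheduler querying the profiler
        errors.append(abs(model.predict(x)[0] - runtime))
    model.partial_fit(x, [runtime])  # incremental update on one record

early = np.mean(errors[:100])   # error over the first predictions
late = np.mean(errors[-100:])   # error after seeing ~2000 records
```

The prediction error shrinks as records stream in, which is the property that makes the approach suitable for near real-time WaaS platforms.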
FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
Deep neural networks show great potential as solutions to many sensing
application problems, but their excessive resource demand slows down execution
time, posing a serious impediment to deployment on low-end devices. To address
this challenge, recent literature focused on compressing neural network size to
improve performance. We show that changing neural network size does not
proportionally affect performance attributes of interest, such as execution
time. Rather, extreme run-time nonlinearities exist over the network
configuration space. Hence, we propose a novel framework, called FastDeepIoT,
that uncovers the non-linear relation between neural network structure and
execution time, then exploits that understanding to find network configurations
that significantly improve the trade-off between execution time and accuracy on
mobile and embedded devices. FastDeepIoT makes two key contributions. First,
FastDeepIoT automatically learns an accurate and highly interpretable execution
time model for deep neural networks on the target device. This is done without
prior knowledge of either the hardware specifications or the detailed
implementation of the used deep learning library. Second, FastDeepIoT informs a
compression algorithm how to minimize execution time on the profiled device
without impacting accuracy. We evaluate FastDeepIoT using three different
sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus.
FastDeepIoT further reduces the neural network execution time and energy
consumption compared with the state-of-the-art compression algorithms.
Comment: Accepted by SenSys '18.
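FastDeepIoT's first contribution, learning an execution-time model purely from measured (configuration, time) pairs, can be sketched as follows. This is a deliberately simplified stand-in: a single least-squares model over hypothetical configuration features and made-up timing measurements, whereas the paper learns a richer, device-specific model:

```python
# Hedged sketch: fit an interpretable execution-time model from profiled
# measurements, with no knowledge of hardware specs or library internals.
import numpy as np

# Hypothetical profiles: columns = [total MACs (1e6), num layers, peak activation MB]
configs = np.array([
    [10.0,  4,  1.0],
    [50.0,  8,  4.0],
    [120.0, 12, 8.0],
    [200.0, 16, 16.0],
    [300.0, 20, 24.0],
])
# Hypothetical measured execution times (ms) on the target device
times = np.array([5.1, 18.7, 41.5, 70.2, 103.9])

# Least-squares fit with an intercept: time ≈ w · features + b
X = np.hstack([configs, np.ones((len(configs), 1))])
coef, *_ = np.linalg.lstsq(X, times, rcond=None)

def predict_ms(macs_m, layers, act_mb):
    """Predict execution time for an unprofiled network configuration."""
    return coef @ [macs_m, layers, act_mb, 1.0]

residuals = X @ coef - times       # fit quality on the profiled configs
pred = predict_ms(150.0, 14, 10.0) # query an unseen configuration
```

Because the fitted coefficients directly expose which configuration factor dominates execution time, a compression algorithm can use such a model to pick configurations that cut latency without hurting accuracy, which is the paper's second contribution.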