Surveying human habit modeling and mining techniques in smart spaces
A smart space is an environment, typically equipped with Internet-of-Things (IoT) technologies, that provides services to humans, helping them perform daily tasks by monitoring the space and autonomously executing actions, giving suggestions, and sending alarms. Approaches proposed in the literature differ in terms of required facilities, possible applications, amount of human intervention required, and ability to support multiple users at the same time while adapting to changing needs. In this paper, we propose a Systematic Literature Review (SLR) that classifies the most influential approaches in the area of smart spaces according to a set of dimensions identified by answering a set of research questions. These dimensions allow one to choose a specific method or approach according to the available sensors, the amount of labeled data, the need for visual analysis, and the requirements in terms of enactment and decision-making on the environment. Additionally, the paper identifies a set of challenges to be addressed by future research in the field.
Unsupervised Discretization by Two-dimensional MDL-based Histogram
Unsupervised discretization is a crucial step in many knowledge discovery
tasks. The state-of-the-art method for one-dimensional data infers locally
adaptive histograms using the minimum description length (MDL) principle, but
the multi-dimensional case is far less studied: current methods consider the
dimensions one at a time (if not independently), which results in
discretizations based on rectangular cells of adaptive size. Unfortunately,
this approach is unable to adequately characterize dependencies among
dimensions and/or results in discretizations consisting of more cells (or bins)
than is desirable. To address this problem, we propose an expressive model
class that allows for far more flexible partitions of two-dimensional data. We
extend the state of the art for the one-dimensional case to obtain a model
selection problem based on the normalised maximum likelihood, a form of refined
MDL. As the flexibility of our model class comes at the cost of a vast search
space, we introduce a heuristic algorithm, named PALM, which partitions each
dimension alternately and then merges neighbouring regions, all using the MDL
principle. Experiments on synthetic data show that PALM 1) accurately reveals
ground truth partitions that are within the model class (i.e., the search
space), given a large enough sample size; 2) approximates well a wide range of
partitions outside the model class; 3) converges, in contrast to its closest
competitor IPD; and 4) is self-adaptive with regard to both sample size and
local density structure of the data despite being parameter-free. Finally, we
apply our algorithm to two geographic datasets to demonstrate its real-world
potential.

Comment: 30 pages, 9 figures
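The MDL-based discretization idea can be illustrated with a much simpler sketch than PALM's two-dimensional model class: a fixed-width, one-dimensional histogram scored by a two-part code. All names below are ours, and the model cost (a crude per-bin encoding) is an ad-hoc stand-in for the refined, normalised-maximum-likelihood codes the paper uses — a minimal sketch, not the paper's method:

```python
import math
import random

def mdl_histogram_score(data, num_bins):
    """Two-part code length (in bits) of a fixed-width histogram.

    Model cost: log2(n + 1) bits per bin count (a crude encoding).
    Data cost: negative log of the histogram density at each point,
    up to a resolution constant that is identical for every num_bins,
    so it does not affect which num_bins minimizes the score.
    """
    n = len(data)
    lo, hi = min(data), max(data)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for x in data:
        # Clamp the top edge so x == hi falls in the last bin.
        counts[min(int((x - lo) / width), num_bins - 1)] += 1
    model_bits = num_bins * math.log2(n + 1)
    data_bits = sum(c * -math.log2((c / n) / width) for c in counts if c > 0)
    return model_bits + data_bits

def best_num_bins(data, max_bins=50):
    """Pick the bin count that minimizes the total code length."""
    return min(range(1, max_bins + 1),
               key=lambda k: mdl_histogram_score(data, k))
```

On a clearly bimodal sample (e.g., two well-separated Gaussian clusters), the data cost rewards bins that concentrate probability mass, so the selected bin count comfortably exceeds one; on near-uniform data, the model cost pushes the choice back toward few bins.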
Applying MDL to Learning Best Model Granularity
The Minimum Description Length (MDL) principle is solidly based on a provably
ideal method of inference using Kolmogorov complexity. We test how the theory
behaves in practice on a general problem in model selection: that of learning
the best model granularity. The performance of a model depends critically on
the granularity, for example, the choice of precision of the parameters. Too
high a precision generally results in modeling accidental noise, while too low
a precision may conflate models that should be distinguished. This
precision is often determined ad hoc. In MDL the best model is the one that
most compresses a two-part code of the data set: this embodies "Occam's
Razor." In two quite different experimental settings the theoretical value
determined using MDL coincides with the best value found experimentally. In the
first experiment the task is to recognize isolated handwritten characters in
one subject's handwriting, irrespective of size and orientation. Based on a new
modification of elastic matching, using multiple prototypes per character, the
optimal prediction rate is predicted for the learned parameter (length of
sampling interval) considered most likely by MDL, which is shown to coincide
with the best value found experimentally. In the second experiment the task is
to model a robot arm with two degrees of freedom using a three layer
feed-forward neural network where we need to determine the number of nodes in
the hidden layer giving best modeling performance. The optimal model (the one
that extrapolates best on unseen examples) is predicted for the number of nodes
in the hidden layer considered most likely by MDL, which again is found to
coincide with the best value found experimentally.

Comment: LaTeX, 32 pages, 5 figures. To appear in the Artificial Intelligence journal.
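The two-part-code selection described above can be sketched on a toy granularity problem — choosing the degree of a polynomial fit, rather than the paper's sampling interval or hidden-layer size. All function names are ours, and the 0.5·log2(n)-bits-per-parameter model cost is the standard asymptotic two-part-code approximation (the same form as BIC), not the paper's exact code:

```python
import math
import random

def polyfit_lstsq(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (stdlib only)."""
    m = degree + 1
    # Moment matrix A^T A and right-hand side A^T y of the Vandermonde system.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back-substitution.
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (aty[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, m))) / ata[i][i]
    return coef

def mdl_score(xs, ys, degree):
    """Two-part code length (bits): Gaussian data cost for the residuals
    plus ~0.5 * log2(n) bits per real-valued parameter."""
    n = len(xs)
    coef = polyfit_lstsq(xs, ys, degree)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    data_bits = 0.5 * n * math.log2(max(rss / n, 1e-300))
    model_bits = 0.5 * (degree + 1) * math.log2(n)
    return data_bits + model_bits
```

Minimizing `mdl_score` over candidate degrees trades goodness of fit against parameter cost, so an underfitting degree loses on the data term and an overfitting one loses on the model term — the same mechanism the paper applies to sampling intervals and hidden-layer sizes.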