Distributed and adaptive location identification system for mobile devices
Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterparts. It would be of great convenience for mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment
or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack this capability and can face several critical technical challenges: high installation cost, centralization, unreliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user-privacy concerns. To this end, this paper
presents a novel mechanism with the potential to overcome most (if not all) of
the abovementioned challenges. The proposed mechanism is simple, distributed,
adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a
mobile blind device can potentially utilize, as GPS-like reference nodes,
either in-range location-aware compatible mobile devices or preinstalled
low-cost infrastructure-less location-aware beacon nodes. The proposed approach
is model-based and calibration-free; it uses the received signal strength to
periodically and collaboratively measure and update the radio frequency
characteristics of the operating environment to estimate the distances to the
reference nodes. Trilateration is then used by the blind device to identify its
own location, similar to that used in the GPS-based system. Simulation and
empirical testing confirmed that the proposed approach can potentially serve as the core of localization in future indoor and GPS-obstructed environments.
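The two estimation steps the abstract describes, inverting a radio propagation model to turn received signal strength (RSS) into distances and then trilaterating against reference nodes, can be sketched as follows. This is an illustrative sketch, not the paper's actual algorithm: the log-distance path-loss model, its parameters (`tx_power_dbm`, `path_loss_exp`), and the helper names are assumptions for the example.

```python
import numpy as np

def rss_to_distance(rss_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Invert a log-distance path-loss model (illustrative parameters):
    RSS = P_tx - 10*n*log10(d)  =>  d = 10**((P_tx - RSS) / (10*n))."""
    return 10 ** ((tx_power_dbm - rss_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Linearized least-squares trilateration.
    Each anchor i gives a circle |x - a_i|^2 = d_i^2; subtracting the
    first circle from the others cancels the quadratic term and yields
    a linear system A x = b for the unknown position x."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    a0, d0 = anchors[0], d[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With reference nodes at (0, 0), (10, 0), and (0, 10) and RSS-derived distances to a device at (3, 4), `trilaterate` recovers that position; in the collaborative scheme described above, the path-loss parameters would be re-estimated periodically rather than fixed.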
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates
The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, ranging from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second, and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmark share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture the important overall characteristics of the AC scenarios from which they were derived, such as high- and low-performing regions, while being much easier to use and orders of magnitude cheaper to evaluate.
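The surrogate idea can be illustrated with a toy sketch: fit a cheap regression model on logged (configuration, performance) pairs, then query the model in place of expensive target-algorithm runs, keeping the same parameter space. Everything here is a stand-in assumption: a hand-rolled k-nearest-neighbour regressor substitutes for the more powerful models used in practice, and `true_runtime` is a synthetic function, not real AC data.

```python
import numpy as np

def true_runtime(cfg):
    """Synthetic stand-in for one expensive target-algorithm run."""
    return (cfg[0] - 0.3) ** 2 + (cfg[1] - 0.7) ** 2 + 0.1

class KNNSurrogate:
    """Minimal k-NN regressor approximating the response surface."""
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, x):
        # Average the performance of the k nearest logged configurations.
        dist = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        nearest = np.argsort(dist)[: self.k]
        return float(self.y[nearest].mean())

# Training data as it would be logged from runs of existing AC procedures.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))        # evaluated configurations
y = np.array([true_runtime(c) for c in X])  # observed performances

surrogate = KNNSurrogate(k=3).fit(X, y)

# The surrogate benchmark: same (hyper-)parameter space, cheap to query.
cheap_objective = surrogate.predict
```

An AC procedure benchmarked against `cheap_objective` still sees the high- and low-performing regions of the logged scenario, but each evaluation is a model lookup rather than an actual algorithm run.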
Prospects for large-scale financial systems simulation
As the 21st century unfolds, we find ourselves having to control, support, manage or otherwise cope with large-scale complex adaptive systems to an extent that is unprecedented in human history. Whether we are concerned with issues of food security, infrastructural resilience, climate change, health care, web science, security, or financial stability, we face problems that combine scale, connectivity, adaptive dynamics, and criticality. Complex systems simulation is emerging as the key scientific tool for dealing with such complex adaptive systems. Although a relatively new paradigm, it is one that has already established a track record in fields as varied as ecology (Grimm and Railsback, 2005), transport (Nagel et al., 1999), neuroscience (Markram, 2006), and ICT (Bullock and Cliff, 2004). In this report, we consider the application of simulation methodologies to financial systems, assessing the prospects for continued progress in this line of research.
User-centred design of flexible hypermedia for a mobile guide: Reflections on the hyperaudio experience
A user-centred design approach involves end-users from the very beginning. Considering users at the early stages compels designers to think in terms of utility and usability and helps ground the system in what is actually needed. This paper discusses the case of HyperAudio, a context-sensitive, adaptive, and mobile guide to museums developed in the late 1990s. User requirements were collected via a survey to understand visitors' profiles and visit styles in natural science museums. The knowledge acquired supported the specification of system requirements, helping to define the user model, data structures, and adaptive behaviour of the system. User requirements guided the design decisions on what could be implemented with simple adaptable triggers and what instead needed more sophisticated adaptive techniques, a fundamental choice when all computation must be done on a PDA. Graphical and interactive environments for developing and testing complex adaptive systems are discussed as a further step towards an iterative design process that puts user interaction at its centre. The paper discusses how such an environment allows designers and developers to experiment with different system behaviours and to test the system widely under realistic conditions by simulating the actual context as it evolves over time. The understanding gained in HyperAudio is then considered in the light of the developments that followed that first experience: our findings seem to remain valid despite the time that has passed.