zCap: a zero configuration adaptive paging and mobility management mechanism
Today, cellular networks rely on fixed collections of cells (tracking areas) for user equipment localisation. Locating users within these areas involves broadcast search (paging), which consumes radio bandwidth but reduces the user equipment signalling required for mobility management. Tracking areas are currently configured manually, are hard to adapt to local mobility, and influence the load on several key resources in the network. We propose a decentralised and self-adaptive approach to mobility management based on a probabilistic model of local mobility. By estimating the parameters of this model from observations of user mobility collected online, we obtain a dynamic model from which we construct local neighbourhoods of cells where we are most likely to locate user equipment. We propose to replace the static tracking areas of current systems with neighbourhoods local to each cell. The model is also used to derive a multi-phase paging scheme, where the division of neighbourhood cells into consecutive phases balances response times and paging cost. The complete mechanism requires no manual tracking area configuration and performs localisation efficiently in terms of signalling and response times. Detailed simulations show that significant potential gains in localisation efficiency are possible while eliminating manual configuration of mobility management parameters. Variants of the proposal can be implemented within current (LTE) standards.
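A minimal sketch of the multi-phase paging idea described in the abstract, assuming a per-cell history of observed user locations; the function names, the two-phase split and the 0.8 coverage threshold are illustrative choices, not taken from the paper:

from collections import Counter

def estimate_location_probabilities(observed_locations):
    """Estimate where a user is likely to be found, from online observations
    of past successful pagings / handovers (cell ids)."""
    counts = Counter(observed_locations)
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def build_paging_phases(probabilities, coverage=0.8):
    """Split the local neighbourhood into consecutive paging phases:
    phase 1 holds the most likely cells up to `coverage` probability mass,
    phase 2 holds the rest.  More phases lower paging cost but raise delay."""
    ranked = sorted(probabilities, key=probabilities.get, reverse=True)
    phase1, mass = [], 0.0
    for cell in ranked:
        phase1.append(cell)
        mass += probabilities[cell]
        if mass >= coverage:
            break
    phase2 = [c for c in ranked if c not in phase1]
    return [phase1, phase2] if phase2 else [phase1]

def page(phases, true_cell):
    """Page phase by phase; return (cells paged, phases used) once the user answers."""
    paged = 0
    for i, phase in enumerate(phases, start=1):
        paged += len(phase)
        if true_cell in phase:
            return paged, i
    return paged, len(phases)   # user not found in the neighbourhood

# Example: mobility observations concentrated on a few neighbouring cells.
history = ["c1"] * 50 + ["c2"] * 30 + ["c3"] * 15 + ["c4"] * 5
phases = build_paging_phases(estimate_location_probabilities(history))
print(page(phases, true_cell="c3"))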
Simple optimality proofs for Least Recently Used in the presence of locality of reference
It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
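To make the setting concrete, here is a small simulation sketch comparing LRU and FIFO fault counts on request sequences with locality of reference; the locality generator below is an assumed stand-in, not the distribution class analysed in the paper:

import random

def lru_faults(requests, k):
    """Count page faults for LRU with a cache of k pages."""
    cache = []                      # most recently used at the end
    faults = 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)        # evict least recently used
        cache.append(p)
    return faults

def fifo_faults(requests, k):
    """Count page faults for FIFO (evictions follow arrival order)."""
    cache, faults = [], 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.pop(0)
            cache.append(p)
    return faults

def locality_sequence(n_pages, length, repeat_prob=0.8, window=3):
    """Generate requests with locality of reference: with probability
    repeat_prob, re-request one of the last `window` distinct pages."""
    seq = [random.randrange(n_pages)]
    for _ in range(length - 1):
        recent = list(dict.fromkeys(reversed(seq)))[:window]
        if random.random() < repeat_prob:
            seq.append(random.choice(recent))
        else:
            seq.append(random.randrange(n_pages))
    return seq

random.seed(0)
reqs = locality_sequence(n_pages=20, length=5000)
print("LRU faults:", lru_faults(reqs, k=4), " FIFO faults:", fifo_faults(reqs, k=4))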
Putting Instruction Sequences into Effect
An attempt is made to define the concept of execution of an instruction sequence. It is found to be a special case of directly putting an instruction sequence into effect. Directly putting an instruction sequence into effect comprises interpretation as well as execution. Directly putting into effect is a special case of putting into effect, with other special cases classified as indirectly putting into effect.
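A toy illustration of the distinction between interpretation and execution of an instruction sequence, using an invented two-instruction language rather than the formal setting of the paper:

# Toy instruction sequence: each instruction is ("add", n) or ("print",).
program = [("add", 2), ("add", 3), ("print",)]

def interpret(instructions):
    """Interpretation: an interpreter walks the sequence and carries out
    each instruction itself, keeping the machine state explicitly."""
    acc = 0
    for instr in instructions:
        if instr[0] == "add":
            acc += instr[1]
        elif instr[0] == "print":
            print(acc)
    return acc

def translate_to_python(instructions):
    """Execution after translation: turn the sequence into host-language code
    and let the host run it directly."""
    body = ["acc = 0"]
    for instr in instructions:
        if instr[0] == "add":
            body.append(f"acc += {instr[1]}")
        elif instr[0] == "print":
            body.append("print(acc)")
    return "\n".join(body)

interpret(program)                    # interpreted: prints 5
exec(translate_to_python(program))    # translated then executed: prints 5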
Concept-based Interactive Query Expansion Support Tool (CIQUEST)
This report describes a three-year project (2000-03) undertaken in the Information Studies Department at The University of Sheffield and funded by Resource, The Council for Museums, Archives and Libraries. The overall aim of the research was to provide user support for query formulation and reformulation in searching large-scale textual resources including those of the World Wide Web. More specifically, the objectives were: to investigate and evaluate methods for the automatic generation and organisation of concepts derived from retrieved document sets, based on statistical methods for term weighting; and to conduct user-based evaluations on the understanding, presentation and retrieval effectiveness of concept structures in selecting candidate terms for interactive query expansion.

The TREC test collection formed the basis for the seven evaluative experiments conducted in the course of the project. These formed four distinct phases in the project plan. In the first phase, a series of experiments was conducted to investigate further techniques for concept derivation and hierarchical organisation and structure. The second phase was concerned with user-based validation of the concept structures. Results of phases 1 and 2 informed the design of the test system, and the user interface was developed in phase 3. The final phase entailed a user-based summative evaluation of the CiQuest system.

The main findings demonstrate that concept hierarchies can effectively be generated from sets of retrieved documents and displayed to searchers in a meaningful way. The approach provides the searcher with an overview of the contents of the retrieved documents, which in turn facilitates the viewing of documents and selection of the most relevant ones. Concept hierarchies are a good source of terms for query expansion and can improve precision. The extraction of descriptive phrases as an alternative source of terms was also effective. With respect to presentation, cascading menus were easy to browse for selecting terms and for viewing documents. In conclusion, the project dissemination programme and future work are outlined.
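A rough sketch of the kind of pipeline the abstract describes: weighting terms from a retrieved document set and arranging them into a hierarchy of candidate expansion terms. The tf-idf weighting and the document-subsumption rule below are common stand-ins, not necessarily the project's own statistical methods:

import math
from collections import Counter

def tfidf_terms(docs, top_n=10):
    """Weight terms in a set of retrieved documents by tf-idf and keep the
    strongest candidates for query expansion."""
    doc_tokens = [doc.lower().split() for doc in docs]
    df = Counter(t for tokens in doc_tokens for t in set(tokens))
    scores = Counter()
    for tokens in doc_tokens:
        tf = Counter(tokens)
        for t, n in tf.items():
            scores[t] += n * math.log(len(docs) / df[t])
    return [t for t, _ in scores.most_common(top_n)]

def subsumption_hierarchy(docs, terms):
    """Simple subsumption rule: term x is a parent of term y if x occurs in
    every document in which y occurs, and in at least one more."""
    occ = {t: {i for i, d in enumerate(docs) if t in d.lower().split()} for t in terms}
    parents = {}
    for y in terms:
        for x in terms:
            if x != y and occ[y] and occ[y] <= occ[x] and occ[x] != occ[y]:
                parents[y] = x
                break
    return parents   # child -> parent; roots are terms with no parent

docs = ["paging algorithms for virtual memory",
        "virtual memory and cache replacement algorithms",
        "cache replacement policies"]
terms = tfidf_terms(docs)
print(subsumption_hierarchy(docs, terms))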
Performance measurement and evaluation of time-shared operating systems
Time-shared, virtual memory systems are very complex and changes in their performance may be caused by many factors - by variations in the workload as well as changes in system configuration. The evaluation of these systems can thus best be carried out by linking results obtained from a planned programme of measurements, taken on the system, to some model of it. Such a programme of measurements is best carried out under conditions in which all the parameters likely to affect the system's performance are reproducible, and under the control of the experimenter. In order that this be possible, the workload used must be simulated and presented to the target system through some form of automatic workload driver. A case study of such a methodology is presented in which the system (in this case the Edinburgh Multi-Access System) is monitored during a controlled experiment (designed and analysed using standard techniques in common use in many other branches of experimental science) and the results so obtained are used to calibrate and validate a simple simulation model of the system. This model is then used in further investigation of the effect of certain system parameters upon the system performance. The factors covered by this exercise include the effect of varying: main memory size, process loading algorithm and secondary memory characteristics.
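A crude sketch of the methodology's final step, in which a calibrated simulation model is used to study the effect of a system parameter (here, main memory size) on performance; the workload parameters and the paging-overhead formula are invented for illustration and do not describe the Edinburgh Multi-Access System:

import random

def simulate(mem_pages, n_users=20, service=0.2, think=5.0,
             working_set=30, fault_cost=0.01, duration=2000.0, seed=1):
    """Crude model of a time-shared system: each request needs `service`
    seconds of CPU plus a paging overhead that grows when the per-process
    share of main memory falls below the working set (invented overhead model)."""
    rng = random.Random(seed)
    share = mem_pages / n_users
    overhead = fault_cost * max(0.0, working_set - share) * 10
    busy_until, responses = 0.0, []
    # next time each simulated user submits a request
    arrivals = sorted(rng.expovariate(1.0 / think) for _ in range(n_users))
    while arrivals and arrivals[0] < duration:
        t = arrivals.pop(0)
        start = max(t, busy_until)           # single CPU, first-come first-served
        busy_until = start + service + overhead
        responses.append(busy_until - t)
        arrivals.append(busy_until + rng.expovariate(1.0 / think))
        arrivals.sort()
    return sum(responses) / len(responses)

for mem in (200, 400, 800):
    print(f"main memory {mem} pages -> mean response {simulate(mem):.2f} s")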
On Competitive On-Line Paging with Lookahead
This paper studies two methods for improving the competitive efficiency of on-line paging algorithms: in the first, the on-line algorithm can use more pages; in the second, it is allowed to have a look-ahead, or in other words, some partial knowledge of the future. The paper considers a new measure for the look-ahead size as well as Young's resource-bounded look-ahead and proves that both measures have the attractive property that the competitive efficiency of an on-line algorithm with k extra pages and look-ahead l depends on k+l. Hence, under these measures, an on-line algorithm has the same benefit from using an extra page or knowing an extra bit of the future.
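One way to make the lookahead notion concrete is a paging rule that prefers to evict pages not needed within the next l requests; the sketch below is an illustrative lookahead variant of LRU, not claimed to be the specific algorithm analysed in the paper:

def lru_with_lookahead(requests, k, lookahead):
    """Paging with a cache of k pages and knowledge of the next `lookahead`
    requests: prefer to evict a cached page that is not about to be requested,
    breaking ties by least recent use (lookahead=0 reduces to plain LRU)."""
    cache, last_used, faults = set(), {}, 0
    for t, p in enumerate(requests):
        if p not in cache:
            faults += 1
            if len(cache) == k:
                window = set(requests[t + 1 : t + 1 + lookahead])
                # candidates not needed in the lookahead window, if any
                safe = [q for q in cache if q not in window] or list(cache)
                cache.remove(min(safe, key=lambda q: last_used[q]))
            cache.add(p)
        last_used[p] = t
    return faults

reqs = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_with_lookahead(reqs, k=3, lookahead=0),   # plain LRU: 9 faults
      lru_with_lookahead(reqs, k=3, lookahead=3))   # lookahead avoids some faults: 7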
Adaptive Analysis of On-line Algorithms
On-line algorithms are usually analyzed using competitive analysis, in which the performance of an on-line algorithm on a sequence is normalized by the performance of the optimal off-line algorithm on that sequence. In this paper we introduce adaptive/cooperative analysis as an alternative general framework for the analysis of on-line algorithms. This model gives promising results when applied to two well-known on-line problems, paging and list update. The idea is to normalize the performance of an on-line algorithm by a measure other than the performance of the optimal off-line algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the adaptive case. This confirms that the ability of the on-line adaptive algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.
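As an illustration of normalizing by something other than OPT, the sketch below scores Move-To-Front (MTF) list update against a locality-based difficulty measure; the particular measure (distinct items in a recent window) is an assumed stand-in, not the measure proposed in the paper:

def mtf_cost(requests):
    """Move-To-Front list update: the cost of an access is the position
    (1-based) of the item, which is then moved to the front of the list."""
    lst, cost = [], 0
    for x in requests:
        if x in lst:
            pos = lst.index(x) + 1
            lst.remove(x)
        else:
            pos = len(lst) + 1      # first access: pay full position, then insert
        cost += pos
        lst.insert(0, x)
    return cost

def locality_difficulty(requests, window=5):
    """Stand-in difficulty measure: number of distinct items seen in the last
    `window` requests, summed over the sequence.  Sequences with strong
    locality score low, so they are 'expected' to be easy."""
    return sum(len(set(requests[max(0, i - window):i + 1]))
               for i in range(len(requests)))

def adaptive_ratio(requests):
    """Normalize the on-line cost by the difficulty measure rather than by OPT."""
    return mtf_cost(requests) / locality_difficulty(requests)

local = [1, 1, 2, 1, 2, 2, 1, 3, 3, 1]       # high locality of reference
spread = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]      # low locality
print(adaptive_ratio(local), adaptive_ratio(spread))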