Predicting Plan Failure by Monitoring Action Sequences and Duration
Anticipating failures in agent plan execution is important to enable an agent to develop strategies to avoid or circumvent such failures, allowing the agent to achieve its goal. Plan recognition can be used to infer which plans are being executed from observations of sequences of activities performed by an agent. In this work, we use a symbolic plan recognition algorithm to determine which plan the agent is performing and develop a failure prediction system, based on plan library information and on a simplified calendar that manages the goals the agent has to achieve. This failure predictor monitors the sequence of agent actions and detects whether an action is taking too long or does not match the plan the agent was expected to perform. We showcase this approach successfully in a health-care prototype system.
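A minimal sketch of such a duration-based failure predictor, assuming a hypothetical plan-library format of (action, maximum duration) pairs; the paper's actual plan library and calendar are richer than this:

```python
# Hypothetical plan library: each plan is an ordered list of
# (action, max_duration_seconds) pairs; the real system derives
# this from plan-recognition output and a goal calendar.
PLAN_LIBRARY = {
    "morning_routine": [("take_medication", 60), ("eat_breakfast", 1200)],
}

def predict_failure(plan_name, observations):
    """Return a failure description, or None if the observed prefix
    is consistent with the plan.  `observations` is a list of
    (action, duration_seconds) tuples in execution order."""
    for (expected, max_dur), (observed, dur) in zip(
            PLAN_LIBRARY[plan_name], observations):
        if observed != expected:
            return f"mismatch: expected {expected}, observed {observed}"
        if dur > max_dur:
            return f"timeout: {observed} took {dur}s (limit {max_dur}s)"
    return None
```

Both failure modes named in the abstract appear here: an action that does not match the expected plan step, and an action that exceeds its expected duration.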
Distributed Technology-Sustained Pervasive Applications
Technology-sustained pervasive games, contrary to technology-supported
pervasive games, can be understood as computer games interfacing with the
physical world. Pervasive games are known to make use of 'non-standard input
devices' and with the rise of the Internet of Things (IoT), pervasive
applications can be expected to move beyond games. This dissertation is
requirements- and development-focused Design Science research for distributed
technology-sustained pervasive applications, incorporating knowledge from the
domains of Distributed Computing, Mixed Reality, Context-Aware Computing,
Geographical Information Systems and IoT. Computer video games have existed for
decades, with a reusable game engine to drive them. If pervasive games can be
understood as computer games interfacing with the physical world, can computer
game engines be used to stage pervasive games? Considering the use of
non-standard input devices in pervasive games and the rise of IoT, how will
this affect the architectures supporting the broader set of pervasive
applications? The use of a game engine can be found in some existing pervasive
game projects, but general research into how the domain of pervasive games
overlaps with that of video games is lacking. When an engine is used, a
discussion of what type of engine is most suitable and which of its
properties are being fulfilled is often not part of the discourse. This
dissertation uses multiple iterations of the method framework for Design
Science for the design and development of three software system architectures.
In the face of IoT, the problem of extending pervasive games into a fourth
software architecture, accommodating a broader set of pervasive applications,
is explicated. The requirements, for technology-sustained pervasive games, are
verified through the design, development and demonstration of the three
software system architectures. The ...
Comment: 64 pages, 13 figures
A Design Rationale for Pervasive Computing - User Experience, Contextual Change, and Technical Requirements
The vision of pervasive computing promises a shift from information
technology per se to what can be accomplished by using it, thereby
fundamentally changing the relationship between people and information
technology. In order to realize this vision, a large number of issues
concerning user experience, contextual change, and technical
requirements should be addressed. We provide a design rationale for
pervasive computing that encompasses these issues, in which we argue
that a prominent aspect of user experience is to provide user control,
primarily founded in human values. As privacy is one of the more
significant aspects of the user experience, we provide an extended
discussion of it. With contextual change, we address the fundamental change in
previously established relationships between the practices of
individuals, social institutions, and physical environments that
pervasive computing entails. Finally, issues of technical requirements
refer to technology neutrality and openness--factors that we argue are
fundamental for realizing pervasive computing.
We describe a number of empirical and technical studies, the results of
which have helped to verify aspects of the design rationale as well as
shaping new aspects of it. The empirical studies include an
ethnographic-inspired study focusing on information technology support
for everyday activities, a study based on structured interviews
concerning relationships between contexts of use and everyday planning
activities, and a focus group study of laypeople’s interpretations of
the concept of privacy in relation to information technology. The first
technical study concerns the model of personal service environments as a
means for addressing a number of challenges concerning user experience,
contextual change, and technical requirements. Two other technical
studies relate to a model for device-independent service development and
the wearable server as a means to address issues of continuous usage
experience and technology neutrality, respectively.
Recognising Human Plans: Issues for Plan Recognition in Human-Computer Interaction
Plan recognition is the task of ascribing intentions about plans to an actor, based on observations of the actor's actions or utterances. The plan recognition problem appears in three different forms: plan recognition when the actor is aware of and actively cooperating with the recognition, for example by choosing actions that make the task easier (intended plan recognition); plan recognition when the actor is unaware of or indifferent to the plan recognition process (keyhole plan recognition); and plan recognition when the actor is aware of and actively obstructs the plan recognition process (obstructed plan recognition). I consider a specific application of plan recognition: the case in which a computer system ascribes intended plans to human users interacting with the system. In computer interfaces, intended plan recognition becomes an almost trivial task, similar to the task of interpreting a command language, whereas keyhole plan recognition can be hard, or even impossible, to achieve. In this thesis...
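The keyhole variant can be illustrated with a toy matcher that, given only passively observed actions, returns the library plans still consistent with them (the plan names and actions below are made up for illustration):

```python
def keyhole_recognize(plan_library, observations):
    """Keyhole plan recognition: the actor neither helps nor obstructs,
    so we simply keep every plan whose action sequence starts with the
    observed prefix."""
    return [name for name, actions in plan_library.items()
            if actions[:len(observations)] == observations]

library = {"make_tea": ["boil", "steep", "pour"],
           "make_coffee": ["boil", "grind", "brew"]}
```

The hardness the abstract mentions shows up even in this toy: a short observation prefix leaves many candidate plans, and only further observations disambiguate.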
Generic Distribution Support for Programming Systems
This dissertation provides constructive proof, through the implementation of a middleware, that distribution transparency is practical, generic, and extensible. Fault tolerant distributed services can be developed by using the failure detection abilities of the middleware. By generic we mean that the middleware can be used for many different programming languages and paradigms. Distribution for each kind of language entity is done in terms of consistency protocols, which guarantee that the semantics of the entities are preserved in a distributed setting. The middleware allows new consistency protocols to be added easily. The efficiency of the middleware and the ease of integration are shown by coupling the middleware to a programming system, which encompasses the object-oriented, the functional, and the concurrent-declarative programming paradigms. Our measurements show that the distribution middleware is competitive with the most popular distributed programming systems (Java RMI, .NET, IBM CORBA).
The Word-Space Model: using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces
The word-space model is a computational model of word meaning that utilizes the distributional patterns of words collected over large text data to represent semantic similarity between words in terms of spatial proximity. The model has been used for over a decade, and has demonstrated its mettle in numerous experiments and applications. It is now on the verge of moving from research environments to practical deployment in commercial systems. Although the model is extensively used and intensively investigated, our theoretical understanding of it remains unclear. The question this dissertation attempts to answer is: what kind of semantic information does the word-space model acquire and represent? The answer is derived through an identification and discussion of the three main theoretical cornerstones of the word-space model: the geometric metaphor of meaning, the distributional methodology, and the structuralist meaning theory. It is argued that the word-space model acquires and represents two different types of relations between words – syntagmatic and paradigmatic relations – depending on how the distributional patterns of words are used to accumulate word spaces. The difference between syntagmatic and paradigmatic word spaces is empirically demonstrated in a number of experiments, including comparisons with thesaurus entries, association norms, a synonym test, a list of antonym pairs, and a record of part-of-speech assignments.
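A paradigmatic word space of this kind can be sketched in a few lines: each word is represented by a vector of counts of the words seen within a small context window around it, and spatial proximity is measured by the cosine. The window size and whitespace tokenisation here are simplifications, not the dissertation's actual experimental setup:

```python
from collections import defaultdict
from math import sqrt

def context_vectors(tokens, window=2):
    """Paradigmatic word space: represent each word by the counts of
    words appearing within `window` positions of its occurrences."""
    vecs = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vecs[w][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Spatial proximity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Words that occur in similar contexts (a paradigmatic relation) end up close in this space even if they never co-occur; counting direct co-occurrence instead would yield a syntagmatic space.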
Network overload avoidance by traffic engineering and content caching
The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network, and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching.
This thesis studies access patterns for TV and video and the potential for caching. The investigation is done both using simulation and by analysis of logs from a large TV-on-Demand system over four months.
The results show that there is a small set of programs that account for a large fraction of the requests and that a comparatively small local cache can be used to significantly reduce the peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system very much depends on the content type.
For traffic engineering the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands.
This thesis proposes L-balanced routings that route the traffic on the shortest paths possible but make sure that no link is utilised to more than a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS. We show that the search and the resulting weight settings work well in real network scenarios.
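The L-balance constraint itself is easy to state in code. A sketch, assuming per-link load and capacity maps; the thesis additionally searches for OSPF/IS-IS weight settings that achieve the constraint, which is not shown here:

```python
def max_utilisation(link_loads, capacities):
    """Highest load-to-capacity ratio over all links."""
    return max(link_loads[e] / capacities[e] for e in capacities)

def is_l_balanced(link_loads, capacities, L):
    """A routing is L-balanced if no link is utilised above level L."""
    return max_utilisation(link_loads, capacities) <= L
```

The gap between the maximum utilisation and L is exactly the controlled spare capacity the abstract refers to.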
Live Streaming in P2P and Hybrid P2P-Cloud Environments for the Open Internet
Peer-to-Peer (P2P) live media streaming is an emerging technology that reduces the barrier to stream live events over the Internet. However, providing a high quality media stream using P2P overlay networks is challenging and gives rise to a number of issues: (i) how to guarantee quality of service (QoS) in the presence of dynamism, (ii) how to incentivize nodes to participate in media distribution, (iii) how to avoid bottlenecks in the overlay, and (iv) how to deal with nodes that reside behind Network Address Translator (NAT) gateways.
In this thesis, we answer the above research questions in form of new algorithms and systems. First of all, we address problems (i) and (ii) by presenting our P2P live media streaming solutions: Sepidar, which is a multiple-tree overlay, and GLive, which is a mesh overlay. In both models, nodes with higher upload bandwidth are positioned closer to the media source. This structure reduces the playback latency and increases the playback continuity at nodes, and also incentivizes the nodes to provide more upload bandwidth.
We use a reputation model to improve node participation in media distribution in Sepidar and GLive. In both systems, nodes audit the behaviour of their directly connected nodes by getting feedback from other nodes. Nodes that upload more of the stream get a relatively higher reputation, and proportionally higher quality streams. To construct our streaming overlay, we present a distributed market model inspired by the Bertsekas auction algorithm, although our model does not rely on a central server with global knowledge. In our model, each node has only partial information about the system. Nodes acquire knowledge of the system by sampling nodes using the Gradient overlay, which facilitates the discovery of nodes with similar upload bandwidth.
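The bandwidth-ordered placement can be illustrated with a toy layering routine; note that the real Sepidar/GLive overlays are built by the distributed auction described above, not by this centralised sort:

```python
def assign_tree_layers(upload_bw, fanout=2):
    """Place nodes with higher upload bandwidth closer to the source:
    rank by bandwidth, then fill tree layers breadth-first, each layer
    `fanout` times wider than the previous one."""
    ranked = sorted(upload_bw, key=lambda n: upload_bw[n], reverse=True)
    layers, layer, width = [], [], 1
    for node in ranked:
        layer.append(node)
        if len(layer) == width:
            layers.append(layer)
            layer, width = [], width * fanout
    if layer:
        layers.append(layer)
    return layers
```

Nodes near the source serve many descendants, so placing high-bandwidth nodes there both shortens playback latency and rewards contribution.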
We address the bottlenecks problem, problem (iii), by presenting CLive, which satisfies real-time constraints on delay between the generation of the stream and its actual delivery to users. We resolve this problem by borrowing some resources (helpers) from the cloud, upon need. In our approach, helpers are added on demand to the overlay, to increase the amount of total available bandwidth, thus increasing the probability of receiving the video on time. As the use of cloud resources costs money, we model the problem as the minimization of the economic cost, provided that a set of constraints on QoS is satisfied.
Finally, we solve the NAT problem, problem (iv), by presenting two NAT-aware peer sampling services (PSS): Gozar and Croupier. Traditional gossip-based PSSs break down when a high percentage of nodes are behind NATs. We overcome this problem in Gozar using one-hop relaying to communicate with the nodes behind NATs. Croupier similarly implements a gossip-based PSS, but without the use of relaying.
Designs and Analyses in Structured Peer-To-Peer Systems
Peer-to-Peer (P2P) computing is a recent hot topic in the areas of networking and distributed systems. Work on P2P computing was triggered by a number of ad-hoc systems that made the concept popular. Later, academic research efforts started to investigate P2P computing issues based on scientific principles. Some of that research produced a number of structured P2P systems that were collectively referred to by the term "Distributed Hash Tables" (DHTs). However, the research occurred in a diversified way leading to the appearance of similar concepts yet lacking a common perspective and not heavily analyzed. In this thesis we present a number of papers representing our research results in the area of structured P2P systems grouped as two sets labeled respectively "Designs" and "Analyses".
The contribution of the first set of papers is as follows. First, we present the principle of distributed k-ary search and argue that it serves as a framework for most of the recent P2P systems known as DHTs. That is, given this framework, understanding existing DHT systems is done simply by seeing how they are instances of that framework. We argue that by perceiving systems as instances of that framework, one can optimize some of them. We illustrate that by applying the framework to the Chord system, one of the most established DHT systems. Second, we show how the framework helps in the design of P2P algorithms by two examples: (a) the DKS(n; k; f) system, which is a system designed from the beginning on the principles of distributed k-ary search; (b) two broadcast algorithms that take advantage of the distributed k-ary search tree.
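The core property of distributed k-ary search, namely that each hop shrinks the candidate identifier interval by a factor k, can be sketched as follows (Chord corresponds to the k = 2 instance):

```python
def kary_lookup_hops(n, k):
    """Number of hops to resolve a lookup over n identifiers when each
    hop narrows the interval to a k-th of its size: ceil(log_k n)."""
    hops, interval = 0, n
    while interval > 1:
        interval = -(-interval // k)  # ceiling division
        hops += 1
    return hops
```

Raising k trades larger routing tables for shorter lookup paths, which is the design axis the DKS(n; k; f) system makes explicit.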
The contribution of the second set of papers is as follows. We account for two approaches that we used to evaluate the performance of a particular class of DHTs, namely the one adopting periodic stabilization for topology maintenance. The first approach was of an intrinsically empirical nature. In this approach, we tried to perceive a DHT as a physical system and account for its properties in a size-independent manner. The second approach was of a more analytical nature. In this approach, we applied the technique of Master Equations, a widely used technique in the analysis of natural systems. The application of the technique led to a highly accurate description of the behavior of structured overlays. Additionally, the thesis contains a primer on structured P2P systems that tries to capture the main ideas prevailing in the field.
Aspects of proactive traffic engineering in IP networks
To deliver a reliable communication service over the Internet
it is essential for
the network operator to manage the traffic situation in the network.
The traffic situation is controlled by
the routing function which determines what path traffic follows from source
to destination.
Current practices for setting routing parameters in IP networks are
designed to be simple to manage. This can lead to congestion in
parts of the network while other parts of the network are
far from fully utilized. In this thesis we explore issues related
to optimization of the routing function to balance load in the network
and efficiently deliver a reliable communication service to the users.
The optimization takes into account not only the traffic situation under
normal operational conditions, but also traffic situations that appear
under a wide variety of circumstances deviating from the nominal case.
In order to balance load in the network knowledge of the traffic
situations is needed. Consequently, in this thesis
we investigate methods for efficient derivation of the
traffic situation. The derivation is based on estimation of
traffic demands from link load measurements. The advantage
of using link load measurements is that they are easily obtained and consist
of a limited amount of data that needs to be processed. We evaluate and demonstrate how estimation
based on link counts gives the operator a fast and accurate description
of the traffic demands. For the evaluation we have access to a unique data
set of complete traffic demands from an operational
IP backbone.
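A common starting point for such estimation is the gravity model, sketched below, which spreads measured edge-link loads proportionally between sources and destinations. The thesis's actual estimator, which fits demands to interior link counts through the routing, is more involved than this:

```python
def gravity_estimate(ingress, egress):
    """Gravity-model traffic matrix: the demand from source s to
    destination d is taken proportional to the traffic entering the
    network at s times the share of total traffic leaving at d."""
    total = sum(egress.values())
    return {(s, d): ingress[s] * egress[d] / total
            for s in ingress for d in egress if s != d}
```

Only per-node edge-link counts are needed as input, which is exactly the kind of easily obtained measurement the paragraph above describes.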
However, to honor service level agreements at all times the variability
of the traffic needs to be accounted for in the load balancing.
In addition, optimization techniques are often sensitive to errors and
variations in input data. Hence, when an optimized routing setting is
subjected to real traffic demands in the network, performance often
deviates from what can be anticipated from the optimization. Thus,
we identify and model different traffic uncertainties and describe
how the routing setting can be optimized, not only for a nominal case,
but for a wide range of different traffic situations that might appear
in the network.
Our results can be applied in MPLS enabled networks as well as in
networks using link state routing protocols such as the widely used
OSPF and IS-IS protocols. Only minor changes may be needed in current
networks to implement our algorithms.
The contributions of this thesis are that we: demonstrate that it is
possible to estimate the traffic matrix with acceptable precision, and
we develop methods and models for common traffic uncertainties to
account for these uncertainties in the optimization of the routing
configuration. In addition, we identify important properties in the
structure of the traffic to successfully balance uncertain and
varying traffic demands.