
    User-Centered Navigation Re-Design for Web-Based Information Systems

    Navigation design for web-based information systems (e.g. e-commerce sites, intranet solutions) that ignores user participation reduces the system's value and can even lead to system failure. In this paper we introduce a user-centered, explorative approach to re-designing the navigation structures of web-based information systems, and we describe how it can be implemented to provide flexibility and reduce maintenance costs. We conclude with lessons learned from the navigation redesign project at the Vienna University of Economics and Business Administration.

    Doctor of Philosophy

    Serving as a record of what happened during a scientific process, often a computational one, provenance has become an important part of computing. The importance of archiving not only data and results but also the lineage of these entities has led to a variety of systems that capture provenance, as well as models and schemas for this information. Despite significant work on obtaining and modeling provenance, there has been little work on managing and using it. Using the provenance from past work, it is possible to mine common computational structure or determine differences between executions. Such information can be used to suggest possible completions for partial workflows, summarize a set of approaches, or extend past work in new directions. These applications require infrastructure that supports efficient queries and accessible reuse, so managing provenance data well is essential. One component of provenance is the specification of the computations; workflows provide structured abstractions of code and are commonly used for complex tasks. Using change-based provenance, it is possible to store large numbers of similar workflows compactly. This storage also allows efficient computation of differences between specifications. However, querying for specific structure across a large collection of workflows is difficult because comparing graphs depends on computing subgraph isomorphism, which is NP-complete. Graph indexing methods identify features that help distinguish the graphs of a collection in order to filter results for a subgraph containment query and reduce the number of subgraph isomorphism computations. For provenance, this work extends these methods to handle more exploratory queries and collections with significant overlap.
However, comparing workflow or provenance graphs may not require exact equality; a match between two graphs may allow paired nodes to be similar yet not equivalent. This work presents techniques to better correlate graphs in order to help summarize collections. Using this infrastructure, provenance can be reused so that users can learn from their own and others' history. Just as textual search has been augmented with suggested completions based on past or common queries, provenance can be used to suggest how computations can be completed or which steps might connect to a given subworkflow. In addition, provenance can help further science by accelerating publication and reuse. By incorporating provenance into publications, authors can more easily integrate their results, and readers can more easily verify and repeat them. However, reusing past computations requires maintaining stronger associations with any input data and underlying code, as well as providing paths for migrating old work to new hardware or algorithms. This work presents a framework for maintaining data and code and for supporting upgrades of workflow computations.
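The filtering idea behind such graph indexes can be sketched in a few lines. This is an illustrative toy, not the dissertation's actual implementation: the graph encoding, the choice of labeled-edge features, and all names are assumptions. The key invariant is that any graph containing the query as a subgraph must contain every feature of the query, so a missing feature safely prunes a candidate before the expensive subgraph-isomorphism test.

```python
# Hypothetical sketch of feature-based filtering for subgraph
# containment queries; graphs are (node-label map, edge list) pairs.

def edge_features(graph):
    """Extract the set of labeled-edge features (src_label, dst_label)."""
    labels, edges = graph
    return {(labels[u], labels[v]) for (u, v) in edges}

def filter_candidates(query, collection):
    """Keep only graphs whose features cover the query's features.

    A graph that contains the query as a subgraph must contain every
    query feature, so this filter never discards a true match; it only
    reduces the number of full isomorphism checks needed afterwards."""
    qf = edge_features(query)
    return [g for g in collection if qf <= edge_features(g)]

# Tiny workflow-like example with module-name node labels.
q = ({"a": "Read", "b": "Plot"}, [("a", "b")])
g1 = ({"x": "Read", "y": "Filter", "z": "Plot"},
      [("x", "y"), ("y", "z"), ("x", "z")])
g2 = ({"x": "Read", "y": "Filter"}, [("x", "y")])

print(filter_candidates(q, [g1, g2]))  # only g1 survives the filter
```

A real index would use richer features (paths, subtrees) and an inverted index from features to graphs, but the pruning logic is the same.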

    Markovian Model for Data-Driven P2P Video Streaming Applications

    The purpose of this study is to propose a Markovian model to evaluate general P2P streaming applications, assuming a chunk-delivery approach similar to that of BitTorrent file-sharing applications. The state of the system is defined as the number of useful pieces in a peer's buffer. The model was solved numerically to obtain the probability distribution of the number of useful pieces. The central question of this study is: what is the probability that a peer can play the stream continuously? This is one of the most important metrics for evaluating the performance of a streaming application. By solving the Markov chain numerically, we found that increasing the number of neighbours enhances continuity up to a certain threshold, after which the improvement is marginal; this agrees with empirical results reported for DONet, a data-driven overlay network for media streaming. We also found that increasing the buffer length increases continuity, but there is a trade-off: because peers exchange information about the buffer map, a longer buffer increases the overhead. We discuss continuity for both homogeneous and heterogeneous peers with respect to upload bandwidth. We then discuss the case in which the first chunk is downloaded but not played out because the playback deadline was missed. We suggest a general approach for freezing and skipping the playback pointer that can exploit the available delay tolerance. Finally, for a specific configuration, we measure the probability of the sliding action, which could be used to initiate the peers' adaptation process.
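The modelling idea can be illustrated with a much simpler chain than the paper's. In this hedged sketch the state is the number of useful pieces in a buffer of length B, a peer gains a piece with an assumed probability per time slot and otherwise loses one to playback, and the stationary distribution is found numerically; `B` and `p_get` are invented values, and continuity is read off as the probability that the buffer is non-empty.

```python
# Minimal birth-death Markov chain sketch (not the paper's exact model).
B = 10        # buffer length (assumed)
p_get = 0.6   # probability of fetching a useful piece per slot (assumed)

# Row-stochastic transition matrix: gain a piece with p_get, lose one
# to playback otherwise; boundary states stay put instead of moving.
P = [[0.0] * (B + 1) for _ in range(B + 1)]
for s in range(B + 1):
    P[s][min(s + 1, B)] += p_get
    P[s][max(s - 1, 0)] += 1.0 - p_get

def stationary(P, tol=1e-12, max_iter=100_000):
    """Power-iterate pi <- pi P until the distribution stops changing."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if sum(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    return pi

pi = stationary(P)
continuity = 1.0 - pi[0]  # buffer non-empty => playback can continue
print(f"P(continuous playback) ~ {continuity:.3f}")
```

With a download probability above one half the chain drifts toward a full buffer, so the empty-buffer probability, and hence the chance of a playback stall, is small; lowering `p_get` or shortening the buffer shows the continuity trade-offs the abstract describes.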

    Towards a unified method to synthesising scenarios and solvers in combinatorial optimisation via graph-based approaches

    Hyper-heuristics is a collection of search methods for selecting, combining and generating heuristics used to solve combinatorial optimisation problems. The primary objective of hyper-heuristics research is to develop more generally applicable search procedures that can be easily applied to a wide variety of problems. However, current hyper-heuristic architectures assume the existence of a domain barrier that does not allow low-level heuristics or operators to be applied outside their designed problem domain. Additionally, the representation used to encode solvers differs from the one used to encode solutions, which means that hyper-heuristic internal components cannot be optimised by the system itself. In this thesis we address these issues by using graph reformulations of selected problems and by searching in the space of operators, using Grammatical Evolution techniques to evolve new perturbative and constructive heuristics. The low-level heuristics (representing graph transformations) are evolved using a single grammar that is capable of adapting to multiple domains. We test our heuristic generators on instances of the Travelling Salesman Problem, the Knapsack Problem and the Load Balancing Problem, and show that the best evolved heuristics can compete with human-written heuristics and representations designed for each problem domain. Further, we propose a conceptual framework for the production and combination of graph structures. We show how these concepts can be used to describe and provide a representation for problems in combinatorics and for the inner mechanics of hyper-heuristic systems. The final contribution is a new benchmark that can generate problem instances for multiple problem domains and can be used for the assessment of multi-domain problem solvers.
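The core Grammatical Evolution mechanism, mapping a linear integer genome through a grammar to a heuristic, can be sketched briefly. The grammar below is an invented toy (two graph-transformation moves over three nodes), not the thesis's actual multi-domain grammar; only the codon-modulo rule selection and genome wrapping are standard GE.

```python
# Hypothetical toy grammar: a heuristic is a sequence of graph moves.
GRAMMAR = {
    "<heuristic>": [["<move>"], ["<move>", ";", "<heuristic>"]],
    "<move>": [["swap(", "<node>", ",", "<node>", ")"],
               ["relocate(", "<node>", ")"]],
    "<node>": [["n0"], ["n1"], ["n2"]],
}

def derive(codons, start="<heuristic>", max_wraps=3):
    """Map integer codons to a sentence of the grammar (standard GE mapping).

    Each codon, taken modulo the number of productions for the current
    non-terminal, picks a rule; the genome wraps around when exhausted,
    up to max_wraps times (the derivation may be left incomplete)."""
    out, stack, i, wraps = [], [start], 0, 0
    while stack and wraps < max_wraps:
        sym = stack.pop(0)
        if sym in GRAMMAR:
            rules = GRAMMAR[sym]
            rule = rules[codons[i % len(codons)] % len(rules)]
            i += 1
            if i % len(codons) == 0:
                wraps += 1  # wrapped around the genome once more
            stack = list(rule) + stack
        else:
            out.append(sym)  # terminal symbol
    return "".join(out)

print(derive([7, 2, 4, 1, 9, 3]))
```

In a full system the derived string would be compiled into an executable graph transformation and its fitness measured on problem instances; an evolutionary algorithm then searches over the codon sequences themselves.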