    Hyperscale Data Processing With Network-Centric Designs

    Today’s largest data processing workloads are hosted in cloud data centers. Due to unprecedented data growth and the end of Moore’s Law, these workloads have ballooned to the hyperscale level, encompassing billions to trillions of data items and hundreds to thousands of machines per query. These workloads are enabled by, and grow together with, highly scalable data center networks that connect up to hundreds of thousands of servers. These massive scales fundamentally challenge the designs of both data processing systems and data center networks, and the classic layered designs are no longer sustainable. Rather than optimize these massive layers in silos, we build systems across them with principled network-centric designs. In current networks, we redesign data processing systems with network awareness to minimize the cost of moving data in the network. In future networks, we propose new interfaces and services that the cloud infrastructure offers to applications, and we codesign data processing systems to achieve optimal query processing performance. To transition the network to these future designs, we facilitate network innovation at scale. This dissertation presents a line of systems work that covers all three directions. It first discusses GraphRex, a network-aware system that combines classic database and systems techniques to push the performance of massive graph queries in current data centers. It then introduces data processing in disaggregated data centers, a promising new cloud proposal. It details TELEPORT, a compute pushdown feature that eliminates data processing performance bottlenecks in disaggregated data centers, and Redy, which provides high-performance caches using remote disaggregated memory. Finally, it presents MimicNet, a fine-grained simulation framework that evaluates network proposals at data center scale with machine learning approximation. These systems demonstrate that our network-centric designs achieve orders of magnitude higher efficiency than the state of the art at hyperscale.

    The development and evaluation of a prototyping environment for context-sensitive mobile computing interaction

    Recent developments in wireless communication, mobile computing, and sensor technologies have prompted a new vision of the world in which we live. We are witnessing the effects of Moore's law, evident in many dimensions of technical opportunity such as cost, size, capacity, and bandwidth. These advances allow us to build new types of human-computer-environment interaction in augmented physical spaces. Mobile computing devices can travel with people, letting them access information on the move while remaining constantly connected to the digital space. Sensor technologies enable mobile computing devices to sense their users and environments, which increases the interaction bandwidth between a human and a mobile computing device. The development of context-sensitive mobile computing systems requires considerable engineering skill. None of the existing approaches provides an effective means of obtaining location and environmental information using "standard" hardware and software, which raises the barrier for designers wishing to explore this type of interaction. In addition, relatively little is known about the usability problems that might arise from interaction with these different context-sensitive mobile computing applications. The focus of this thesis is the development of a prototyping environment for context-sensitive mobile computing. This thesis makes two contributions. The most significant is the Glasgow Context Server (GCS), which has been specifically designed to address the concerns mentioned above. It successfully integrates an off-the-shelf radio Local Area Network (LAN) with the infrared sensors that have been a feature of many previous context-sensitive mobile computing applications. The GCS is intended to help interface designers validate the claimed benefits of location-sensing, location-disclosing, and environment-sensing applications. The second contribution is a set of working applications, in particular a web-based annotation system for physical objects and a shopping assistant, built upon the GCS environment. These demonstrations are used to evaluate the GCS approach and to highlight the challenging issues in computing technology as well as usability concerns. The hope is that this research can provide interface designers with an in-depth reference to a prototyping environment for context-sensitive mobile computing applications.

    Context-Aware Software

    With the advent of PDAs (Personal Digital Assistants), smart phones, and other forms of mobile and ubiquitous computers, our computing resources are increasingly moving off our desktops and into our everyday lives. However, the software and user interfaces for these devices are generally very similar to those of their desktop counterparts, despite the radically different and dynamic environments that they face. We propose that to better assist their users, such devices should be able to sense, react to, and utilise the user's current environment or context; that is, they should become context-aware. In this thesis we investigate context-awareness at three levels: user interfaces, applications, and supporting architectures/frameworks. To promote the use of context-awareness, and to aid its deployment in software, we have developed two supporting frameworks. The first is an application-oriented framework called stick-e notes. Based on an electronic version of the common Post-It Note, stick-e notes enable the attachment of any electronic resource (e.g. a text file, movie, or Java program) to any type of context (e.g. location, temperature, or time). The second framework, the Context Information Service (CIS), seeks to provide more universal support for the capture, manipulation, and representation of context information; it fills a similar role in context-aware software development as GUI libraries do in user interface development. Our applications research explored how context-awareness can be exploited in real environments with real users. In particular, we developed a suite of PDA-based context-aware tools for fieldworkers. These were used extensively by a group of ecologists in Africa to record observations of giraffe and rhinos in a remote Kenyan game reserve. These tools also provided the foundations for our HCI work, in which we developed the concept of the Minimal Attention User Interface (MAUI). The aim of the MAUI is to reduce the attention required to operate a device by carefully selecting input/output modes that are harmonious to the user's tasks and environment. To evaluate our ideas and applications, a field study was conducted in which over forty volunteers used our system for data collection activities over the course of a summer season at the Kenyan game reserve. The PDA-based tools were unanimously preferred to the paper-based alternatives, and the context-aware features were cited as particular reasons for preferring them. In summary, this thesis presents two frameworks to support context-aware software, a set of applications demonstrating how context-awareness can be utilised in the "real world", and a set of HCI guidelines and principles that help in creating user interfaces that fit their context of use.
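
    As an illustration of the stick-e note idea described above, the minimal Python sketch below pairs an electronic resource with a set of contexts and "fires" when the sensed context matches. The class and field names are hypothetical assumptions; the thesis describes the concept, not this API.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class StickENote:
    """Illustrative stick-e note: an electronic resource attached to contexts.

    Field names are hypothetical; the thesis defines the concept, not this API.
    """
    resource: str                                            # e.g. a file path, movie, or program to launch
    contexts: Dict[str, Any] = field(default_factory=dict)   # e.g. location, temperature, time

    def triggered_by(self, current: Dict[str, Any]) -> bool:
        """A note fires when every attached context matches the currently sensed context."""
        return all(current.get(k) == v for k, v in self.contexts.items())

# Usage: attach a text file to a place and a time of day, then check against sensed context.
note = StickENote(resource="giraffe_survey.txt",
                  contexts={"location": "waterhole_3", "time_of_day": "morning"})
print(note.triggered_by({"location": "waterhole_3", "time_of_day": "morning"}))  # True
```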

    Research on Efficiency and Security for Emerging Distributed Applications

    Distributed computing has never stopped its advancement since the early years of computer systems. In recent years, edge computing has emerged as an extension of cloud computing. The main idea of edge computing is to provide hardware resources in proximity to the end devices, thereby offering low network latency and high network bandwidth. However, as an emerging distributed computing paradigm, edge computing currently lacks effective system support. To this end, this dissertation studies how to build system support for edge computing. We first study how to support existing, non-edge-computing applications in edge computing environments. This research leads to the design of a platform called SMOC that supports executing mobile applications on edge servers. We consider mobile applications in this project because there is a great number of mobile applications on the market and we believe that mobile-edge computing will become an important edge computing paradigm in the future. SMOC supports executing ARM-based mobile applications on x86 edge servers by establishing a running environment identical to that of the mobile device at the edge. It also exploits hardware virtualization on the mobile device to protect user input. Next, we investigate how to facilitate the development of edge applications with system support. This study leads to the design of an edge computing framework called EdgeEngine, which consists of a middleware running on top of the edge computing infrastructure and a powerful, concise programming interface. Developers can implement edge applications with minimal programming effort through the programming interface, and the middleware automatically fulfills routine tasks such as data dispatching, task scheduling, and lock management in a highly efficient way. Finally, we envision that consensus will be an important building block for many edge applications, because we consider the consensus problem to be the most fundamental problem in distributed computing while edge computing is an emerging distributed computing paradigm. Therefore, we investigate how to support edge applications that rely on consensus, helping them achieve good performance. This study leads to the design of a novel, Paxos-based consensus protocol called Nomad, which rapidly orders the messages received at the edge. Nomad can quickly adapt to workload changes across the edge computing system, and it incorporates a backend cloud to resolve conflicts in a timely manner. By doing so, Nomad reduces user-perceived latency as much as possible, outperforming existing consensus protocols.
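
    The abstract describes EdgeEngine as a concise programming interface backed by a middleware that takes over routine work such as data dispatching and task scheduling. The sketch below illustrates that division of labor only in spirit: all names are hypothetical, and a local thread pool stands in for dispatching to real edge servers.

```python
import concurrent.futures
from typing import Any, Callable

class EdgeMiddleware:
    """Toy stand-in for an EdgeEngine-style middleware: the application submits tasks
    through a small interface and scheduling happens behind it.
    Names are hypothetical; the actual EdgeEngine API is not given in the abstract."""

    def __init__(self, workers: int = 4):
        # In a real deployment this would dispatch to edge servers; a thread pool stands in here.
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=workers)

    def submit(self, task: Callable[..., Any], *args: Any) -> concurrent.futures.Future:
        """Hand a task to the middleware; dispatching and scheduling are hidden behind this call."""
        return self._pool.submit(task, *args)

def detect_objects(frame_id: int) -> str:
    return f"objects detected in frame {frame_id}"

middleware = EdgeMiddleware()
futures = [middleware.submit(detect_objects, i) for i in range(3)]
for f in concurrent.futures.as_completed(futures):
    print(f.result())
```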

    Intelligent Load Balancing in Cloud Computer Systems

    Cloud computing is an established technology that allows users to share resources on a scale never before seen in IT history. A cloud system connects multiple individual servers in order to process related tasks in several environments at the same time. Clouds are typically more cost-effective than single computers of comparable computing performance. The sheer physical size of the system means that thousands of machines may be involved. The focus of this research was to design a strategy to dynamically allocate tasks without overloading Cloud nodes, so that system stability is maintained at minimum cost. This research has added the following new contributions to the state of knowledge: (i) a novel taxonomy and categorisation of three classes of schedulers, namely OS-level, Cluster, and Big Data, which highlights their distinct evolution and underlines their different objectives; (ii) an abstract model of cloud resource utilisation, covering multiple types of resources and the cost of task migration; (iii) experiments with virtual machine live migration that yield a formula estimating the network traffic generated by this process; (iv) a high-fidelity Cloud workload simulator based on month-long workload traces from Google's computing cells; (v) two approaches to resource management, proposed and examined in the practical part of the manuscript: a centralised metaheuristic load balancer and a decentralised agent-based system. The project involved extensive experiments run on the University of Westminster HPC cluster, and the promising results are presented together with detailed discussions and a conclusion.
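
    The thesis derives its own empirical formula for live-migration traffic; as a hedged stand-in, the sketch below implements the generic iterative pre-copy model (a full memory copy followed by rounds that re-send the pages dirtied in the meantime). It conveys the quantity being estimated but is not the formula fitted in the thesis.

```python
def precopy_migration_traffic(mem_bytes: float, dirty_rate: float,
                              bandwidth: float, rounds: int = 5) -> float:
    """Estimate bytes sent by iterative pre-copy VM live migration.

    Textbook pre-copy model, NOT the thesis's fitted formula: round 0 copies the
    whole memory image, and each later round re-sends the pages dirtied while
    the previous round was being transferred.
    """
    if dirty_rate >= bandwidth:
        raise ValueError("dirty rate must be below migration bandwidth to converge")
    total, round_bytes = 0.0, float(mem_bytes)
    for _ in range(rounds + 1):                 # initial full copy + `rounds` delta rounds
        total += round_bytes
        round_bytes *= dirty_rate / bandwidth   # pages dirtied during the last transfer
    return total

# Example: 4 GiB VM, 100 MiB/s dirty rate, 1 GiB/s migration link.
print(precopy_migration_traffic(4 * 2**30, 100 * 2**20, 2**30) / 2**30, "GiB")
```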

    Models, methods, and tools for developing MMOG backends on commodity clouds

    Online multiplayer games have grown to unprecedented scales, attracting millions of players worldwide. The revenue from this industry has already eclipsed well-established entertainment industries like music and film and is expected to continue its rapid growth. Massively Multiplayer Online Games (MMOGs) have also been extensively used in research studies and education, further motivating the need to improve their development process. The development of resource-intensive, distributed, real-time applications like MMOG backends involves a variety of challenges. Past research has primarily focused on the development and deployment of MMOG backends on dedicated infrastructures such as on-premise data centers and private clouds, which provide more flexibility but are expensive and hard to set up and maintain. A limited set of works has also focused on utilizing the Infrastructure-as-a-Service (IaaS) layer of public clouds to deploy MMOG backends. These clouds can offer advantages such as a lower barrier to entry and a larger pool of resources, but they lack resource elasticity, standardization, and a focus on development effort, from which MMOG backends could greatly benefit. Meanwhile, other research has focused on solving problems related to consistency, performance, and scalability. Despite major advancements in these areas, there is no standardized development methodology to facilitate these features and assimilate the development of MMOG backends on commodity clouds. This thesis is motivated by the results of a systematic mapping study that identifies a gap in research, evident from the fact that only a handful of studies have explored the possibility of utilizing serverless environments within commodity clouds to host these types of backends. These studies are mostly vision papers and do not provide novel contributions in terms of development methods or detailed analyses of how such systems could be developed. Using the knowledge gathered from this mapping study, several hypotheses are proposed and a set of technical challenges is identified, guiding the development of a new methodology. The peculiarities of MMOG backends have so far constrained their development and deployment on commodity clouds despite rapid advancements in technology. To explore whether such environments are viable options, a feasibility study is conducted with a minimalistic MMOG prototype to evaluate a limited set of public clouds in terms of hosting MMOG backends. Following encouraging results from this study, this thesis first motivates and then presents a set of models, methods, and tools with which scalable MMOG backends can be developed for and deployed on commodity clouds. These are encapsulated into a software development framework called Athlos, which allows software engineers to leverage the proposed development methodology to rapidly create MMOG backend prototypes that utilize the resources of these clouds to attain scalable states and runtimes. The proposed approach is based on a dynamic model that aims to abstract the data requirements and relationships of many types of MMOGs. Based on this model, several methods are outlined that aim to solve various problems and challenges related to the development of MMOG backends, mainly in terms of performance and scalability. Using a modular software architecture and standardization in common development areas, the proposed framework aims to improve and expedite the development process, leading to higher-quality MMOG backends and a lower time to market. The models and methods proposed in this approach can be utilized through various tools during the development lifecycle. The proposed development framework is evaluated qualitatively and quantitatively. The thesis presents three case-study MMOG backend prototypes that validate the suitability of the proposed approach; these case studies also provide a proof of concept and are subsequently used to further evaluate the framework. The propositions in this thesis are assessed with respect to the performance, scalability, development effort, and code maintainability of MMOG backends developed using the Athlos framework, using a variety of methods such as small and large-scale simulations and more targeted experimental setups. The results of these experiments uncover useful information about the behavior of MMOG backends. In addition, they provide evidence that MMOG backends developed using the proposed methodology and hosted on serverless environments can: (a) support a very high number of simultaneous players under a given latency threshold, (b) elastically scale both in terms of processing power and memory capacity, and (c) significantly reduce the amount of development effort. The results also show that this methodology can accelerate the development of high-performance, distributed, real-time applications like MMOG backends, while also exposing the limitations of Athlos in terms of code maintainability. Finally, the thesis provides a reflection on the research objectives, considerations on the hypotheses and technical challenges, and outlines plans for future work in this domain.

    Building blocks for semantic data organization on the desktop

    The organization of (multimedia) data on current desktop systems is done to a large part by arranging files in hierarchical file systems, but also by specialized applications (e.g., music or photo organizing software) that make use of file-related metadata for this task. These metadata are predominantly stored in embedded file headers, using a multitude of mainly proprietary formats. Generally, metadata and links play the key roles in advanced data organization concepts. Their limited support in prevalent file system implementations, however, hinders the adoption of such concepts on the desktop: first, non-uniform access interfaces require metadata-consuming applications to understand both a file's format and its metadata scheme; second, separate data/metadata access is not possible; and third, metadata cannot be attached to multiple files or to file folders, although the latter are the primary constructs for file organization. As a consequence, current desktops suffer, inter alia, from (i) limited data organization possibilities, (ii) limited navigability, (iii) limited data findability, and (iv) metadata fragmentation. Although there have been attempts to improve this situation, e.g., by introducing semantic file systems, most of these issues were successfully addressed and solved in the Web, and in particular in the Semantic Web; reusing these solutions on the desktop, a central hub of data and metadata manipulation, is clearly desirable. In this thesis a novel, backwards-compatible metadata model that addresses the above-mentioned issues is introduced. This model is based on stable file identifiers and external, file-related, semantic metadata descriptions that are represented using the generic RDF graph model. Descriptions are accessible via a uniform Linked Data interface and can be linked with other descriptions and resources. In particular, this model enables semantic linking between local file system objects and remote resources on the Web or the emerging Web of Data, thereby enabling the integration of these data spaces. As the model crucially relies on the stability of these links, we contribute two algorithms that preserve their integrity in local and in remote environments. This means that links between file system objects, metadata descriptions, and remote resources do not break even if their addresses change, e.g., when files are moved or Linked Data resources are re-published under different URIs. Finally, we contribute a prototypical implementation of the proposed metadata model that demonstrates how these building blocks sum up to constitute a metadata layer that may act as a foundation for semantic data organization on the desktop.
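
    To make the model concrete, the following sketch (using rdflib) shows an external, file-related RDF description keyed by a stable identifier and linked to a remote Web resource. The namespace, property names, and URIs are illustrative assumptions, not the vocabulary defined in the thesis.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

# Hypothetical vocabulary; the thesis defines the model, not these exact URIs.
DESK = Namespace("http://example.org/vocab/")

g = Graph()
file_id = URIRef("urn:uuid:7e6d1a30-1111-4c2b-9f4a-0a1b2c3d4e5f")   # stable file identifier
g.add((file_id, RDFS.label, Literal("holiday-photo-001.jpg")))
g.add((file_id, DESK.currentPath, Literal("/home/alice/photos/holiday-photo-001.jpg")))
g.add((file_id, DESK.depicts, URIRef("http://dbpedia.org/resource/Vienna")))  # link into the Web of Data

# The description lives outside the file and survives a move: only the path triple changes,
# while the stable identifier and its links remain intact.
g.set((file_id, DESK.currentPath, Literal("/home/alice/archive/holiday-photo-001.jpg")))

print(g.serialize(format="turtle"))
```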

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The total of 60 regular papers presented in these volumes was carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows: Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.

    Distance: a framework for improving spatial cognition within digital architectural models

    This research investigates the need for improvements to navigation tools and locational awareness within digital architectural models so that users’ spatial cognition can be enhanced. Evidence shows that navigation and disorientation are common problems within digital architectural models, often impairing spatial cognition. When a designer or contractor explores a completed digital architectural model for the first time, it can be a progressively frustrating experience, often leading to the creation of an incorrect cognitive map of the building design. A reflective practice research method across three project-based design investigations is used, drawing on aspects of architectural communication, digital interaction, and spatial cognition. The first investigation, Translation projects, explores the transformation of two-dimensional drawing conventions into three-dimensional interactive digital models, exposing the need for improved navigation and wayfinding. The second investigation, a series of artificial intelligence navigation projects, explores navigation methods that aid spatial cognition by providing tools to visualise the navigation process, paths to travel, and paths travelled. The third and final investigation, Distance projects, demonstrates the benefits of productive transition in the creation of cognitive maps; during the transition, assistance is given to aid the estimation of distance. The original contribution to knowledge that this research establishes is a framework of navigation tools and wayshowing strategies for improving spatial cognition within digital architectural models. The consideration of wayshowing methods, focusing on spatial transitions beyond predefined views of the digital model, provides a strong method for helping users construct comprehensive cognitive maps. This research addresses the undeveloped field of aiding distance estimation inside digital architectural models. There is a need to improve spatial cognition by understanding distance, detail, data, and design when reviewing digital architectural models.