Programmable overlays via OpenOverlayRouter
Among the different options to instantiate overlays, the Locator/ID Separation Protocol (LISP) [7] has gained significant traction in both industry and academia [5], [6], [8]–[11], [14], [15]. Interestingly, LISP offers a standard, inter-domain, and dynamic overlay that enables low capital expenditure (CAPEX) innovation at the network layer [8]. LISP follows a map-and-encap approach where overlay identifiers are mapped to underlay locators. Overlay traffic is encapsulated into locator-based packets and routed through the underlay. LISP leverages a public database to store overlay-to-underlay mappings and relies on a pull mechanism to retrieve those mappings on demand from the data plane. LISP therefore effectively decouples the control and data planes, since control plane policies are pushed to the database rather than to the data plane. Forwarding elements reflect control policies on the data plane by pulling them from the database. In that sense, LISP can be used as an SDN southbound protocol to enable programmable overlay networks [5].
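The map-and-encap workflow can be sketched in a few lines. This is an illustrative toy, not a real LISP implementation: the class and method names (and the EID/RLOC values) are hypothetical, but the structure mirrors the abstract's description — policies are registered in a public mapping database, and data-plane routers pull mappings on cache misses before encapsulating.

```python
# Illustrative sketch of LISP's map-and-encap idea. All names here are
# hypothetical; a real deployment uses Map-Request/Map-Reply messages.

class MappingSystem:
    """Public database mapping overlay identifiers (EIDs) to underlay locators (RLOCs)."""
    def __init__(self):
        self._db = {}

    def register(self, eid_prefix, rlocs):
        # Control-plane policy is pushed here, not to the data plane.
        self._db[eid_prefix] = rlocs

    def lookup(self, eid_prefix):
        return self._db.get(eid_prefix)


class TunnelRouter:
    """Data-plane element that pulls mappings on demand and caches them."""
    def __init__(self, mapping_system):
        self.ms = mapping_system
        self.map_cache = {}  # pull model: filled only on cache misses

    def encapsulate(self, dst_eid, payload):
        rlocs = self.map_cache.get(dst_eid)
        if rlocs is None:                      # cache miss -> query the database
            rlocs = self.ms.lookup(dst_eid)
            self.map_cache[dst_eid] = rlocs
        # Wrap the overlay packet in a locator-addressed outer header.
        return {"outer_dst": rlocs[0], "inner_dst": dst_eid, "payload": payload}


ms = MappingSystem()
ms.register("10.1.0.0/16", ["203.0.113.7"])
xtr = TunnelRouter(ms)
pkt = xtr.encapsulate("10.1.0.0/16", b"hello")
```

Note how the data plane never receives policy directly: changing the mapping in `MappingSystem` is enough to redirect future traffic once caches expire, which is what makes the database the SDN-style control point.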
Improving Pan-African research and education networks through traffic engineering: A LISP/SDN approach
The UbuntuNet Alliance, a consortium of National Research and Education Networks (NRENs), runs an exclusive data network for education and research in east and southern Africa. Despite a high degree of route redundancy in the Alliance's topology, a large portion of Internet traffic between the NRENs is circuitously routed through Europe. This thesis proposes a performance-based strategy for dynamic ranking of inter-NREN paths to reduce latencies. The thesis makes two contributions: firstly, mapping Africa's inter-NREN topology and quantifying the extent and impact of circuitous routing; and, secondly, a dynamic traffic engineering scheme based on Software Defined Networking (SDN), the Locator/Identifier Separation Protocol (LISP) and Reinforcement Learning. To quantify the extent and impact of circuitous routing among Africa's NRENs, active topology discovery was conducted. Traceroute results showed that up to 75% of traffic from African sources to African NRENs went through inter-continental routes and experienced much higher latencies than traffic routed within Africa. An efficient mechanism for topology discovery was implemented by incorporating prior knowledge of overlapping paths to minimize redundancy during measurements. Evaluation of the network probing mechanism showed a 47% reduction in the packets required to complete measurements. An interactive geospatial topology visualization tool was designed to evaluate how NREN stakeholders could identify routes between NRENs. Usability evaluation showed that users were able to identify routes with an accuracy of 68%. NRENs face at least three problems in optimizing traffic engineering: how to discover alternate end-to-end paths; how to measure and monitor the performance of different paths; and how to reconfigure alternate end-to-end paths.
This work designed and evaluated a traffic engineering mechanism for dynamic discovery and configuration of alternate inter-NREN paths using SDN, LISP and Reinforcement Learning. A LISP/SDN-based traffic engineering mechanism was designed to enable NRENs to dynamically rank alternate gateways. Emulation-based evaluation showed that dynamic path ranking achieved latencies 20% lower than the default static path selection. SDN and Reinforcement Learning were used to enable dynamic packet forwarding in a multipath environment, through hop-by-hop ranking of alternate links based on latency and available bandwidth. The solution achieved minimal latencies with significant increases in aggregate throughput compared to static single-path packet forwarding. Overall, this thesis provides evidence that the integration of LISP, SDN and Reinforcement Learning, together with ranking and dynamic configuration of paths, could help Africa's NRENs to minimise latencies and achieve better throughputs.
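The core of such a scheme — ranking alternate gateways by observed latency instead of a static default — can be sketched as follows. This is a minimal illustration, not the thesis's mechanism: the class name, the EWMA smoothing with `alpha=0.3`, and the sample RTT values are all assumptions.

```python
# Hypothetical sketch of dynamic gateway ranking by measured latency,
# in the spirit of a LISP/SDN traffic engineering scheme. The smoothing
# factor and gateway labels are illustrative assumptions.

class PathRanker:
    def __init__(self, alpha=0.3):
        self.alpha = alpha    # EWMA smoothing factor (assumed value)
        self.latency = {}     # gateway -> smoothed RTT estimate (ms)

    def observe(self, gateway, rtt_ms):
        # Exponentially weighted moving average of round-trip times.
        prev = self.latency.get(gateway)
        self.latency[gateway] = rtt_ms if prev is None else (
            self.alpha * rtt_ms + (1 - self.alpha) * prev)

    def ranked(self):
        # Lowest smoothed latency first; this ordering would drive
        # locator priorities instead of a fixed static preference.
        return sorted(self.latency, key=self.latency.get)


ranker = PathRanker()
for rtt in (180, 175, 190):        # circuitous inter-continental route
    ranker.observe("via-europe", rtt)
for rtt in (60, 55, 70):           # direct regional route
    ranker.observe("intra-africa", rtt)
```

Re-ranking on every probe lets the control plane react to transient congestion, whereas a static preference would keep forcing traffic through the higher-latency route.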
Adaptive heterogeneous parallelism for semi-empirical lattice dynamics in computational materials science.
With the variability in performance of the multitude of parallel environments available today, the conceptual overhead created by the need to anticipate runtime information to make design-time decisions has become overwhelming. Performance-critical applications and libraries carry implicit assumptions based on incidental metrics that are not portable to emerging computational platforms or even to alternative contemporary architectures. Furthermore, the significance of runtime concerns such as makespan, energy efficiency and fault tolerance depends on the situational context. This thesis presents a case study in the application of both Mattson's prescriptive pattern-oriented approach and the more principled structured parallelism formalism to the computational simulation of inelastic neutron scattering spectra on hybrid CPU/GPU platforms. The original ad hoc implementation as well as new pattern-based and structured implementations are evaluated for relative performance and scalability. Two new structural abstractions are introduced to facilitate adaptation by lazy optimisation and runtime feedback. A deferred-choice abstraction represents a unified space of alternative structural program variants, allowing static adaptation through model-specific exhaustive calibration with regard to the extra-functional concerns of runtime, average instantaneous power and total energy usage. Instrumented queues serve as a mechanism for structural composition and provide a representation of extra-functional state that allows realisation of a market-based decentralised coordination heuristic for competitive resource allocation and the Lyapunov drift algorithm for cooperative scheduling.
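The deferred-choice idea — one name standing for several program variants, resolved by exhaustive calibration against a measured extra-functional cost — can be sketched in miniature. This is a hypothetical illustration, not the thesis's abstraction: it calibrates only against wall-clock time, and the class and function names are invented.

```python
# Hypothetical sketch of a deferred-choice abstraction: exhaustive
# calibration runs every variant on a sample input and binds the name
# to the variant that minimised the measured cost (here, runtime).
import time

class DeferredChoice:
    def __init__(self, *variants):
        self.variants = variants
        self.chosen = None

    def calibrate(self, sample_input):
        # Exhaustive calibration: time every variant, keep the fastest.
        timings = []
        for v in self.variants:
            start = time.perf_counter()
            v(sample_input)
            timings.append((time.perf_counter() - start, v))
        self.chosen = min(timings, key=lambda t: t[0])[1]

    def __call__(self, x):
        return self.chosen(x)


def sum_loop(xs):
    # Deliberately naive variant competing with the builtin.
    total = 0
    for v in xs:
        total += v
    return total


choice = DeferredChoice(sum_loop, sum)
choice.calibrate(list(range(1000)))
```

A fuller version would calibrate against power and energy as well as time, and could re-calibrate per platform, which is what makes the choice "static adaptation" rather than a hard-coded design-time decision.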
Towards effective live cloud migration on public cloud IaaS.
Cloud computing allows users to access shared, online computing resources. However, providers often offer their own proprietary applications, APIs and infrastructures, resulting in a heterogeneous cloud environment. This environment makes it difficult for users to change cloud service providers and to explore capabilities to support the automated migration from one provider to another. Many standards bodies (IEEE, NIST, DMTF and SNIA), industry (middleware) and academia have been pursuing standards and approaches to reduce the impact of vendor lock-in. Cloud providers offer their Infrastructure as a Service (IaaS) based on virtualization to enable multi-tenant and isolated environments for users. Because each provider has its own proprietary virtual machine (VM) manager, called the hypervisor, VMs are usually tightly coupled to the underlying hardware, thus hindering live migration of VMs to different providers. A number of user-centric approaches have been proposed by both academia and industry to solve this coupling issue. However, these approaches suffer limitations in terms of flexibility (decoupling VMs from underlying hardware), performance (migration downtime) and security (secure live migration). These limitations are identified using our live cloud migration criteria, which are represented by flexibility, performance and security. These criteria are not only used to point out the gaps in previous approaches, but also to design our live cloud migration approach, LivCloud. This approach aims to achieve live migration of VMs across various cloud IaaS with minimal migration downtime, at no extra cost, and without user intervention or awareness.
This aim has been achieved by addressing the gaps identified in the three criteria: the flexibility gap is addressed by adopting a virtualization platform that supports a wider hardware range and various operating systems, and by taking into account the migrated VMs' hardware specifications and layout; the performance gap is addressed by improving network connectivity, providing the extra resources required by the migrated VMs during migration, and predicting potential failures so the system can roll back to its initial state if required; finally, the security gap is tackled by protecting the migration channel using encryption and authentication. This thesis presents: (i) a clear identification of the key challenges and factors in successfully performing live migration of VMs across different cloud IaaS, resulting in a rigorous comparative analysis of the literature on live migration of VMs at the cloud IaaS level based on our live cloud migration criteria; (ii) a rigorous analysis to distil the limitations of existing live cloud migration approaches and how to design efficient live cloud migration using up-to-date technologies, which led to the design of a novel live cloud migration approach, LivCloud, that overcomes key limitations of currently available approaches and is designed in two stages: a basic design stage and an enhancement of the basic design; (iii) a systematic approach to assessing LivCloud on different public cloud IaaS, achieved by using a combination of up-to-date technologies to build LivCloud with the interoperability challenge in mind, implementing and discussing the results of the basic design stage on Amazon IaaS, and implementing both stages of the approach on Packet bare-metal cloud. To sum up, the thesis introduces a live cloud migration approach that is systematically designed and evaluated in uncontrolled environments, Amazon and Packet bare metal.
In contrast to other approaches, it clearly shows how to perform and secure the migration between our local network and these environments.
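One ingredient of a secured migration channel — authenticating the transferred VM state so tampering in transit is detected — can be sketched with standard-library primitives. This is a minimal illustration under stated assumptions, not LivCloud's actual mechanism: the pre-shared key, message layout, and function names are hypothetical, and a real channel would also encrypt the payload (e.g. under TLS).

```python
# Minimal sketch of authenticating a migration payload with HMAC-SHA256.
# Key handling and message framing here are illustrative assumptions.
import hashlib
import hmac

SHARED_KEY = b"pre-shared-migration-key"   # assumes out-of-band key exchange

def seal(payload: bytes) -> bytes:
    # The 32-byte authentication tag travels ahead of the VM state.
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def open_sealed(message: bytes) -> bytes:
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("migration payload failed authentication")
    return payload

sealed = seal(b"vm-memory-page")
```

The receiving side refuses any page whose tag does not verify, which is the property that lets a migration roll back safely when the channel is attacked.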
Software-defined wide-area networks for distributed microgrid power systems
Cyber-physical systems have increasingly taken advantage of packet-switched networks for control and data acquisition. A major example is the realization of the smart grid. On the networking side, software-defined networking (SDN) has been a major trend for the past decade. With the help of SDN, we are moving towards power grids that have both intelligence and security. In this thesis, we focus on providing a versatile SDN infrastructure for power-system applications in the environment of microgrids. We conduct simulations and collect statistics to demonstrate that the SDN approach facilitates communications and enhances security for certain microgrid applications.
Text-based Sentiment Analysis and Music Emotion Recognition
Nowadays, with the expansion of social media, large amounts of user-generated
texts like tweets, blog posts or product reviews are shared online. Sentiment polarity
analysis of such texts has become highly attractive and is utilized in recommender
systems, market predictions, business intelligence and more. We also witness deep
learning techniques becoming top performers on those types of tasks. There are
however several problems that need to be solved for efficient use of deep neural
networks on text mining and text polarity analysis.
First of all, deep neural networks are data hungry. They need to be fed with
datasets that are big in size, cleaned and preprocessed as well as properly labeled.
Second, the modern natural language processing concept of word embeddings as a
dense and distributed text feature representation solves sparsity and dimensionality
problems of the traditional bag-of-words model. Still, there are various uncertainties
regarding the use of word vectors: should they be generated from the same dataset
that is used to train the model, or is it better to source them from big and popular
collections that work as generic text feature representations? Third, it is not easy for
practitioners to find a simple and highly effective deep learning setup for various
document lengths and types. Recurrent neural networks are weak with longer texts
and optimal convolution-pooling combinations are not easily conceived. It is thus
convenient to have generic neural network architectures that are effective and can
adapt to various texts, encapsulating much of design complexity.
This thesis addresses the above problems to provide methodological and practical
insights for utilizing neural networks on sentiment analysis of texts and achieving
state-of-the-art results. Regarding the first problem, the effectiveness of various
crowdsourcing alternatives is explored and two medium-sized and emotion-labeled
song datasets are created utilizing social tags. One of the research interests of Telecom
Italia was the exploration of relations between music emotional stimulation and
driving style. Consequently, a context-aware music recommender system that aims
to enhance driving comfort and safety was also designed. To address the second
problem, a series of experiments with large text collections of various contents and
domains were conducted. Word embeddings of different parameters were exercised
and results revealed that their quality is influenced (mostly but not only) by the
size of texts they were created from. When working with small text datasets, it is
thus important to source word features from popular and generic word embedding
collections. Regarding the third problem, a series of experiments involving convolutional
and max-pooling neural layers were conducted. Various patterns relating
text properties and network parameters with optimal classification accuracy were
observed. Combining convolutions of words, bigrams, and trigrams with regional
max-pooling layers in a couple of stacks produced the best results. The derived
architecture achieves competitive performance on sentiment polarity analysis of
movie, business and product reviews.
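The architecture just described — convolutions over word, bigram and trigram windows followed by max-pooling — can be sketched as a toy forward pass. This is an illustrative numpy sketch with random weights and invented dimensions, not the thesis's trained model; it uses global max-pooling for simplicity where the text describes regional pooling in stacks.

```python
# Illustrative forward pass: convolve unigram, bigram and trigram windows
# over a sequence of word embeddings, ReLU, then max-pool per filter.
# Dimensions, weights and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim, n_filters = 8, 16, 4
embeddings = rng.standard_normal((seq_len, emb_dim))   # one toy sentence

def conv_and_pool(x, width, filters):
    # Slide a window of `width` tokens, apply each filter, ReLU,
    # then take the max over all positions (global max-pooling).
    positions = x.shape[0] - width + 1
    feats = np.empty((positions, filters.shape[0]))
    for i in range(positions):
        window = x[i:i + width].ravel()                # width * emb_dim values
        feats[i] = np.maximum(filters @ window, 0.0)   # ReLU activation
    return feats.max(axis=0)                           # one value per filter

pooled = [
    conv_and_pool(embeddings, w, rng.standard_normal((n_filters, w * emb_dim)))
    for w in (1, 2, 3)   # word, bigram and trigram windows
]
sentence_vec = np.concatenate(pooled)   # fixed-size input to a classifier
```

Whatever the sentence length, the pooled vector has a fixed size (here 3 window widths × 4 filters = 12), which is what lets one architecture adapt to documents of varying length.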
Given that labeled data are becoming the bottleneck of the current deep learning
systems, a future research direction could be the exploration of various data programming
possibilities for constructing even bigger labeled datasets. Investigation
of feature-level or decision-level ensemble techniques in the context of deep neural
networks could also be fruitful. Different feature types do usually represent complementary
characteristics of data. Combining word embedding and traditional text
features or utilizing recurrent networks on document splits and then aggregating the
predictions could further increase the prediction accuracy of such models.
Design and semantics of form and movement (DeSForM 2006)
Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new-generation product and interface design. The products and interfaces are concerned with the context of ubiquitous computing and ambient technologies, and with the need for greater empathy in the pre-programmed behaviour of the 'machines' that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design, and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences enabling the presentation and debate of international work within this field:
• 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
• 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
• 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
Philips sponsorship of practice-based enquiry led to research by three teams of research students over three years and to ongoing sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited onto the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles.
31st International Conference on Information Modelling and Knowledge Bases
Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics and management science are relevant areas, too. The conference will feature three categories of presentations: full papers, short papers and position papers.