
    Toward Open and Programmable Wireless Network Edge

    Increasingly, the last hop connecting users to their enterprise and home networks is wireless. Wireless is becoming ubiquitous not only in homes and enterprises but also in public venues such as coffee shops, hospitals, and airports. However, most publicly and privately available wireless networks are proprietary and closed in operation, and there is little effort from industry to move toward the greater openness that innovation requires. We therefore believe it is the role of university researchers to enable innovation through openness. This thesis introduces an open framework and argues for its importance in addressing the complexity of the wireless network. The Software Defined Network (SDN) framework has emerged as a popular solution for data center networks, and its promise is to make the network open, flexible, and programmable. To deliver on this promise, SDN must work for all users and across all networks, both wired and wireless. We therefore propose new modules and APIs that extend the standard SDN framework all the way to the end devices (i.e., mobile devices and APs), providing an extensible and programmable abstraction of the wireless network as part of current SDN-based solutions. We design and develop a framework, weSDN (wireless extension of SDN), that extends SDN control capability to the end devices in order to support client-network interaction and new services. weSDN extends the control plane of wireless networks to mobile devices and allows top-level decisions to be made by an SDN controller with knowledge of the network as a whole, rather than through device-centric configurations. In addition, weSDN easily obtains user application information and can monitor and control application flows dynamically. Based on the weSDN framework, we demonstrate new services such as application-aware traffic management, WLAN virtualization, and security management.
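    To make the kind of client-network interaction described above concrete, the following is a minimal, hypothetical sketch of a weSDN-style client agent that reports per-application flow information to a controller stub, which then makes a network-wide decision per flow. All class, method, and parameter names here are illustrative assumptions, not the actual weSDN API.

```python
# Hypothetical sketch of a weSDN-style client agent reporting per-application
# flow information to an SDN controller. All names are illustrative assumptions.

from dataclasses import dataclass, asdict
import json


@dataclass
class AppFlow:
    app_name: str        # e.g. "video-call"
    src_port: int
    dst_ip: str
    dst_port: int
    bytes_per_sec: float


class ClientAgent:
    """Runs on the end device and exposes its application flows to the controller."""

    def __init__(self, device_id: str, controller):
        self.device_id = device_id
        self.controller = controller

    def report_flows(self, flows):
        # In a real deployment this would travel over a secure southbound channel;
        # here we simply hand a JSON document to an in-memory controller stub.
        payload = {"device": self.device_id,
                   "flows": [asdict(f) for f in flows]}
        self.controller.on_client_report(json.dumps(payload))


class ControllerStub:
    """Stands in for the network-wide SDN controller."""

    def on_client_report(self, payload: str):
        report = json.loads(payload)
        for flow in report["flows"]:
            # Example of a top-level, network-wide decision: throttle bulk
            # transfers so latency-sensitive apps keep their share of the WLAN.
            action = "rate-limit" if flow["bytes_per_sec"] > 1e6 else "allow"
            print(f'{report["device"]}: {flow["app_name"]} -> {action}')


if __name__ == "__main__":
    controller = ControllerStub()
    agent = ClientAgent("laptop-42", controller)
    agent.report_flows([
        AppFlow("video-call", 50000, "203.0.113.7", 3478, 350_000.0),
        AppFlow("cloud-backup", 50002, "198.51.100.9", 443, 4_200_000.0),
    ])
```

    In a real deployment the controller would install forwarding or rate-limiting rules on the APs rather than printing decisions; the sketch only shows the direction of the information flow.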

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks with bounded size, while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level. An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms either are sequential and slow but offer high solution quality, or are simple, fast, and easy to parallelize but offer low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof. We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic, and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, as well as recursive partitioning with work-stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality, beating all other parallel partitioners and coming close to the highest-quality sequential partitioner. Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential.
We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening, and later a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms, and also use the preprocessing and the portfolio. This scheme is highly scalable, and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening. The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond the pursuit of the highest quality, we present a deterministically parallel partitioning framework. We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small. All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory, and as local building blocks in the distributed algorithm.
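    As a point of reference for the objective discussed above, the following is a minimal sketch of the connectivity metric: for each hyperedge e, lambda(e) is the number of distinct blocks its pins touch, and the objective sums (lambda(e) - 1) over all hyperedges, subject to a balance constraint on the block sizes. The data layout (plain Python lists and dictionaries) is an assumption for illustration and is unrelated to Mt-KaHyPar's internal data structures.

```python
# Minimal sketch of the connectivity objective for balanced hypergraph
# partitioning. Data layout (pin lists, vertex -> block dict) is illustrative.

from collections import Counter
from math import ceil


def connectivity_metric(hyperedges, partition):
    """hyperedges: list of vertex lists; partition: dict vertex -> block id."""
    total = 0
    for pins in hyperedges:
        blocks = {partition[v] for v in pins}
        total += len(blocks) - 1          # lambda(e) - 1
    return total


def is_balanced(partition, k, epsilon):
    """Balance constraint: no block exceeds (1 + epsilon) * ceil(n / k) vertices."""
    sizes = Counter(partition.values())
    max_allowed = (1 + epsilon) * ceil(len(partition) / k)
    return all(sizes.get(b, 0) <= max_allowed for b in range(k))


if __name__ == "__main__":
    hyperedges = [[0, 1, 2], [2, 3], [1, 3, 4, 5]]
    partition = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
    print(connectivity_metric(hyperedges, partition))  # -> 2
    print(is_balanced(partition, k=2, epsilon=0.1))    # -> True
```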

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures entails more and more attention for adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised on a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general, and also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message-passing-based component model that takes reconfigurability to be its central principle. Supporting reconfiguration in a framework means relieving the programmer of as many of its peculiarities as possible. We utilize the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are provided with a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples such as fault tolerance and automated resource control.
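    The following is an illustrative sketch, under assumed and simplified data structures, of what applying a reconfiguration plan to stateful message-passing components can look like: only the components affected by the plan are blocked, their state and unprocessed messages are retained across the change, and the plan appears to take effect in one atomic step. The plan format and all names are hypothetical and do not reflect the formal plan semantics of the thesis.

```python
# Illustrative sketch (not the thesis's actual framework) of applying a
# reconfiguration plan to stateful, message-passing components.

from collections import deque


class Component:
    def __init__(self, name, state=None):
        self.name = name
        self.state = state or {}
        self.inbox = deque()      # unprocessed messages survive reconfiguration
        self.blocked = False

    def receive(self, msg):
        self.inbox.append(msg)


def apply_plan(components, plan):
    """plan: {'remove': [...], 'add': [...], 'rewire': [(old, new), ...]}
    -- a made-up plan format standing in for formally defined plans."""
    affected = set(plan.get("remove", [])) | {old for old, _ in plan.get("rewire", [])}

    # 1. Block only the components touched by the plan.
    for name in affected:
        components[name].blocked = True

    # 2. Perform the structural change, migrating state and queued messages.
    for old, new in plan.get("rewire", []):
        components[new] = Component(new, state=dict(components[old].state))
        components[new].inbox.extend(components[old].inbox)
    for name in plan.get("remove", []):
        del components[name]
    for name in plan.get("add", []):
        components.setdefault(name, Component(name))

    # 3. Unblock: from the outside the whole plan took effect in one step.
    for comp in components.values():
        comp.blocked = False
    return components


if __name__ == "__main__":
    comps = {"logger": Component("logger", {"lines": 3})}
    comps["logger"].receive("flush")
    comps = apply_plan(comps, {"rewire": [("logger", "fast_logger")],
                               "remove": ["logger"]})
    print(list(comps), comps["fast_logger"].state, list(comps["fast_logger"].inbox))
```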

    MATLAB

    This excellent book is the final part of a three-volume series on MATLAB-based applications in almost every branch of science. The book consists of 19 excellent, insightful articles, and readers will find the results very useful to their work. In particular, the book consists of three parts: the first is devoted to mathematical methods in the applied sciences using MATLAB, the second to MATLAB applications of general interest, and the third to MATLAB for educational purposes. This collection of high-quality articles covers a wide range of professional fields and can be used for science as well as for various educational purposes.

    Uniform Quality Measures for Clusterings, Layouts, and Orderings of Graphs, and Their Application as Software Design Criteria

    How good is a given graph clustering, graph layout, or graph ordering -- specifically, how well does it group densely connected vertices and separate sparsely connected vertices? How good is a given software design -- specifically, how well does it minimize the interdependence of the subsystems? This work introduces and validates simple and uniform measures for these two properties. Together with existing optimization algorithms, the introduced measures enable the automatic computation, e.g., of communities in social networks and of design flaws in software systems.
    The first part derives, validates, and unifies quality measures for graph clusterings, graph layouts, and graph orderings, with the following results:
    - Identical quality measures can be applied to clusterings, layouts, and orderings; this enables the computation of consistent clusterings, layouts, and orderings.
    - Diverse existing and new measures can be unified into a few general measures; this facilitates their comparison and validation.
    - Many existing measures are biased towards certain clusterings, layouts, or orderings, even for graphs without particularly dense or sparse subgraphs, and thus do not (only) measure quality in the above sense.
    - For example graphs, the minimization of new, unbiased (or weakly biased) measures reveals non-obvious groups, e.g. communities in social networks, subject areas in hypertexts, or closely interlocked countries in international trade.
    The second part derives, validates, and unifies dependency-based indicators of software design quality. It applies two quality measures for graph clusterings as measures for the coupling of software subsystems -- specifically for the coupling indicated by common changes and for the coupling indicated by references -- and shows:
    - The measures quantify the dependency-caused development costs, under well-defined simplifying assumptions.
    - The minimization of the measures conforms to existing dependency-related design principles (such as locality of change, acyclicity of references, and stability of references), design rules, and design patterns.
    - For example software systems, the incremental minimization of the measures reveals non-obvious design flaws, such as the distribution of coherent responsibilities over several subsystems, or references from low-level to high-level subsystems.
    In summary, this work shows that
    - simple measures can suffice to capture important aspects of graph clustering quality, graph layout quality, graph ordering quality, and software design quality, and
    - the optimization of simple measures can suffice to detect non-obvious and often useful structure in various real-world systems.
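    To make concrete what a clustering quality measure of this kind rewards, the following is a minimal sketch of one widely used measure, Newman's modularity, which favours dense groups and sparse cuts. It is only an illustration; the thesis defines its own, more general measures.

```python
# Minimal sketch of Newman's modularity as an example of a graph clustering
# quality measure. Not the measures introduced in the thesis; illustration only.

from collections import defaultdict


def modularity(edges, clustering):
    """edges: list of undirected (u, v) pairs; clustering: dict vertex -> cluster."""
    m = len(edges)
    if m == 0:
        return 0.0
    degree = defaultdict(int)
    intra = defaultdict(int)           # number of edges inside each cluster
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if clustering[u] == clustering[v]:
            intra[clustering[u]] += 1

    cluster_degree = defaultdict(int)  # sum of vertex degrees per cluster
    for v, d in degree.items():
        cluster_degree[clustering[v]] += d

    return sum(intra[c] / m - (cluster_degree[c] / (2 * m)) ** 2
               for c in cluster_degree)


if __name__ == "__main__":
    # Two triangles joined by a single bridge edge: a "good" 2-clustering.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    good = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
    bad = {v: v % 2 for v in range(6)}
    print(round(modularity(edges, good), 3), ">", round(modularity(edges, bad), 3))
```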

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Application of service composition mechanisms to Future Networks architectures and Smart Grids

    This thesis revolves around the hypothesis that service composition methodologies and mechanisms can be applied to different fields of application in order to efficiently orchestrate flexible and context-aware communications and processes. More concretely, it focuses on two fields of application: context-aware media distribution, and smart grid services and infrastructure management, towards a definition of a Software-Defined Utility (SDU), which proposes a new way of managing the Smart Grid following a software-based approach that enables a much more flexible operation of the power infrastructure. Hence, it reviews the context, requirements, and challenges of these fields, as well as the service composition approaches. It places special emphasis on the combination of service composition with Future Network (FN) architectures, presenting a service-oriented FN proposal for creating context-aware, on-demand communication services. Service composition methodologies and mechanisms are also presented for operating over this architecture and are afterwards proposed for use (in conjunction with the FN architecture or not) in the deployment of context-aware media distribution and Smart Grids. Finally, the research and development done in the field of Smart Grids is described, proposing several parts of the SDU infrastructure, with examples of service composition applied to designing dynamic and flexible security for smart metering and to the orchestration and management of services and data resources within the utility infrastructure.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Ambidexterity in large-scale software engineering

    Software is pervading our environment with products that become smarter every day. To follow this trend, software companies continuously deliver new features in order to anticipate their competitors and gain market share. For this reason, they need to adopt processes and organizational solutions that allow them to deliver continuously. A key challenge for software organizations is to balance their resources so as to deliver enough new features in the short term while also supporting the delivery of new features in the long term. In a word, companies need to be ambidextrous. In this thesis we investigate what ambidexterity is and which factors hinder large software companies from being ambidextrous, and we provide initial solutions for the mitigation of such challenges. The research process consists of an empirical investigation based on the Grounded Theory approach, in which we conducted several case studies based on continuous interaction with 7 large software organizations developing embedded software. The results in this thesis are grounded in a large amount of collected data and corroborated by a combination of exploratory and confirmatory, as well as qualitative and quantitative, data collection. The contributions of this thesis include a comprehensive understanding of the factors influencing ambidexterity, the current challenges, and a proposed solution, CAFFEA. In particular, we found that three main challenges were hampering the achievement of ambidexterity in large software companies. The first is the conflict between Agile Software Development and software reuse. The second is the complexity of balancing short-term and long-term goals among a large number of stakeholders with different views and expertise. The third is the risky tendency, in practice, to develop systems that do not sustain long-term delivery of new features, caused by an unbalanced focus on short-term deliveries rather than on the quality of the system architecture. This phenomenon is referred to as Architectural Technical Debt, a financial theoretical framework that relates the implementation of suboptimal architectural solutions to taking on a debt. Even though such suboptimal solutions might bring benefits in the short term, the debt may carry an interest, which consists of a negative impact on the ability of the software company to deliver new features in the long term. If the interest becomes too costly, the software company suffers delays and development crises. It is therefore important to avoid accumulating Architectural Technical Debt with a high associated interest in the system. The solution proposed in this thesis is a comprehensive framework, CAFFEA, which includes the management of Architectural Technical Debt as a spanning activity (i.e., a practice shared by stakeholders belonging to different groups inside the organization). We have identified and evaluated the strategic information required to manage Architectural Technical Debt, and we have developed an organizational framework, including roles, teams, and practices, needed by the involved stakeholders. These solutions have been empirically developed and evaluated, and companies report initial benefits of applying the results in practice.
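    As an illustration of the debt metaphor described above, the following sketch models a single Architectural Technical Debt item with a principal (one-off refactoring cost) and an interest (recurring extra effort per release), and compares keeping the debt with repaying it early or late. The linear cost model and all numbers are assumptions for illustration and are not part of CAFFEA.

```python
# Illustrative model of the Architectural Technical Debt metaphor:
# principal = one-off cost to refactor, interest = recurring extra effort
# per release. Numbers and the linear model are assumptions, not CAFFEA.

from dataclasses import dataclass


@dataclass
class DebtItem:
    name: str
    principal: float             # effort (person-days) to refactor
    interest_per_release: float  # extra effort added to every future release


def cost_if_kept(item: DebtItem, releases: int) -> float:
    """Total extra effort if the debt is never repaid."""
    return item.interest_per_release * releases


def cost_if_repaid(item: DebtItem, repay_after: int, releases: int) -> float:
    """Pay interest until `repay_after`, then pay the principal once."""
    return item.interest_per_release * repay_after + item.principal


if __name__ == "__main__":
    debt = DebtItem("layer-violating dependency", principal=15.0,
                    interest_per_release=2.0)
    horizon = 20  # planned releases
    print("keep debt :", cost_if_kept(debt, horizon))        # 40.0
    print("repay now :", cost_if_repaid(debt, 0, horizon))   # 15.0
    print("repay late:", cost_if_repaid(debt, 10, horizon))  # 35.0
```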