10 research outputs found
Deep Packet Inspection and its Effects On Net Neutrality
Deep packet inspection (DPI) is becoming increasingly important as a means to classify and control Internet traffic based on content, applications, and users. Rather than just using packet header information, Internet Service Providers (ISPs) are using DPI for traffic management, routing, and security. But because DPI makes it possible to control traffic by content, a growing number of public policy makers and users fear ISPs may discriminately charge more for faster delivery of data, slow down applications, or even deny access. They cite such practices as endangering the principle of net neutrality: the premise that all data on the Internet should be treated equally. The existing literature on DPI and on net neutrality is sizeable, but little exists on the relationship between the two. This study examines the literature, develops a research methodology, and presents results from a study on the challenges of DPI with regard to privacy and net neutrality. The findings show that although most users are unaware of DPI technology, they feel strongly that it places their privacy at risk.
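The header-versus-payload distinction the abstract draws can be illustrated with a toy classifier. The port table and byte signatures below are invented for illustration and are far simpler than a production DPI engine:

```python
# Illustrative sketch (not from the study): header-only vs. payload-based
# traffic classification. Port mappings and signatures are invented.

HEADER_PORTS = {80: "http", 443: "https", 53: "dns"}    # header-only lookup
PAYLOAD_SIGNATURES = {                                   # DPI-style lookup
    b"BitTorrent protocol": "bittorrent",
    b"GET ": "http",
    b"\x16\x03": "tls",
}

def classify_by_header(dst_port):
    """Classic port-based classification: cheap, but easily evaded."""
    return HEADER_PORTS.get(dst_port, "unknown")

def classify_by_payload(payload):
    """DPI-style classification: inspects packet content, not just headers."""
    for signature, app in PAYLOAD_SIGNATURES.items():
        if signature in payload:
            return app
    return "unknown"

# A BitTorrent handshake sent over port 80 defeats the header classifier
# but not the payload classifier -- the capability that raises the
# net-neutrality concerns the study investigates.
packet = {"dst_port": 80, "payload": b"\x13BitTorrent protocol..."}
print(classify_by_header(packet["dst_port"]))   # misclassified as "http"
print(classify_by_payload(packet["payload"]))   # identified as "bittorrent"
```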
Profit-oriented resource allocation using online scheduling in flexible heterogeneous networks
In this paper, we discuss a generalized measurement-based adaptive scheduling framework for dynamic resource allocation in flexible heterogeneous networks, in order to ensure efficient service-level performance under inherently variable traffic conditions. We formulate our generalized optimization model based on the notion of a "profit center" with an arbitrary number of service classes, nonlinear revenue and cost functions, and general performance constraints. Subsequently, under the assumption of a linear pricing model and average queue delay requirements, we develop a fast, low-complexity algorithm for online dynamic resource allocation and examine its properties. Finally, the proposed scheme is validated through an extensive simulation study.
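As a rough, hypothetical illustration of the linear-pricing, delay-constrained setting (not the paper's algorithm), the sketch below assumes an M/M/1-style average delay of 1/(c - λ) per class and greedily assigns leftover capacity to the class with the highest per-unit revenue; all class names and numbers are invented:

```python
# Hypothetical sketch of profit-oriented allocation: each class gets the
# minimum capacity meeting its average-delay target, then the remaining
# budget goes to the most profitable class. Numbers are invented.

def min_capacity(arrival_rate, max_delay):
    """Smallest service rate meeting an average-delay target (M/M/1)."""
    return arrival_rate + 1.0 / max_delay

def allocate(classes, total_capacity):
    """Meet each class's delay constraint first, then give the leftover
    capacity to the class with the highest per-unit revenue."""
    alloc = {name: min_capacity(lam, d) for name, (lam, d, _) in classes.items()}
    used = sum(alloc.values())
    if used > total_capacity:
        raise ValueError("infeasible: cannot meet all delay constraints")
    best = max(classes, key=lambda n: classes[n][2])  # highest revenue rate
    alloc[best] += total_capacity - used
    return alloc

classes = {                # name: (arrival rate, delay bound, revenue/unit)
    "gold":   (5.0, 0.2, 3.0),
    "silver": (8.0, 0.5, 1.0),
}
print(allocate(classes, 25.0))
```

With linear revenue in the allocated rate, the greedy leftover step is optimal; the paper's framework covers the far harder nonlinear case.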
Efficient Network QoS Provisioning Based on per Node Traffic Shaping
This paper addresses the problem of providing per-connection end-to-end delay guarantees in a high-speed network. We assume that the network is connection oriented and enforces some admission control which ensures that the source traffic conforms to specified traffic characteristics. We concentrate on the class of rate-controlled service (RCS) disciplines, in which traffic from each connection is reshaped at every hop, and develop end-to-end delay bounds for the general case where different reshapers are used at each hop. In addition, we establish that these bounds can also be achieved when the shapers at each hop have the same "minimal" envelope. The main disadvantage of this class of service disciplines is that the end-to-end delay guarantees are obtained as the sum of the worst-case delays at each node, but we show that this problem can be alleviated through "proper" reshaping of the traffic. We illustrate the impact of this reshaping by demonstrating its use in designing RCS disciplines that outperform service disciplines based on generalized processor sharing (GPS). Furthermore, we show that we can restrict the space of "good" shapers to a family which is characterized by only one parameter. We also describe extensions to the service discipline that make it work-conserving and, as a result, reduce the average end-to-end delays.
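The "sum of per-node worst-case delays" structure can be sketched with the textbook token-bucket bound: for a flow constrained by burst σ and rate ρ, a rate-controlled hop serving it at rate R ≥ ρ bounds its queueing delay by σ/R. This is a classic (σ, ρ) calculation under assumed parameters, not the paper's tighter derivation:

```python
# Back-of-the-envelope sketch: a (sigma, rho) token-bucket flow through H
# rate-controlled hops, each serving it at rate R >= rho. The end-to-end
# bound is the sum of per-hop worst-case delays -- the additivity the
# paper sets out to tighten via reshaping. Parameters are invented.

def per_hop_delay_bound(sigma, service_rate):
    """Worst-case queueing delay at one hop: the full burst drains at R."""
    return sigma / service_rate

def end_to_end_bound(sigma, rho, hop_rates):
    assert all(r >= rho for r in hop_rates), "each hop must keep up with rho"
    return sum(per_hop_delay_bound(sigma, r) for r in hop_rates)

# 1500-byte burst, 1 Mbit/s sustained rate, three 10 Mbit/s hops
print(end_to_end_bound(1500 * 8, 1e6, [1e7, 1e7, 1e7]))  # seconds
```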
QoS-aware predictive workflow scheduling
This research lays the foundations of QoS-aware predictive workflow scheduling. Its novel contributions open up prospects for future research in handling complex big-workflow applications with high uncertainty and dynamism. The results from the proposed workflow scheduling algorithm show significant improvements in the performance and reliability of workflow applications.
Localised Routing Algorithms in Communication Networks with Quality of Service Constraints. Performance Evaluation and Enhancement of New Localised Routing Approaches to Provide Quality of Service for Computer and Communication Networks.
Quality of Service (QoS) is a concept gaining increasing attention in the Internet industry. Best-effort delivery is no longer acceptable for emerging multimedia applications that need high bandwidth, low loss, and streaming support; such applications require levels of service beyond those supported by best-effort networks. QoS routing is an essential part of any QoS architecture in communication networks: it aims to select, among the many possible choices, a path that has sufficient resources to accommodate the QoS requirements. Because it is aware of the network's QoS state, QoS routing can significantly improve network performance. Most QoS routing algorithms require the maintenance of global network state information to make routing decisions, and this global state must be periodically exchanged among routers, since the efficiency of a routing algorithm depends on the accuracy of its link-state information. However, most QoS routing algorithms suffer from poor scalability due to the high communication overhead and high computation effort associated with maintaining accurate link-state information and distributing global state information to each node in the network. The ultimate goal of this thesis is to contribute towards enhancing the scalability of QoS routing algorithms. Towards this goal, the thesis focuses on Localised QoS routing algorithms, which were proposed to overcome the problems of using global network state information. In this approach, the source node makes routing decisions based on state information collected locally at the source.
Localised QoS routing algorithms avoid the problems associated with global network state, such as high communication and processing overheads. In these algorithms, each source node maintains a predetermined set of candidate paths for each destination and, instead of maintaining a global network state, relies on locally collected flow statistics and flow blocking probabilities.
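A minimal sketch of the localised idea, with candidate paths and statistics invented: the source tracks, per candidate path, how many flow setups it attempted and how many were blocked, and routes new flows onto the path with the lowest locally observed blocking rate. No global link-state exchange is needed:

```python
# Toy sketch of localised QoS routing (paths and counters invented):
# routing decisions use only statistics the source collects itself.

class LocalisedRouter:
    def __init__(self, paths):
        # per-path counters; tried starts at 1 as a pseudo-count so the
        # blocking estimate is defined before any real attempts
        self.stats = {p: {"tried": 1, "blocked": 0} for p in paths}

    def blocking_prob(self, path):
        s = self.stats[path]
        return s["blocked"] / s["tried"]

    def choose_path(self):
        # favour the path with the lowest locally observed blocking rate
        return min(self.stats, key=self.blocking_prob)

    def record(self, path, blocked):
        self.stats[path]["tried"] += 1
        self.stats[path]["blocked"] += int(blocked)

router = LocalisedRouter(["A-B-D", "A-C-D"])
router.record("A-B-D", blocked=True)    # A-B-D just rejected a flow
print(router.choose_path())             # -> "A-C-D"
```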
Intelligent Web Services Architecture Evolution Via An Automated Learning-Based Refactoring Framework
Architecture degradation can have a fundamental impact on software quality and productivity, resulting in an inability to support new features, increasing technical debt, and leading to significant losses. While code-level refactoring is widely studied and well supported by tools, architecture-level refactorings, such as repackaging to group related features into one component or retrofitting files into patterns, remain expensive and risky. Several domains, such as Web services, heavily depend on complex architectures to design and implement interface-level operations, provided by companies such as FedEx, eBay, Google, Yahoo, and PayPal, to end-users. The objectives of this work are: (1) to advance our ability to support complex architecture refactoring by explicitly defining Web service anti-patterns at various levels of abstraction; (2) to enable complex refactorings by learning from user feedback and creating reusable/personalized refactoring strategies that augment intelligent designers' interaction and guide low-level refactoring automation with high-level abstractions; and (3) to enable intelligent architecture evolution by detecting, quantifying, prioritizing, fixing, and predicting design technical debt. We proposed various approaches and tools based on intelligent computational search techniques for (a) predicting and detecting multi-level Web service anti-patterns, (b) creating an interactive refactoring framework that integrates refactoring path recommendation, design-level human abstraction, and code-level refactoring automation with user feedback using interactive multi-objective search, and (c) automatically learning reusable and personalized refactoring strategies for Web services by abstracting recurring refactoring patterns from Web service releases.
Based on empirical validations performed on both large open-source and industrial services from multiple providers (eBay, Amazon, FedEx, and Yahoo), we found that the proposed approaches advance our understanding of the correlation and mutual impact between service anti-patterns at different levels, revealing when, where, and how architecture-level anti-patterns affect the quality of services. The interactive refactoring framework enables, based on several controlled experiments, human-based, domain-specific abstraction and high-level design to guide automated code-level atomic refactoring steps for service decompositions. The reusable refactoring strategies package recurring refactoring activities into automatable units, improving refactoring path recommendation and further reducing time-consuming and error-prone human intervention. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. https://deepblue.lib.umich.edu/bitstream/2027.42/142810/1/Wang Final Dissertation.pdf
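As a highly simplified, hypothetical illustration of search-based refactoring (the dissertation's actual framework is interactive and multi-objective), the toy below encodes a candidate solution as a sequence of refactoring operations and hill-climbs a single invented fitness score trading anti-patterns fixed against edit cost:

```python
# Toy search-based refactoring sketch. The operation names and the fitness
# model are invented; real approaches use multi-objective evolutionary
# search over detected anti-patterns, not this mono-objective stand-in.

import random

REFACTORINGS = ["move_operation", "split_interface", "merge_ports", "rename"]

def fitness(sequence):
    # invented scoring: distinct operations "fix" more anti-patterns,
    # while every applied operation carries an edit cost
    fixed = len(set(sequence))
    cost = 0.3 * len(sequence)
    return fixed - cost

def mutate(sequence):
    seq = list(sequence)
    seq[random.randrange(len(seq))] = random.choice(REFACTORINGS)
    return seq

def search(generations=200, length=4):
    random.seed(0)  # deterministic for the example
    best = [random.choice(REFACTORINGS) for _ in range(length)]
    for _ in range(generations):
        candidate = mutate(best)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best, fitness(best)

print(search())
```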
Configuration of service oriented architectures with semantic technologies based on non-functional requirements
This dissertation is focused on the application of semantic technologies for solving the
problem of optimal configuration of service-oriented architectures (SOA) based on
stakeholders’ non-functional requirements. The proposed solution is developed as an
extension of the AHP algorithm to allow for processing of different kinds of
requirements. To address the problem of optimal configuration of SOA, a heuristic
approach based on genetic algorithms has also been proposed and validated.
Existing approaches in this field have shown a low level of personalization: stakeholders are neither enabled to define sophisticated requirements that reflect their own expectations and attitudes, nor are they able to indicate hard requirements that have to be fully satisfied. Furthermore, existing approaches primarily addressed the problem of fulfilling functional requirements, with the selection of an appropriate configuration driven by the goal of decreasing the values of features that tend to grow (e.g., price and execution time) while simultaneously increasing the values of features that tend to decline (e.g., availability and reliability). When whole SOA families are considered, the problem of configuration based on both functional and non-functional requirements takes on special importance for research and further applications.
The proposed solution, titled OptConfSOAF, provides a framework for the specification and processing of different kinds of requirements (unconditional, conditional, and requirements about lexicographical order) over non-functional features, and further for the optimal configuration of SOA families. The proposed approach provides simultaneous fulfillment of functional requirements (i.e., requirements related to the system's functionalities) and non-functional requirements, where the latter can be defined with different levels of importance, for specific parts of a SOA-based system or for the system in its entirety.
The proposed solution is general and is not bound to web services, even though semantic technologies are often associated with that domain. Since the solution considers a service as a component (not necessarily a software component) with a specified functionality, it is applicable and easily adaptable to any specific application domain where the SOA paradigm may be applied.
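The AHP machinery the dissertation extends can be sketched with the standard geometric-mean approximation of the priority vector (independent of the dissertation's extension; the pairwise comparison values below are invented):

```python
# Minimal AHP sketch: derive priority weights for non-functional features
# from a pairwise comparison matrix via the geometric-mean approximation
# of the principal eigenvector. Comparison values are invented.

from math import prod

def ahp_weights(matrix):
    """matrix[i][j] says how much more important feature i is than j
    (reciprocal matrix on the usual 1-9 AHP scale)."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# price vs. reliability vs. availability, as one stakeholder might rank them
M = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 0.5, 1.0],
]
weights = ahp_weights(M)
print([round(w, 3) for w in weights])  # weights sum to 1, price dominates
```

The dissertation's contribution lies beyond this baseline: handling unconditional, conditional, and lexicographic-order requirements on top of such priorities.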
Design and implementation of WCET analyses : including a case study on multi-core processors with shared buses
For safety-critical real-time embedded systems, worst-case execution time (WCET) analysis, which determines an upper bound on the possible execution times of a program, is an important part of system verification. Multi-core processors share resources (e.g., buses and caches) between multiple processor cores and thus complicate WCET analysis, as the execution times of a program executed on one processor core depend significantly on the programs executed in parallel on the concurrent cores. We refer to this phenomenon as shared-resource interference. This thesis proposes a novel way of modeling shared-resource interference during WCET analysis. It enables an efficient analysis, as it considers only one processor core at a time, and it is sound for hardware platforms exhibiting timing anomalies. Moreover, this thesis demonstrates how to realize a timing-compositional verification on top of the proposed modeling scheme. In this way, the thesis closes the gap between modern hardware platforms, which exhibit timing anomalies, and existing schedulability analyses, which rely on timing compositionality. In addition, this thesis proposes a novel method for calculating an upper bound on the amount of interference that a given processor core can generate in any time interval of at most a given length. Our experiments demonstrate that the novel method is more precise than existing methods. Funded by the Deutsche Forschungsgemeinschaft (DFG) as part of the Transregional Collaborative Research Centre SFB/TR 14 (AVACS).
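For intuition only, the classic additive (timing-compositional) bound for a round-robin shared bus is sketched below with invented numbers: each of a task's bus accesses can be delayed by at most one access slot from every other core. Note that the thesis's point is precisely that such simple additive treatments require care on hardware exhibiting timing anomalies:

```python
# Classic timing-compositional back-of-the-envelope bound (not the thesis's
# method): WCET in isolation plus worst-case shared-bus interference under
# round-robin arbitration. All parameters are invented.

def wcet_bound(wcet_isolation, bus_accesses, num_cores, slot_cycles):
    """Each bus access waits for at most one slot per concurrent core."""
    interference = bus_accesses * (num_cores - 1) * slot_cycles
    return wcet_isolation + interference

# 10,000 cycles alone, 200 bus accesses, 4 cores, 8-cycle bus slots
print(wcet_bound(10_000, 200, 4, 8))  # -> 14800 cycles
```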