786 research outputs found
Predicting Internet Bandwidth in Educational Institutions using Lagrange's Interpolation
This paper addresses the problem of Internet bandwidth optimization and prediction in institutions of higher learning in Nigeria. The operation of the link-load balancer, which provides an efficient, cost-effective and easy-to-use solution to maximize utilization and availability of Internet access, is extensively discussed. This enables enterprises to lease two or three ISP links connecting the internal network to the Internet. The paper also proposes the application of Lagrange's method of interpolation for the prediction of Internet bandwidth in the institutions. The analysis provides a unique graphical solution of effective actual bandwidth (Mbps) and the corresponding acceptable number of Internet users ('000) in the institutions. The prediction allows us to view the actual Internet bandwidth and the acceptable number of Internet users as the population of users increases. Keywords: Internet Bandwidth, Optimization, Link-Load Balancer, Prediction, Maximized Utilization, Availability of Internet access
Internet Data Bandwidth Optimization and Prediction in Higher Learning Institutions Using Lagrange’s Interpolation: A Case of Lagos State University of Science and Technology
This research work studies the performance of the Internet services of institutions of higher learning in Nigeria. Data was collated from Lagos State University of Science and Technology (LASUSTECH) as the case study of this research work. The problem of Internet bandwidth optimization in institutions of higher learning in Nigeria was extensively addressed in this paper. The operation of the link-load balancer, which provides an efficient, cost-effective and easy-to-use solution to maximize utilization and availability of Internet access, is discussed. In this research work, Lagrange's method of interpolation was used to predict effective Internet data bandwidth for a significantly increasing number of Internet users. The linear Lagrange's interpolation model (LILAGRINT model) was proposed for LASUSTECH. The predictions allow us to view the effective Internet data bandwidth with respect to the corresponding acceptable number of Internet users as the number of users increases. The integrity of the model was examined, verified and validated at the ICT department of the institution. The LILAGRINT model was integrated into the management of ICT and tested. The result showed that the proposed LILAGRINT model proved to be highly effective and innovative in the area of Internet data bandwidth predictability. Keywords: Internet Data Bandwidth, Optimization, Link-load balancer, Lagrange's interpolation, Predictions, Management of ICT DOI: 10.7176/CEIS/10-1-04 Publication date: September 30th 202
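The two abstracts above predict effective bandwidth from the number of users by fitting a Lagrange interpolating polynomial through measured points. A minimal sketch of the technique follows; the sample user counts and bandwidth values are hypothetical illustrations, not the LASUSTECH measurements, and `lagrange_predict` is a name introduced here for illustration.

```python
def lagrange_predict(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x.

    xs: known numbers of Internet users (in thousands)
    ys: measured effective bandwidth (Mbps) at those user counts
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                # Basis polynomial L_i(x): 1 at xi, 0 at every other xj.
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical sample points (user counts in '000 vs. effective Mbps).
users = [2.0, 4.0, 6.0, 8.0]
bandwidth = [150.0, 120.0, 95.0, 80.0]
predicted = lagrange_predict(users, bandwidth, 5.0)
```

By construction the polynomial passes exactly through every measured point, so the sketch can be sanity-checked by evaluating it at a known user count.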
Cost-aware multi data-center bulk transfers in the cloud from a customer-side perspective
Many cloud applications (e.g., data backup and replication, video distribution) require dissemination of large volumes of data from a source data-center to multiple geographically distributed data-centers. Given the high costs of wide-area bandwidth, the overall cost of inter-data-center communication is a major concern in such scenarios. While previous works have focused on optimizing the costs of bulk transfer, most of them use the charging models of Internet service providers, typically based on the 95th percentile of bandwidth consumption. However, public Cloud Service Providers (CSP) follow very different models to charge their customers. First, the cost for transmission is flat and depends on the location of the source and receiver data-centers. Second, CSPs offer discounts once customer transfers exceed certain volume thresholds per data-center. We present a systematic framework, CloudMPcast, that exploits these two aspects of cloud pricing schemes. CloudMPcast constructs overlay distribution trees for bulk-data transfer that both optimize dollar costs of distribution and ensure end-to-end data transfer times are not affected. CloudMPcast monitors TCP throughputs between data-centers and only proposes alternative trees that respect original transfer times. After an extensive measurement study, the cost savings range from 10 to 60 percent for both Azure and EC2 infrastructures, which potentially translates to millions of dollars a year assuming realistic demands.
This material is based upon work supported in part by the National Science Foundation (NSF) under Award No. 1162333. J. L. García-Dorado is thankful for the financial support of the José Castillejo Program (CAS12/00057).
Bursting the broadband bubble
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Broadband has revolutionised the way the Internet is used and has become the critical enabling infrastructure of our modern, knowledge-based economy. Its widespread introduction has not only greatly enhanced the speed at which information online can be accessed, but also the range and sophistication of the content available. It is still penetrating the telecommunication market and is seen by some as the most significant evolutionary step since the emergence of the Internet. However, in the rush to achieve market share, there is a risk that insufficient attention may be paid to quality issues, the central theme of this research.
The research addresses the issues of broadband quality with a stated objective of assessing broadband quality by means of an integrated framework that encompasses factors beyond the strict technical characteristics of broadband networks. Indeed, the concept of quality is a multi-faceted one, for which various perspectives can be distinguished. In this work, broadband quality as perceived by users, ISPs and Government in the United Kingdom (UK) is examined, and a survey report is given and analysed. The aim of this doctoral research was to provide a much-needed empirical broadband quality framework that would guide service providers as well as the UK government in the provision of quality broadband to consumers. It will also stand as a benchmark for countries wanting to provide quality broadband to their citizens.
A survey research approach was employed to achieve the overall aim and objectives of this research. This was conducted using the responses of 133 participants located in various boroughs in the UK. The results of the survey show that quality, though desired by many, has been short-changed by the desire to have access to the Internet via broadband at the lowest cost possible. However, this has not encouraged some consumers to switch to broadband from dial-up service despite the continuous low prices being offered by service providers. Furthermore, the results also indicated that focusing on broadband quality will improve and promote investment in broadband capacity and decrease the uncertainty in consumer demand for applications such as multimedia content delivery, enhanced electronic commerce and telecommuting that exploit broadband access.
Resource management for virtualized networks
Network Virtualization has emerged as a promising approach for efficiently enhancing resource management technologies. In this work, the goal is to study how to automate bandwidth resource management while deploying a virtual partitioning scheme for network bandwidth resources. Many works have addressed resource management in Virtual Networks; however, each has limitations, among them resource overwhelming, poor bandwidth utilization, low profits, exaggeration, and collusion. Indeed, the lack of adequate bandwidth allocation schemes encourages resource overwhelming, where one customer may overwhelm the resources that are supposed to serve others. Static resource partitioning can resist overwhelming, but at the same time it may result in poor bandwidth utilization, which means lower profit rates for the Internet Service Providers (ISPs). Deploying the technology of autonomic management, however, can enhance resource utilization and maximize customers' satisfaction rates. It also provides customers with a kind of privilege that should be somehow controlled, as customers, always eager to maximize their payoffs, can use such a privilege to cheat. Hence, cheating actions like exaggeration and collusion can be expected. Solving the aforementioned limitations is addressed in this work.
In the first part, the work deals with overcoming the problems of low profits, poor utilization, and high blocking ratios of the traditional First Ask First Allocate (FAFA) algorithm. The proposed solution is based on an Autonomic Resource Management Mechanism (ARMM). This solution deploys a smarter allocation algorithm based on an auction mechanism. At this level, to reduce the tendency toward exaggeration, the Vickrey-Clarke-Groves (VCG) mechanism is proposed to provide a threat model that penalizes exaggerating customers based on the inconvenience they cause to others in the system. To resist collusion, the state-dependent shadow price is calculated, based on Markov decision theory, to represent a selling-price threshold for the bandwidth units at a given state.
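The VCG idea described above charges each winner according to the inconvenience it imposes on others. For the simplest case of k identical bandwidth units and single-unit demand, VCG reduces to charging every winner the highest losing bid. The sketch below illustrates only that special case, under stated assumptions; `vcg_uniform_units` is a name invented here, not the thesis's ARMM implementation.

```python
def vcg_uniform_units(bids, k):
    """Allocate k identical bandwidth units via VCG.

    With k identical units and single-unit demand, each winner's VCG
    payment equals the (k+1)-th highest bid: the value the marginal
    excluded customer would have obtained.
    bids: {customer: bid}; returns (winners, payment per winner).
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:k]
    payment = bids[ranked[k]] if len(ranked) > k else 0.0
    return winners, payment

# Four customers bid for two bandwidth units.
winners, pay = vcg_uniform_units({"a": 9.0, "b": 7.0, "c": 4.0, "d": 2.0}, k=2)
```

Because the payment does not depend on a winner's own bid, exaggerating a bid cannot lower one's price, which is the incentive property the threat model relies on.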
Part two of the work solves an expanded version of the bandwidth allocation problem, but through a different methodology. In this part, the bandwidth allocation problem is expanded to a bandwidth partitioning problem. Such expansion allows dividing the link's bandwidth resources based on the provided Quality of Service (QoS) classes, which provides better bandwidth utilization. In order to find the optimal management metrics, the problem is solved through Linear Programming (LP). A dynamic bandwidth partitioning scheme is also proposed to overcome the problems of static partitioning schemes, such as poor bandwidth utilization resulting from under-utilized partitions. This dynamic partitioning model is deployed in a periodic manner. Periodic partitioning provides a new way to reduce the incentive for exaggeration, compared to the threat model, and eliminates the need for further computational overhead.
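To make the periodic repartitioning idea concrete, here is a minimal sketch, assuming a single link whose capacity is re-divided among QoS classes each period in proportion to observed demand, with a small guaranteed floor per class. This is an illustrative heuristic, not the LP formulation used in the thesis.

```python
def repartition(capacity, demand, floor=0.05):
    """Periodically re-divide link capacity among QoS classes.

    capacity: total link bandwidth (e.g. Mbps)
    demand:   observed demand per QoS class over the last period
    floor:    minimum fraction guaranteed to every class, so an idle
              class is not starved when its traffic returns
    Returns the new partition sizes, one per class.
    """
    n = len(demand)
    reserved = capacity * floor * n          # guaranteed floors
    free = capacity - reserved               # remainder split by demand
    total = sum(demand) or 1.0               # avoid dividing by zero on an idle link
    return [capacity * floor + free * d / total for d in demand]

# Example: a 1000 Mbps link, three QoS classes with skewed demand.
parts = repartition(1000.0, [600.0, 250.0, 50.0])
```

Re-running this every period lets heavily loaded classes absorb capacity from under-utilized partitions, which is exactly the failure mode of static partitioning that the abstract identifies.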
The third part of this work proposes a decentralized management scheme to solve the aforementioned problems in the context of networks managed by Virtual Network Operators (VNOs). Such decentralization allows deploying a higher level of autonomic management, through which management responsibilities are distributed over the network nodes, each responsible for managing its outgoing links. Compared to centralized schemes, such distribution provides higher reliability and easier bandwidth dimensioning. Moreover, it creates a form of two-sided competition that allows a double-auction environment among the network players, both customers and node controllers. Such a competitive environment provides a new way to reduce exaggeration besides the periodic and threat models mentioned before. More importantly, it can deliver better utilization rates, lower blocking, and consequently higher profits.
Finally, numerical experiments and empirical results are presented to support the proposed solutions, and to provide a comparison with other works from the literature.
Revenge of the Bellheads: How the Telecommunications Mindset Will Reshape the Internet
Recent double-digit billion-dollar mergers of telecommunications firms consolidate both market share and market leadership by incumbent operators such as Verizon. These companies seek to exploit technological and market convergence by offering a triple-play package of wired and wireless telephone service, video and Internet access. They also need to develop new profit centers to compensate for declining revenues and market shares in traditional services such as wireline telephony.
While incumbent telecommunications operators have pursued new market opportunities, these ventures have not abandoned core management philosophies, operating assumptions and business strategies. Longstanding strategies for recovering investments, built on a telecommunications template, contrast greatly with the means by which information processing and content providers achieve profitability. Internet ventures have come up with many different business models, including ones that offer free, subsidized or deliberately underpriced access as well as regularly increasing value propositions to consumers, e.g., more options for the same price. Incumbent telecommunications firms rarely deviate from a rigid cost recovery structure that identifies cost causers.
Internet and telecommunications business models rarely jibe, even though convergence and business transactions put incumbent telecommunications firms in market leadership positions. Having such dominant market share now makes it likely that incumbent telecommunications firms will attempt to imprint their business models and their mindsets on Internet markets. Recently, senior managers of several incumbent carriers have expressed displeasure with the apparent inability of their companies to recover the sizeable investment in broadband Internet access. With an eye toward recouping these investments, the companies have announced plans to replace, or offer alternatives to, unmetered All You Can Eat Internet access and to oppose any initiative that restrains their pricing and operational flexibility.
The incumbent telecommunications companies characterize their new Internet pricing plans as offering "greater choice" to consumers. Different pricing points based on throughput caps make sense to a "Bellhead" corporate officer who thinks he or she can identify cost causers and capture rents that otherwise would accrue to content providers. However, the Internet seamlessly blends content and conduit, making it difficult to identify the cost causer.
This article will examine Bellhead business models incorporating metering and other traditional cost recovery strategies with an eye toward determining what constitutes reasonable price discrimination and what represents an unfair trade practice or an anticompetitive strategy. The article will consider whether and how Bellhead management strategies will jeopardize the serendipity and positive networking externalities that have accrued when users can freely "surf the web" and content providers can bundle user-sought content with advertising.
The article also will examine the clash of Bellhead and Nethead cultures with an eye toward identifying the stakes involved when Internet access pricing and interconnection primarily follow a telecommunications infrastructure cost recovery scheme in lieu of the different commercial relationships favored by most Internet ventures. The article concludes that most Bellhead cost recovery models are lawful even though they will reduce, for most consumers, the real or perceived value proposition offered by an unmetered monthly Internet access subscription.
Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments
In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices poses a limit on the amount of local computing resources. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobiles. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud.
In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation Networks are introducing mechanisms that enable constant and reliable connectivity through seamless handovers between networks and provide the foundation for a tighter coupling between Cloud resources and mobiles.
This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. The constant connectivity to and reliance of mobile devices on Cloud resources have the potential of creating large traffic flows between networks. Furthermore, depending on the type of application generating the traffic flow, very strict QoS may be required from the networks as suboptimal performance may severely degrade an application’s functionality.
In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising the QoS and managing network traffic.
I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate the functionality and results gathered from experimentation.
Finally, I present a new approach to modelling network performance by taking into account user mobility. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have for the duration of the user's movement. The proposed model is evaluated using an analytical approach.
CASPR: Judiciously Using the Cloud for Wide-Area Packet Recovery
We revisit a classic networking problem: how to recover from lost packets in the best-effort Internet. We propose CASPR, a system that judiciously leverages the cloud to recover from lost or delayed packets. CASPR supplements and protects best-effort connections by sending a small number of coded packets along the highly reliable but expensive cloud paths. When receivers detect packet loss, they recover packets with the help of the nearby data center, not the sender, thus providing quick and reliable packet recovery for latency-sensitive applications. Using a prototype implementation and its deployment on the public cloud and the PlanetLab testbed, we quantify the benefits of CASPR in providing fast, cost-effective packet recovery. Using controlled experiments, we also explore how these benefits translate into improvements up and down the network stack.
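The coded-packet idea above can be illustrated with the simplest possible erasure code: one XOR parity packet that lets a receiver rebuild any single lost packet in a group. This is a toy stand-in for illustration only; CASPR's actual coding scheme is not specified in the abstract.

```python
from functools import reduce

def xor_parity(packets):
    """Build one parity packet as the byte-wise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """Recover the single missing packet of a group.

    received: the packets that arrived (all but one of the group)
    parity:   the coded packet sent over the reliable (cloud) path
    XORing the survivors with the parity cancels everything except
    the lost packet.
    """
    return xor_parity(list(received) + [parity])

# Three data packets; suppose the second is lost on the best-effort path.
pkts = [b"alpha", b"bravo", b"gamma"]
parity = xor_parity(pkts)
lost = recover([pkts[0], pkts[2]], parity)
```

The key property, mirroring the abstract, is that recovery needs only the survivors plus the coded packet, so the receiver never has to wait a round trip back to the original sender.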
cISP: A Speed-of-Light Internet Service Provider
Low latency is a requirement for a variety of interactive network applications. The Internet, however, is not optimized for latency. We thus explore the design of cost-effective wide-area networks that move data over paths very close to great-circle paths, at speeds very close to the speed of light in vacuum. Our cISP design augments the Internet's fiber with free-space wireless connectivity. cISP addresses the fundamental challenge of simultaneously providing low latency and scalable bandwidth, while accounting for numerous practical factors ranging from transmission tower availability to packet queuing. We show that instantiations of cISP across the contiguous United States and Europe would achieve mean latencies within 5% of that achievable using great-circle paths at the speed of light, over medium and long distances. Further, we estimate that the economic value from such networks would substantially exceed their expense.
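The great-circle, speed-of-light baseline that cISP is measured against is easy to compute: haversine distance divided by c. A minimal sketch, using approximate coordinates for a hypothetical New York to Chicago pair and the common rule of thumb that light in fiber travels at roughly two-thirds of c:

```python
import math

C_VACUUM = 299_792.458     # km/s, speed of light in vacuum
C_FIBER = C_VACUUM * 2 / 3 # km/s, approximate propagation speed in fiber

def great_circle_km(lat1, lon1, lat2, lon2, radius=6371.0):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

def c_latency_ms(km):
    """One-way 'c-latency': propagation time along the great circle in vacuum."""
    return km / C_VACUUM * 1000

# Approximate coordinates for New York and Chicago.
d = great_circle_km(40.71, -74.01, 41.88, -87.63)
vacuum_ms = c_latency_ms(d)
fiber_ms = d / C_FIBER * 1000  # same geometric path, fiber propagation speed
```

Real fiber routes are also longer than the great circle, so deployed latencies exceed `fiber_ms`; the abstract's claim is that cISP comes within 5% of the `vacuum_ms` baseline over medium and long distances.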