325 research outputs found

    Opportunistic Third-Party Backhaul for Cellular Wireless Networks

    With high capacity air interfaces and large numbers of small cells, backhaul -- the wired connectivity to base stations -- is increasingly becoming the cost driver in cellular wireless networks. One reason for the high cost of backhaul is that capacity is often purchased on leased lines with guaranteed rates provisioned to peak loads. In this paper, we present an alternate opportunistic backhaul model where third parties provide base stations and backhaul connections and lease out excess capacity in their networks to the cellular provider when available, presumably at significantly lower cost than guaranteed connections. We describe a scalable architecture for such deployments using open access femtocells, which are small plug-and-play base stations that operate in the carrier's spectrum but can connect directly into the third-party provider's wired network. Within the proposed architecture, we present a general user association optimization algorithm that enables the cellular provider to dynamically determine which mobiles should be assigned to the third-party femtocells based on traffic demands, interference and channel conditions, and third-party access pricing. Although the optimization is non-convex, the algorithm uses a computationally efficient method for finding approximate solutions via dual decomposition. Simulations of the deployment model based on actual base station locations show that large capacity gains are achievable if adoption of third-party, open access femtocells reaches even a small fraction of the current market penetration of WiFi access points. Comment: 9 pages, 6 figures.
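
    As a concrete, minimal sketch of the dual-decomposition idea (a toy in Python, not the paper's actual formulation: the rates, access prices and backhaul capacities below are invented), per-station capacity constraints are relaxed with dual prices, each mobile independently picks the station that maximizes its priced utility, and a subgradient step updates the prices. Only the dual prices couple the per-mobile subproblems, which is what makes the method computationally cheap.

        # Toy dual decomposition for user association (illustrative only).
        # Dual variables lam[j] price congestion on station j's backhaul capacity;
        # each mobile then picks the station maximizing utility minus priced load.
        import numpy as np

        rng = np.random.default_rng(0)
        n_mobiles, n_stations = 20, 5
        rate = rng.uniform(1.0, 5.0, (n_mobiles, n_stations))  # assumed achievable rate of mobile i on station j
        cost = rng.uniform(0.0, 0.5, n_stations)                # assumed third-party price per unit rate
        cap = np.full(n_stations, 12.0)                         # assumed backhaul capacity per station

        lam = np.zeros(n_stations)                              # dual prices on the capacity constraints
        for t in range(200):
            # Mobile-level subproblem: decoupled choice of best station.
            score = rate - (cost + lam) * rate
            assign = score.argmax(axis=1)
            load = np.array([rate[assign == j, j].sum() for j in range(n_stations)])
            # Subgradient step: raise prices where capacity is exceeded.
            step = 0.1 / np.sqrt(t + 1)
            lam = np.maximum(0.0, lam + step * (load - cap))

        print("assignment:", assign)
        print("per-station load:", np.round(load, 1), "vs capacity", cap)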

    Unwarranted Fears Mask the Benefits of Network Diversity: An Argument against Mandating Network Neutrality

    The rapid development of the Internet has necessitated an update to Federal telecommunications laws. Recent Congressional efforts to enact such an update, however, have spawned a fiery debate over a somewhat nebulous concept: network neutrality. The debate concerns the way that Internet access providers handle the data traffic being sent over their networks. These providers would like the option to offer some of their customers, web site hosting companies and similar entities, additional services that would essentially result in these customers’ content loading faster, more reliably, or more securely than others not receiving such priority treatment. Yet, this proposed “diversity” of content treatment has worried many who fear that the egalitarian nature of the Internet, under which substantial innovation has occurred, would be disturbed by the imposition of inherent traffic preferences. These individuals propose including a provision in new telecommunications legislation that would mandate a “neutral” Internet where preferences for data are prohibited from being implemented. In this Comment, Elvis Stumbergs sheds some light on details behind the network neutrality debate. Often glossed-over details of Internet architecture are described to illustrate the consequences of a diverse and neutral Internet, and the various arguments for and against network neutrality are summarized. Stumbergs then devotes the Comment primarily to examining present and potential competition in the provision of Internet access, along with regulatory, antitrust, and legislative options available to ensure the preservation of a vigorous Internet access marketplace. Stumbergs concludes that network neutrality proponents’ fears are largely unwarranted. Moreover, imposing network neutrality legislation could ironically hinder the innovation that network neutrality advocates seemingly seek to protect

    Problem-formulation in a South African organization. Executive summary

    Complex Problem Solving is an area of cognitive science that has received considerable attention, but theories in the field have not progressed accordingly. In general, research on problem solving has focussed on identifying preferable methods rather than on what happens when human beings confront problems in an organizational context (Quesada, Kintsch and Gomez, 2005). Existing literature recognises that most organizational problems are ill-defined. Some problems can become well-defined, whereas others remain ill-structured. For problems that can become well-defined, failure to pay attention to the area of problem definition has the potential to jeopardise the effectiveness of problem-formulation and thus the entire problem solving activity. Problem defining, a fundamental part of the problem-formulation process, is seen as the best defence against a Type III Error (trying to solve the wrong problem). Existing literature addresses possible processes for problem-formulation and recognises the importance of applying problem domain knowledge within them. However, inadequate attention is given to the circumstances in which, within an organization, the participants do not know enough about the problem domain and do not recognise the importance of applying adequate problem domain knowledge or experience to the problem-formulation process. A case study is conducted into exactly these circumstances as they occurred and were successfully addressed within Eskom Holdings Ltd (Eskom), the national electricity utility in South Africa. The case study is a fundamental part of this research project, which explores the gap in the existing body of knowledge related to the circumstances described above, specifically for problems that can become well-defined, and provides the basis for the innovation developed herein that addresses that gap

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management. Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
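
    To make the scheduling and resource-allocation challenge tangible, here is a deliberately simplified Python sketch (not one of the surveyed systems; the per-VM throughput and VM limits are assumed) in which an elastic BPMS scheduler scales the number of leased VMs with the backlog of queued process steps and dispatches what the current capacity allows each round.

        # Toy elastic-process scheduler: scale leased VMs with the step backlog.
        from collections import deque
        from dataclasses import dataclass, field

        @dataclass
        class ElasticScheduler:
            steps_per_vm: int = 10      # assumed steps a VM can execute per round
            min_vms: int = 1
            max_vms: int = 8
            vms: int = 1
            queue: deque = field(default_factory=deque)

            def submit(self, step_id: str) -> None:
                self.queue.append(step_id)

            def schedule_round(self) -> list:
                # Scale out/in so capacity tracks the backlog (ceiling division).
                needed = max(self.min_vms, -(-len(self.queue) // self.steps_per_vm))
                self.vms = min(self.max_vms, needed)
                budget = self.vms * self.steps_per_vm
                return [self.queue.popleft() for _ in range(min(budget, len(self.queue)))]

        sched = ElasticScheduler()
        for i in range(35):
            sched.submit(f"step-{i}")
        print(len(sched.schedule_round()), "steps dispatched on", sched.vms, "VMs")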

    End-to-end network slices: from network function profile extraction to granular SLAs

    Advisor: Christian Rodolfo Esteve Rothenberg. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: In the last ten years, network softwarisation processes have been continuously diversified and gradually incorporated into production, mainly through the paradigms of Software Defined Networks (e.g., programmable network flow rules) and Network Functions Virtualization (e.g., orchestration of virtualized network functions). Based on this process, the concept of network slice emerges as a way of defining end-to-end programmable network paths, possibly over shared network infrastructures, with strict performance requirements associated to a particular business case. This thesis investigates the hypothesis that the disaggregation of network function performance metrics impacts and composes a network slice footprint, incurring diverse slicing feature options, which when realized should have their Service Level Agreement (SLA) life cycle management transparently implemented in correspondence to the end-to-end communication business case they fulfill. The validation of this assertion takes place in three aspects: the degrees of freedom by which the performance of virtualized network functions can be expressed; the methods of rationalizing the footprint of network slices; and transparent ways to track and manage network assets among multiple administrative domains. In order to achieve these goals, this thesis makes a series of contributions, among them: the construction of a platform for automating methodologies for performance testing of virtualized network functions; the elaboration of a methodology for the analysis of footprint features of network slices based on a machine learning classifier algorithm and a multi-criteria analysis algorithm; and the construction of a prototype using blockchain to carry out smart contracts involving service level agreements between administrative domains. Through experiments and analysis we suggest that: performance metrics of virtualized network functions depend on resource allocation, internal configurations and test traffic stimulus; network slices can have their resource allocations consistently analyzed and classified by different criteria; and agreements between administrative domains can be performed transparently and at various levels of granularity through blockchain smart contracts. At the end of this thesis, through a wide-ranging discussion we answer the research questions associated with the investigated hypothesis, so that its evaluation is performed against a broad view of the contributions and future work of this thesis. Doctorate. Computer Engineering. Doctor in Electrical Engineering. FUNCAM
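
    The multi-criteria side of the methodology can be illustrated with a small self-contained Python sketch (not the thesis' classifier or its actual criteria; the candidate allocations, profiled metrics and weights below are invented) that ranks candidate slice resource allocations by a normalized weighted sum over throughput, latency and cost.

        # Toy multi-criteria ranking of candidate slice resource allocations.
        candidates = {
            "2 vCPU / 2 GB": {"throughput_mbps": 400, "latency_ms": 12.0, "cost": 1.0},
            "4 vCPU / 4 GB": {"throughput_mbps": 750, "latency_ms": 7.5, "cost": 2.1},
            "8 vCPU / 8 GB": {"throughput_mbps": 900, "latency_ms": 6.8, "cost": 4.3},
        }
        weights = {"throughput_mbps": 0.5, "latency_ms": 0.3, "cost": 0.2}  # assumed priorities
        higher_is_better = {"throughput_mbps": True, "latency_ms": False, "cost": False}

        def score(metrics):
            # Min-max normalize each criterion across candidates, then weight it.
            total = 0.0
            for crit, w in weights.items():
                values = [c[crit] for c in candidates.values()]
                lo, hi = min(values), max(values)
                norm = (metrics[crit] - lo) / (hi - lo) if hi > lo else 1.0
                total += w * (norm if higher_is_better[crit] else 1.0 - norm)
            return total

        for name, metrics in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
            print(f"{name}: {score(metrics):.2f}")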

    The Next Frontier for Network Neutrality

    The challenge for policymakers evaluating calls to institute some form of network neutrality regulation is to bring reasoned analysis to bear on a topic that continues to generate more heat than light and that many telecommunications companies appear to believe will just fade away. Over the fall of 2007, the hopes of broadband providers that broadband networks could escape any form of regulatory oversight were dealt a blow when it was revealed that Comcast had degraded the experience of some users of BitTorrent (a peer-to-peer application) and engaged in an undisclosed form of network management. This incident, as well as the polarized debate that followed it, underscores the need to reframe the policy and academic debate over broadband regulation and begin evaluating a blueprint for a next generation regulatory strategy that will focus on promoting innovation in the network itself and by applications developers. This Article seeks to do just that. This Article begins by explaining how the debate over network neutrality has all too often presented polarized perspectives and slogans where more nuanced analysis is called for. As Internet pioneer David Clark commented on the network neutrality debate, "[m]ost of what we have seen so far (in my opinion) either greatly overreaches, or is so vague as to be nothing but a lawyer's employment act." As the Article explains, any effort by Congress to develop a well-specified response to network neutrality concerns would be premature, as the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) should first be afforded an opportunity to develop an effective consumer protection and competition policy strategy. As the Article explains, the FTC has an important opportunity - and indeed a responsibility - to develop and implement a consumer protection strategy in this area, calling for effective disclosure of broadband terms of service and the enforcement of the commitments made in those policies. Moreover, as to the relevant competition policy issues, the Article calls on either the FTC or the FCC (or both) to develop and implement an effective institutional strategy to guard against anticompetitive refusals to provide access to quality of service assurances. In short, the appropriate response to network neutrality concerns is not to ban such quality of service assurances altogether - as that would stifle the Internet's development - but to ensure that the offering of such assurances is not used to injure competition and harm consumers

    Iris: Deep Reinforcement Learning Driven Shared Spectrum Access Architecture for Indoor Neutral-Host Small Cells

    We consider indoor mobile access, a vital use case for current and future mobile networks. For this key use case, we outline a vision that combines a neutral-host based shared small-cell infrastructure with a common pool of spectrum for dynamic sharing as a way forward to proliferate indoor small-cell deployments and open up the mobile operator ecosystem. Towards this vision, we focus on the challenges pertaining to managing access to shared spectrum (e.g., 3.5GHz US CBRS spectrum). We propose Iris, a practical shared spectrum access architecture for indoor neutral-host small-cells. At the core of Iris is a deep reinforcement learning based dynamic pricing mechanism that efficiently mediates access to shared spectrum for diverse operators in a way that provides incentives for operators and the neutral-host alike. We then present the Iris system architecture that embeds this dynamic pricing mechanism alongside cloud-RAN and RAN slicing design principles in a practical neutral-host design tailored for the indoor small-cell environment. Using a prototype implementation of the Iris system, we present extensive experimental evaluation results that not only offer insight into the Iris dynamic pricing process and its superiority over alternative approaches but also demonstrate its deployment feasibility
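
    To convey the flavour of the dynamic-pricing loop without reproducing Iris itself (which uses deep reinforcement learning), the toy Python sketch below replaces it with an epsilon-greedy bandit over a few discrete price levels and an invented operator demand model, learning which posted spectrum price maximizes the neutral host's revenue.

        # Toy pricing loop: epsilon-greedy choice of a posted spectrum price.
        import random

        prices = [1.0, 2.0, 3.0, 4.0]          # assumed discrete price levels
        q = {p: 0.0 for p in prices}            # running revenue estimate per price
        counts = {p: 0 for p in prices}
        eps = 0.1

        def operator_demand(price):
            # Assumed demand model: operators request less spectrum as price rises.
            return max(0.0, 5.0 - 1.2 * price + random.gauss(0, 0.3))

        random.seed(1)
        for _ in range(2000):
            p = random.choice(prices) if random.random() < eps else max(q, key=q.get)
            revenue = p * operator_demand(p)
            counts[p] += 1
            q[p] += (revenue - q[p]) / counts[p]  # incremental mean of observed revenue

        print({k: round(v, 2) for k, v in q.items()}, "-> best price:", max(q, key=q.get))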

    Accelerating the Adoption of Cloud Technology by SMEs in Nigeria

    The aims of this study were to investigate the reasons for the slow adoption of Cloud Computing by SME operators in Nigeria and to develop a suitable information model to guide would-be users in making an informed decision regarding cloud adoption. A structured interview was conducted with a select number of SME operators and industry associates within the researcher's domain, and a reasonable number of valid responses were obtained. The Technology Acceptance Model (TAM) was adapted as the research framework to qualitatively examine the conditions that affect the adoption of Cloud computing into microfinance business operations, within which a suitable model for improving the adoption of Cloud computing was recommended. The analysis revealed that SMEs in Nigeria, with particular reference to the microfinance subsector in Akwa Ibom State, are yet to fully embrace Cloud technology. It was discovered that most of the SMEs studied have some level of reservation about cloud computing, arising from a lack of appropriate education and enlightenment about the cloud's economic offerings and potential. From the outcome of the research, the researcher identified that most people's concerns result from a lack of knowledge about cloud computing, and concluded that appropriate enlightenment by industry stakeholders, cloud service providers, cloud enthusiasts and even the government on the risks and the overwhelming economic incentives of cloud computing, as well as the provision of monitored free trial services, will encourage the adoption of cloud technology by SMEs. Index Terms - Cloud Adoption, Cloud Computing, Cloud End-user, Cloud Service Providers, Data Security, Microfinance, Nigeria, SMEs, Vendors
