250 research outputs found

    An Elasticity-aware Governance Platform for Cloud Service Delivery

    In cloud service provisioning scenarios with changing consumer demand, it is appealing for cloud providers to use only the limited amount of virtualized resources required to provide the service. However, it is not easy to determine how many resources are required to satisfy consumers' expectations in terms of Quality of Service (QoS). Some existing frameworks provide mechanisms to adapt the cloud resources required during service delivery, also called an elastic service, but only for consumers with the same QoS expectations. The problem arises when the service provider must deal with several consumers, each demanding a different QoS for the service. In such a scenario, cloud resource provisioning must deal with trade-offs between the different QoS levels, while fulfilling all of them, within the same service deployment. In this paper we propose an elasticity-aware governance platform for cloud service delivery that reacts to the dynamic service load introduced by consumer demand. This reaction consists of provisioning the amount of cloud resources required to satisfy the different QoS levels offered to consumers by means of several service level agreements. The proposed platform aims to keep the QoS experienced by multiple service consumers under control while maintaining a controlled cost.
    Junta de Andalucía P12-TIC-1867; Ministerio de Economía y Competitividad TIN2012-32273; Agencia Estatal de Investigación TIN2014-53986-RED
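The provisioning reaction described above can be sketched as a simple capacity rule. This is an illustrative toy, not the paper's platform: the `SLA` fields (one per-VM serving capacity per consumer, stricter QoS meaning lower capacity) are assumptions introduced here.

```python
# Illustrative toy, not the paper's platform: provision enough VMs so that
# each consumer's SLA-specific QoS target can be met under the current load.
import math
from dataclasses import dataclass

@dataclass
class SLA:
    consumer: str
    capacity_per_vm: float  # requests/s one VM can serve while meeting this SLA

def required_vms(slas, load):
    """VMs needed so every consumer's QoS target holds.

    `load` maps consumer -> current request rate (requests/s). A stricter
    SLA means fewer requests/s per VM, hence more VMs for the same load.
    """
    total = sum(load.get(s.consumer, 0.0) / s.capacity_per_vm for s in slas)
    return math.ceil(total)

slas = [SLA("gold", 50.0), SLA("bronze", 200.0)]
print(required_vms(slas, {"gold": 400.0, "bronze": 300.0}))  # -> 10
```

Scaling down works the same way: re-evaluating `required_vms` as load drops releases VMs while every SLA stays satisfied.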

    Handling Constraints in Cardinality-Based Feature Models: The Cloud Environment Case Study

    Feature modeling is a well-known approach to describing variability in Software Product Lines. Cardinality-based Feature Models (FMs) are a type of FM in which features can be instantiated several times in a configuration, in contrast to boolean FMs, where a feature is simply present or absent. While boolean FM configuration is well handled by current approaches, support for cardinality-based FMs is still lacking. In particular, current approaches do not support expressing constraints over the set of feature instances, since the cardinalities involved in such constraints cannot be specified. To address this limitation, we define cardinality-based expressions, provide their formal syntax and semantics, and show how to automate the underlying configuration. We motivate the need for such support using cloud computing environment configurations as a running example. To evaluate the soundness of the proposed approach, we analyze a corpus of 10 cloud environments. Our empirical evaluation shows that constraints relying on our cardinality-based expressions are common, and that our approach is effective and can provide useful support to developers for modeling and reasoning about FMs with cardinalities.
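A configuration of a cardinality-based FM can be viewed as a map from features to instance counts, with constraints ranging over those counts. The toy checker below is a sketch under that assumption; the cloud feature names (`VM`, `Disk`, `LoadBalancer`) are hypothetical, not taken from the paper's corpus.

```python
# Toy checker for constraints over feature-instance counts; the feature
# names are invented for illustration, not taken from the paper's corpus.
import operator

OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq}

def satisfies(config, constraints):
    """config: feature name -> instance count (a configuration).
    constraints: (lhs, op, rhs) triples, where each side is a feature
    name (its instance count is used) or an integer literal."""
    def value(x):
        return config.get(x, 0) if isinstance(x, str) else x
    return all(OPS[op](value(a), value(b)) for a, op, b in constraints)

# Cloud-style example: every VM instance needs a Disk instance available,
# expressed globally as count(Disk) >= count(VM).
cfg = {"VM": 3, "Disk": 4, "LoadBalancer": 1}
print(satisfies(cfg, [("Disk", ">=", "VM"), ("LoadBalancer", ">=", 1)]))  # -> True
```

Boolean FMs are the special case where every count is 0 or 1; the constraints above have no equivalent there, which is the gap the paper targets.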

    Comprehensive Framework for Selecting Cloud Service Providers (CSPs) Using a Meta-Synthesis Approach

    Introduction
    Nowadays, cloud computing has attracted the attention of many organizations, and many of them aim to make their business more agile by using flexible cloud services. The number of cloud service providers keeps increasing, so choosing the most suitable provider, based on criteria that match the service consumer's conditions, is one of the most important challenges. Relying on previous studies and using a meta-synthesis approach, this research comprehensively searches past research and provides a comprehensive framework of the factors affecting the choice of cloud service providers, comprising 4 main categories and 10 sub-areas. The framework is then finalized using the opinions of experts, selected purposefully and through snowball sampling, and validated with the Lawshe method.

    Research Question(s)
    This research aims to complete the results of previous studies and answer the following questions through a systematic review of the literature:
    - What are the components of a comprehensive framework for choosing cloud service providers?
    - What are the effective criteria for choosing a cloud service provider?
    - What is the resulting framework of effective factors?

    Literature Review
    Many researchers have looked at the problem of choosing the best CSP from different angles and have tried to provide a solution. Tang and Liu (2015) proposed a model called "FAGI", which supports the choice of a trusted CSP through four dimensions: security functions, auditability, management capability, and interactivity. Kong et al. (2013) presented an optimization algorithm based on graph theory to facilitate CSP selection. Some researchers have provided selection frameworks, such as Gash (2015), whose "SelCSP" framework combines trustworthiness and competence to estimate the risk of interaction. Brendvall and Vidyarthi (2014) suggest that, to choose the best cloud service provider, a customer must first identify the indicators related to the relevant level of service quality and then evaluate the different providers. Others have focused on selection techniques; for example, Supraya et al. (2016) use an MCDM method to rank providers based on infrastructure parameters (agility, financial, efficiency, security, and ease of use). They investigate the mechanisms of cloud service recommender systems, dividing them into four main categories and assessing their techniques along four dimensions: scalability, accessibility, accuracy, and trust.
    This research draws on the models and variables of the literature to develop a comprehensive framework. The codes, concepts, and categories related to the choice of cloud service providers are extracted from previous studies, and a comprehensive framework of the influencing factors is presented using the meta-synthesis method.

    Methodology
    This research follows the Sandelowski and Barroso meta-synthesis qualitative research method, a relatively general one, conducting a systematic review of the research literature and extracting the codes it contains; the codes, the categories, and finally the proposed model are then formed. The seven steps of the Sandelowski and Barroso method are: formulating the research question, systematically reviewing the literature, searching for and selecting suitable articles, extracting article information, analyzing and synthesizing the qualitative findings, quality control, and presenting the findings. The Lawshe validation method was used to validate the research findings.

    Results
    In the meta-synthesis method, all factors extracted from previous studies are treated as codes, and concepts are obtained from collections of these codes. Using expert opinion and considering the meaning of each code, codes with similar meanings were grouped and new concepts formed; the same procedure turned concepts into categories, yielding the proposed framework of 27 codes, 10 concepts, and 4 categories (Table 1).

    Table 1: Codes, concepts, and categories extracted from the sources
    Trust
      - Security: Hardware Security (1), Network Security (2), Software Security (3), Confidentiality (4), Control (5)
      - Guarantee and Assurance: Accessibility (6), Stability (7)
      - Facing Threats: Technical Risk (8), Center for Security Measures (9)
    Technology
      - Efficiency: Service Delivery Efficiency (10), Interactivity (11)
      - Hardware and Network Infrastructure: Configuration and Change (12), Capacity (Memory, CPU, Disk) (13)
      - Functionality: Flexibility (14), Usability (15), Accuracy (16), Service Response Time (17), Ease of Use (18)
    Managerial
      - Maintenance: Education and Awareness (19), Customer Communication Channels (20)
      - Strategic: Legal Issues (21), Data Analysis (22), Service Level Agreement (23)
    Commercial
      - Customer Satisfaction: Responsiveness (24), Customer Feedback (25)
      - Cost: Subscription Fee (26), Implementation Cost (27)

    The lack of a common framework for evaluating cloud service providers is compounded by the fact that no two providers are the same, which complicates the process of choosing the right provider for each organization. Figure 1 shows the proposed comprehensive framework, including the 4 categories and 10 concepts that cover the choice of cloud service providers. These factors help determine the provider that best matches the personal and organizational needs of the service recipient. The main categories are trust building, technology, management, and business, explained in the following.
    Figure 1: Cloud service provider selection framework

    Conclusion
    By comprehensively examining the factors affecting the choice, this research introduces trust building, technology, management, and business as the main areas of cloud service provider selection, adding to the areas identified previously. The category of building trust between the customer and the cloud service provider is of particular importance; its related concepts are security (hardware security, network security, software security, confidentiality, and control), guarantee and assurance (accessibility and stability), and facing threats (technical risk). The concept of trust is mentioned in 36% of the articles, but each study discusses only a limited number of the factors affecting this category. Under technology, the research covers the concepts of efficiency (service delivery efficiency, interactivity), hardware and network infrastructure (configuration and change, capacity (memory, CPU, disk)), and functionality (flexibility, usability, accuracy, service response time, ease of use). Given the variety of services on different cloud platforms, service recipients must ensure that service provision is managed easily and in the shortest possible time by the cloud provider. The commercial aspect of service delivery covers the concepts of customer satisfaction (responsiveness, customer feedback) and cost (subscription fee and implementation cost), which are of interest to many businesses. The results of this research give decision makers (both organizational managers and cloud customers) a comprehensive view of the effective factors before choosing the best cloud service provider, and help them plan according to their needs.
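The framework enumerates criteria rather than prescribing a scoring formula. As a hedged illustration only, a plain weighted-sum ranking, one common MCDM technique of the kind the literature review mentions, shows how scores along the four main categories could drive a selection; all weights and scores below are invented.

```python
# Hedged sketch: the framework lists criteria, not a scoring formula.
# A weighted-sum MCDM ranking over its 4 categories is one way to use it;
# every number here is an invented example.
def rank_providers(scores, weights):
    """scores: provider -> {category: score in [0, 10]}.
    weights: category -> importance, summing to 1.
    Returns (provider, total) pairs, best first."""
    totals = {
        p: sum(weights[c] * s for c, s in cat_scores.items())
        for p, cat_scores in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

weights = {"trust": 0.4, "technology": 0.3, "managerial": 0.1, "commercial": 0.2}
scores = {
    "CSP-A": {"trust": 9, "technology": 6, "managerial": 7, "commercial": 5},
    "CSP-B": {"trust": 6, "technology": 9, "managerial": 8, "commercial": 8},
}
print(rank_providers(scores, weights))
```

Changing the weights encodes the consumer-specific conditions the paper emphasizes: a security-sensitive organization would raise the trust weight and may rank the providers differently.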

    An analysis about the relationship between the cloud computing model and ITIL v3 2011

    Cloud Computing is widely recognized as a recent computing paradigm of the digital transformation, in which scalable and elastic computational resources are delivered as a service through Internet technologies. Its characteristics have made this business model increasingly adopted by organizations pursuing their business goals. Besides its benefits, some risks may impact organizations internally and in the way they deliver services to their clients. It has therefore become important to understand the impact of the Cloud model on the way companies organize their processes. The goal of this work is to investigate the main impacts of the Cloud Computing model on the processes of the Information Technology Infrastructure Library (ITIL) framework. The selected methodology is semi-structured interviews with knowledgeable professionals, used to collect practical information that, according to the Systematic Literature Review performed, could not be obtained from the traditional literature. The Systematic Literature Review results show that several processes of the framework are affected, which could suggest a need to reframe it. However, although organizations' approach to this model must be enhanced and adapted to a new reality, the empirical insights collected from the semi-structured interviews suggest that the framework does not need to be reframed, and that the ITIL v3 2011 processes most impacted by the introduction of the Cloud-based model are Change Management and Incident Management.

    When Locality is not enough: Boosting Peer Selection of Hybrid CDN-P2P Live Streaming Systems using Machine Learning

    Live streaming traffic represents an increasing part of global IP traffic. Hybrid CDN-P2P architectures have been proposed as a way to build scalable systems with a good Quality of Experience (QoE) for users, in particular using the WebRTC technology, which enables real-time communication between browsers and thus facilitates the deployment of such systems. An important challenge in ensuring the efficiency of P2P systems is the optimization of peer selection. Most existing systems address this problem using simple heuristics, e.g. favoring peers in the same ISP or geographical region. We analysed 9 months of operation logs of a hybrid CDN-P2P system, during which over 18 million peers downloaded over 2 billion video chunks, and demonstrate the sub-optimality of those classical strategies. We propose learning-based methods that enable the tracker to perform adaptive peer selection. Furthermore, we demonstrate that our best models, which turn out to be the neural network models, can (i) improve throughput by 22.7%, 14.5%, and 6.8% (reaching 89%, 20.4%, and 24.3% for low-bandwidth peers) over the random-peer, same-ISP, and geographical selection methods, respectively, (ii) reduce the P2P messaging delay by 18.6%, 18.3%, and 16%, and (iii) decrease the chunk loss rate (video chunks not received before the timeout that triggers CDN downloads) by 29.9%, 29.5%, and 21.2%, respectively.
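As a rough illustration of tracker-side peer selection (not the paper's trained neural models), the sketch below scores candidate peers with a hand-set linear combination of the kinds of features such a system could observe; the feature names and weights are assumptions made here.

```python
# Hand-set linear scorer standing in for the paper's learned models;
# the feature names and weights are invented for illustration.
def select_peers(candidates, k=3):
    """Rank candidate peers for a requesting peer and return the top k."""
    def score(p):
        return (0.6 * p["bandwidth_mbps"]   # favor high upload capacity
                - 0.3 * p["rtt_ms"]         # penalize long round trips
                + 5.0 * p["same_isp"])      # locality is one signal, not the rule
    return sorted(candidates, key=score, reverse=True)[:k]

candidates = [
    {"peer": "a", "bandwidth_mbps": 50, "rtt_ms": 20,  "same_isp": 1},
    {"peer": "b", "bandwidth_mbps": 5,  "rtt_ms": 10,  "same_isp": 1},
    {"peer": "c", "bandwidth_mbps": 80, "rtt_ms": 120, "same_isp": 0},
]
print([p["peer"] for p in select_peers(candidates, k=2)])  # -> ['a', 'c']
```

Note that the same-ISP peer "b" loses to the remote high-bandwidth peer "c", mirroring the paper's point that locality alone is not enough; a learned model would additionally adapt these trade-offs from operation logs.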

    A Pattern-Based Approach to Scaffold the IT Infrastructure Design Process

    Context. The design of Information Technology (IT) infrastructures is a challenging task, since it requires proficiency in several areas that are rarely mastered by a single person, thus raising communication problems among those in charge of conceiving, deploying, operating, and maintaining/managing them. Most IT infrastructure designs are based on proprietary models, known as blueprints or product-oriented architectures, defined by vendors to facilitate the configuration of a particular solution based on their service and product portfolios. Existing blueprints can facilitate the design of solutions for a particular vendor or technology. However, since organizations may have infrastructure components from multiple vendors, the use of blueprints aligned with commercial products may cause integration problems among these components and can lead to vendor lock-in. Additionally, these blueprints have a short lifecycle, due to their association with product versions or a specific technology, which hampers their usage as a tool for the reuse of IT infrastructure knowledge. Objectives. The objectives of this dissertation are (i) to mitigate the inability to reuse knowledge, in terms of best practices, in the design of IT infrastructures and (ii) to simplify the usage of this knowledge, making IT infrastructure designs simpler, quicker, and better documented, while facilitating the integration of components from different vendors and minimizing communication problems between teams. Method. We conducted an online survey and performed a systematic literature review to support the state of the art and to provide evidence that this research was relevant and had not been conducted before. A model-driven approach was also used for the formalization and empirical validation of well-formedness rules to enhance the overall process of designing IT infrastructures. To simplify and support the design process, a modeling tool, including its abstract and concrete syntaxes, was extended to include the main contributions of this dissertation. Results. We obtained 123 responses to the online survey, the majority from people with more than 15 years' experience with IT infrastructures. The respondents confirmed our claims regarding the lack of formality and the documentation problems in knowledge transfer, and only 19% considered their current practices for representing IT infrastructures efficient. A language for modeling IT infrastructures, including an abstract and a concrete syntax, is proposed to address the problem of informality in their design. A catalog of IT infrastructure patterns is also proposed to express best practices in their design. The modeling tool was also evaluated: according to 84% of the respondents, this approach decreases the effort associated with IT infrastructure design, and 89% considered that the use of a repository of infrastructure patterns will help improve the overall quality of IT infrastructure representations. A controlled experiment was also performed to assess the effectiveness of both the proposed language and the pattern-based IT infrastructure design process supported by the tool. Conclusion. With this work, we contribute to improving the current state of the art in the design of IT infrastructures, replacing ad-hoc methods with more formal ones to address the problems of ambiguity, traceability, and documentation, among others, that characterize most IT infrastructure representations. Categories and Subject Descriptors: C.0 [Computer Systems Organization]: System architecture; D.2.10 [Software Engineering]: Design-Methodologies; D.2.11 [Software Engineering]: Software Architectures-Patterns
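The idea of machine-checkable well-formedness rules over an infrastructure model can be sketched as follows. The rule ("every server connects to at least one switch"), the graph representation, and the node kinds are hypothetical examples, not the dissertation's actual language or rule catalog.

```python
# Hypothetical well-formedness check over a toy infrastructure model:
# a formalized rule replaces an informal review-time convention.
def violations(model):
    """model: {"nodes": {name: kind}, "links": [(a, b), ...]}.
    Returns servers that are not connected to anything (rule:
    every server must have at least one link)."""
    linked = set()
    for a, b in model["links"]:
        linked.add(a)
        linked.add(b)
    return [name for name, kind in model["nodes"].items()
            if kind == "server" and name not in linked]

infra = {
    "nodes": {"web1": "server", "db1": "server", "sw1": "switch"},
    "links": [("web1", "sw1")],
}
print(violations(infra))  # -> ['db1']
```

A modeling tool evaluating such rules on every edit gives designers immediate feedback, which is the kind of formality the dissertation argues ad-hoc diagrams lack.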

    Study of Fundamental Rights Limitations for Online Enforcement through Self-Regulation

    The use of self-regulatory or privatized enforcement measures in the online environment can give rise to various legal issues that affect the fundamental rights of internet users. First, privatized enforcement by internet services, without state involvement, can interfere with the effective exercise of fundamental rights by internet users. Such interference may, on occasion, be disproportionate, but there are legal complexities involved in determining the precise circumstances in which this is the case. This is because, for instance, the private entities can themselves claim protection under the fundamental rights framework (e.g. the protection of property and the freedom to conduct business). Second, the role of public authorities in the development of self-regulation in view of certain public policy objectives can become problematic and has to be carefully assessed. The fundamental rights framework puts limitations on government regulation that interferes with fundamental rights. Essentially, such limitations involve the (negative) obligation for States not to interfere with fundamental rights: interferences have to be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society. At the same time, however, States are also under the (positive) obligation to take active measures to ensure the effective exercise of fundamental rights. In other words, States must do more than simply refrain from interference. These positive obligations are of specific interest in the context of the impact of private ordering on fundamental rights, but tend to be abstract and hard to operationalize in specific legal constellations. This study's central research question is: what legal limitations follow from the fundamental rights framework for self-regulation and privatized enforcement online? It examines the circumstances in which State responsibility can be engaged as a result of self-regulation or privatized enforcement online. Part I of the study provides an overview and analysis of the relevant elements in the European and international fundamental rights framework that place limitations on privatized enforcement. Part II assesses specific instances of self-regulation and other instances of privatized enforcement in light of these elements.

    Characterising and modeling the co-evolution of transportation networks and territories

    The identification of structuring effects of transportation infrastructure on territorial dynamics remains an open research problem. This issue is one aspect of approaches to the complexity of territorial dynamics, within which territories and networks are co-evolving. The aim of this thesis is to challenge this view of the interactions between networks and territories, at both the conceptual and empirical level, by integrating them into simulation models of territorial systems.
    Comment: Doctoral dissertation (2017), Université Paris 7 Denis Diderot. Translated from French. Several papers compose this PhD thesis; overlap with: arXiv:{1605.08888, 1608.00840, 1608.05266, 1612.08504, 1706.07467, 1706.09244, 1708.06743, 1709.08684, 1712.00805, 1803.11457, 1804.09416, 1804.09430, 1805.05195, 1808.07282, 1809.00861, 1811.04270, 1812.01473, 1812.06008, 1908.02034, 2012.13367, 2102.13501, 2106.11996}

    Cost based optimization for strategic mobile radio access network planning using metaheuristics

    The evolution of mobile communications over the last few decades has been driven by two main factors: the emergence of new applications and user needs, and technological advances. The services offered to mobile terminals have evolved from the classic voice and short message (SMS) services to more attractive services with rapid end-user adoption, such as video telephony, video streaming, online gaming, and mobile broadband internet access (MBAS). All these new services became a reality thanks to technological advances, such as new techniques for accessing the shared medium, new coding and modulation schemes for the exchanged information, multiple-antenna transmission and reception systems (MIMO), etc. An important aspect of this evolution was the liberalization of the sector in the early 1990s, in which the regulatory role played by the national regulatory authorities (NRAs) has been fundamental. One of the main problems addressed by each nation's NRA is the determination of the costs of wholesale services, that is, services between mobile operators, most notably the cost of call termination or interconnection. The interconnection service makes communication between users of different operators possible, as well as access by all users to the full range of services, including those not provided by a particular operator, through the use of a network belonging to another operator. The main objective of this thesis is the minimization of investment costs in network equipment, which has repercussions on the setting of interconnection tariffs, as will be seen throughout this work.
    This objective is pursued in two parts: first, the development of a set of algorithms for the optimal dimensioning of a radio access network (RAN) for a mobile communications system; second, the design and application of optimization algorithms for the optimal distribution of services over the set of existing mobile technologies (OSDP). The network design module provides four distinct algorithms for dimensioning and planning the mobile access network. These algorithms are applied in an environment that is multi-technology, considering second- (2G), third- (3G), and fourth-generation (4G) systems; multi-user, taking into account different user profiles with their respective traffic loads; and multi-service, including voice, low-speed data services (64-144 Kbps), and mobile broadband internet access. The second part of the thesis distributes the set of services optimally over the technologies to be deployed. The objective of this part is to make efficient use of the existing technologies while reducing investment costs in network equipment. This is possible thanks to the technological differences between the various mobile systems, which make second-generation systems suitable for providing voice and short messaging services, while third-generation networks show better performance in the transmission of data services. Finally, the mobile broadband service is native to latest-generation networks, such as High Speed Packet Access (HSPA) and 4G.
    Both modules have been applied to an extensive set of experiments for techno-economic analyses, such as the study of the performance of HSPA and 4G technologies for providing the mobile broadband service, and the analysis of realistic 4G deployment scenarios that will take place starting next year, coinciding with the auction of frequencies in the 800 MHz band. Likewise, a study has been carried out on the deployment of 4G networks in the 800 MHz, 1800 MHz, and 2600 MHz bands, comparing the investment costs obtained after optimization. In all cases, applying both modules has been shown to improve investment costs, enabling a reduction in the determination of service provisioning costs. The studies in this thesis focus on Spain; however, all the implemented algorithms are applicable to any other European country, as evidenced by the use of the network design algorithms in several regulatory projects.
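The service-to-technology distribution problem from the second part can be illustrated with a toy greedy baseline, the kind of starting solution a metaheuristic would then refine under capacity and coverage constraints; the cost table below is entirely invented, not the thesis's data.

```python
# Toy greedy baseline for distributing services over mobile technologies:
# pick the cheapest supporting technology per service. A metaheuristic
# (e.g. simulated annealing) would refine this under real constraints.
# All per-service equipment costs are invented for illustration.
COST = {
    ("voice", "2G"): 1.0, ("voice", "3G"): 1.5, ("voice", "4G"): 2.0,
    ("data", "3G"): 2.0, ("data", "4G"): 2.5,
    ("broadband", "4G"): 3.0,
}

def greedy_assignment(services):
    """Assign each service to its cheapest supporting technology."""
    plan, total = {}, 0.0
    for s in services:
        tech, cost = min(((t, c) for (srv, t), c in COST.items() if srv == s),
                         key=lambda tc: tc[1])
        plan[s] = tech
        total += cost
    return plan, total

plan, total = greedy_assignment(["voice", "data", "broadband"])
print(plan, total)  # voice->2G, data->3G, broadband->4G; total 6.0
```

The greedy plan reproduces the qualitative split described above: voice on 2G, data on 3G, mobile broadband on the latest-generation network.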
