
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view made of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA’s needs.
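    Since the deliverable surveys VM management APIs for provisioning processing-slice nodes, a minimal sketch may help fix ideas. It assumes libvirt as the management API; the node name, image path, and bridge below are invented, and the deliverable itself compares several products without committing to one.

        # Hedged sketch: starting one slice node as a virtual machine via the
        # libvirt Python bindings. The domain XML (name, image path, bridge)
        # is hypothetical, for illustration only.
        import libvirt

        DOMAIN_XML = """
        <domain type='kvm'>
          <name>federica-slice-node-1</name>
          <memory unit='MiB'>512</memory>
          <vcpu>1</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <source file='/var/lib/images/slice-node-1.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
            <interface type='bridge'><source bridge='br-slice1'/></interface>
          </devices>
        </domain>
        """

        conn = libvirt.open("qemu:///system")    # connect to the hypervisor
        try:
            dom = conn.createXML(DOMAIN_XML, 0)  # create and start a transient VM
            print("Started slice node:", dom.name())
        finally:
            conn.close()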

    On the investigation of cloud-based mobile media environments with service-populating and QoS-aware mechanisms

    Recent advances in mobile devices and network technologies have set new trends in the way we use computers and access networks. Cloud Computing, where processing and storage resources reside in the network, is one of these trends. The other is Mobile Computing, where mobile devices such as smartphones and tablets are believed to be replacing personal computers by combining network connectivity, mobility, and software functionality. In the future, these devices are expected to seamlessly switch between different network providers using vertical handover mechanisms in order to maintain network connectivity at all times. This will enable mobile devices to access Cloud Services without interruption as users move around. Under current service delivery models, mobile devices moving from one geographical location to another will keep accessing those services from the local Cloud of their previous network, which might mean moving a large volume of data over the Internet backbone across long distances. This scenario highlights the fact that user mobility will result in more congestion on the Internet, which will degrade the Quality of Service and, by extension, the Quality of Experience offered by services in the Cloud, especially multimedia services with very tight temporal constraints in terms of bandwidth and jitter. We believe a different approach is required to manage resources more efficiently while improving Quality of Service and media service delivery: services run on localised public Clouds and are capable of populating other public Clouds in different geographical locations depending on service demands and network status. Using an analytical framework, this paper argues that as the demand for specific services increases in a location, it might be more efficient to move those services closer to that location. This will prevent the Internet backbone from experiencing high traffic loads due to multimedia streams and will offer service providers an automated resource allocation and management mechanism for their services.
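    The paper's analytical framework is not reproduced in the abstract; the toy sketch below conveys only the core trade-off, under invented per-request costs: populate a service into the nearer Cloud once the backbone traffic it would save outweighs the one-off cost of cloning it.

        # Toy service-populating decision. All cost figures are hypothetical;
        # the paper's actual analytical framework is more detailed.
        BACKBONE_COST_PER_REQ = 4.0   # relative cost of serving over the backbone
        LOCAL_COST_PER_REQ = 1.0      # relative cost of serving from a local Cloud
        POPULATE_COST = 5000.0        # one-off cost of cloning the service locally

        def should_populate(requests_per_hour: float, horizon_hours: float) -> bool:
            """Populate the nearer Cloud when the expected backbone saving
            over the planning horizon exceeds the one-off population cost."""
            saving = ((BACKBONE_COST_PER_REQ - LOCAL_COST_PER_REQ)
                      * requests_per_hour * horizon_hours)
            return saving > POPULATE_COST

        print(should_populate(requests_per_hour=200, horizon_hours=24))  # True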

    Improved video streaming using SDN in cloud-based environments

    The great technological development of informatics has opened the way for provisioning various services and new online entertainment services, which have expanded significantly with the growth of social media applications and their number of users. This significant expansion has posed an additional challenge to Internet Service Providers (ISPs) in terms of network and equipment management and the efficiency of service delivery. New concepts and techniques have been developed to offer innovative solutions, such as SDN for network management, virtualization for optimal resource utilization, and others like cloud computing and network function virtualization. This dissertation aims to manage live video streaming in the network automatically by adding an architecture to the virtual network environment that filters video packets from the remaining traffic into a dedicated tunnel, which is handled with higher priority so as to provide better service to customers. Alongside this architecture, a monitoring application integrated into the system (implemented as Python scripts) detects the video packets and injects new rules into the SDN controller that manages traffic across the network.
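    The dissertation reports Python scripts that detect video traffic and push rules to the SDN controller. As a minimal sketch of that idea, the snippet below assumes the Ryu controller framework, OpenFlow 1.3, a pre-configured high-priority queue, and a UDP destination-port heuristic for identifying video; the dissertation's own detection logic and rules may differ.

        # Hedged sketch: install a higher-priority flow entry for a detected
        # video stream (Ryu, OpenFlow 1.3; queue id and match are invented).
        # In a full app this would be called from a packet-in handler or a
        # notification hook once the monitor flags a video flow.
        from ryu.base import app_manager

        class VideoPrioritizer(app_manager.RyuApp):

            def prioritize_video(self, datapath, udp_dst_port):
                ofproto = datapath.ofproto
                parser = datapath.ofproto_parser
                # Match the UDP flow on which video traffic was detected.
                match = parser.OFPMatch(eth_type=0x0800, ip_proto=17,
                                        udp_dst=udp_dst_port)
                # Steer it into a pre-configured high-priority queue.
                actions = [parser.OFPActionSetQueue(1),
                           parser.OFPActionOutput(ofproto.OFPP_NORMAL)]
                inst = [parser.OFPInstructionActions(
                    ofproto.OFPIT_APPLY_ACTIONS, actions)]
                mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                        match=match, instructions=inst)
                datapath.send_msg(mod)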

    Introducing Virtual Law Offices in the Existing Judiciary

    With the rise of ubiquitous computing resources over the past years, every IT organization is expanding its horizons into Cloud services and related technologies. Cloud provides dynamically scalable virtualized computing resources as a service over the Internet, and this key characteristic differentiates it from the traditional computing paradigm. The framework presented here applies Cloud and mobile computing technologies to improve communication between lawyers, their clients, and any other persons involved. It digitalizes the existing judicial file system to form an e-library on Cloud infrastructure, using Internet services such as GPRS, GSM/CDMA, or 3G/4G for information retrieval from this e-library once the request has received proper authorization and authentication from the regulatory body. Requests are accepted from both web- and mobile-based applications so that judicial data can be accessed anytime and anywhere. To realize this system, a web interface is created, with the e-library serving as the main database and a user-friendly interface for the data acquisition and analysis described above, which ultimately speeds up the otherwise slow process of case management.
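    As a hedged sketch of the retrieval path described above, the snippet below shows an authorized document fetch from the e-library. Flask, the token check, and the storage layout are all assumptions for illustration; the paper does not specify its implementation stack.

        # Hypothetical sketch of the e-library retrieval service: a request
        # must carry an authorized token before a case document is returned.
        from flask import Flask, abort, request, send_from_directory

        app = Flask(__name__)
        AUTHORIZED_TOKENS = {"token-issued-by-regulatory-body"}  # placeholder

        @app.route("/cases/<case_id>")
        def fetch_case(case_id):
            token = request.headers.get("Authorization", "")
            if token not in AUTHORIZED_TOKENS:
                abort(401)  # authorization/authentication by the regulatory body
            # Serve the digitalized case file from the e-library store.
            return send_from_directory("e_library", f"{case_id}.pdf")

        if __name__ == "__main__":
            app.run()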

    Energy-efficient Transitional Near-* Computing

    Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of the worldwide electricity consumption. Although data centers, core network equipment, and mobile devices are getting more energy-efficient, the amount of data that is being processed, transferred, and stored is vastly increasing. Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself. In this thesis, these trends are summarized under the new term near-* or near-everything computing. Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing. It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems. Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, and edge devices, as well as programmable components on a mobile System-on-a-Chip (SoC). Finally, the novel idea of transitional near-* computing for emergency response applications is presented, to assist rescuers and affected persons during an emergency event or a disaster even when connections to cloud services and social networks are disturbed by network outages and the network bandwidth and battery power of mobile devices are limited.
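    As a toy illustration of a transition between functionally equivalent mechanisms, the sketch below switches the processing target based on runtime state. The triggers and thresholds are invented; the thesis's multi-mechanism transition framework is far more general than this.

        # Toy "transition": pick cloud, edge, or local processing from
        # runtime state. States and thresholds are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class RuntimeState:
            cloud_reachable: bool
            battery_percent: float

        def select_mechanism(state: RuntimeState) -> str:
            if not state.cloud_reachable:
                return "local"       # disruption-tolerant fallback
            if state.battery_percent < 20:
                return "cloud"       # offload to save device energy
            return "edge"            # default: nearby, low-latency processing

        print(select_mechanism(RuntimeState(cloud_reachable=True,
                                            battery_percent=55)))  # edge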

    Literature-Based Study on Cloud Computing for Health and Sustainability in View of COVID-19

    The modern age of technology is trending toward digitalization and reshaping business around the world. Advances in technology and innovation are transforming businesses in numerous ways and creating a whole new computational business ecosystem. Professionals across the globe are talking about terms like digitization, Industry 4.0, Big Data, Blockchain technologies, cloud computing, 3D printing, Machine Learning, Automation, Artificial Intelligence (AI), the Internet of Things (IoT), data mining, etc. Among all these advanced technologies, cloud computing is fast emerging as a large-scale computing system with seamless access to virtually limitless resources. To appreciate how computing systems worked before the cloud: some 20 years ago, every company used to have its own server. In computing, a server is a program or device that provides functionality for other programs or devices; physically, it is a very large computer (much like a mainframe) with its own hardware, including a sophisticated processor capable of handling huge workloads from office clients. An Operating System (OS) was installed on these servers and the applications were placed on that OS. The server contained all the databases of the organization, and everyone in that organization accessed the data stored on it through a WAN (Wide Area Network) or LAN (Local Area Network). The data stored on these servers could be anything from stock records and transaction records to application or email services. Typical servers include database servers, catalog servers, file servers, print servers, sound servers, media servers, mail servers, proxy servers, web servers, game servers, application servers, etc. Maintaining such a server facility and its proper functioning has become a daunting task for small and large companies alike, as the huge investment, technical expertise, IT infrastructure, vendors, manpower, security, licensing, and overall maintenance costs gradually become untenable. Then came the revolution in computing systems called cloud computing, which has completely changed this scenario: it is cheap, there is no need to hire IT professionals to maintain servers or to spend money on server OS licensing, and it is user friendly, so it can be sustained easily by small and large companies and enterprises alike. Coronavirus, now declared a pandemic, is causing widespread shutdown and chaos. The rapid spread and global impact of COVID-19 can make people feel helpless and scared as the novel coronavirus escalates and forces them to change many aspects of their lives. It is clear that the world needs a quick and safe solution right now to combat the further spread of coronavirus. What, then, is the best response to this health crisis? This is where technologies such as cloud computing, AI, and machine learning come into play. It will be very interesting to see how cloud computing will address and contribute to these issues in the healthcare system and industry. The purpose of this paper is to explore the current state and trends of cloud computing in health systems in view of COVID-19.

    A Process Framework for Managing Quality of Service in Private Cloud

    As information systems leaders tap into the global market of cloud computing-based services, they struggle to maintain consistent application performance due to the lack of a process framework for managing quality of service (QoS) in the cloud. Guided by disruptive innovation theory, the purpose of this case study was to identify a process framework for meeting the QoS requirements of private cloud service users. Private cloud implementation was explored by selecting an organization in California through purposeful sampling. Information was gathered by interviewing 23 information technology (IT) professionals, a mix of frontline engineers, managers, and leaders involved in the implementation of the private cloud. Another source of data was documents such as standard operating procedures, policies, and guidelines related to the private cloud implementation. Interview transcripts and documents were coded and sequentially analyzed. Three prominent themes emerged from the analysis of the data: (a) end-user expectations, (b) application architecture, and (c) trending analysis. The findings of this study may help IT leaders effectively manage QoS in cloud infrastructure and deliver reliable application performance, which may in turn help increase the customer base and profitability of organizations. This study may contribute to positive social change as information systems managers and workers learn and apply the process framework to deliver stable and reliable cloud-hosted computer applications.

    Cloud-computing strategies for sustainable ICT utilization: a decision-making framework for non-expert Smart Building managers

    Virtualization of processing power, storage, and networking applications via cloud-computing allows Smart Buildings to operate heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted complexities in the decision-making processes associated with implementing the concept of cloud-computing. This complexity is due to the rapid evolution of these technologies without a standardized approach among the organizations offering cloud-computing provision as a commercial concern. This study defines the term Smart Building as an ICT environment where a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the outcomes from this project, given the diverse nature of Smart Buildings’ operational objectives. This research evaluates different ICT management methods to effectively support decisions made by non-expert clients to deploy different models of cloud-computing services in their Smart Buildings’ ICT environments. The objective of this study is to reduce the need for costly third-party ICT consultancy providers, so that non-experts can focus on their Smart Buildings’ core competencies rather than the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research leaves non-expert managers vulnerable when making decisions regarding cloud-computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Buildings’ ICT environments. The project analyses cloud-computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it focuses on a structured programme of data collection achieved through semi-structured interviews, cost simulations, and risk-analysis surveys. The main output is a theoretical management framework for non-expert decision-makers across variously operated Smart Buildings. Furthermore, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options. This is presented in correlation with contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are explored in contrast to cloud demand changes observed across specified periods. Dependencies were found to vary greatly depending on numerous organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on the thorough identification of the services required off- and on-premises. It points out that most of today’s heavily burdened Smart Buildings outsource these services to costly independent suppliers, which causes unnecessary management complexity, additional cost, and system incompatibility. The main conclusions argue that cloud-computing costs can differ depending on the Smart Building’s attributes and ICT requirements, and that although in most cases cloud services are more convenient and cost-effective at the early stages of the deployment and migration process, they can become costly later if not planned carefully using cost-estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
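    The thesis's cost simulations are not reproduced in the abstract; the toy model below, with every figure invented, illustrates only its closing point: an early cloud cost advantage can invert over time as demand grows.

        # Toy cumulative-cost comparison: on-premises vs. cloud deployment.
        # All figures are hypothetical; the thesis derives its own estimates
        # from interviews, cost simulations, and risk-analysis surveys.
        ON_PREM_UPFRONT = 120_000.0   # hardware, licensing, installation
        ON_PREM_MONTHLY = 2_000.0     # power, maintenance, staff share
        CLOUD_MONTHLY_BASE = 4_500.0  # subscription at the current workload
        CLOUD_GROWTH = 1.01           # monthly workload growth factor

        def crossover_month(months: int = 120):
            """Return the month in which cumulative cloud spend first
            exceeds cumulative on-premises spend, if it happens."""
            on_prem, cloud, rate = ON_PREM_UPFRONT, 0.0, CLOUD_MONTHLY_BASE
            for month in range(1, months + 1):
                on_prem += ON_PREM_MONTHLY
                cloud += rate
                rate *= CLOUD_GROWTH  # growing demand raises the cloud bill
                if cloud > on_prem:
                    return month
            return None

        print(crossover_month())  # with these figures, month 36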

    Live-Migration in a Cloud Computing Environment

    Global IP traffic has increased fivefold over the past five years, and will increase threefold over the next five years. Overall IP traffic will grow at a compound annual growth rate (CAGR) of nearly 3.9-fold from 2013 to 2018. Service Providers are experiencing this exponential growth of IP traffic, which comes from the vastly increased number of devices and users connected to the Internet, along with their demands for various resources and network services such as multimedia content distribution, security, and mobility. Service Providers are therefore finding it difficult to introduce new revenue-generating services and to optimize and adapt their expensive infrastructures, data centers, wide-area networks, and enterprise networks (COMpuTIN, 2015). These networks continue to have serious known problems, such as agility, manageability, mobility, and time-to-application, that have not been successfully addressed so far. Thus, novel Network Function Virtualization (NFV) models and Software-Defined Networking (SDN) technologies have been proposed to address non-optimal capital and operational expenditures and the networks' limitations (Lopez, 2014, Hakiri and Berthou, 2015).
    To solve these issues, the European Telecommunications Standards Institute (ETSI) and other standards organizations are proposing new network architecture approaches. According to ETSI, Network Functions Virtualization is a powerful emerging technique with widespread applicability, aiming to transform the way network operators design networks by evolving standard IT virtualization technology to consolidate many network equipment types: high-volume servers, routers, switches, and storage (Xilouris et al., 2014). In this thesis, current Software-Defined Networking (SDN) and Network Function Virtualization (NFV) solutions were used to build a use case that can address the growth of network traffic beyond maximum capacity. To develop and evaluate the solution, the OpenStack cloud computing platform was installed in order to deploy, manage, and test a Live-Migration use case.
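    As a hedged sketch of the kind of operation the use case exercises, the snippet below triggers a live migration through the OpenStack SDK. The cloud name, server name, and target host are assumptions, and exact calls and flags vary across OpenStack releases; the thesis does not specify its tooling beyond OpenStack itself.

        # Hedged sketch: live-migrate a running VM to another compute node
        # via openstacksdk, without stopping the guest.
        import openstack

        conn = openstack.connect(cloud="devstack")           # from clouds.yaml
        server = conn.compute.find_server("video-server-1")  # hypothetical VM

        conn.compute.live_migrate_server(
            server,
            host="compute-2",      # target hypervisor (hypothetical)
            block_migration=True,  # copy local disks; no shared storage assumed
        )

        # Poll until the migration completes and the server is ACTIVE again.
        conn.compute.wait_for_server(conn.compute.get_server(server.id),
                                     status="ACTIVE")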

    Distributed trustworthy sensor data management architecture

    Get PDF
    Growth in the Internet of Things (IoT) market has led to larger data volumes generated by a massive number of smart sensors and devices. This data flow must be managed and stored by some data management service. Storing data in the cloud results in high latency and the need to transfer large amounts of data over the Internet. Edge computing operates physically closer to the user than the cloud, offering lower latency and reducing data transmission over the network. Going one step further and storing data locally on the IoT device yields lower latency than either cloud or edge computing. Utilizing an isolation technique such as virtualization provides an easy-to-deploy environment in which to set up the needed software functionality. Container technology works well on lightweight hardware, as it offers good performance with small overhead. Containers are used to manage the server-side services and to provide a clean environment for each test run. In this thesis, two data management platforms, Apache Kafka and the MySQL-based MariaDB, are tested on an IoT platform. The key performance parameters considered for these platforms are latency and data throughput, while system resource usage data is also collected. Varying numbers of users and payload sizes are tested, and the results are presented in graphs. Kafka performed similarly to the SQL-based solution, with small differences.
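    As a minimal sketch of the kind of latency measurement the thesis reports, the snippet below times acknowledged Kafka produce calls. The kafka-python client, broker address, topic name, and payload size are assumptions; the thesis does not name its benchmark harness.

        # Hedged sketch: measure per-message produce latency against Kafka.
        import time
        from kafka import KafkaProducer

        producer = KafkaProducer(bootstrap_servers="localhost:9092")
        payload = b"x" * 1024                      # 1 KiB test payload

        latencies = []
        for _ in range(1000):
            start = time.perf_counter()
            # Block until the broker acknowledges the write.
            producer.send("sensor-data", payload).get(timeout=10)
            latencies.append(time.perf_counter() - start)

        producer.flush()
        print(f"mean latency: {1000 * sum(latencies) / len(latencies):.2f} ms")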