
    Otimização de distribuição de conteúdos multimédia utilizando software-defined networking [Optimization of multimedia content distribution using software-defined networking]

    The general use of Internet access and user equipment such as smartphones, tablets, and personal computers is creating a new wave of video content consumption. Over the past two decades, the television broadcasting industry went through several evolutions and changes, moving from analog to digital distribution, from standard-definition to high-definition TV channels, and from the IPTV method of distribution to the latest set of content-distribution technologies, OTT. IPTV introduced features that turned the client's passive role into an active one, revolutionizing the way users consume TV content. Clients' habits thus started to shape the services offered, leading to an anywhere, anytime offer of video content. OTT video delivery is a reflection of those habits, meeting users' expectations and introducing several benefits over the previous technologies that are discussed in this work. However, OTT delivery poses several scalability challenges and threatens the telecommunications operators' business model, because OTT companies use the telcos' infrastructure for free. Consequently, telecommunications operators must prepare their infrastructure for future demand while offering new services to stay competitive. This dissertation aims to contribute insights into the infrastructure changes a telecommunications operator must make, supported by a proposed bandwidth forecasting model. The results obtained from the forecasting model paved the way for the proposed video content delivery method, which aims to improve users' perceived Quality of Experience while optimizing load-balancing decisions. The overall results show an improvement in users' experience when the proposed method is used.
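    The abstract summarizes, rather than details, the proposed bandwidth forecasting model and QoE-aware load balancing. Purely as an illustrative sketch of those two ideas (the exponential-smoothing forecaster, the cache attributes, and the scoring weights below are assumptions, not the dissertation's actual design):

```python
# Minimal sketch (not the dissertation's model): forecast aggregate bandwidth
# demand with exponential smoothing, then pick the cache whose load and latency
# give the best expected Quality of Experience.
from dataclasses import dataclass

def forecast_bandwidth(history, alpha=0.3):
    """One-step-ahead exponentially smoothed forecast (Gbps)."""
    level = history[0]
    for sample in history[1:]:
        level = alpha * sample + (1 - alpha) * level
    return level

@dataclass
class Cache:
    name: str
    load: float        # fraction of capacity in use, 0..1 (assumed metric)
    latency_ms: float  # measured client latency (assumed metric)

def pick_cache(caches, w_load=0.6, w_latency=0.4):
    """Lower score = better expected QoE; the weights are arbitrary here."""
    return min(caches, key=lambda c: w_load * c.load + w_latency * c.latency_ms / 100.0)

if __name__ == "__main__":
    demand = forecast_bandwidth([12.0, 14.5, 15.2, 18.9])  # Gbps samples
    edge = pick_cache([Cache("edge-1", 0.72, 18.0), Cache("edge-2", 0.35, 30.0)])
    print(f"forecast demand ~{demand:.1f} Gbps, route new sessions to {edge.name}")
```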

    Runtime Adaptation of Scientific Service Workflows

    Software landscapes are subject to change rather than being complete once built. Changes may be caused by modified customer behavior, a shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise: new architectural models have to be designed and implemented, existing software has to be integrated, and the new software has to be deployed, monitored, and, where appropriate, optimized at runtime under realistic usage scenarios. All of these situations often demand manual intervention, which makes them error-prone. This thesis addresses these types of runtime adaptation.

    Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is presented that aims at an easy-to-use approach by separating the role of the workflow expert from the role of the domain expert. After workflow development, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the necessary middleware tools are deployed automatically. The use of a distributed infrastructure can lead to communication problems; to keep workflows robust, these exceptional cases need to be treated. Doing so, however, mixes the process logic of a workflow with infrastructural details and bloats it, which increases its complexity. In this work, a module is presented that deals with infrastructural faults automatically and thereby preserves the separation of these two layers. When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior and structure, but they still require modifying the configuration or implementation of each individual service. Aspect-oriented programming, on the other hand, allows functionality to be woven into existing code even without access to its source. Since the functionality is woven into the code, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide new, SOA-compliant methods for weaving functionality into the communication layer of web services.

    The main contributions of this thesis are the following. Shifting towards a service-oriented architecture: the generic and extensible Legacy Code Description Language and the corresponding framework allow existing software to be wrapped, e.g., as web services, which can afterwards be composed into a workflow with SimpleBPEL without overburdening the domain expert with technical details, which are handled by a workflow expert. Runtime adaptation: based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and can provision new machines automatically when a scale-out becomes necessary; if the resources' load drops, e.g., because of fewer workflow executions, a scale-in is performed automatically as well. The scheduling algorithm takes the data transfers between services into account in order to prevent scheduling allocations that would increase the workflow's makespan through unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm can additionally consider cost, so that a user can define her own preferences between optimized workflow execution times and minimized costs. Possible communication errors are automatically detected and, subject to certain constraints, corrected. Adaptation of communication: the presented request/response aspects allow functionality to be woven into the communication of web services. By defining a pointcut language that relies only on the exchanged documents, the implementation of the services need neither be known nor be available. The weaving process itself is modeled using web services; in this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
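    The monitoring-driven scale-in/scale-out behaviour described above can be illustrated with a minimal threshold-based decision loop. This is only a sketch under assumed thresholds and an assumed IaaS adapter interface, not the thesis's BPEL-based scheduler:

```python
# Illustrative threshold-based autoscaling decision, loosely inspired by the
# monitoring-driven scheduling described above (not the actual implementation).
SCALE_OUT_ABOVE = 0.80   # average utilisation that triggers provisioning (assumed)
SCALE_IN_BELOW = 0.30    # average utilisation that triggers release (assumed)
MIN_WORKERS = 1

def autoscale(worker_loads, provision, release):
    """worker_loads: per-worker utilisation in [0, 1].
    provision/release: callables supplied by an IaaS adapter (assumed interface)."""
    avg = sum(worker_loads) / len(worker_loads)
    if avg > SCALE_OUT_ABOVE:
        provision(1)   # add one worker and deploy the required middleware on it
    elif avg < SCALE_IN_BELOW and len(worker_loads) > MIN_WORKERS:
        release(1)     # shut down one idle worker to save cost
    return avg

# Example: three busy workers trigger a scale-out.
autoscale([0.90, 0.85, 0.92],
          provision=lambda n: print(f"+{n} VM"),
          release=lambda n: print(f"-{n} VM"))
```

A real scheduler would, as the abstract notes, also weigh data-transfer costs and makespan, for example through the multi-objective genetic algorithm mentioned above.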

    An active, ontology-driven network service for Internet collaboration

    Web portals have emerged as an important means of collaboration on the WWW, and the integration of ontologies promises to make them more accurate in serving users' collaboration and information-location requirements. However, web portals are essentially a centralised architecture, which results in difficulties supporting seamless roaming between portals and collaboration between groups supported on different portals. This paper proposes an alternative, decentralised approach to collaboration over the web using ontologies that exploits content-based networking. We argue that this approach promises a user-centric, timely, secure and location-independent mechanism that is potentially more scalable and universal than existing centralised portals.
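    To make the content-based networking idea concrete, a toy matcher can route messages by their ontology-derived attributes rather than by destination address. The attribute names and predicates below are invented for illustration and are not taken from the paper:

```python
# Toy content-based routing: deliver a message to every subscriber whose
# predicate matches the message's ontology-derived attributes.
subscribers = []

def subscribe(name, predicate):
    subscribers.append((name, predicate))

def publish(attributes):
    """Return the subscribers whose interests match this message's attributes."""
    return [name for name, predicate in subscribers if predicate(attributes)]

# Hypothetical ontology concepts used as message attributes.
subscribe("grid-group", lambda a: a.get("topic") == "e-science" and a.get("region") == "EU")
subscribe("portal-x",   lambda a: "collaboration" in a.get("tags", ()))

print(publish({"topic": "e-science", "region": "EU", "tags": ("collaboration",)}))
# -> ['grid-group', 'portal-x']
```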

    Peer-to-Peer File Sharing WebApp: Enhancing Data Security and Privacy through Peer-to-Peer File Transfer in a Web Application

    Peer-to-peer (P2P) networking has emerged as a promising technology that enables distributed systems to operate in a decentralized manner. P2P networks are based on a model where each node in the network can act as both a client and a server, thereby enabling data and resource sharing without relying on centralized servers. The P2P model has gained considerable attention in recent years due to its potential to provide a scalable, fault-tolerant, and resilient architecture for various applications such as file sharing, content distribution, and social networks.

    In recent years, researchers have also proposed hybrid architectures that combine the benefits of both structured and unstructured P2P networks. For example, the Distributed Hash Table (DHT) is a popular hybrid architecture that provides efficient lookup and search algorithms while maintaining the flexibility and adaptability of the unstructured network.

    To demonstrate the feasibility of P2P systems, several prototypes have been developed, such as the BitTorrent file-sharing protocol and the Skype voice-over-IP (VoIP) service. These prototypes have demonstrated the potential of P2P systems for large-scale applications and have paved the way for the development of new P2P-based systems.
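    The DHT lookup mentioned above can be illustrated with a tiny consistent-hashing sketch that maps node identifiers and file keys onto the same ring and assigns each key to the next node clockwise (a much-simplified, Chord-like rule; the peer names and key are hypothetical):

```python
# Tiny DHT-style lookup sketch: hash node IDs and keys onto one ring and
# assign each key to the first node clockwise from it (simplified Chord rule).
import hashlib
from bisect import bisect_left

def ring_position(value, bits=16):
    """Place a string on a 2**bits ring using SHA-1."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** bits)

def responsible_node(nodes, key):
    ring = sorted((ring_position(n), n) for n in nodes)
    positions = [p for p, _ in ring]
    idx = bisect_left(positions, ring_position(key)) % len(ring)  # wrap around
    return ring[idx][1]

nodes = ["peer-a", "peer-b", "peer-c"]
print(responsible_node(nodes, "holiday-video.mp4"))  # the peer that indexes this key
```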