
    COIN : a customisable, incentive driven video on demand framework for low-cost IPTV services

    There has been a significant rise in the provision of television and video services over IP (IPTV) in recent years. Increasing network capacity and falling bandwidth costs have made it both technically and economically feasible for service providers to deliver IPTV services. Several telecommunications (telco) operators worldwide are rolling out IPTV solutions and view IPTV as a major service differentiator and alternative revenue source. The main challenge that IPTV providers currently face, however, is the increasingly congested television service provider market, which also includes Internet Television. IPTV solutions therefore need strong service differentiators to succeed, and more affordable, low-cost solutions are likely to sell faster. Advertising has already been used in many service sectors to help lower service costs, including traditional broadcast television. This thesis therefore explores the role that advertising can play in helping to lower the cost of IPTV services and to incentivise IPTV billing. Another approach that IPTV providers can use to help sell their product is to address the growing need for control among today's multimedia users. This thesis therefore also explores the varied approaches that can be used to achieve viewer-focused IPTV implementations. To further lower the cost of IPTV services, telcos can also turn to low-cost, open source platforms for service delivery. The adoption of low-cost infrastructure by telcos can lead to reduced Capital Expenditure (CAPEX), which in turn can lead to lower service fees, and ultimately to higher subscriptions and revenue. Therefore, in this thesis, the author proposes a CustOmisable, INcentive (COIN) driven Video on Demand (VoD) framework to be developed and deployed using the Mobicents Communication Platform, an open source service creation and execution platform.
The COIN framework aims to provide a viewer-focused, economically competitive service that combines the potential cost savings of using free and open source software (FOSS) with an innovative, incentive-driven billing approach. This project also aims to evaluate whether the Mobicents Platform is a suitable service creation and execution platform for the proposed framework. Additionally, the proposed implementation aims to be interoperable with other IPTV implementations, and hence follows current IPTV standardisation architectures and trends. The service testbed and its implementation are described in detail, and only free and open source software is used; this is to enable its easy duplication and extension for future research.
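
The incentive-driven billing idea can be sketched in a few lines: watched adverts earn a credit that offsets part of the bill, with the discount capped. This is an illustrative model only; the rates, cap, and function names below are assumptions, not the billing scheme the thesis actually specifies.

```python
# Illustrative only: one way an advertising incentive could offset a VoD bill,
# in the spirit of COIN's use of adverts to lower service cost.
# All rates, caps, and names here are assumptions, not the thesis's model.

def monthly_bill(base_fee, vod_rentals, rental_price, ads_watched,
                 credit_per_ad=0.05, max_discount_ratio=0.5):
    """Compute a bill where watched adverts earn credit, capped at a share of the total."""
    gross = base_fee + vod_rentals * rental_price
    ad_credit = ads_watched * credit_per_ad
    discount = min(ad_credit, gross * max_discount_ratio)  # cap the incentive
    return round(gross - discount, 2)

# 10.00 base fee + 4 rentals at 2.50, minus 60 adverts' worth of credit (3.00)
print(monthly_bill(base_fee=10.0, vod_rentals=4, rental_price=2.5, ads_watched=60))  # 17.0
```

The cap keeps the incentive from driving the bill to zero, so the operator retains a revenue floor however many adverts a subscriber watches.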

    From OPIMA to MPEG IPMP-X: A standard's history across R&D projects

    This paper describes the work performed by a number of companies and universities working as a consortium under the umbrella of the European Union Framework Programme 5 (FP5) Information Society Technologies (IST) research programme, to provide a set of Digital Rights Management (DRM) technologies and architectures aimed at reducing the copyright-circumvention risks that have threatened the music and film industries in their transition from the “analogue” to the “digital” age. The paper starts by addressing some of the earlier standardisation efforts in the DRM arena, namely the Open Platform Initiative for Multimedia Access (OPIMA). One of the described FP5 IST projects, Open Components for Controlled Access to Multimedia Material (OCCAMM), has developed the OPIMA vision. The paper also addresses the Motion Pictures Expert Group (MPEG) DRM work, starting from the MPEG Intellectual Property Management and Protection (IPMP) “Hooks” and moving towards the MPEG IPMP Extensions, which originated the first DRM-related standard ever released by ISO (MPEG-4 Part 13, called IPMP Extensions or IPMP-X). The paper clarifies how the FP5 IST project MPEG Open Security for Embedded Systems (MOSES) extended the OPIMA interfaces and architecture to achieve compliance with the MPEG IPMP-X standard, and how it contributed to the achievement of consensus and to the specification, implementation (Reference Software) and validation (Conformance Testing) of the MPEG IPMP-X standard.

    Network sharing through service outsourcing in inter-domain IMS frameworks

    Resource sharing can be used as a short-term solution to the imbalance between the supply and demand of network resources. Resource sharing enables operators to provide services to their subscribers using networks belonging to other operators, and is increasingly becoming an option for mobile operators. In this thesis we explore a mechanism for sharing access network resources that utilises negotiable short-term Service Level Agreements (SLAs) that can easily adapt to changing network conditions. Through this mechanism, operators of resource-constrained networks may use near real-time dynamic SLAs to negotiate network access services for their subscribers. We refer to this form of resource sharing as 'Service Outsourcing'.
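
The 'Service Outsourcing' negotiation described above can be sketched roughly as follows: a resource-constrained operator offers a short-term SLA, and the host operator accepts or counters based on its spare capacity. All field names, the acceptance rule, and the counter-offer logic are assumptions for illustration, not the mechanism the thesis defines.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch: a short-term, negotiable SLA between two operators.
# Field names and the acceptance/counter-offer rules are hypothetical.

@dataclass
class ShortTermSLA:
    requester: str
    bandwidth_mbps: float
    duration: timedelta
    price_per_mbps: float

def negotiate(sla: ShortTermSLA, spare_capacity_mbps: float, floor_price: float):
    """Host operator accepts if it has spare capacity and the offered price meets its floor."""
    if sla.bandwidth_mbps <= spare_capacity_mbps and sla.price_per_mbps >= floor_price:
        return {"accepted": True, "expires": datetime.now() + sla.duration}
    # Otherwise counter-offer with what the host can actually provide
    return {"accepted": False,
            "counter_bandwidth_mbps": min(sla.bandwidth_mbps, spare_capacity_mbps)}

offer = ShortTermSLA("operator-A", bandwidth_mbps=50,
                     duration=timedelta(minutes=30), price_per_mbps=0.8)
print(negotiate(offer, spare_capacity_mbps=200, floor_price=0.5))  # accepted
```

Because the SLA carries its own short duration, expired agreements simply lapse and can be renegotiated as network conditions change, which is the adaptivity the abstract emphasises.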

    Multimedia in mobile networks: Streaming techniques, optimization and User Experience

    1. UMTS overview and User Experience
    2. Streaming Service & Streaming Platform
    3. Quality of Service
    4. MPEG-4
    5. Test Methodology & testing architecture
    6. Conclusion

    Web Radio Blueprint

    The Internet is one of the most significant technological developments of our lifetime, and its impact affects many established technologies and media. Radio is one of the established media revolutionized by the Internet: its expanding multimedia capabilities lead the way to a more focused medium than traditional terrestrial radio broadcasting. Radio transmission over the Internet (Web Radio) offers the opportunity to provide content focused on a “niche” audience, while providing an opportunity for broader operator participation than terrestrial radio. Web Radio is an information technology that offers a viable alternative to commercial radio, which has become increasingly consolidated since the passage of the Telecommunications Act of 1996. Commercial radio consolidation into ten major owners has resulted in less localism, diversity, and competition in radio. Web Radio can restore these elements to the radio industry, assuming the policies implemented support the goals of localism, diversity, competition, and interaction. Web Radio is at a critical stage in its development as an Internet-supported information technology. Web Radio content providers face several significant issues in the economic and regulatory components of their businesses. Web Radio represents a unique opportunity for entrepreneurs and producers to establish viable conduits for the content they are able to create. It is critical that policies concerning the technical, legal, and operational issues be determined in a way that does not cripple the development of the industry. This thesis provides a blueprint for individuals or organizations that are new to the technology of Web Radio, or that would like to review the current state of affairs in the technical and legal components of webcasting.
This “Web Radio Blueprint” will assist individuals or organizations with the implementation of webcasting as a way to communicate their music or message to an interested listener. It provides a blueprint for an organization attempting to become an Internet Broadcaster, or to add an Internet Broadcasting function to an e-commerce site, by presenting three key areas that should be considered in the organization's plan. These areas include the infrastructure technologies used in webcasting, the legal obstacles imposed by the 1998 “Digital Millennium Copyright Act” and other rulings, and the operational concerns that an e-commerce organization should address.

    A framework to provide charging for third party composite services

    Over the past few years the trend in the telecommunications industry has been geared towards offering new and innovative services to end users. A decade ago network operators were content with offering simple services such as voice and text messaging. However, they began to notice that these services were generating lower revenues even while the number of subscribers increased. This was a direct result of market saturation, and network operators were forced to rapidly deploy services with minimum capital investment while maximising revenue from service usage by end users. Network operators can achieve this by exposing the network to external content and service providers: they would create interfaces that allow these 3rd party service and content providers to offer their applications and services to users. Composing and bundling these services will essentially create new services for the user and achieve rapid deployment of enhanced services. The concept of offering a wide range of services that are coordinated in such a way that they deliver a unique experience has sparked interest and considerable research on Service Delivery Platforms (SDPs). SDPs will enable network operators to develop and offer a wide variety of services. Given this interest in SDPs, standardisation bodies such as the International Telecommunications Union – Telecommunications (ITU-T), Telecoms and Internet converged Services and Protocols for Advanced Networks (TISPAN), the 3rd Generation Partnership Project (3GPP), and the Open Mobile Alliance (OMA) are leading efforts to standardise functions and protocols to enhance service delivery by network operators. Obtaining revenue from these services requires effective accounting of service usage, as well as mechanisms for billing and charging of these services.
The IP Multimedia Subsystem (IMS) is a Next Generation Network (NGN) architecture that provides a platform on which multimedia services can be developed and deployed by network operators. The IMS provides network operators, both fixed and mobile, with a control layer that allows them to offer services that will enable them to remain key role players within the industry. Achieving this in an environment where the network operator interacts directly with the 3rd party service providers may become complicated.
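
The charging problem the abstract raises can be made concrete with a small sketch: settle one composite-service session by splitting each component's charge between the operator and the 3rd party provider that supplied it. The record fields, provider names, and the flat revenue-sharing rule are assumptions for illustration, not the framework the thesis proposes.

```python
# Illustrative only: settling charges for a composite service assembled from
# 3rd-party components. Fields, names, and the revenue split are hypothetical.

def charge_composite(usage_records, operator_share=0.3):
    """Split the total charge for one composite-service session between
    the network operator and each 3rd-party provider."""
    total = sum(r["price"] for r in usage_records)
    operator_cut = round(total * operator_share, 2)
    payouts = {r["provider"]: round(r["price"] * (1 - operator_share), 2)
               for r in usage_records}
    return {"total": round(total, 2), "operator": operator_cut, "providers": payouts}

# One session of a composite service bundling two 3rd-party components
session = [
    {"provider": "maps-co", "service": "location", "price": 0.10},
    {"provider": "media-co", "service": "ringback", "price": 0.40},
]
print(charge_composite(session))
```

Even this toy version shows why accounting must be per-component: without itemised usage records, neither the operator's cut nor each provider's payout can be computed.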

    Deliverable D6.4: Assessment report: Experimenting with CONNECT in Systems of Systems, and Mobile Environments

    The core objective of WP6 is to evaluate the CONNECT technologies under realistic situations. To achieve this goal, WP6 concentrated a significant amount of its 4th-year effort on finalizing the implementation of the GMES scenario defined during the 3rd year. The GMES scenario allows the consortium to assess the validity of CONNECT claims and to investigate the exploitation of CONNECT technologies to deal with the integration of real systems. In particular, GMES requires the connection of highly heterogeneous and independently built systems provided by the industry partners. WP6 also contributed mobile collaborative applications and case studies showing the exploitation of CONNECTORs on mobile devices.

    Understanding user experience of mobile video: Framework, measurement, and optimization

    Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with a product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Given the significance of UX in the success of mobile video (Jordan, 2002), many researchers have centered on this area, examining users' expectations, motivations, requirements, and usage context. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for the specific mobile video service is lacking to structure such a large number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptability of the service. Many QoE metrics can estimate the user-perceived quality or acceptability of mobile video, but may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed further aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complicated user requirements (e.g., usage purposes and personal preferences).
In this chapter, we investigate the existing important UX frameworks, compare their similarities, and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored with comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video, by taking a complete account of UX influences, and the improvement of mobile video service quality, by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on issues in various aspects of UX of mobile video. In the conclusion, we suggest some open issues for future study.
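
The QoS-versus-QoE distinction can be made concrete with a toy estimator that maps a few objective delivery measurements onto a rough MOS-like 1-5 score. The factor names, weights, and scale below are illustrative assumptions, not a metric proposed in the chapter.

```python
# Illustrative only: a toy QoE estimator combining objective delivery factors.
# Weights, caps, and the 1-5 (MOS-like) scale are assumptions for illustration.

def qoe_score(bitrate_kbps, stall_ratio, startup_delay_s):
    """Map delivery-level measurements to a rough 1-5 quality score."""
    # Higher bitrate helps, with diminishing returns (capped at 1.0)
    video_quality = min(bitrate_kbps / 2000.0, 1.0)
    # Stalling and slow startup are penalised multiplicatively
    stall_penalty = min(stall_ratio * 4.0, 1.0)
    startup_penalty = min(startup_delay_s / 10.0, 1.0)
    score = 1.0 + 4.0 * video_quality * (1 - stall_penalty) * (1 - startup_penalty)
    return round(min(max(score, 1.0), 5.0), 2)

print(qoe_score(1800, 0.0, 1.0))   # smooth playback -> high score
print(qoe_score(600, 0.15, 6.0))   # stalls and slow start -> low score
```

The multiplicative penalties reflect a point the chapter makes: a high bitrate alone cannot compensate for stalling or slow startup, because acceptability depends on the interaction of factors rather than any one of them.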

    Timely Classification of Encrypted or Protocol-Obfuscated Internet Traffic Using Statistical Methods

    Internet traffic classification aims to identify the type of application or protocol that generated a particular packet or stream of packets on the network. Through traffic classification, Internet Service Providers (ISPs), governments, and network administrators can access basic functions and several solutions, including network management, advanced network monitoring, network auditing, and anomaly detection. Traffic classification is essential as it ensures the Quality of Service (QoS) of the network, as well as allowing efficient resource planning. With the increase of encrypted or protocol-obfuscated traffic on the Internet and multilayer data encapsulation, some classical classification methods have lost the interest of the scientific community. The limitations of traditional classification methods based on port numbers and payload inspection for classifying encrypted or obfuscated Internet traffic have led to significant research efforts focused on Machine Learning (ML) based classification approaches using statistical features from the transport layer. In an attempt to increase classification performance, Machine Learning strategies have gained interest from the scientific community and have shown promise for the future of traffic classification, especially for recognizing encrypted traffic. However, the ML approach also has its own limitations, as some of these methods have a high computational resource consumption, which limits their application when classifying large traffic volumes or real-time flows. The limitations of ML application have led to the investigation of alternative approaches, including feature-based procedures and statistical methods. In this sense, statistical analysis methods, such as distances and divergences, have been used to classify traffic in large flows and in real time. The main objective of a statistical distance is to differentiate flows and find a pattern in traffic characteristics through statistical properties, which enable classification.
Divergences are functional expressions, often related to information theory, which measure the degree of discrepancy between any two distributions. This thesis focuses on proposing a new methodological approach to classify encrypted or obfuscated Internet traffic based on statistical methods, enabling the evaluation of network traffic classification performance, including the use of computational resources in terms of CPU and memory. A set of traffic classifiers based on the Kullback-Leibler and Jensen-Shannon divergences, and the Euclidean, Hellinger, Bhattacharyya, and Wootters distances, is proposed. The following are the four main contributions to the advancement of scientific knowledge reported in this thesis. First, an extensive literature review on the classification of encrypted and obfuscated Internet traffic was conducted. The results suggest that port-based and payload-based methods are becoming obsolete due to the increasing use of traffic encryption and multilayer data encapsulation. ML-based methods are also becoming limited due to their computational complexity. As an alternative, Support Vector Machine (SVM), which is also an ML method, and the Kolmogorov-Smirnov and Chi-squared tests can be used as references for statistical classification. In parallel, the possibility of using statistical methods for Internet traffic classification has emerged in the literature, with the potential for good results in classification without the need for large computational resources. The potential statistical methods are the Euclidean, Hellinger, Bhattacharyya, and Wootters distances, as well as the Kullback-Leibler (KL) and Jensen-Shannon divergences. Second, we present a proposal and implementation of a classifier based on SVM for P2P multimedia traffic, comparing the results with the Kolmogorov-Smirnov (KS) and Chi-square tests.
The results suggest that SVM classification with a Linear kernel leads to better classification performance than the KS and Chi-square tests, depending on the value assigned to the Self C parameter. The SVM method with a Linear kernel and suitable values for the Self C parameter may be a good choice to identify encrypted P2P multimedia traffic on the Internet. Third, we present a proposal and implementation of two classifiers based on the KL Divergence and the Euclidean Distance, compared against SVM with a Linear kernel configured with the standard Self C parameter; the SVM showed a reduced ability to classify flows based solely on packet sizes relative to the KL and Euclidean Distance methods. The KL and Euclidean methods were able to classify all tested applications, particularly streaming and P2P, which in almost all cases they identified efficiently and with high accuracy, with reduced consumption of computational resources. Based on the obtained results, it can be concluded that the KL and Euclidean Distance methods are an alternative to SVM, as these statistical approaches can operate in real time and do not require retraining every time a new type of traffic emerges. Fourth, we present a proposal and implementation of a set of classifiers for encrypted Internet traffic, based on the Jensen-Shannon Divergence and the Hellinger, Bhattacharyya, and Wootters distances, with their respective results compared to those obtained with methods based on the Euclidean Distance, KL, KS, and Chi-square. Additionally, we present a comparative qualitative analysis of the tested methods based on Kappa values and Receiver Operating Characteristic (ROC) curves. The results suggest average accuracy values above 90% for all statistical methods, classified as “almost perfect reliability” in terms of Kappa values, with the exception of KS.
This result indicates that these methods are viable options to classify encrypted Internet traffic, especially the Hellinger Distance, which showed the best Kappa values compared to the other classifiers. We conclude that the considered statistical methods can be accurate and cost-effective in terms of computational resource consumption for classifying network traffic. Our approach was based on the classification of Internet network traffic, focusing on statistical distances and divergences. We have shown that it is possible to classify traffic and obtain good results with statistical methods, balancing classification performance and the use of computational resources in terms of CPU and memory. The validation of the proposal supports the argument of this thesis, which proposes the implementation of statistical methods as a viable alternative for Internet traffic classification compared to methods based on port numbers, payload inspection, and ML.

Thesis prepared at Instituto de Telecomunicações Delegação da Covilhã and at the Department of Computer Science of the University of Beira Interior, and submitted to the University of Beira Interior for discussion in public session to obtain the Ph.D. degree in Computer Science and Engineering. This work has been funded by the Portuguese FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020, and by operation Centro-01-0145-FEDER-000019 - C4 - Centro de Competências em Cloud Computing, co-funded by the European Regional Development Fund (ERDF/FEDER) through the Programa Operacional Regional do Centro (Centro 2020). This work has also been funded by CAPES (Brazilian Federal Agency for Support and Evaluation of Graduate Education) within the Ministry of Education of Brazil under a scholarship supported by the International Cooperation Program CAPES/COFECUB Project 9090134/2013 at the University of Beira Interior.
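
The distance- and divergence-based classification this thesis describes can be illustrated with a minimal sketch: normalise a flow's packet-size histogram into a probability distribution, then assign the flow to the reference class at the smallest divergence. The class profiles and histograms below are hypothetical, and this is a sketch of the general technique, not the thesis's actual classifiers.

```python
import math

def _normalize(hist):
    """Convert a packet-size histogram (counts per size bin) into a probability distribution."""
    total = sum(hist)
    return [c / total for c in hist]

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P||Q); eps avoids log(0) on empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetrised, smoothed KL via the mixture M."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def hellinger(p, q):
    """Hellinger distance, bounded in [0, 1]."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

def classify(flow_hist, reference_profiles, metric=js_divergence):
    """Assign the flow to the reference class with the smallest divergence."""
    p = _normalize(flow_hist)
    return min(reference_profiles,
               key=lambda label: metric(p, _normalize(reference_profiles[label])))

# Hypothetical per-class packet-size histograms (counts per size bin)
profiles = {"streaming": [5, 10, 40, 80], "p2p": [60, 30, 20, 10]}
print(classify([50, 35, 15, 8], profiles))  # closest to the p2p profile
```

Note how this mirrors the thesis's argument for statistical methods over ML: adding a new traffic class only means adding a reference histogram, with no retraining, and classification is a handful of arithmetic operations per flow.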