24 research outputs found

    A prediction-based dynamic content adaptation framework for enterprise documents applied to collaborative mobile web conferencing

    Get PDF
    Enterprise documents, created in applications such as PowerPoint and Word, can be used and shared using ubiquitous Web-enabled terminals connected to the Internet. In the context of Web conferencing, enterprise documents, particularly presentation slides, are hosted on a server and presented to the meeting participants synchronously. When mobile devices are involved in such conferencing applications, the content (e.g., presentation slides) should be adapted to meet the target mobile terminal's constraints and, more importantly, to provide the end-user with the best possible experience. Globally, two major trends in content adaptation have been studied: static and dynamic. In static content adaptation, the content is adapted offline into a set of versions using different transcoding parameter combinations. At runtime, when the content is requested, the optimal version among them, according to a given quality criterion, is selected for delivery. The performance of these solutions depends on the granularity in use, that is, the number of created versions. In dynamic content adaptation, also called just-in-time adaptation, a customized version is created on the fly based on the mobile device context, while the end-user is still waiting. Dynamically identifying the optimal transcoding parameters, without performing any transcoding operation, is very challenging. In this thesis, we propose a novel dynamic adaptation framework that estimates, without performing transcoding, near-optimal transcoding parameters (format, scaling parameter, and quality factor). The output formats considered in this research are JPEG- and XHTML-based Web pages. First, we define a quality of experience measure to quantify the quality of the adapted content as experienced by the end-user. This measure takes into account the visual aspect of the content as well as its transport quality, which is mostly affected by the network conditions.
    Second, we propose a dynamic adaptation framework capable of selecting dynamically, and with very little computational complexity, near-optimal adapted content that achieves the best compromise between visual quality and delivery time according to the proposed quality of experience measure. It uses recently published predictors of the file size and visual quality of JPEG images subject to changes in their scaling parameter and quality factor. Our framework comprises five adaptation methods of increasing quality and complexity. The first, requiring a single transcoding operation, estimates near-optimal adapted content, whereas the other four methods improve prediction accuracy by allowing the system to perform more than one transcoding operation. The performance of the proposed dynamic framework was compared with a static exhaustive system and a typical dynamic system. Overall, the obtained results were very close to optimality and far better than those of the typical dynamic system; on a large number of the tested documents, optimality was actually reached. The proposed dynamic framework has been applied to OpenOffice Impress presentations. It is designed to be general, but future work is needed to validate its applicability to other enterprise document types such as Word (text) and Excel (spreadsheets).
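    The selection step described above can be sketched in code. This is a minimal illustration only: the predictor formulas, the QoE weighting, the 5-second delivery tolerance, and the parameter grid are all invented assumptions, not the thesis's actual models.

    ```python
    # Hypothetical sketch of prediction-based transcoding-parameter selection.
    # All numeric models below are assumptions for illustration.

    def predict_file_size(scaling, quality_factor, base_size=200_000):
        """Toy predictor: file size (bytes) shrinks with scaling and quality factor."""
        return base_size * (scaling ** 2) * (0.3 + 0.7 * quality_factor / 100)

    def predict_visual_quality(scaling, quality_factor):
        """Toy predictor: visual quality in [0, 1] grows with both parameters."""
        return (scaling ** 0.5) * (quality_factor / 100) ** 0.3

    def qoe(scaling, quality_factor, bandwidth_bps, alpha=0.7):
        """QoE as a weighted compromise between visual quality and delivery time."""
        size = predict_file_size(scaling, quality_factor)
        delivery_s = size * 8 / bandwidth_bps
        # Penalise long delivery times; 5 s is an assumed tolerance threshold.
        transport = max(0.0, 1.0 - delivery_s / 5.0)
        visual = predict_visual_quality(scaling, quality_factor)
        return alpha * visual + (1 - alpha) * transport

    def select_parameters(bandwidth_bps):
        """Pick the (scaling, quality factor) pair maximising predicted QoE,
        without performing any actual transcoding."""
        grid = [(s / 10, q) for s in range(1, 11) for q in range(10, 101, 10)]
        return max(grid, key=lambda p: qoe(p[0], p[1], bandwidth_bps))

    scaling, qf = select_parameters(bandwidth_bps=1_000_000)
    ```

    The key point the sketch captures is that the argmax runs entirely over predicted values, so only the single winning version ever needs to be transcoded.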

    Systems and Methods for Measuring and Improving End-User Application Performance on Mobile Devices

    Full text link
    In today's rapidly growing smartphone society, the time users spend on their smartphones continues to grow, and mobile applications are becoming the primary medium for delivering services and content to users. With such fast-paced growth in smartphone usage, cellular carriers and internet service providers continuously upgrade their infrastructure to the latest technologies and expand their capacities to improve the performance and reliability of their networks and to satisfy exploding user demand for mobile data. At the other end of the spectrum, content providers and e-commerce companies adopt the latest protocols and techniques to provide smooth and feature-rich user experiences in their applications. To ensure a good quality of experience, monitoring how applications perform on users' devices is necessary. Often, network and content providers lack such visibility into end-user application performance. In this dissertation, we demonstrate that having visibility into end-user perceived performance, through system design for efficient and coordinated active and passive measurements of end-user application and network performance, is crucial for detecting, diagnosing, and addressing performance problems on mobile devices. My dissertation consists of three projects to support this statement. First, to provide such continuous monitoring on smartphones with constrained resources that operate in a highly dynamic mobile environment, we devise efficient, adaptive, and coordinated systems, as a platform, for active and passive measurements of end-user performance. Second, using this platform and other passive data collection techniques, we conduct an in-depth user trial of mobile multipath to understand how Multipath TCP (MPTCP) performs in practice. Our measurement study reveals several limitations of MPTCP. Based on the insights gained from our measurement study, we propose two different schemes to address the identified limitations of MPTCP.
    Last, we show how to provide visibility into end-user application performance for internet providers, and in particular home WiFi routers, by passively monitoring users' traffic and utilizing per-app models that map various network quality of service (QoS) metrics to application performance.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146014/1/ashnik_1.pd
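    The per-app mapping idea can be illustrated with a toy model a router might evaluate on passively observed flow statistics. The app names, metric set, and all coefficients below are invented for illustration; the dissertation's actual models are trained, not hand-set.

    ```python
    # Illustrative per-app QoS-to-performance models (all coefficients assumed).
    # Predicted app-level delay (s) = intercept + a*rtt_ms + b*loss_pct + c/throughput_mbps
    PER_APP_MODELS = {
        "video_streaming": {"intercept": 0.5, "rtt": 0.004, "loss": 0.30, "inv_tput": 2.0},
        "web_browsing":    {"intercept": 0.3, "rtt": 0.008, "loss": 0.15, "inv_tput": 0.5},
    }

    def predict_app_performance(app, rtt_ms, loss_pct, throughput_mbps):
        """Map passively measured QoS metrics to an estimated app-level delay."""
        m = PER_APP_MODELS[app]
        return (m["intercept"] + m["rtt"] * rtt_ms + m["loss"] * loss_pct
                + m["inv_tput"] / throughput_mbps)

    # A router observing RTT 80 ms, 1% loss, 10 Mbit/s for a video flow:
    delay = predict_app_performance("video_streaming", 80, 1.0, 10.0)
    ```

    The design point is that the router never inspects application payloads: it infers application performance purely from network-layer metrics it can measure passively.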

    MediaSync: Handbook on Multimedia Synchronization

    Get PDF
    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Video Conference as a tool for Higher Education

    Get PDF
    The book describes the activities of the consortium member institutions in the framework of the TEMPUS IV Joint Project ViCES - Video Conferencing Educational Services (144650-TEMPUS-2008-IT-JPGR). In order to provide the basis for the development of a distance learning environment based on video conferencing systems and to develop a blended learning course methodology, the TEMPUS Project ViCES (2009-2012) was launched in 2009. This publication collects the conclusions of the project and reports the main outcomes, together with the approaches followed by the different partners towards the achievement of the project's goal. The book includes several contributions focused on specific topics related to videoconferencing services, namely how to enable such services in educational contexts so that the installation and deployment of videoconferencing systems can be conceived as an integral part of virtual open campuses.

    Intelligent analysis of Quality of Experience (QoE) in web and multimedia content delivery networks

    Get PDF
    Today, user experience is becoming a reliable indicator for service providers and telecommunication operators of overall end-to-end system functioning. Moreover, to compete for a prominent market share, network operators and service providers must retain and increase their customer subscriptions. To fulfil these requirements they need efficient Quality of Experience (QoE) monitoring and estimation. However, QoE is a subjective metric and its evaluation is expensive and time-consuming, since it requires human participation. There is therefore a need for an objective tool that can measure QoE with reasonable accuracy in real time. As a first contribution, we analyzed the impact of network conditions on Video on Demand (VoD) services. We also proposed an objective QoE estimation tool that uses a fuzzy expert system to estimate QoE from network-layer QoS parameters. As a second contribution, we analyzed the impact of MAC-layer QoS parameters on VoD services over IEEE 802.11n wireless networks. We also proposed an objective QoE estimation tool that uses a random neural network to estimate QoE from the MAC-layer perspective. As a third contribution, we analyzed the effect of different adaptation scenarios on the QoE of adaptive bit rate streaming. We also developed a web-based subjective test platform that can easily be integrated into a crowdsourcing platform for performing subjective tests. As a fourth contribution, we analyzed the impact of different web QoS parameters on web service QoE. We also proposed a novel machine learning algorithm, a fuzzy rough hybrid expert system, for estimating web service QoE objectively.
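    The first contribution, mapping network-layer QoS parameters to a QoE estimate through fuzzy reasoning, can be sketched with a minimal rule-based toy. The membership breakpoints, the metric set, and the MOS mapping below are assumptions for illustration, not the thesis's actual fuzzy rule base.

    ```python
    # Minimal fuzzy-style QoE estimator (all thresholds are assumed values).

    def membership_low(x, lo, hi):
        """Degree in [0, 1] to which x is 'low', on a linear ramp from lo to hi."""
        if x <= lo:
            return 1.0
        if x >= hi:
            return 0.0
        return (hi - x) / (hi - lo)

    def estimate_mos(loss_pct, jitter_ms, delay_ms):
        """Combine QoS memberships into a MOS-like QoE score on a 1-5 scale."""
        good_loss = membership_low(loss_pct, 0.5, 5.0)
        good_jitter = membership_low(jitter_ms, 10.0, 60.0)
        good_delay = membership_low(delay_ms, 100.0, 400.0)
        # Conjunctive aggregation (min) is one common choice of fuzzy 'AND'.
        goodness = min(good_loss, good_jitter, good_delay)
        return 1.0 + 4.0 * goodness

    mos_good = estimate_mos(loss_pct=0.2, jitter_ms=5.0, delay_ms=80.0)
    mos_bad = estimate_mos(loss_pct=10.0, jitter_ms=100.0, delay_ms=500.0)
    ```

    A real fuzzy expert system would use several overlapping membership functions per input and a rule base with defuzzification; the sketch only shows the shape of the QoS-to-MOS mapping.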

    MMBnet 2017 - Proceedings of the 9th GI/ITG Workshop „Leistungs-, Verlässlichkeits- und Zuverlässigkeitsbewertung von Kommunikationsnetzen und Verteilten Systemen“

    Get PDF
    Nowadays, mathematical methods of systems and network monitoring, modeling, simulation, and performance, dependability and reliability analysis constitute the foundation of quantitative evaluation methods for software-defined next-generation networks and advanced cloud computing systems. Considering the application of the underlying methodologies in engineering practice, these sophisticated techniques provide the basis in many different areas. The GI/ITG Technical Committee "Measurement, Modelling and Evaluation of Computing Systems" (MMB) and its members have investigated corresponding research topics and initiated a series of MMB conferences and workshops over the last decades. Its 9th GI/ITG Workshop MMBnet 2017 „Leistungs-, Verlässlichkeits- und Zuverlässigkeitsbewertung von Kommunikationsnetzen und Verteilten Systemen“ was held at Hamburg University of Technology (TUHH), Germany, on September 14, 2017. The proceedings of MMBnet 2017 summarize the contributions of one invited talk and four contributed papers by young researchers. They deal with current research issues in next-generation networks, IP-based real-time communication systems, and new application architectures, and are intended to stimulate the reader's future research in these vital areas of the modern information society.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Get PDF
    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Robust density modelling using the Student's t-distribution for human action recognition

    Full text link
    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, in experiments over two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
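    The robustness argument can be made concrete by comparing log-densities at an outlying observation. The parameter choices below (standard location and scale, nu = 3 degrees of freedom) are illustrative assumptions, not values from the paper.

    ```python
    # Why a Student's t observation model tolerates outliers better than a Gaussian:
    # the Gaussian log-density falls off quadratically in the deviation, the t
    # log-density only logarithmically, so one outlier cannot dominate the
    # HMM observation likelihood.
    import math

    def gaussian_logpdf(x, mu=0.0, sigma=1.0):
        z = (x - mu) / sigma
        return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

    def student_t_logpdf(x, mu=0.0, sigma=1.0, nu=3.0):
        z = (x - mu) / sigma
        # Log normalising constant of the t-distribution.
        c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
             - 0.5 * math.log(nu * math.pi) - math.log(sigma))
        return c - (nu + 1) / 2 * math.log1p(z * z / nu)

    # An observation 6 sigma from the mean: the t model assigns it far more
    # log-probability than the Gaussian does.
    outlier = 6.0
    gap = student_t_logpdf(outlier) - gaussian_logpdf(outlier)
    ```

    In an HMM, observation log-likelihoods are summed along the state sequence, so capping the penalty a single corrupted feature vector can contribute directly stabilises both training and decoding.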