14 research outputs found

    An adaptive e-Learning platform based on IP multicast technology

    Paper presented at the International Conference on Information and Communication Technologies in Education, Badajoz, 2002. A wide range of applications involving different types of media, with distinct quality-of-service and network-resource requirements, has been pushing the computer communications community to improve the service provided by the Internet. Besides the IETF's recent proposals for introducing QoS into the Internet, the multicast technology proposed by S. Deering assumes a major role in supporting group-oriented applications. This article describes the design, implementation and operation of an adaptive distance-learning system based on IP multicast technology and accessible through a Web browser. The system uses public-domain multimedia multicast tools to build a platform that adapts to the available network resources and to the hardware capabilities of the end-system. The architecture includes an adaptive module, based on Java applets and embedded JavaScript, responsible for assessing the current operating conditions by collecting the client's system performance (the e-student's host) and relevant group characteristics. The collected data are then combined into weighting parameters such as the available bandwidth at the client side, the round-trip time between the client and the remote server, and the client's current CPU load and free memory. The result is used to schedule and parameterise the multicast applications appropriately and transparently.
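    The following minimal sketch (in Python rather than the platform's Java/JavaScript, and with illustrative weights, ceilings and profile names that are not taken from the paper) shows the kind of decision such an adaptive module makes: normalise the client-side measurements, combine them into a weighted score, and map the score to a session profile.

```python
def adaptation_score(bw_kbps, rtt_ms, cpu_load, free_mem_mb,
                     w=(0.4, 0.2, 0.2, 0.2)):
    """Weighted score in [0, 1]; 1 means plenty of headroom. The weights
    and reference ceilings are illustrative assumptions."""
    bw = min(bw_kbps / 2000.0, 1.0)        # 2 Mbit/s treated as "enough"
    rtt = max(0.0, 1.0 - rtt_ms / 500.0)   # 500 ms treated as unusable
    cpu = max(0.0, 1.0 - cpu_load)         # cpu_load in [0, 1]
    mem = min(free_mem_mb / 256.0, 1.0)    # 256 MB treated as "enough"
    return w[0] * bw + w[1] * rtt + w[2] * cpu + w[3] * mem

def choose_profile(score):
    """Map the score to a conference profile (hypothetical tiers)."""
    if score > 0.75:
        return "audio + video + shared whiteboard"
    if score > 0.40:
        return "audio + low-rate video"
    return "audio only"

print(choose_profile(adaptation_score(800, 120, 0.5, 128)))
# -> "audio + low-rate video"
```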

    QoS adaptation in multimedia multicast conference applications for e-learning services

    The evolution of the World Wide Web (WWW) service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and closer human relations. Groupware applications are increasingly representative in the Internet home-applications market; however, the Quality of Service (QoS) provided by the network is still a limitation that impairs their performance. Such applications have found in multicast technology an ally that contributes to their efficient implementation and scalability. Additionally, considering QoS as a design goal at the application level becomes crucial for groupware development, enabling QoS proactivity in applications. An application's ability to adapt dynamically to resource availability can be considered a quality factor. Tolerant real-time applications, such as videoconferences, are in the front line to benefit from QoS adaptation. However, not all of them include adaptive technology able to provide both end-system and network quality awareness. Adaptation, in these cases, can be achieved by introducing a multiplatform middleware layer responsible for tutoring the applications' resources (enabling adjudication or limitation) based on the available processing and networking capabilities. Bringing these technological contributions together, an adaptive platform has been developed that integrates public-domain multicast tools, applied to a Web-based distance-learning system. The system is user-centered (on the e-student), aiming at good pedagogical practices and proactive usability of multimedia and network resources. The services provided, including QoS-adapted interactive multimedia multicast conferences (MMC), are fully integrated and transparent to end-users. QoS adaptation, when treated systematically in tolerant real-time applications, brings advantages in group scalability and QoS sustainability in heterogeneous and unpredictable environments such as the Internet.
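    As an illustration of the middleware's adjudication/limitation role, the sketch below (tool names, shares and the 64 kbit/s audio floor are assumptions, not values from the paper) splits a measured bandwidth budget among the conference tools, protecting audio intelligibility first.

```python
AUDIO_FLOOR_KBPS = 64                    # assumed intelligibility floor
SHARES = {"video": 0.6, "audio": 0.3, "whiteboard": 0.1}  # assumed split

def adjudicate(available_kbps, shares=SHARES):
    """Split measured available bandwidth among the active multicast
    tools, protecting audio before video and whiteboard take theirs."""
    budget = {tool: share * available_kbps for tool, share in shares.items()}
    if budget["audio"] < AUDIO_FLOOR_KBPS:
        budget["audio"] = min(AUDIO_FLOOR_KBPS, available_kbps)
        rest = max(available_kbps - budget["audio"], 0)
        budget["video"], budget["whiteboard"] = 0.85 * rest, 0.15 * rest
    return {tool: round(kbps) for tool, kbps in budget.items()}

# Audio floor kicks in on a constrained link:
print(adjudicate(150))  # {'video': 73, 'audio': 64, 'whiteboard': 13}
```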

    A New Approach to Manage QoS in Distributed Multimedia Systems

    Dealing with network congestion is one criterion used to enhance quality of service (QoS) in distributed multimedia systems. Existing solutions to the network congestion problem ignore scalability considerations because they maintain a separate classification for each video stream. In this paper, we propose a new method to control the QoS provided to clients under network congestion by discarding some frames when needed. The proposed technique, called (m,k)-frame, is scalable with little degradation in application performance. The (m,k)-frame method derives from the notion of (m,k)-firm real-time constraints, under which among any k invocations of a task, at least m must meet their deadline. Our simulation studies show the usefulness of the (m,k)-frame method in adapting the QoS of a multimedia application to actual conditions, according to the current system load; notably, the system must adjust the QoS provided to active clients when their number varies, i.e. under dynamic arrival of clients. Comment: 10 pages, International Journal of Computer Science and Information Security (IJCSIS).
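    The abstract does not give the exact classification rule, but a common way to realise an (m,k)-firm guarantee is a fixed, evenly spaced mandatory/optional pattern; the sketch below uses Ramanathan's pattern (an assumption, not necessarily the paper's policy) and drops optional frames only under congestion, so at least m of every k consecutive frames survive.

```python
import math

def is_mandatory(j: int, m: int, k: int) -> bool:
    """Evenly spaced (m,k) pattern: within each window of k frames,
    exactly m are marked mandatory."""
    j %= k
    return j == (math.ceil(j * m / k) * k) // m

def filter_stream(frame_ids, m, k, congested):
    """Under congestion, forward only mandatory frames; otherwise all."""
    for j in frame_ids:
        if not congested or is_mandatory(j, m, k):
            yield j

# (m,k) = (2,3): in every window of 3 frames, frames 0 and 1 survive.
print(list(filter_stream(range(9), 2, 3, congested=True)))
# -> [0, 1, 3, 4, 6, 7]
```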

    Telehealth tinnitus therapy during the COVID-19 outbreak in the UK: uptake and related factors

    OBJECTIVE: The Audiology Department at the Royal Surrey County Hospital usually offers face-to-face, audiologist-delivered cognitive behavioural therapy (CBT) for tinnitus rehabilitation. During the COVID-19 lockdown, patients were offered telehealth CBT via video using a web-based platform. This study evaluated the proportion of patients who took up the offer of telehealth sessions and the factors related to uptake. DESIGN: Retrospective service evaluation. STUDY SAMPLE: 113 consecutive patients whose care was interrupted by the lockdown. RESULTS: 80% of patients accepted telehealth. The main reasons for declining were lack of access to a suitable device and the belief that telehealth appointments would not be useful. Compared to having no hearing loss in the better ear, having a mild or moderate hearing loss increased the chance of declining telehealth by factors of 3.5 (p = 0.04) and 14.9 (p = 0.038), respectively. High tinnitus annoyance, as measured via a visual analogue scale, increased the chance of declining telehealth appointments by a factor of 1.4 (p = 0.019). CONCLUSIONS: Although CBT via telehealth was acceptable to most patients, alternatives may be necessary for the 20% who declined; these patients tended to have worse hearing in their better ear and more annoying tinnitus.

    Quality aspects of Internet telephony

    Internet telephony has had a tremendous impact on how people communicate, and many now maintain contact using some form of it. The motivation for this work has therefore been to address the quality aspects of real-world Internet telephony for both fixed and wireless telecommunication. The focus is on the quality of voice communication, since poor quality often leads to user dissatisfaction. The scope of the work is broad, covering the main factors within IP-based voice communication. The first four chapters of this dissertation constitute the background material. The first chapter outlines where Internet telephony is deployed today and motivates the topics and techniques used in this research. The second chapter provides background on Internet telephony, including signalling, speech coding and voice internetworking. The third chapter focuses solely on quality measures for packetised voice systems, and the fourth chapter is devoted to the history of voice research. The appendix of this dissertation constitutes the research contributions. It includes an examination of the access network, focusing on how calls are multiplexed in wired and wireless systems; in the wireless case, we then consider how to hand over calls from 802.11 networks to the cellular infrastructure. We next consider the Internet backbone, where most of our work is devoted to measurements specifically for Internet telephony. These measurements have been applied to estimating telephony arrival processes, measuring call quality, and quantifying the trend in Internet telephony quality over several years. We also consider the end systems, since they are responsible for reconstructing a voice stream under loss and delay constraints. Finally, we estimate voice quality using the ITU proposal PESQ and the packet loss process. The main contribution of this work is a systematic examination of Internet telephony; we describe several methods to enable adaptable solutions for maintaining consistent voice quality, and we have found that relatively small technical changes can lead to substantial improvements in user quality. A second contribution is a suite of software tools designed to ascertain voice quality in IP networks, some of which are in use within commercial systems today.
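    As a self-contained illustration of mapping a packet-loss process to user-perceived voice quality (the dissertation itself uses PESQ together with measured loss), the sketch below applies the ITU-T G.107 E-model's effective equipment impairment and the standard R-to-MOS conversion; the default Ie/Bpl values are indicative of G.711 with loss concealment, and delay/echo terms are omitted.

```python
def mos_from_loss(ppl: float, ie: float = 0.0, bpl: float = 25.1) -> float:
    """MOS estimate from random packet loss ppl (in %), via the G.107
    E-model: Ie_eff raises the equipment impairment with loss, the
    R-factor is clamped to [0, 100], then converted to MOS."""
    ie_eff = ie + (95.0 - ie) * ppl / (ppl + bpl)
    r = max(0.0, min(100.0, 93.2 - ie_eff))   # transmission rating factor
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

for loss in (0, 1, 5, 10):
    print(f"{loss:2d}% loss -> MOS ~ {mos_from_loss(loss):.2f}")
```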

    (m,k)-WFQ: Integrating (m,k)-firm timing constraints into guaranteed-rate networks

    In packet-switched networks, guaranteed-rate schedulers such as WFQ (Weighted Fair Queueing) and its variants are widely used to guarantee, primarily, bandwidth and consequently a delay bound for real-time applications, given that their cumulative work-arrival curves are bounded. However, the delay such schedulers guarantee to a real-time application may exceed the application's requirement if it generates large traffic bursts. Reserving the peak rate could remedy this problem, but at the cost of under-utilising bandwidth. We propose a new solution that integrates timing constraints into the WFQ scheduling process. Since many real-time applications tolerate some deadline misses according to the (m,k)-firm model, we propose a new fair-bandwidth-sharing scheduling technique, called (m,k)-WFQ, which extends WFQ to also take the (m,k)-firm constraints of real-time applications into account. We evaluate our proposal analytically using the Network Calculus formalism and derive the delay bound guaranteed by (m,k)-WFQ. Analytical and simulation results show the advantage of (m,k)-WFQ in guaranteeing smaller delays while maintaining fair bandwidth sharing.
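    A minimal sketch of the idea follows, under strong simplifying assumptions (single link, virtual time crudely advanced per served packet, strict demotion of optional packets): packets receive ordinary WFQ finish tags, but each flow's packets are first classified mandatory/optional by an evenly spaced (m,k) pattern. The actual (m,k)-WFQ algorithm and its Network Calculus delay bound are more refined than this.

```python
import heapq
import math

def mandatory(j: int, m: int, k: int) -> bool:
    """Evenly spaced (m,k) pattern over each window of k packets."""
    j %= k
    return j == (math.ceil(j * m / k) * k) // m

class MkWfq:
    def __init__(self):
        self.vtime = 0.0   # virtual time (simplified bookkeeping)
        self.finish = {}   # last finish tag per flow
        self.count = {}    # packets seen per flow, drives the (m,k) pattern
        self.heap = []     # (demoted?, finish tag, tie-break, flow)
        self.n = 0

    def enqueue(self, flow, size, weight, m, k):
        j = self.count[flow] = self.count.get(flow, -1) + 1
        tag = max(self.finish.get(flow, 0.0), self.vtime) + size / weight
        self.finish[flow] = tag            # classic WFQ finish tag
        heapq.heappush(self.heap, (not mandatory(j, m, k), tag, self.n, flow))
        self.n += 1

    def dequeue(self):
        demoted, tag, _, flow = heapq.heappop(self.heap)
        self.vtime = max(self.vtime, tag)
        return flow, ("optional" if demoted else "mandatory")

q = MkWfq()
for _ in range(6):                 # one bursty flow with (m,k) = (2,3)
    q.enqueue("video", size=1500, weight=2, m=2, k=3)
while q.heap:                      # mandatory packets drain first
    print(q.dequeue())
```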

    AXMEDIS 2007 Conference Proceedings

    The AXMEDIS international conference series, established in 2005, focuses on research, development and applications in the cross-media domain, exploring innovative technologies to meet the challenges of the sector. AXMEDIS 2007 deals with all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, interoperability, protection and rights management. It addresses the latest developments and future trends of these technologies and their applications, together with their impact on and exploitation within the academic, business and industrial communities.

    Detection and Mitigation of Impairments for Real-Time Multimedia Applications

    Measures of Quality of Service (QoS) for multimedia services should focus on phenomena that are observable to the end-user. Metrics such as delay and loss may have little direct meaning to the end-user, because knowledge of the specific coding and/or adaptive techniques is required to translate delay and loss into user-perceived performance. Impairment events, as defined in this dissertation, are observable by end-users independent of the coding, adaptive playout or packet loss concealment techniques employed by their multimedia applications. Methods for detecting real-time multimedia (RTM) impairment events from end-to-end measurements are developed here and evaluated using 26 days of PlanetLab measurements collected over nine different Internet paths. Furthermore, methods for detecting impairment-causing network events such as route changes and congestion are also developed; applications can use these detection techniques to match their response to network events. The heuristics-based techniques for detecting congestion and route changes were evaluated using the PlanetLab measurements: congestion events occurred for 6-8 hours per day on weekdays on two paths, and the heuristics-based route change detection algorithm detected 71% of the visible layer-2 route changes, missing events that occurred too close together in time or for which the minimum RTT change was small. A practical model-based route change detector, named the parameter unaware detector (PUD), is also developed in this dissertation, since model-based detectors were expected to perform better than the heuristics-based one. The optimal detector, named the parameter aware detector (PAD), is developed as well; it provides an upper bound on the performance of any detector, and the analysis predicting the performance of PAD is another important contribution of this work. Simulation results show that the model-based PUD algorithm has acceptable performance over a larger region of the parameter space than the heuristics-based algorithm, and that this difference in performance increases with the window size. Both practical algorithms have a smaller acceptable-performance region than the optimal algorithm. The model-based algorithms proposed in this dissertation assume that RTTs have a Gamma density function. This Gamma assumption may not hold when there are wireless links in the path, so a study of CDMA 1xEVDO networks was initiated to understand their delay characteristics. During this study, it was found that the widely deployed proportional-fair (PF) scheduler can be corrupted, accidentally or deliberately, to cause RTM impairments; this is demonstrated using measurements conducted over both in-lab and deployed CDMA 1xEVDO networks. A new variant of PF that removes this impairment vulnerability is proposed and evaluated using ns-2 simulations, and it is shown that the new scheduler, together with a new adaptive-alpha initialization strategy, reduces the starvation problem of the PF algorithm.
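    The sketch below illustrates model-based change detection under the Gamma RTT assumption discussed above; it is an illustration of the approach, not the dissertation's PUD or PAD (the threshold and window sizes are arbitrary choices). A Gamma model is fitted to a reference window of RTTs, and a route change is flagged when a test window fits that model much worse than the reference data did.

```python
import math
import numpy as np

def gamma_loglik(x, k, theta):
    """Mean Gamma(k, theta) log-density over the samples in x."""
    return np.mean((k - 1) * np.log(x) - x / theta
                   - k * math.log(theta) - math.lgamma(k))

def route_change(ref, test, thresh=2.0):
    """Flag a change when the test window's fit drops by > thresh."""
    ref, test = np.asarray(ref), np.asarray(test)
    k = ref.mean() ** 2 / ref.var()       # method-of-moments Gamma fit
    theta = ref.var() / ref.mean()
    return gamma_loglik(ref, k, theta) - gamma_loglik(test, k, theta) > thresh

rng = np.random.default_rng(0)
before = 20 + rng.gamma(2.0, 1.5, 200)    # baseline path, RTTs around 23 ms
after = 35 + rng.gamma(2.0, 1.5, 50)      # min-RTT shift after a reroute
print(route_change(before, before[-50:]), route_change(before, after))
# -> False True
```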

    Signal reconstruction and coding: signal reconstruction with detection of erroneous sample positions, and error-correcting codes

    PhD thesis in Electrical Engineering. Consider a low-pass signal with N samples in which the amplitudes of t samples were subsequently modified. Is it possible to detect which samples violate the low-pass condition and recover their original amplitudes? Most known signal reconstruction techniques do not solve this problem: the error positions must be known before the correct amplitudes can be found. However, error-correcting codes of the BCH type, among others, can solve it in two steps: first they determine the error positions, then the error amplitudes. Those codes are usually implemented over a finite field, with dedicated processors carrying out the coding and decoding tasks efficiently. Nevertheless, as we show throughout this work, it is possible to use these techniques over the complex field, although new problems arise, such as the numerical stability of the reconstruction process. Signal reconstruction techniques and error-correcting codes are usually treated as distinct disciplines, each with its own techniques and results and apparently with few aspects in common; these difficulties largely result from differences in arithmetic, notation and language. This thesis works with both disciplines, transporting techniques and concepts from one area to the other in an attempt to enrich the knowledge and understanding of both. We study several techniques for finding the error positions in a band-limited, time-limited discrete signal, described as non-linear signal reconstruction, and we compare amplitude-correction techniques imported from error-correcting codes with those used in signal reconstruction algorithms. The stability of the error-position problem is also investigated; we show, for example, that the effects of error amplitude and error position on stability can be separated. One of the goals of this work was to find techniques for designing error-correcting codes over the complex field; the key to this problem lies in the combinatorics of error patterns and their influence on the stability of the reconstruction algorithms.
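    A small numerical sketch of the two-step idea over the complex field follows (an annihilating-filter construction on out-of-band DFT bins; the thesis studies a family of such techniques and their stability, so this is one illustrative instance, not its specific algorithm): the out-of-band spectrum of the corrupted signal acts as a syndrome sequence whose linear recurrence, exactly like a BCH error locator, reveals the error positions.

```python
import numpy as np

# Toy demonstration: N-sample band-limited signal, t corrupted samples,
# error positions recovered from out-of-band DFT bins ("syndromes").
N, B, t = 64, 20, 2                       # length, bandwidth, no. of errors
rng = np.random.default_rng(1)

X = np.zeros(N, complex)                  # spectrum: bins B..N-B stay zero
inband = np.r_[0:B, N - B + 1:N]
X[inband] = rng.standard_normal(inband.size) + 1j * rng.standard_normal(inband.size)
x = np.fft.ifft(X)

pos_true = np.sort(rng.choice(N, t, replace=False))   # unknown positions
y = x.copy()
y[pos_true] += rng.standard_normal(t) + 1j * rng.standard_normal(t)

# Out-of-band bins of y depend only on the error:
#   S[i] = sum_j a_j * z_j**i,  with  z_j = exp(-2j*pi*pos_j/N)
S = np.fft.fft(y)[B:B + 2 * t]

# The S[i] satisfy a linear recurrence of order t; its characteristic
# ("error locator") polynomial has the z_j as roots.
M = np.array([[S[i + t - 1 - j] for j in range(t)] for i in range(t)])
c = np.linalg.solve(M, -S[t:2 * t])
z = np.roots(np.r_[1.0, c])

pos_est = np.sort(np.round(-np.angle(z) * N / (2 * np.pi)) % N).astype(int)
print(pos_true, pos_est)                  # positions match; the amplitudes
                                          # then follow from a linear system
```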