
    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.

    Progressively communicating rich telemetry from autonomous underwater vehicles via relays

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2012. As analysis of imagery and environmental data plays a greater role in mission construction and execution, there is an increasing need for autonomous marine vehicles to transmit these data to the surface. Without access to the data acquired by a vehicle, surface operators cannot fully understand the state of the mission. Communicating imagery and high-resolution sensor readings to surface observers remains a significant challenge – as a result, current telemetry from free-roaming autonomous marine vehicles remains limited to ‘heartbeat’ status messages, with minimal scientific data available until after recovery. Increasing the challenge, long-distance communication may require relaying data across multiple acoustic hops between vehicles, yet fixed infrastructure is not always appropriate or possible. In this thesis I present an analysis of the unique considerations facing telemetry systems for free-roaming Autonomous Underwater Vehicles (AUVs) used in exploration. These considerations include high-cost vehicle nodes with persistent storage and significant computation capabilities, combined with human surface operators monitoring each node. I then propose mechanisms for interactive, progressive communication of data across multiple acoustic hops. These mechanisms include wavelet-based embedded coding methods and a novel image compression scheme based on texture classification and synthesis. The specific characteristics of underwater communication channels, including high latency, intermittent communication, the lack of instantaneous end-to-end connectivity, and a broadcast medium, inform these proposals. Human feedback is incorporated by allowing operators to identify segments of data that warrant higher-quality refinement, ensuring efficient use of limited throughput.
    I then analyze the performance of these mechanisms relative to current practices. Finally, I present CAPTURE, a telemetry architecture that builds on this analysis. CAPTURE draws on advances in compression and delay-tolerant networking to enable progressive transmission of scientific data, including imagery, across multiple acoustic hops. In concert with a physical layer, CAPTURE provides an end-to-end networking solution for communicating science data from autonomous marine vehicles. Automatically selected imagery, sonar, and time-series sensor data are progressively transmitted across multiple hops to surface operators. Human operators can request arbitrarily high-quality refinement of any resource, up to an error-free reconstruction. The components of this system are then demonstrated through three field trials in diverse environments on SeaBED, OceanServer and Bluefin AUVs, each in a different software architecture. Thanks to the National Science Foundation and the National Oceanic and Atmospheric Administration for their funding of my education and this work.
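    The operator-driven progressive refinement described above can be sketched as a layer scheduler. All names and structure here are hypothetical, not CAPTURE's actual implementation: each resource is split into embedded refinement layers, the scheduler transmits the highest-priority layer next, and an operator's request promotes a resource's remaining layers ahead of everything else.

```python
# Hypothetical sketch of progressive transmission with operator feedback:
# finer layers of each resource start at lower priority, and a refinement
# request moves a resource's pending layers to the front of the queue.
import heapq

class ProgressiveScheduler:
    def __init__(self):
        self._heap = []   # entries: (priority, seq, resource, layer_index)
        self._seq = 0     # insertion counter, keeps ordering stable

    def add_resource(self, resource, n_layers, base_priority=10):
        # Later (finer) refinement layers get lower default priority.
        for i in range(n_layers):
            heapq.heappush(self._heap,
                           (base_priority + i, self._seq, resource, i))
            self._seq += 1

    def request_refinement(self, resource):
        # Operator feedback: promote all pending layers of this resource.
        promoted = [(0, seq, res, i) for (_, seq, res, i) in self._heap
                    if res == resource]
        self._heap = [e for e in self._heap if e[2] != resource] + promoted
        heapq.heapify(self._heap)

    def next_layer(self):
        # Return the (resource, layer) to transmit next, or None if done.
        if not self._heap:
            return None
        _, _, resource, layer = heapq.heappop(self._heap)
        return resource, layer

sched = ProgressiveScheduler()
sched.add_resource("image_042", 3)
sched.add_resource("sonar_007", 2)
sched.request_refinement("sonar_007")   # operator asks for the sonar data
```

Because the layers are embedded codes, any prefix of a resource's transmitted layers already decodes to a usable coarse reconstruction.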

    Evaluation of video quality of experience across several technologies

    Master's in Electronic Engineering and Telecommunications. Nowadays the Internet is associated with many services, and the number of users joining it is increasing markedly. Service providers are therefore required to guarantee a minimum quality for network services, and the Quality of Experience of those services is crucial to their development. Notably, the growth of multimedia traffic, including video streaming, increases the probability of network congestion. From the service provider's perspective, monitoring is a way to avoid network saturation. This dissertation therefore develops a platform for monitoring multimedia traffic in the Meo Go service provided by the operator Portugal Telecom Comunicações. The architecture of adaptive streaming over HTTP was studied and tested in order to obtain Quality of Experience metrics. The adaptive streaming technique used by the Meo Go service is Smooth Streaming, an architecture designed by Microsoft. The metrics obtained from the video player are then monitored, and the analysis is carried out both objectively and subjectively. The objective implementation of the method yields a predicted Quality of Experience value as perceived by consumers; the selected metrics derive from the state and performance of the network and the terminal device, and their processing aims to simulate human judgement when scoring video quality. Subjectively, a questionnaire-based survey was conducted to compare the methods; for this, an online platform was created to collect a larger number of ratings for subsequent data processing.
    The results first show, at the level of the Smooth Streaming player, the adaptive streaming implementation technique. Test scenarios were then created to demonstrate how the method functions in many situations, the most relevant being those with higher dynamic complexity. Both the subjective and the objective methods yield values that confirm the architecture of the implemented module. Over time, the method's performance in scoring the quality of video streaming services approaches the dynamics expected of human mental judgement.
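    The objective side of such a QoE method can be sketched as a simple scoring model. The metric names and weights below are invented for illustration; they are not the dissertation's actual model, only the general shape of mapping player-side measurements to a 1-5 mean-opinion-score estimate.

```python
# Hypothetical objective QoE model: combine player metrics (average
# bitrate, rebuffering events and duration, quality switches) into a
# clamped MOS-like score. Weights are illustrative assumptions only.

def predict_mos(avg_bitrate_kbps, stall_count, stall_seconds, switches):
    score = 1.0 + 4.0 * min(avg_bitrate_kbps / 4000.0, 1.0)  # bitrate term
    score -= 0.8 * stall_count + 0.1 * stall_seconds          # rebuffering
    score -= 0.05 * switches                                  # instability
    return max(1.0, min(5.0, score))                          # clamp to 1..5

smooth_playback = predict_mos(4000, stall_count=0, stall_seconds=0, switches=0)
stalled_playback = predict_mos(4000, stall_count=3, stall_seconds=10, switches=4)
```

In practice such weights would be fitted against the subjective ratings collected through the questionnaire platform, which is what lets the objective score track human judgement over time.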

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures

    Video transmission over wireless networks

    Compressed video bitstream transmissions over wireless networks are addressed in this work. We first consider error control and power allocation for transmitting wireless video over CDMA networks in conjunction with multiuser detection. We map a layered video bitstream to several CDMA fading channels and inject multiple source/parity layers into each of these channels at the transmitter. We formulate a combined optimization problem and give the optimal joint rate and power allocation for the linear minimum mean-square error (MMSE) multiuser detector in the uplink and for two types of blind linear MMSE detectors, i.e., the direct-matrix-inversion (DMI) blind detector and the subspace blind detector, in the downlink. We then present a multiple-channel video transmission scheme for wireless CDMA networks over multipath fading channels. For a given budget on the available bandwidth and total transmit power, the transmitter determines the optimal power allocations and transmission rates among multiple CDMA channels, as well as the optimal product channel code rate allocation. We also make use of results on large-system CDMA performance for various multiuser receivers in multipath fading channels. We employ a fast joint source-channel coding algorithm to obtain the optimal product channel code structure. Finally, we propose an end-to-end architecture for multi-layer progressive video delivery over space-time differentially coded orthogonal frequency division multiplexing (STDC-OFDM) systems. We propose to use progressive joint source-channel coding to generate operational transmission distortion-power-rate (TD-PR) surfaces. By extending the rate-distortion function in source coding to the TD-PR surface in joint source-channel coding, our work can use the "equal slope" argument to effectively solve the transmission rate allocation problem as well as the transmission power allocation problem for multi-layer video transmission.
    It is demonstrated through simulations that, as the wireless channel conditions change, the proposed schemes can scale the video streams and transport them to receivers with a smooth change of perceptual quality.
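    The "equal slope" argument mentioned above can be made concrete with a greedy allocator over operational distortion-rate points. This is a generic sketch of the principle, not the paper's algorithm, and the D-R tables are made-up numbers: each unit of rate goes to the channel whose next step gives the steepest distortion drop, which equalizes the marginal slopes at the optimum when the curves are convex.

```python
# Greedy "equal slope" rate allocation across channels/layers: at every
# step, spend the next rate unit where the distortion decrease per unit
# rate is largest. For convex operational D-R tables this is optimal.

def equal_slope_allocate(dr_tables, total_units):
    """dr_tables[k] lists channel k's distortion at rate 0, 1, 2, ... units."""
    alloc = [0] * len(dr_tables)
    for _ in range(total_units):
        best, best_gain = None, 0.0
        for k, table in enumerate(dr_tables):
            if alloc[k] + 1 < len(table):
                gain = table[alloc[k]] - table[alloc[k] + 1]  # slope of next step
                if gain > best_gain:
                    best, best_gain = k, gain
        if best is None:      # every channel is at its maximum rate
            break
        alloc[best] += 1
    return alloc

# Two channels with convex D-R curves; split 4 rate units between them.
allocation = equal_slope_allocate([[10, 6, 4, 3], [8, 7, 6.5, 6.3]], 4)
```

The same loop applies to power allocation once the TD-PR surface gives distortion as a function of power at each rate point.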

    Forward Error Correction applied to JPEG-XS codestreams

    JPEG-XS offers low-complexity image compression for applications with constrained but reasonable bit-rate and low latency. Our paper explores the deployment of JPEG-XS on lossy packet networks. To preserve low latency, Forward Error Correction (FEC) is envisioned as the protection mechanism of interest. Although the JPEG-XS codestream is not scalable in essence, we observe that the loss of a codestream fraction impacts the decoded image quality differently depending on whether that fraction corresponds to codestream headers, to coefficient significance information, or to low/high-frequency data. Hence, we propose a rate-distortion-optimal unequal error protection scheme that adapts the redundancy level of Reed-Solomon codes according to the rate of channel losses and the type of information protected by the code. Our experiments demonstrate that, at 5% loss rates, it reduces the Mean Squared Error by up to 92% and 65%, compared to a transmission without protection and with optimal but equal protection, respectively.
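    To make the FEC idea concrete, here is the simplest possible erasure code: a single XOR parity packet over a group of data packets, which can recover any one lost packet. The paper's scheme uses Reed-Solomon codes with unequal redundancy per information type (more parity for headers than for high-frequency data); this sketch only illustrates the recovery principle.

```python
# Minimal erasure-coding sketch: one XOR parity packet protects a group
# of equal-length data packets against the loss of any single packet.

def xor_parity(packets):
    """Return the byte-wise XOR of a list of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild the one erased packet (marked None) from the survivors."""
    missing = received.index(None)
    rebuilt = bytearray(parity)
    for j, pkt in enumerate(received):
        if j != missing:
            for i, b in enumerate(pkt):
                rebuilt[i] ^= b
    return bytes(rebuilt)

data = [b"hdr0", b"sig1", b"low2", b"hig3"]
parity = xor_parity(data)
# Packet 1 is lost on the channel; XOR of parity and survivors restores it.
restored = recover([data[0], None, data[2], data[3]], parity)
```

A Reed-Solomon code generalizes this: with k data packets and n-k parity packets it recovers up to n-k erasures, and the unequal-protection scheme simply chooses n-k per information class.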