22 research outputs found

    Joint Optimization of Sensor Selection and Routing for Distributed Estimation in Wireless Sensor Networks

    Get PDF
    Recent advances in Wireless Sensor Networks (WSNs) have made it possible to deploy, at large scale, small and inexpensive sensors with limited sensing, communication, and computation resources. As a consequence, WSNs can offer a wide range of services in applications of great societal importance. Among the many constraints that arise in WSN design, such as limited energy, processing, and memory, the energy constraint is particularly important: in many applications (e.g., remote monitoring of different environments, administrative buildings, habitat monitoring, forest fires, healthcare, traffic surveillance, battlefield surveillance, wildlife reserves, etc.) the sensors are battery powered, possibly supplemented by renewable energy harvesting. Since communication accounts for most of the energy consumed by a node, the transmission and reception of information must be optimized as much as possible. These limitations, together with the specific design of the sensors, call for methods that are energy efficient and that reduce the amount of information to be transmitted. Motivation and objectives: Although WSNs often need to cover a large geographical area, many events must be detected and handled locally. Examples include the energy captured by acoustic sensors when an acoustic source is localized in space, the detection and verification of a fire outbreak in a forest, direction-of-arrival sensors for localization, and other locally generated diffusive sources (e.g., nuclear radiation). Intuitively, in these scenarios the nodes located far from the source will observe measurements that are significantly less informative than those of the nodes close to the source.
Therefore, the network lifetime can be increased by activating only a subset of sensors (the most informative ones) whose information is useful and should therefore be collected. In addition, energy efficiency can be further improved by choosing the best routing structure. It is worth noting that the most traditional technique is the direct wireless transmission of the measurements from all selected nodes to the data fusion center (the node requesting the global estimate), which results in an inefficient use of energy resources. A feasible solution is to exploit the multihop nature of data transmission, which can significantly reduce the total transmit power and thus extend the network lifetime. In-network quantization (fusion) of the information can also be used to save energy, since it reduces the amount of information to be forwarded toward the fusion center. Dynamic bit-rate allocation (bits per sample) at each node can likewise be employed to reduce the total consumption of the network. In this way, substantial energy savings can be obtained when performing a distributed estimation task by optimizing the set of active sensors, the routing structure, and the bits per sample for each selected sensor. The recent literature has clearly shown that multihop transmission in WSNs is more energy efficient than direct transmission, where each measurement is transmitted directly to the data fusion center (MT, Measure-and-Transmit). Moreover, multihop transmission in general allows the measurements to be sent to the fusion node in two ways: a) each node directly forwards the information it receives, or b) each node forwards aggregated information.
It can be observed that fusing the measurements at intermediate sensors improves the overall estimation quality at a limited computational cost. This leads us to consider the following two schemes. Measure-and-Forward (MF): the sensor nodes simply forward the measurements received from their child nodes toward the querying node along the chosen routing structure. The querying node then computes the final estimate, so there is no incremental aggregated estimation at intermediate sensors. Estimate-and-Forward (EF): a sequential aggregated estimation is performed at the intermediate nodes along the routing path. Given a routing structure, each sensor fuses all the measurements received from its child nodes together with its own in order to obtain an aggregated estimate, and then sends a single flow of fused information to its parent node in the chosen routing structure. The EF scheme has several attractive advantages over MF. First, EF is more energy efficient, since an active sensor on a route only has to forward the fused estimate (a single packet of information to transmit) instead of forwarding its own measurement together with those of its children. Moreover, with EF, the intermediate nodes on the route hold an estimate of the parameter that improves as the node gets closer to the querying node. Another major drawback of MF is that the nodes close to the querying node may become overloaded, creating a bottleneck effect.
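The EF idea can be illustrated with a minimal sketch (all values and function names below are illustrative, not the thesis algorithms): each node on a path toward the querying node fuses the estimate received from its child with its own measurement by inverse-variance weighting, and forwards a single (estimate, variance) pair.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance fusion of two independent unbiased estimates."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    est = var * (est_a / var_a + est_b / var_b)
    return est, var

def estimate_and_forward(path_measurements):
    """EF along a path: each node fuses its child's aggregate with its own
    (measurement, variance) pair and forwards one fused packet."""
    est, var = path_measurements[0]           # leaf node starts the flow
    for meas, meas_var in path_measurements[1:]:
        est, var = fuse(est, var, meas, meas_var)
    return est, var                           # variance shrinks hop by hop

# Example: three nodes on a route, all observing the same parameter.
path = [(10.4, 2.0), (9.8, 1.0), (10.1, 0.5)]
final_est, final_var = estimate_and_forward(path)
```

Note that only one packet crosses each link, whereas MF would make a node forward its own measurement plus all of its children's.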
Therefore, given a WSN with an underlying network connectivity graph, a querying node, and a localized source, this thesis considers the problem of distributed estimation of a parameter under a constraint on the total available power. Using the EF scheme, we jointly optimize the subset of active sensors, the bit-rate allocation at each sensor, and the associated multihop routing structure toward the querying node, so that the total estimation distortion is minimized for a given total transmit power. An important result of this work is that the Shortest Path Tree (SPT) based only on communication cost (SPT-CC) is not, in general, the optimal routing structure when seeking the best trade-off between estimation distortion and total communication cost, regardless of whether the MF or the EF scheme is used. In our multihop distributed estimation, higher bit rates must be allocated as we move toward the querying node, because the estimation accuracy improves as more information is fused at the intermediate sensor nodes. The bit-rate allocation at a sensor therefore depends on the number of hops between that node and the querying node, so that higher bit rates must be provided as we approach the querying node along the chosen multihop route. Furthermore, the location of the source that generates the phenomenon to be estimated also influences the per-sensor bit-rate allocation. For example, if a sensor is close to the source (high Signal-to-Noise Ratio), a reasonably high bit rate must be allocated to it even if a large number of hops is needed to reach the querying node.
Consequently, there is a clear need to design an adaptive quantizer at each sensor that provides an appropriate bit rate, which depends on the trade-off between the number of hops and the location of the source. The bit rate also depends on the communication cost between each pair of sensors. Methodology: In this thesis we combine theoretical analysis and the design of iterative algorithms inspired by optimization tools with computer simulations. For the theoretical analysis of the aforementioned problem, we follow the standard methodology of optimal linear unbiased estimation, i.e., the Best Linear Unbiased Estimator (BLUE). In particular, this thesis focuses on the problem of jointly optimizing the sensor selection, the routing structure, and the bit-rate allocation for each selected sensor. First, we consider only the joint optimization of the sensor selection and the routing structure, assuming fine quantization and thus ignoring the optimal bit-rate allocation. In this case, the objective function is linear but the constraints of the optimization problem are non-convex, which makes the resulting problem highly complex. Second, we include the bit-rate allocation as an additional variable in the first problem, which turns it into a nonlinear non-convex optimization problem, even harder to solve than the first one. This non-convex problem is addressed by applying several convex relaxation steps and solving the relaxed problems over the different variables in tandem. The goal in both problems is to minimize the total estimation distortion under a given total power constraint.
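The BLUE methodology named above can be sketched for a linear observation model y_i = h_i * theta + n_i with uncorrelated noise (gains, noise variances, and function names here are illustrative, not taken from the thesis):

```python
import numpy as np

def blue(y, h, noise_var):
    """Best Linear Unbiased Estimator for y = h*theta + n,
    with n zero-mean and diagonal covariance diag(noise_var)."""
    w = h / noise_var                 # per-sensor weights h_i / sigma_i^2
    theta_hat = w @ y / (w @ h)       # (h' C^-1 y) / (h' C^-1 h)
    est_var = 1.0 / (w @ h)           # variance achieved by the BLUE
    return theta_hat, est_var

# Four sensors: gains fall off with distance from the localized source,
# so distant sensors contribute less (lower effective SNR).
h = np.array([1.0, 0.8, 0.4, 0.2])
noise_var = np.array([0.1, 0.1, 0.1, 0.1])
y = h * 5.0                           # noiseless observations of theta = 5
theta_hat, est_var = blue(y, h, noise_var)
```

The estimator weights each sensor by h_i / sigma_i^2, which is why far-away (low-gain) sensors add little accuracy but still cost transmit power, motivating sensor selection.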
We also prove that our problems belong to the class of NP-hard problems by means of a (polynomial-time) reduction from the Undirected Hamiltonian Path (UHP) problem. Our relaxed optimization problems can be solved with convex optimization methods such as interior-point methods. After the theoretical analysis, the proposed algorithms for both cases (fine quantization and adaptive quantization) are simulated using Matlab and the CVX toolbox. The proposed algorithms are compared, in each case, with the best algorithms proposed in the literature for resource allocation for estimation in WSNs. Conclusions: In this thesis, given a WSN with an underlying network connectivity graph, a querying (sink) node, and a localized source, we have considered the problem of distributed parameter estimation under a constraint on the total available power. To carry out a distributed estimation task (for example, forest fire detection, direction-of-arrival-based localization, estimation of any other localized phenomenon, etc.), we have considered, using the EF scheme, the problem of jointly optimizing the subset of active sensors, the bit-rate allocation, and the associated multihop routing structure used to send the aggregated information to the querying node. In this way, the total estimation distortion is minimized for a given total power.
Most recently proposed solutions attempt to simplify the problem by considering only the selection of a subset of sensors, ignoring the joint optimization of the routing structure and of the coding. However, the routing structure is an important variable of the problem since, in general, transmitting information from a node far from the querying node is more costly than from a nearby one. Source quantization also plays an important role, since sensors far from the source receive a lower SNR and hence require fewer quantization levels. Our main contributions are summarized below: 1. The problem of jointly optimizing the sensor selection, the multihop routing structure, and the adaptive bit-rate allocation (for the sensor measurements) for distributed estimation under a constraint on the total communication cost is formulated and analyzed, both in terms of algorithm design and of complexity analysis, proving that it is NP-hard when the EF scheme is used. We also provide a lower bound for the optimal solution of the original NP-hard optimization problem. 2. We first consider the joint optimization of the sensor selection and the multihop routing structure, assuming that fine quantization is available for every sensor measurement. We then present an algorithm called FTRA (Fixed-Tree Relaxation-based Algorithm), which is a relaxation of our original optimization problem that decouples the choice of the routing structure from the selection of active sensors. 3.
Next, we also design a new and efficient iterative distributed algorithm called IDA (Iterative Distributed Algorithm), which jointly optimizes, in a local and distributed fashion, the sensor selection and the multihop routing structure. We also show experimentally that our IDA generates a solution close to the optimal solution of the original NP-hard problem, making use of the previously derived lower bound. 4. Second, we include the bit-rate allocation as an additional variable of the previous optimization problem, yielding a nonlinear non-convex optimization problem that is even harder to solve, and therefore also NP-hard. 5. For this second optimization problem we have developed two algorithms: a) the Fixed-Tree Relaxation-based Adaptive Quantization algorithm (FTR-AQ), and b) the Local Optimization-based Adaptive Quantization algorithm (LO-AQ). LO-AQ provides a more accurate estimate for the same total power, at the cost of additional computational complexity at each node. 6. Finally, we compare our algorithms with the best related work previously presented in the literature, clearly showing superior performance in terms of estimation distortion for the same total power.

In this PhD thesis, we consider the problem of power-efficient distributed estimation of a deterministic parameter related to a localized phenomenon in a Wireless Sensor Network (WSN), where, due to the power constraints, we propose to jointly optimize (i) the selection of a subset of active sensors, (ii) the multihop routing structure, and (iii) the bit-rate allocation for all active sensor measurements.
Thus, our goal is to obtain the best possible estimation performance at a given querying (sink) node, for a given total power budget in the WSN. Furthermore, because of the power constraints, each selected sensor fuses all the measurements received from its child sensors on the chosen multihop routing tree together with its own measurement to perform an aggregated parameter estimation, and then sends only one flow of fused data to its parent sensor on the tree. We call this scheme Estimate-and-Forward (EF). The thesis is divided into two parts. In the first part, an optimization problem is formulated where fine quantization (high bit-rates) is assumed to be available for all the sensor measurements, that is, ignoring the bit-rate optimization problem. Then, only the sensor selection and the multihop routing structure are jointly optimized in order to minimize the total distortion in estimation (estimation error) under a constraint on the total multihop communication cost. The resulting problem is non-convex, and we show that it is, in fact, NP-Hard. Thus, we first propose an algorithm based on a relaxation of our original optimization problem, where the choice of the sensor selection is decoupled from the choice of the multihop routing structure. In this case, the routing structure is taken to be the Shortest Path Tree based only on the Communication Cost (SPT-CC). Furthermore, we also design an efficient iterative distributed algorithm that jointly optimizes the sensor selection and the multihop routing structure. Then, we provide a lower bound for the optimal solution of our original NP-Hard optimization problem and show experimentally that our iterative distributed algorithm generates a solution close to this lower bound, thus approaching optimality.
Although there is no strict guarantee that the gap between this lower bound and the optimal solution of the main problem is always small, our numerical experiments show that this gap is in fact very small in many cases. In the second part, the bit-rate allocation is also considered in the optimization problem along with the sensor selection and the multihop routing structure. In this case, the problem becomes a nonlinear non-convex optimization problem (in the first part, the objective function was linear but the constraints were non-convex). We address this nonlinear non-convex problem using several relaxation steps and then solving the relaxed convex versions over the different variables in tandem, resulting in a sequence of linear (convex) subproblems that can be solved efficiently. We then propose an algorithm that uses the EF scheme and an adaptive uniform dithered quantizer to solve this problem. First, assuming that a fixed routing structure and high bit-rates for every sensor measurement are available, we optimize the sensor selection. Then, given the subset of sensors and the associated routing structure, we optimize the bit-rate allocation for the selected sensors under a given total power budget, in order to minimize the total distortion in estimation. In addition, we show that the total distortion in estimation can be further reduced by allowing interplay between the edges of the selected routing structure and other available lower-cost edges, while keeping the routing tree rooted at the sink node.
An important result of our work is that, because of the interplay between the communication cost over the links and the gain in estimation accuracy obtained by choosing certain sensors and fusing their measurements on the routing tree, the traditional SPT routing structure, widely used in practice, is no longer optimal. More specifically, our routing structures provide a better trade-off between the overall power consumption and the final estimation accuracy obtained at the sink node. Compared to more conventional sensor selection, adaptive quantization and fixed-routing algorithms, our proposed joint optimization algorithms yield significant energy savings for the same estimation accuracy.
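The SPT-CC baseline referred to in this abstract (routing by communication cost alone) is just a shortest-path tree rooted at the sink, e.g. via Dijkstra's algorithm; the toy graph and costs below are illustrative.

```python
import heapq

def spt_cc(adj, sink):
    """Shortest Path Tree toward `sink` based only on Communication Cost
    (SPT-CC). adj: {node: [(neighbor, cost), ...]}, undirected graph."""
    dist = {sink: 0.0}
    parent = {sink: None}
    heap = [(0.0, sink)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                    # stale heap entry
        for v, c in adj[u]:
            nd = d + c
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                parent[v] = u           # v forwards its data toward u
                heapq.heappush(heap, (nd, v))
    return parent, dist

# Toy 4-node network; node 0 is the querying (sink) node.
adj = {0: [(1, 1.0), (2, 4.0)],
       1: [(0, 1.0), (2, 2.0), (3, 5.0)],
       2: [(0, 4.0), (1, 2.0), (3, 1.0)],
       3: [(1, 5.0), (2, 1.0)]}
parent, dist = spt_cc(adj, sink=0)
```

The thesis result is precisely that this cost-only tree is generally not the best choice once estimation distortion is part of the objective, since fusing informative sensors on the tree can justify more expensive links.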

    Optimal Cooperative Spectrum Sensing for Cognitive Radio

    Get PDF
    The rapidly increasing interest in wireless communication has led to the continuous development of wireless devices and technologies. The modern convergence and interoperability of wireless technologies has further increased the number of services that can be provided, leading to substantial demand for efficient access to the radio frequency spectrum. Cognitive radio (CR), an innovative concept that reuses licensed spectrum in an opportunistic manner, promises to overcome the evident spectrum underutilization caused by inflexible spectrum allocation. Reliable and efficient spectrum sensing is essential to CR. Cooperation among spectrum sensing devices is vital when CR systems experience deep shadowing and fading environments. In this thesis, cooperative spectrum sensing (CSS) schemes have been designed to optimize detection performance in an efficient and implementable manner, taking into consideration diversity performance, detection accuracy, low complexity, and reporting channel bandwidth reduction. The thesis first investigates state-of-the-art spectrum sensing algorithms in CR. Comparative analysis and simulation results highlight the pros, cons and performance criteria of a practical CSS scheme, leading to the problem formulation of the thesis. Motivated by the problem of diversity performance in a CR network, the thesis then focuses on designing a novel relay-based CSS architecture for CR. A major cooperative transmission protocol with low complexity and overhead, the Amplify and Forward (AF) cooperative protocol, and an improved double energy detection scheme in single-relay and multiple-cognitive-relay networks are designed. Simulation results demonstrate that the developed algorithm is capable of reducing the probability of missed detection and improving the detection probability of a primary user (PU).
To improve spectrum sensing reliability while increasing agility, a CSS scheme based on evidence theory is next considered in this thesis, focusing on a data fusion combination rule. Combining conflicting evidence from secondary users (SUs) with the classical Dempster-Shafer (DS) rule may produce counter-intuitive results when fusing SU sensing data, leading to poor CSS performance. In order to minimise the effect of these counter-intuitive results, and to enhance the performance of the CSS system, a novel evidence-based decision fusion scheme is developed. The proposed approach is based on the credibility of evidence and a dissociability degree measure of the SUs' sensing data evidence. Simulation results illustrate that the proposed scheme improves detection performance and reduces error probability when compared to other related evidence-based schemes under robust practical scenarios. Finally, motivated by the need for low-complexity, minimum-bandwidth reporting channels, which can be significant in high data rate applications, novel CSS quantization schemes are proposed. Quantization methods are considered for a maximum likelihood estimation (MLE) based and an evidence-based CSS scheme. For the MLE-based CSS, a novel uniform and optimal output entropy quantization scheme is proposed to provide lower overhead complexity and improved throughput. For the evidence-based CSS scheme, a scheme is designed that quantizes the Basic Probability Assignment (BPA) data at each SU before it is sent to the fusion center (FC). The proposed scheme takes into consideration the characteristics of the hypothesis distribution under diverse signal-to-noise ratios (SNRs) of the PU signal, based on the optimal output entropy. Simulation results demonstrate that the proposed quantization CSS scheme improves sensing performance with a minimum number of quantized bits when compared to other related approaches.
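As a minimal illustration of the cooperative sensing setting (a textbook energy detector with a simple OR fusion rule, not the schemes proposed in the thesis; signal and noise levels are illustrative), each SU compares the received energy to a threshold and the fusion center combines the hard decisions:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detect(samples, threshold):
    """Hard decision of one SU: 1 if the average energy exceeds the threshold."""
    statistic = np.mean(np.abs(samples) ** 2)
    return int(statistic > threshold)

def or_rule(decisions):
    """Fusion center declares the PU present if any SU reports it."""
    return int(any(decisions))

n = 1000
pu_signal = rng.normal(0.0, 1.0, n)      # PU present: unit-power signal
# Three SUs observe the signal (or pure noise) through independent noise.
decisions_present = [energy_detect(pu_signal + rng.normal(0.0, 0.3, n), 0.5)
                     for _ in range(3)]
decisions_absent = [energy_detect(rng.normal(0.0, 0.3, n), 0.5)
                    for _ in range(3)]
```

The thesis improves on exactly this kind of baseline: soft or quantized statistics (e.g., BPAs) replace the hard bits, and more careful fusion rules replace the OR rule.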

    Efficient Coding of Transform Coefficient Levels in Hybrid Video Coding

    Get PDF
    All video coding standards of practical importance, such as Advanced Video Coding (AVC), its successor High Efficiency Video Coding (HEVC), and the state-of-the-art Versatile Video Coding (VVC), follow the basic principle of block-based hybrid video coding. In such an architecture, the video pictures are partitioned into blocks. Each block is first predicted by either intra-picture or motion-compensated prediction, and the resulting prediction errors, referred to as residuals, are compressed using transform coding. This thesis deals with the entropy coding of quantization indices for transform coefficients, also referred to as transform coefficient levels, as well as the entropy coding of directly quantized residual samples. The entropy coding of quantization indices is referred to as level coding in this thesis. The presented developments focus on both improving the coding efficiency and reducing the complexity of the level coding for HEVC and VVC. These goals were achieved by modifying the context modeling and the binarization of the level coding. The first development presented in this thesis is a transform coefficient level coding for variable transform block sizes, which was introduced in HEVC. It exploits the fact that non-zero levels are typically concentrated in certain parts of the transform block by partitioning blocks larger than 4×4 samples into 4×4 sub-blocks. Each 4×4 sub-block is then coded similarly to the level coding specified in AVC for 4×4 transform blocks. This sub-block processing improves coding efficiency and has the advantage that the number of required context models is independent of the set of supported transform block sizes. The maximum number of context-coded bins for a transform coefficient level is one indicator for the complexity of the entropy coding.
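The 4×4 sub-block partitioning described above can be sketched in a few lines (a structural illustration only; the actual HEVC coding order and syntax are more involved):

```python
def subblocks_4x4(block):
    """Partition an NxM transform block (N, M multiples of 4) into a grid
    of 4x4 sub-blocks, each of which HEVC codes similarly to an AVC 4x4
    transform block."""
    n, m = len(block), len(block[0])
    return [[[row[c:c + 4] for row in block[r:r + 4]]
             for c in range(0, m, 4)]
            for r in range(0, n, 4)]

# An 8x8 block of levels yields a 2x2 grid of 4x4 sub-blocks.
block = [[r * 8 + c for c in range(8)] for r in range(8)]
grid = subblocks_4x4(block)
```

Because every block size reduces to the same 4×4 unit, the context-model count stays fixed no matter which transform sizes the codec supports.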
An adaptive binarization of absolute transform coefficient levels using Rice codes is presented that reduces the maximum number of context-coded bins from 15 (as used in AVC) to three for HEVC. Based on the developed selection of an appropriate Rice code for each scanning position, this adaptive binarization achieves virtually the same coding efficiency as the binarization specified in AVC for bit-rate operation points typically used in consumer applications. The coding efficiency is improved for high bit-rate operation points, which are used in more advanced and professional applications. In order to further improve the coding efficiency for HEVC and VVC, the statistical dependencies among the transform coefficient levels of a transform block are exploited by a template-based context modeling developed in this thesis. Instead of selecting the context model for a current scanning position primarily based on its location inside a transform block, already coded neighboring locations inside a local template are utilized. To further increase the coding efficiency achieved by the template-based context modeling, the different coding phases of the initially developed level coding are merged into a single coding phase. As a consequence, the template-based context modeling can utilize the absolute levels of the neighboring frequency locations, which provides better conditional probability estimates and further improves coding efficiency. This template-based context modeling with a single coding phase is also suitable for trellis-coded quantization (TCQ), since TCQ is state-driven and derives the next state from the current state and the parity of the current level. TCQ introduces different context model sets for coding the significance flag depending on the current state. Based on statistical analyses, an extension of the state-dependent context modeling of TCQ is presented, which further improves the coding efficiency in VVC. 
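The Rice binarization referred to above maps a value to a unary-coded quotient followed by a fixed number of remainder bits; this sketch shows the principle only, not the exact HEVC binarization or its adaptive parameter selection rule:

```python
def rice_binarize(value, k):
    """Rice binarization (Golomb code with M = 2^k): unary prefix for
    value >> k, terminated by '0', then k fixed remainder bits."""
    prefix = '1' * (value >> k) + '0'                       # unary quotient
    suffix = format(value & ((1 << k) - 1), f'0{k}b') if k else ''
    return prefix + suffix

# With k = 1, small values get short codes; a larger k suits flatter
# level distributions, which is why adapting k per scanning position pays off.
codes = {v: rice_binarize(v, 1) for v in range(5)}
```

All Rice-coded bins can be bypass-coded, which is what keeps the number of context-coded bins per level bounded.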
After that, a method to reduce the complexity of the level coding at the decoder is presented. This method separates the level coding into a coding phase exclusively consisting of context-coded bins and another one consisting of bypass-coded bins only. For retaining the state-dependent context selection, which significantly contributes to the coding efficiency of TCQ, a dedicated parity flag is introduced and coded with context models in the first coding phase. An adaptive approach is then presented that further reduces the worst-case complexity, effectively lowering the maximum number of context-coded bins per transform coefficient to 1.75 without negatively affecting the coding efficiency. In the last development presented in this thesis, a dedicated level coding for transform skip blocks, which often occur in screen content applications, is introduced for VVC. This dedicated level coding better exploits the statistical properties of directly quantized residual samples for screen content. Various modifications to the level coding improve the coding efficiency for this type of content. Examples of these modifications are a binarization with additional context-coded flags and the coding of the sign information with adaptive context models.

    Scalable Front End Designs for Communication and Learning

    Get PDF
    In this work we provide three examples of estimation/detection problems for which customizing the Front End to the specific application makes the system more efficient and scalable. The three problems we consider are all classical, but face new scalability challenges. This introduces additional constraints, accounting for which results in front end designs that are very distinct from the conventional approaches. The first two case studies pertain to the canonical problems of synchronization and equalization for communication links. As system bandwidths scale, challenges arise due to the limited resolution of analog-to-digital converters (ADCs). We discuss system designs that react to this bottleneck by drastically relaxing the precision requirements of the front end and correspondingly modifying the back end algorithms using Bayesian principles. The third problem we discuss belongs to the field of computer vision. Inspired by research in neuroscience on the mammalian visual system, we redesign the front end of a machine vision system to be neuro-mimetic, followed by layers of unsupervised learning using simple k-means clustering. This results in a framework that is intuitive, more computationally efficient than the approach of supervised deep networks, and amenable to the increasing availability of large amounts of unlabeled data. We first consider the problem of blind carrier phase and frequency synchronization in order to obtain insight into the performance limitations imposed by severe quantization constraints. We adopt a mixed-signal analog front end that coarsely quantizes the phase and employs a digitally controlled feedback that applies a phase shift prior to the ADC; this acts as a controllable dither signal and aids in the estimation process.
We propose a control policy for the feedback and show that, combined with blind Bayesian algorithms, it results in excellent performance, close to that of an unquantized system. Next, we take up the problem of channel equalization with severe limits on the number of slicers available for the ADC. We find that the standard flash ADC architecture can be highly sub-optimal in the presence of such constraints. Hence we explore a "space-time" generalization of the flash architecture by allowing a fixed number of slicers to be dispersed in time (sampling phase) as well as space (i.e., amplitude). We show that optimizing the slicer locations, conditioned on the channel, results in significant gains in bit error rate (BER) performance. Finally, we explore alternative ways of learning convolutional nets for machine vision, making them easier to interpret and simpler to implement than currently used purely supervised nets. In particular, we investigate a framework that combines a neuro-mimetic front end (designed in collaboration with neuroscientists from the psychology department at UCSB) with unsupervised feature extraction based on clustering. Supervised classification, using a generic support vector machine (SVM), is applied at the end. We obtain competitive classification results on standard image databases, beating the state of the art for NORB (uniform-normalized) and approaching it for MNIST.
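The unsupervised feature-extraction stage described above can be sketched with plain Lloyd's-algorithm k-means followed by hard assignment to the learned centroids (data, initialization, and function names are illustrative; the actual pipeline operates on image patches after the neuro-mimetic front end):

```python
import numpy as np

def kmeans(X, init_idx, iters=10):
    """Plain k-means (Lloyd's algorithm), used here as an unsupervised
    dictionary learner. init_idx picks the initial centroids; a careful
    initialization scheme is assumed to be out of scope for this sketch."""
    centroids = X[init_idx].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def one_hot_features(X, centroids):
    """Hard-assignment encoding: each sample is represented by a one-hot
    vector over its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.eye(len(centroids))[d.argmin(axis=1)]

# Two well-separated clusters of illustrative 2-D 'patch' vectors.
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
centroids, labels = kmeans(X, init_idx=[0, 39])
features = one_hot_features(X, centroids)
```

These cluster-assignment features would then feed the generic SVM classifier mentioned in the abstract; no labels are needed until that final stage.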

    Irregular Variable Length Coding

    Get PDF
    In this thesis, we introduce Irregular Variable Length Coding (IrVLC) and investigate its applications, characteristics and performance in the context of digital multimedia broadcast telecommunications. During IrVLC encoding, the multimedia signal is represented using a sequence of concatenated binary codewords. These are selected from a codebook, comprising a number of codewords, which, in turn, comprise various numbers of bits. However, during IrVLC encoding, the multimedia signal is decomposed into particular fractions, each of which is represented using a different codebook. This is in contrast to regular Variable Length Coding (VLC), in which the entire multimedia signal is encoded using the same codebook. The application of IrVLCs to joint source and channel coding is investigated in the context of a video transmission scheme. Our novel video codec represents the video signal using tessellations of Variable-Dimension Vector Quantisation (VDVQ) tiles. These are selected from a codebook, comprising a number of tiles having various dimensions. The selected tessellation of VDVQ tiles is signalled using a corresponding sequence of concatenated codewords from a Variable Length Error Correction (VLEC) codebook. This VLEC codebook represents a specific joint source and channel coding case of VLCs, which facilitates both compression and error correction. However, during video encoding, only particular combinations of the VDVQ tiles will perfectly tessellate, owing to their various dimensions. As a result, only particular sub-sets of the VDVQ codebook and, hence, of the VLEC codebook may be employed to convey particular fractions of the video signal. Therefore, our novel video codec can be said to employ IrVLCs. The employment of IrVLCs to facilitate Unequal Error Protection (UEP) is also demonstrated. 
This may be applied when various fractions of the source signal have different error sensitivities, as is typical in audio, speech, image and video signals, for example. Here, different VLEC codebooks having appropriately selected error correction capabilities may be employed to encode the particular fractions of the source signal. This approach may be expected to yield a higher reconstruction quality than equal protection in cases where the various fractions of the source signal have different error sensitivities. Finally, this thesis investigates the application of IrVLCs to near-capacity operation using EXtrinsic Information Transfer (EXIT) chart analysis. Here, a number of component VLEC codebooks having different inverted EXIT functions are employed to encode particular fractions of the source symbol frame. We show that the composite inverted IrVLC EXIT function may be obtained as a weighted average of the inverted component VLC EXIT functions. Additionally, EXIT chart matching is employed to shape the inverted IrVLC EXIT function to match the EXIT function of a serially concatenated inner channel code, creating a narrow but still open EXIT chart tunnel. In this way, iterative decoding convergence to an infinitesimally low probability of error is facilitated at near-capacity channel SNRs.
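The weighted-average property stated above is easy to reproduce numerically: if the fractions alpha_k of the source frame are encoded with component codebooks whose inverted EXIT functions are f_k, the composite inverted IrVLC EXIT curve is the mixture sum_k alpha_k * f_k. The component curves below are illustrative shapes, not measured EXIT functions of real VLEC codebooks:

```python
import numpy as np

# Grid of a-priori mutual information I_A in [0, 1].
i_a = np.linspace(0.0, 1.0, 11)

# Illustrative inverted EXIT functions of two component VLC codebooks.
f1 = i_a ** 2          # weaker code: low extrinsic output at low I_A
f2 = np.sqrt(i_a)      # stronger code

# Fractions of the source symbol frame encoded by each codebook.
alpha = np.array([0.3, 0.7])

# Composite inverted IrVLC EXIT function: weighted average of components.
composite = alpha[0] * f1 + alpha[1] * f2
```

EXIT-chart matching then amounts to choosing the weights alpha so that this composite curve hugs, without crossing, the EXIT function of the inner channel code.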

    Joint source and channel coding

    Get PDF