
    Fast Graph Sampling Set Selection Using Gershgorin Disc Alignment

    Graph sampling set selection, where a subset of nodes is chosen to collect samples to reconstruct a smooth graph signal, is a fundamental problem in graph signal processing (GSP). Previous works employ an unbiased least-squares (LS) signal reconstruction scheme and select samples via expensive extreme eigenvector computation. Instead, we assume a biased graph Laplacian regularization (GLR) based scheme that solves a system of linear equations for reconstruction. We then choose samples to minimize the condition number of the coefficient matrix; specifically, we maximize the smallest eigenvalue $\lambda_{\min}$. Circumventing explicit eigenvalue computation, we instead maximize a lower bound of $\lambda_{\min}$, given by the smallest left-end of all Gershgorin discs of the matrix. To achieve this efficiently, we first convert the optimization to a dual problem, where we minimize the number of samples needed to align all Gershgorin disc left-ends at a chosen lower-bound target $T$. Algebraically, the dual problem amounts to optimizing two disc operations: i) shifting of disc centers due to sampling, and ii) scaling of disc radii due to a similarity transformation of the matrix. We further reinterpret the dual as an intuitive disc coverage problem bearing strong resemblance to the famous NP-hard set cover (SC) problem. The reinterpretation enables us to derive a fast approximation scheme from a known SC error-bounded approximation algorithm. We find an appropriate target $T$ efficiently via binary search. Extensive simulation experiments show that our disc-based sampling algorithm runs substantially faster than existing sampling schemes and outperforms other eigen-decomposition-free sampling schemes in reconstruction error.
    Comment: Very fast deterministic graph sampling set selection algorithm without explicit eigen-decomposition
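
    The disc-alignment idea is easy to prototype. Below is a minimal Python sketch, not the paper's actual algorithm: samples_for_target plays the role of the dual problem (greedily sampling nodes to shift disc centers until every Gershgorin left-end reaches a target T), and sample_set binary-searches the largest feasible T under a sampling budget. The radius-scaling similarity transform and the set-cover-style approximation of the real method are omitted, and all function names and the unit center shift are assumptions.

    import numpy as np

    def gershgorin_lower_bound(B):
        """Smallest left-end over all Gershgorin discs of B, i.e.
        min_i (B[i,i] - sum_{j != i} |B[i,j]|) <= lambda_min(B)."""
        radii = np.abs(B).sum(axis=1) - np.abs(np.diag(B))
        return float(np.min(np.diag(B) - radii))

    def samples_for_target(L, mu, T):
        """Toy dual: sample nodes (adding +1 to a disc center) until every
        disc left-end of diag(a) + mu*L reaches T; None if T is infeasible.
        The paper additionally scales disc radii via a similarity
        transform, which this sketch omits."""
        n = L.shape[0]
        a = np.zeros(n)                  # a[i] = 1 if node i is sampled
        radii = mu * (np.abs(L).sum(axis=1) - np.abs(np.diag(L)))
        centers = mu * np.diag(L).astype(float)
        while True:
            left = centers + a - radii
            worst = int(np.argmin(left))
            if left[worst] >= T:
                return np.flatnonzero(a)
            if a[worst] == 1:            # already shifted: cannot reach T
                return None
            a[worst] = 1.0               # sampling shifts this disc center

    def sample_set(L, mu, budget, iters=30):
        """Binary-search the largest target T whose sample set fits the
        budget. Without radius scaling, center shifts alone cap the bound
        near 1 for a combinatorial Laplacian, so we search T in (0, 1]."""
        lo, hi, best = 0.0, 1.0, np.array([], dtype=int)
        for _ in range(iters):
            T = 0.5 * (lo + hi)
            S = samples_for_target(L, mu, T)
            if S is not None and len(S) <= budget:
                best, lo = S, T          # feasible: try a higher target
            else:
                hi = T
        return best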

    Research routing and MAC based on LEACH and S-MAC for energy efficiency and QoS in wireless sensor network

    The wireless sensor is a micro-embedded device with weak data processing capability and small storage space. These nodes need to complete complex jobs, including data monitoring, acquisition and conversion, and data processing. Energy efficiency should be considered one of the most important aspects of Wireless Sensor Network (WSN) architecture and protocol design. At the same time, supporting Quality of Service (QoS) in WSNs is an active research field, because time-sensitive, important information is expected to be transmitted to the sink node immediately. The thesis is supported by the projects entitled "The information and control system for preventing forest fires" and "The Erhai information management system", funded by the Chinese Government. Energy consumption and QoS are the two main objectives of these projects. The thesis addresses both aspects in routing and Media Access Control (MAC).

    For energy efficiency, the research is based on the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol. LEACH is a benchmark clustering routing protocol in which cluster heads perform extensive aggregation and relay of messages to the base station. However, LEACH has limitations: its clustering strategy does not suit wide areas, and it lacks multi-hop routing. Moreover, existing routing protocols focus on a single factor; combining a clustering strategy with a multi-hop routing mechanism to improve network performance has not been considered.

    QoS is supported by the MAC and routing protocols. Sensor MAC (S-MAC) uses a periodic listen/sleep mechanism together with collision and crosstalk avoidance. This mechanism reduces energy costs, scales well, and avoids collisions. However, it does not provide differentiated services. To support QoS, a new routing protocol needs to be designed and realized on an embedded platform with a WiFi mode and a Linux operating system, for deployment in the actual system.

    This research was conducted in the following steps. First, a new protocol called RBLEACH is proposed, based on LEACH, to solve clustering over a wide area: the field is divided into several regions, and within each region LEACH is improved by altering the cluster-head selection function; RBLEACH then builds routes selected by a new algorithm to optimize network performance. Second, PS-ACO-LEACH, a new clustering method, is developed around several factors, including the residual energy of the cluster head and the Euclidean distances between cluster members and the cluster head; it optimizes the fitness function and maintains load balance among cluster-head nodes and between a cluster head and the base station. Based on the ant colony algorithm and transition probabilities, a new routing protocol uses pheromone to find optimal paths from cluster heads to the base station; this protocol reduces the energy consumption of cluster heads and its imbalance. Simulations show that the improved protocol enhances network performance, including lifetime and energy conservation. Additionally, the Multi-Index Adaptive Routing algorithm (MIA-QR) is designed around network delay, packet loss rate and signal strength for QoS; the protocol is implemented in VC on an embedded Linux system, and experiments verify that MIA-QR supports QoS.
    Finally, an improved protocol (SMAC-SD) for wireless sensor networks is proposed to address the S-MAC protocol's lack of service differentiation and quality-of-service support. For service differentiation, SMAC-SD adopts a priority-based access mechanism, including priority-dependent adjustment of channel-access probability, channel multi-request mechanisms, and waiting queues with different priorities and per-service RTS backoff, so that important services obtain a high channel-access probability and their transmission quality is ensured. Simulation results show that the improved protocol increases the volume of important traffic delivered and shortens its delay, while effectively improving overall network performance.
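
    For reference, the cluster-head election rule of baseline LEACH, on which RBLEACH and PS-ACO-LEACH build, is compact enough to sketch. A minimal Python version follows; the rotation threshold is the standard LEACH formula, while the node and state representation is an assumption.

    import random

    def leach_elect_heads(node_ids, p, round_no, last_head_round):
        """Baseline LEACH cluster-head election for one round.
        A node that has not served as head within the last 1/p rounds
        becomes a head with probability T(n) = p / (1 - p * (r mod 1/p)),
        which rotates the energy-hungry head role evenly over time."""
        period = int(round(1 / p))           # rounds per rotation epoch
        threshold = p / (1 - p * (round_no % period))
        heads = []
        for n in node_ids:
            if round_no - last_head_round.get(n, -period) < period:
                continue                     # served recently: ineligible
            if random.random() < threshold:
                heads.append(n)
                last_head_round[n] = round_no
        return heads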

    Real-time data compression of broadcast video signals

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
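
    As a rough illustration of such a DPCM loop, here is a minimal Python sketch: a previous-sample predictor (simpler than the system's actual non-adaptive predictor) plus a mu-law-style non-uniform quantizer; the Huffman stage is omitted. All names and parameter values are illustrative assumptions.

    import math

    def mu_law_quantize(e, mu=255.0, levels=16, emax=255.0):
        """Toy non-uniform quantizer: compress the prediction error with
        a mu-law curve, quantize uniformly in the companded domain, then
        expand back. Small errors get fine steps, large errors coarse ones."""
        sign = 1.0 if e >= 0 else -1.0
        x = min(abs(e) / emax, 1.0)
        y = math.log1p(mu * x) / math.log1p(mu)        # compress to [0, 1]
        yq = round(y * (levels - 1)) / (levels - 1)    # uniform quantization
        return sign * emax * math.expm1(yq * math.log1p(mu)) / mu

    def dpcm_encode(samples):
        """DPCM loop: predict each sample from the previous *reconstructed*
        value so encoder and decoder stay in lockstep, and quantize the
        residual. The residuals would feed the multi-level Huffman coder."""
        recon = 0.0
        residuals = []
        for s in samples:
            q = mu_law_quantize(s - recon)   # quantized prediction error
            residuals.append(q)
            recon += q                       # decoder-tracked reconstruction
        return residuals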

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error-correction performance. This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error-correction performance and hardware complexity; the methodology is employed throughout for co-designing the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and complexity savings of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed, requiring as little as 20% of the memory of traditional two-phase flooding decoding; additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error-correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings in the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.

    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementation, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).

    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables looser skew constraints in clock-tree synthesis, frequency speed-up, reduced power consumption and faster back-end turnaround. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object-Oriented framework, integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
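
    To make the check-node discussion concrete, here is a generic normalized min-sum check-node update in Python. This is the standard low-complexity kernel whose word lengths a finite-precision methodology like the one above would profile, not the dissertation's specific architecture; the normalization factor is an assumed typical value.

    import math

    def check_node_min_sum(llrs, alpha=0.75):
        """Normalized min-sum check-node update (degree >= 2 assumed).
        For each incoming edge i, the outgoing message has sign equal to
        the product of the other edges' signs and magnitude equal to
        alpha times the minimum of the other edges' magnitudes; tracking
        the two smallest magnitudes makes this O(n) per check node."""
        signs = [1 if v >= 0 else -1 for v in llrs]
        total_sign = math.prod(signs)
        mags = [abs(v) for v in llrs]
        m1 = min(mags)                       # smallest magnitude
        i1 = mags.index(m1)
        m2 = min(mags[:i1] + mags[i1 + 1:])  # second smallest
        return [alpha * total_sign * signs[i] * (m2 if i == i1 else m1)
                for i in range(len(llrs))]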

    Random Linear Network Coding for Wireless Layered Video Broadcast: General Design Methods for Adaptive Feedback-free Transmission

    This paper studies the problem of broadcasting layered video streams over heterogeneous single-hop wireless networks using feedback-free random linear network coding (RLNC). We combine RLNC with unequal error protection (UEP), and our main purpose is twofold: first, to systematically investigate the benefits of the UEP+RLNC layered approach in serving users with different reception capabilities; second, to study the effect of not using feedback, by comparing feedback-free schemes with idealistic full-feedback schemes. To these ends, we study the 'expected percentage of decoded frames' as a key content-independent performance metric and propose a general framework for the calculation of this metric, which can highlight the effect of key system, video and channel parameters. We study the effect of the number of layers and propose a scheme that adaptively selects the optimal number of layers to achieve the highest performance. Assessing the proposed schemes with real H.264 test streams, we discuss the trade-offs among the users' performances and show the gain from adaptively selecting the number of layers to improve these trade-offs. Furthermore, we observe that the performance gap between the proposed feedback-free scheme and the idealistic scheme is very small, and that adaptive selection of the number of video layers further closes the gap.
    Comment: 15 pages, 12 figures, 3 tables, under 2nd round of review, IEEE Transactions on Communications
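
    The expanding-window flavour of UEP+RLNC is straightforward to sketch. Below is a minimal Python illustration over GF(2) (the paper may work over a larger field): each coded packet drawn for window w mixes all packets of layers 0..w, so the base layer appears in every window and receives the strongest protection. Names and the per-window packet counts are assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    def ew_rlnc_encode(layers, counts):
        """Feedback-free expanding-window RLNC over GF(2).
        layers[l] : (n_pkts, pkt_len) 0/1 array, layer 0 = base layer
        counts[w] : number of coded packets to emit from window w
        Window w spans layers 0..w, so lower layers get more protection."""
        coded = []
        for w, n_coded in enumerate(counts):
            window = np.vstack(layers[: w + 1])   # packets of layers 0..w
            for _ in range(n_coded):
                coeffs = rng.integers(0, 2, size=window.shape[0])
                payload = coeffs @ window % 2     # GF(2) linear mix
                coded.append((w, coeffs, payload))
        return coded

    # A receiver decodes window w once it has gathered enough innovative
    # packets (rank equal to the number of source packets in the window),
    # e.g. by Gaussian elimination over GF(2); weaker receivers recover
    # only the smaller, better-protected windows.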