4,929 research outputs found

    IP-based NGNs and Interconnection: The Debate in Europe

    Historically, interconnection in the world of the Internet has been approached significantly differently from interconnection in the fixed Public Switched Telephone Network (PSTN) and the mobile Public Land Mobile Network (PLMN). As fixed and mobile networks evolve to Next Generation Networks (NGNs) based on the Internet Protocol (IP), it becomes increasingly necessary to merge these perspectives in order to achieve a unified and integrated approach to network interconnection. There is a rich history of economic analysis of IP-based and of conventional switched networks that began to converge early in this decade. In 2008, this issue is coming to a boil, as regulators seek to provide regulatory certainty for the build-out of NGNs, even in the face of substantial uncertainties, and even though practical experience with NGNs is still in a very preliminary state. What can we learn from the historical evolution of the theory of interconnection for the Internet, NGN, PSTN and PLMN? What issues are "in play" today? What is the appropriate destination in the long term? What nearer-term measures are appropriate?
    Keywords: interconnection, NGN, Internet Protocol (IP), bill and keep, Calling Party's Network Pays (CPNP), peering, transit.
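    The settlement regimes listed in the keywords differ in who compensates whom for terminating traffic. A minimal, hypothetical sketch (the function and the flat per-minute rate are illustrative, not from the paper): under bill and keep each network recovers its termination costs from its own subscribers, while under Calling Party's Network Pays the originating network pays the terminating network a per-minute rate.

        def net_settlement(minutes_a_to_b, minutes_b_to_a, term_rate, regime):
            """Net inter-carrier payment from network A to network B.

            Illustrative only; real interconnection agreements are far more
            detailed than a single flat per-minute termination rate."""
            if regime == "bill_and_keep":
                # Each network bills its own customers; no inter-carrier transfer.
                return 0.0
            if regime == "cpnp":
                # A pays B to terminate A-originated minutes, and vice versa;
                # the result is the net flow from A to B.
                return (minutes_a_to_b - minutes_b_to_a) * term_rate
            raise ValueError(f"unknown regime: {regime}")

    For example, with a rate of 0.01 per minute and asymmetric traffic, net_settlement(1_000_000, 800_000, 0.01, "cpnp") yields a net payment of 2,000 from A to B, whereas the same traffic under "bill_and_keep" yields none; this sensitivity to traffic asymmetry is one reason the choice of regime matters for NGN interconnection.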

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite precision arithmetic on error correction performance and hardware complexity; the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and complexity savings of up to 59% with respect to other recent works in the literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed, requiring as little as 20% of the memory of traditional two-phase flooding decoding; additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.

    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementation, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies encompassing all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).

    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables relaxed skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object Oriented framework, integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
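    The finite-precision methodology can be illustrated on the classic min-sum check node update, the operation that low-complexity check nodes approximate. Below is a minimal sketch assuming a saturating fixed-point quantizer; the bit-widths and function names are illustrative, and the thesis's P-output check node is not reproduced here.

        import numpy as np

        def quantize(llr, int_bits, frac_bits):
            """Saturating fixed-point quantizer: values land on a 2**-frac_bits
            grid and clip at the largest representable magnitude."""
            scale = 2.0 ** frac_bits
            max_val = 2.0 ** int_bits - 1.0 / scale
            return np.clip(np.round(llr * scale) / scale, -max_val, max_val)

        def check_node_min_sum(llrs, int_bits=3, frac_bits=1):
            """Min-sum check node: each outgoing message is the sign product and
            minimum magnitude over all *other* incoming LLRs."""
            q = quantize(np.asarray(llrs, dtype=float), int_bits, frac_bits)
            out = np.empty_like(q)
            for i in range(len(q)):
                others = np.delete(q, i)
                sign = np.prod(np.sign(others)) or 1.0  # treat a zero product as +1
                out[i] = sign * np.min(np.abs(others))
            return out

    Sweeping int_bits and frac_bits while measuring BER against a floating-point reference decoder is the kind of performance/complexity trade-off the methodology quantifies (e.g. the reported loss below 0.2 dB at a BER of 10^{-8}).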
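    The accuracy-driven configuration engine can likewise be pictured as a closed-loop search over internal word lengths. The sketch below is a deliberately simplified assumption (a recursive radix-2 FFT quantized after every stage and a linear bit-width search; the actual tool also covers block floating-point and convergent block floating-point models and profiles real hardware operands):

        import numpy as np

        def snr_db(ref, approx):
            """Output SNR of an approximate transform against a float reference."""
            err = np.asarray(ref) - np.asarray(approx)
            return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

        def quantize(x, frac_bits):
            """Round a real array onto a grid with frac_bits fractional bits."""
            scale = 2.0 ** frac_bits
            return np.round(np.asarray(x) * scale) / scale

        def fixed_point_fft(x, frac_bits):
            """Radix-2 DIT FFT whose outputs are quantized after every stage,
            a crude stand-in for profiling the internal arithmetic.
            The input length must be a power of two."""
            x = np.asarray(x, dtype=complex)
            n = len(x)
            if n == 1:
                return x
            even = fixed_point_fft(x[0::2], frac_bits)
            odd = fixed_point_fft(x[1::2], frac_bits)
            tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
            out = np.concatenate([even + tw, even - tw])
            return quantize(out.real, frac_bits) + 1j * quantize(out.imag, frac_bits)

        def min_bitwidth(test_vectors, budget_db, max_bits=24):
            """Smallest fractional bit-width whose worst-case SNR over the test
            vectors meets the user's accuracy budget."""
            for bits in range(4, max_bits + 1):
                worst = min(snr_db(np.fft.fft(v), fixed_point_fft(v, bits))
                            for v in test_vectors)
                if worst >= budget_db:
                    return bits
            raise ValueError("accuracy budget not reachable within max_bits")

    Feeding min_bitwidth a set of representative OFDM symbols and an SNR budget returns the cheapest word length that still meets the budget, which is the sense in which the compiler generates cores with minimum operand bit-width.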

    Subcube embeddability and fault tolerance of augmented hypercubes

    Hypercube networks have received much attention from both the parallel processing and communications communities over the years, since they offer a rich interconnection structure with high bandwidth, logarithmic diameter, and a high degree of fault tolerance, and they are easily partitionable. Fault tolerance in hypercube and hypercube-based networks has received the attention of several researchers in recent years; the primary aim of this study is to address and analyze reliability issues in hypercube networks. It is well known that the hypercube can be augmented with one dimension to replace any of the existing dimensions should any dimension fail. In this research, it is shown that it is possible to add i dimensions to the standard hypercube Qn to tolerate (i - 1) dimension failures, where 0 < i ≤ n. An augmented hypercube, Qn+(n), with n additional dimensions is introduced and compared with two other hypercube networks with the same amount of redundancy. Reliability analysis for the three hypercube networks is carried out using combinatorial and Markov modeling. The MTTF values are calculated and compared for all three networks. Comparisons between similar-size hypercube networks show that the augmented hypercube is more robust than the standard hypercube. As a related problem, we also look at subcube embeddability. Subcube embeddability of the hypercube can be enhanced by introducing an additional dimension. A set of new dimensions, characterized by the Hamming distance between the pairs of nodes each connects, is introduced using a measure defined as the magnitude of a dimension. An enumeration of subcubes of various sizes is presented for a dimension parameterized by its magnitude. It is shown that the maximum number of subcubes for a Qn can only be attained when the magnitude of the dimension is n - 1 or n. It is further shown that these two dimensions can optimally increase the number of subcubes among all possible choices.
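    To make the objects concrete, the sketch below builds the standard Qn, one augmenting "dimension" of a given magnitude (the Hamming distance between the node pairs it connects), and the standard subcube count. The choice of the low bits as the connecting mask is an illustrative assumption, not the paper's construction.

        from math import comb

        def hypercube_edges(n):
            """Standard Q_n: nodes are n-bit labels, edges join labels that
            differ in exactly one bit."""
            return [(x, x ^ (1 << b)) for x in range(2 ** n)
                    for b in range(n) if x < x ^ (1 << b)]

        def augmenting_dimension(n, magnitude):
            """One extra dimension of the given magnitude: pair each node x with
            x XOR mask, where mask has exactly `magnitude` set bits."""
            mask = (1 << magnitude) - 1  # arbitrary representative mask
            return [(x, x ^ mask) for x in range(2 ** n) if x < x ^ mask]

        def subcube_count(n, k):
            """k-dimensional subcubes of a standard Q_n: choose the k free
            coordinates, then fix the remaining n - k bits."""
            return comb(n, k) * 2 ** (n - k)

    For instance, subcube_count(3, 2) returns 6, the six faces of the 3-cube; the paper's result is that an added dimension enlarges these counts the most when its magnitude is n - 1 or n.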

    A Brief Library & Internet Glossary

    Useful terms for a research project

    Software metrics: Software quality metrics for distributed systems

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
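    The factor/criteria/metrics hierarchy the report maintains can be pictured as a nested mapping. In the hypothetical fragment below, only the three factor names come from the abstract; every criterion and metric name is a placeholder.

        # Hypothetical fragment of a factor -> criteria -> metrics hierarchy.
        # Factor names are from the report; criteria and metrics are placeholders.
        QUALITY_HIERARCHY = {
            "survivability": {
                "reconfigurability": ["redundant_component_ratio"],
            },
            "expandability": {
                "modularity": ["module_coupling_score"],
            },
            "evolvability": {
                "self_descriptiveness": ["comment_density"],
            },
        }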

    The structural role of the core literature in history

    The intellectual landscapes of the humanities are mostly uncharted territory. Little is known about the ways the published research of humanist scholars defines areas of intellectual activity. An open question relates to the structural role of core literature: highly cited sources, which naturally play a disproportionate role in the definition of intellectual landscapes. We introduce four indicators in order to map the structural role played by core sources in connecting different areas of the intellectual landscape of citing publications (i.e. communities in the bibliographic coupling network). All indicators factor out the influence of degree distributions by internalizing a configuration-model null. Considering several datasets focused on history, we show that the core literature performs two distinct structural actions: a global one, connecting otherwise separated communities in the landscape, and a local one, raising connectivity within communities. In our study, the global action is mainly performed by small sets of scholarly monographs, reference works and primary sources, while the rest of the core, and especially most journal articles, acts mostly locally.
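    A minimal sketch of the shared idea behind such indicators (not the paper's four measures): compare each node's observed inter-community connectivity with its expectation k_i k_j / 2m under a degree-preserving configuration model, so that degree effects are factored out.

        import numpy as np

        def excess_bridging(adj, labels):
            """Per-node inter-community edge weight, observed minus the
            configuration-model expectation; adj is a symmetric adjacency
            matrix, labels assigns each node to a community."""
            adj = np.asarray(adj, dtype=float)
            labels = np.asarray(labels)
            deg = adj.sum(axis=1)
            two_m = deg.sum()
            cross = labels[:, None] != labels[None, :]  # inter-community pairs
            observed = (adj * cross).sum(axis=1)
            expected = ((deg[:, None] * deg[None, :]) / two_m * cross).sum(axis=1)
            return observed - expected

    A strongly positive value flags a source acting globally (bridging communities beyond what its degree alone predicts), while values near or below zero indicate the local action the abstract attributes to most journal articles.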

    Software development tools: A bibliography, appendix C.

    A bibliography containing approximately 200 citations on tools that help software developers perform some development task (such as text manipulation, testing, etc.), and that would not necessarily be found as part of a computing facility, is given. The bibliography comes from a relatively random sampling of the literature and is not complete, but it is indicative of the nature and range of tools currently being prepared or currently available.

    Power Plant Siting: A Literature Review
