91 research outputs found

    FPGA-based DOCSIS upstream demodulation

    In recent years, the state-of-the-art in field programmable gate array (FPGA) technology has been advancing rapidly. Consequently, the use of FPGAs is being considered in many applications which have traditionally relied upon application-specific integrated circuits (ASICs). FPGA-based designs have a number of advantages over ASIC-based designs, including lower up-front engineering design costs, shorter time-to-market, and the ability to reconfigure devices in the field. However, ASICs have a major advantage in terms of computational resources. As a result, computationally expensive algorithms designed for ASICs must be redesigned to fit the limited resources available in an FPGA. Concurrently, coaxial cable television and internet networks have been undergoing significant upgrades that have largely been driven by a sharp increase in the use of interactive applications. This has intensified demand for the so-called upstream channels, which allow customers to transmit data into the network. The format and protocol of the upstream channels are defined by a set of standards, known as DOCSIS 3.0, which govern the flow of data through the network. Critical to DOCSIS 3.0 compliance is the upstream demodulator, which is responsible for the physical layer reception from all customers. Although upstream demodulators have typically been implemented as ASICs, the design of an FPGA-based upstream demodulator is an intriguing possibility, as FPGA-based demodulators could potentially be upgraded in the field to support future DOCSIS standards. Furthermore, the lower non-recurring engineering costs associated with FPGA-based designs could provide an opportunity for smaller companies to compete in this market. The upstream demodulator must contain complicated synchronization circuitry to detect, measure, and correct for channel distortions. Unfortunately, many of the synchronization algorithms described in the open literature are not suitable for either upstream cable channels or FPGA implementation. In this thesis, computationally inexpensive and robust synchronization algorithms are explored. In particular, algorithms for frequency recovery and equalization are developed. The many data-aided feedforward frequency offset estimators analyzed in the literature have not considered intersymbol interference (ISI) caused by micro-reflections in the channel. It is shown in this thesis that many prominent frequency offset estimation algorithms become biased in the presence of ISI. A novel high-performance frequency offset estimator which is suitable for implementation in an FPGA is derived from first principles. Additionally, a rule is developed for predicting whether a frequency offset estimator will become biased in the presence of ISI. This rule is used to establish a channel excitation sequence which ensures the proposed frequency offset estimator is unbiased. Adaptive equalizers that compensate for the ISI take a relatively long time to converge, necessitating a lengthy training sequence. The convergence time is reduced using a two-step technique to seed the equalizer. First, the ISI-equivalent model of the channel is estimated in response to a specific short excitation sequence. Then, the estimated channel response is inverted with a novel algorithm to initialize the equalizer. It is shown that the proposed technique, while inexpensive to implement in an FPGA, can decrease the length of the required equalizer training sequence by up to 70 symbols.
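    The two-step seeding idea (estimate the channel from a known short excitation, then invert the estimate to initialize the equalizer taps) can be illustrated with a generic sketch. The least-squares channel estimate and regularized frequency-domain inversion below are standard textbook steps chosen only for illustration, not the thesis's FPGA-oriented inversion algorithm; the tap counts and the regularization constant are assumptions.

```python
import numpy as np

def seed_equalizer(rx_resp, excitation, n_chan_taps=8, n_eq_taps=24, eps=1e-3):
    """Seed a linear equalizer in two steps (illustrative sketch only):
    1) least-squares estimate of the ISI channel from the received response
       to a known short excitation sequence;
    2) regularized zero-forcing inversion of the estimate on an FFT grid to
       obtain initial equalizer taps (eps guards against near-zero bins)."""
    # Step 1: build the convolution matrix of the known excitation, solve LS.
    L = len(excitation) + n_chan_taps - 1
    X = np.zeros((L, n_chan_taps), dtype=complex)
    for i in range(n_chan_taps):
        X[i:i + len(excitation), i] = excitation
    h_hat, *_ = np.linalg.lstsq(X, rx_resp[:L], rcond=None)

    # Step 2: invert the channel estimate in the frequency domain.
    H = np.fft.fft(h_hat, n_eq_taps)
    w0 = np.fft.ifft(np.conj(H) / (np.abs(H) ** 2 + eps))
    return w0
```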
It is shown that a preamble segment consisting of repeated 11-symbol Barker sequences, which is well-suited to timing recovery, can also be used effectively for frequency recovery and channel estimation. By performing these three functions sequentially using a single set of preamble symbols, the overall length of the preamble may be further reduced.
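    As an illustration of how a repeated Barker-11 preamble supports data-aided feedforward frequency recovery, the sketch below wipes off the known preamble modulation (a BPSK preamble is assumed) and reads the frequency offset from the phase rotation accumulated over one 11-symbol repetition. This is a generic delay-and-correlate estimator written for clarity; it is not the novel estimator derived in the thesis, and its unambiguous range is about ±(symbol rate)/22.

```python
import numpy as np

# Standard length-11 Barker sequence.
BARKER_11 = np.array([+1, +1, +1, -1, -1, -1, +1, -1, -1, +1, -1], dtype=float)

def estimate_freq_offset(rx_preamble, n_reps, symbol_rate):
    """Data-aided feedforward frequency-offset estimate from a preamble that
    repeats the 11-symbol Barker sequence n_reps times.

    After wiping off the known modulation, samples spaced 11 symbols apart
    differ in phase by 2*pi*f_off*11/symbol_rate, so the angle of the
    averaged lag-11 autocorrelation yields f_off."""
    known = np.tile(BARKER_11, n_reps)
    z = rx_preamble[: 11 * n_reps] * known      # modulation wipe-off
    corr = np.sum(z[11:] * np.conj(z[:-11]))    # averaged lag-11 autocorrelation
    return np.angle(corr) * symbol_rate / (2 * np.pi * 11)
```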

    Design and Performance Analysis of Functional Split in Virtualized Access Networks

    Emerging modular cable network architectures distribute some cable headend functions to remote nodes that are located close to the broadcast cable links reaching the cable modems (CMs) in the subscriber homes and businesses. In the Remote-PHY (R-PHY) architecture, a Remote PHY Device (RPD) conducts the physical layer processing for the analog cable transmissions, while the headend runs the DOCSIS medium access control (MAC) for the upstream transmissions of the distributed CMs over the shared cable link. In contrast, in the Remote MACPHY (R-MACPHY) architecture, a Remote MACPHY Device (RMD) conducts both the physical and MAC layer processing. The dissertation objective is to conduct a comprehensive performance comparison of the R-PHY and R-MACPHY architectures. The dissertation also develops analytical delay models for the polling-based MAC with Gated bandwidth allocation of Poisson traffic in the R-PHY and R-MACPHY architectures, and conducts extensive simulations to assess the accuracy of the analytical model and to evaluate the delay-throughput performance of the R-PHY and R-MACPHY architectures for a wide range of deployment and operating scenarios. Performance evaluations extend to the use of Ethernet Passive Optical Network (EPON) as the transport network between remote nodes and headend. The results show that for long converged interconnect network (CIN) distances above 100 miles, the R-MACPHY architecture achieves significantly shorter mean upstream packet delays than the R-PHY architecture, especially for bursty traffic. The extensive comparative R-PHY and R-MACPHY evaluation can serve as a basis for the planning of modular broadcast cable based access networks. Dissertation/Thesis, Doctoral Dissertation, Electrical Engineering 201
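    A toy calculation (not the dissertation's analytical model) conveys why a long CIN penalizes R-PHY more than R-MACPHY: in R-PHY the DOCSIS request-grant signaling loop must cross the CIN to reach the headend scheduler, while in R-MACPHY the scheduler sits in the remote node. The fiber propagation figure and MAP interval below are rough assumptions chosen only for illustration.

```python
def request_grant_delay_ms(cin_miles: float, mac_at_remote_node: bool,
                           map_interval_ms: float = 2.0) -> float:
    """Rough mean delay (ms) added by one DOCSIS request-grant cycle."""
    one_way_ms = 0.008 * cin_miles              # ~8 us per mile in fiber (assumption)
    # R-PHY: the bandwidth request and the MAP grant both traverse the CIN;
    # R-MACPHY: the MAC lives in the remote node, so the CIN is not in the loop.
    signaling_rtt_ms = 0.0 if mac_at_remote_node else 2.0 * one_way_ms
    # A request waits on average half a MAP interval before it can be granted.
    return signaling_rtt_ms + map_interval_ms / 2.0

for miles in (10, 50, 100, 200):
    print(f"{miles:4d} mi   R-PHY {request_grant_delay_ms(miles, False):5.2f} ms"
          f"   R-MACPHY {request_grant_delay_ms(miles, True):5.2f} ms")
```

    Under these assumptions, at 100 miles the signaling round trip alone adds roughly 1.6 ms per cycle for R-PHY, in line with the qualitative conclusion above.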

    An Efficient DOCSIS Upstream Equalizer

    The advancement in the CATV industry has been remarkable. In the beginning, CATV provided a few television channels. Now it provides a variety of advanced services such as video on demand (VOD), Internet access, Pay-Per-View on demand and interactive TV. These advances have increased the popularity of CATV manyfold. Current improvements focus on interactive services with high quality. These interactive services require more upstream (transmission from customer premises to cable operator premises) channel bandwidth. The flow of data through the CATV network in both the upstream and downstream directions is governed by a standard referred to as the Data Over Cable Service Interface Specification (DOCSIS) standard. The latest version is DOCSIS 3.1, which was released in January 2014. The previous version, DOCSIS 3.0, was released in 2006. One component of the upstream communication link is the QAM demodulator. An important component in the QAM demodulator is the equalizer, whose purpose is to remove distortion caused by the imperfect upstream channel as well as the residual timing offset and frequency offset. Most of the timing and frequency offset is corrected by the timing and frequency recovery circuits; what remains is referred to as the residual offset. A DOCSIS receiver, and hence the equalizer within, can be implemented with ASIC or FPGA technology. Implementing an equalizer in an ASIC has a large non-recurring engineering cost, but a relatively small per-chip production cost. Implementing an equalizer in an FPGA has a very low non-recurring cost, but a relatively high per-chip cost. If the choice of technology were based on cost alone, one would expect it to depend only on volume, but in practice that is not the case. The dominant factor when it comes to profit is time-to-market, which makes FPGA technology the only choice. The goal of this thesis is to design a cost-optimized equalizer for a DOCSIS upstream demodulator and implement it in an FPGA. With this in mind, an important objective is to establish a relationship between the equalizer's critical parameters and its performance. The parameter-performance relationship established in this study revealed that the equalizer step size and length should be 1/64 and approximately 20, respectively, to yield a near-optimum equalizer when considering the MER versus convergence-time trade-off. In pursuit of this objective, another relationship was established that is useful in determining the required accuracy of the timing recovery circuit; it captures the sensitivity of both the MER and the convergence time to timing offset. The equalizer algorithm was implemented in a cost-effective manner using DSP Builder. The effort to minimize cost was focused on minimizing the number of multipliers. It is shown that the equalizer can be constructed with 8 multipliers when the proposed time-sharing algorithm is implemented.
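    For context, a conventional complex LMS adaptation loop using the near-optimum values reported above (about 20 taps, step size 1/64) might look like the sketch below. It is a plain software model for illustration only; the thesis's contribution is the cost-optimized FPGA realization, in which eight multipliers are time-shared across the tap computations rather than instantiated per tap.

```python
import numpy as np

def lms_equalizer(rx, training, n_taps=20, mu=1.0 / 64):
    """Adapt a linear feed-forward equalizer with LMS over the training
    (preamble) symbols; returns the converged tap vector."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # centre-spike initialization
    buf = np.zeros(n_taps, dtype=complex)
    for k, d in enumerate(training):
        buf = np.roll(buf, 1)
        buf[0] = rx[k]                        # newest received sample first
        y = np.dot(w, buf)                    # equalizer output
        e = d - y                             # error against the known symbol
        w += mu * e * np.conj(buf)            # LMS tap update
    return w
```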

    Techniques to Improve the Efficiency of Data Transmission in Cable Networks

    The cable television (CATV) networks, since their introduction in the late 1940s, have now become a crucial part of the broadcasting industry. To keep up with growing demands from the subscribers, cable networks nowadays not only provide television programs but also deliver two-way interactive services such as telephone, high-speed Internet and social TV features. A new standard for CATV networks is released every five to six years to satisfy the growing demands from the mass market. From this perspective, this thesis is concerned with three main aspects of the continuing development of cable networks: (i) efficient implementations of backward-compatibility functions from the old standard, (ii) addressing and providing solutions for technically-challenging issues in the current standard and, (iii) looking for prospective features that can be implemented in the future standard. Since 1997, five different versions of the digital CATV standard have been released in North America. A new standard often contains major improvements over the previous one. The latest version of the standard, namely DOCSIS 3.1 (released in late 2013), is packed with state-of-the-art technologies and allows approximately ten times the amount of traffic as compared to the previous standard, DOCSIS 3.0 (released in 2008). Backward-compatibility is a must-have function for cable networks. In particular, to facilitate the system migration from older standards to a newer one, the backward-compatible functions in the old standards must remain in the newer-standard products. More importantly, to keep the implementation cost low, the inherited backward-compatible functions must be redesigned by taking advantage of the latest technology and algorithms. To improve the backward-compatibility functions, the first contribution of the thesis focuses on redesigning the pulse shaping filter by exploiting infinite impulse response (IIR) filter structures as an alternative to the conventional finite impulse response (FIR) structures. Comprehensive comparisons show that more economical filters with better performance can be obtained by the proposed design algorithm, which considers a hybrid parameterization of the filter's transfer function in combination with a constraint that keeps the pole radius less than 1. The second contribution of the thesis is a new fractional timing estimation algorithm based on peak detection by log-domain interpolation. When compared with the commonly-used timing detection method, which is based on parabolic interpolation, the proposed algorithm yields more accurate estimation with a comparable implementation cost. The third contribution of the thesis is a technique to estimate the multipath channel for DOCSIS 3.1 cable networks. DOCSIS 3.1 is markedly different from prior generations of CATV networks in that OFDM/OFDMA is employed to create a spectrally-efficient signal. In order to effectively demodulate such a signal, it is necessary to employ a demodulation circuit which involves estimation and tracking of the multipath channel. The estimation and tracking must be highly accurate because extremely dense constellations such as 4096-QAM and possibly 16384-QAM can be used in DOCSIS 3.1. The conventional OFDM channel estimators available in the literature either do not perform satisfactorily or are not suitable for the DOCSIS 3.1 channel. The novel channel estimation technique proposed in this thesis iteratively searches for parameters of the channel paths.
The proposed technique not only substantially enhances the channel estimation accuracy, but can also, at no additional cost, accurately identify the delay of each echo in the system. The echo delay information is valuable for proactive maintenance of the network. The fourth contribution of this thesis is a novel scheme that allows OFDM transmission without the use of a cyclic prefix (CP). The structure of OFDM in the current DOCSIS 3.1 does not achieve the maximum throughput if the channel has multipath components. The multipath channel causes inter-symbol interference (ISI), which is commonly mitigated by employing a CP. The CP acts as a guard interval that, while successfully protecting the signal from ISI, reduces the transmission throughput. The problem becomes more severe in the downstream direction, where the throughput of the entire system is determined by the user with the worst channel. To solve the problem, this thesis proposes major alterations to the current DOCSIS 3.1 OFDM/OFDMA structure. The alterations involve using a pair of Nyquist filters at the transceivers and an efficient time-domain equalizer (TEQ) at the receiver to reduce ISI to a negligible level without the need for a CP. Simulation results demonstrate that, by incorporating the proposed alterations into the DOCSIS 3.1 downlink, the system can achieve the maximum throughput over a wide range of multipath channel conditions.
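    The class of fractional peak estimation mentioned in the second contribution can be illustrated with the standard three-point log-domain (Gaussian) interpolation around the correlation peak, with the conventional parabolic rule included for comparison. This is a generic sketch written under the assumption of positive magnitude samples; it is not taken from the thesis and may differ from the algorithm actually proposed there.

```python
import numpy as np

def log_domain_peak_offset(y_m1, y_0, y_p1):
    """Fractional offset (in samples) of the true peak relative to the largest
    sample, via log-domain (Gaussian) interpolation of three positive samples."""
    a, b, c = np.log(y_m1), np.log(y_0), np.log(y_p1)
    return 0.5 * (a - c) / (a - 2.0 * b + c)

def parabolic_peak_offset(y_m1, y_0, y_p1):
    """Conventional parabolic interpolation, shown for comparison."""
    return 0.5 * (y_m1 - y_p1) / (y_m1 - 2.0 * y_0 + y_p1)

# Small demo on a Gaussian-shaped correlation peak sampled off-centre by 0.3.
t = np.arange(-3, 4)
y = np.exp(-((t - 0.3) ** 2))
k = int(np.argmax(y))
print(log_domain_peak_offset(y[k - 1], y[k], y[k + 1]))   # 0.3 (exact here)
print(parabolic_peak_offset(y[k - 1], y[k], y[k + 1]))    # ~0.21, biased estimate
```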

    Análise técnico-económica de redes de acesso : ferramentas de decisão

    Mestrado em Engenharia Electrónica e Telecomunicações. The recent growth in data traffic and cable TV consumption has triggered the need for new access networks, and the world of telecommunications has become a very competitive business among service providers. Competitive strategies are now based on quality of service and prices that are affordable to all classes of the population. To guarantee these requirements, innovation in equipment and distribution facilities was necessary. The deployment of Next Generation Access (NGA) networks has become crucial in society, but the recent world economic crisis has forced careful dimensioning to produce the most profit possible with small investments. This dissertation presents a techno-economic analysis of an HFC network implementation. The network structure and its technological characteristics are presented, along with an explanation of how to deal with problems in network dimensioning: the spatial uncertainty associated with the user adoption process and the surplus bandwidth consumption caused by the users. Finally, the study of the HFC network deployment in three different types of areas is presented and the economic results obtained are discussed, allowing conclusions to be drawn about the viability of these projects.

    Leveraging Kubernetes in Edge-Native Cable Access Convergence

    Public clouds provide infrastructure services and deployment frameworks for modern cloud-native applications. As the cloud-native paradigm has matured, containerization, orchestration and Kubernetes have become its fundamental building blocks. As the next step for cloud-native, interest in extending it to edge computing is emerging. The primary reasons for this are low-latency use cases and the desire for uniformity across the cloud-edge continuum. Cable access networks, as a specialized type of edge network, are no exception here. As the cable industry transitions to distributed architectures and plans the next steps to virtualize its on-premise network functions, there are opportunities to achieve synergy advantages from the convergence of access technologies and services. Distributed cable networks deploy resource-constrained devices like RPDs and RMDs deep in the edge networks. These devices can be redesigned to support more than one access technology and to provide computing services for other edge tenants with MEC-like architectures. Both of these cases benefit from virtualization. It is here that cable access convergence and the cloud-native transition to edge-native intersect. However, adapting cloud-native in the edge presents a challenge, since cloud-native container runtimes and native Kubernetes are not optimal solutions in diverse edge environments. Therefore, this thesis takes as its goal to describe the current landscape of lightweight cloud-native runtimes and tools targeting the edge. While edge-native as a concept is taking its first steps, tools like KubeEdge, K3s and Virtual Kubelet can be seen as the most mature reference projects for edge-compatible solution types. Furthermore, as container runtimes are not yet fully edge-ready, WebAssembly seems like a promising alternative runtime for lightweight, portable and secure Kubernetes-compatible workloads.

    U.S. vs. European Broadband Deployment: What Do the Data Say?

    As the Internet becomes more important to the everyday lives of people around the world, commentators have tried to identify the best policies for increasing the deployment and adoption of high-speed broadband technologies. Some claim that the European model of service-based competition, induced by telephone-style regulation, has outperformed the facilities-based competition underlying the US approach to promoting broadband deployment. The mapping studies conducted by the US and the EU for 2011 and 2012 reveal that the US led the EU in many broadband metrics.
    • High-Speed Access: A far greater percentage of US households had access to Next Generation Access (NGA) networks (25 Mbps) than in Europe. This was true whether one considered coverage for the entire nation (82% vs. 54%) or for rural areas (48% vs. 12%).
    • Fiber Deployment: The US had better coverage for fiber-to-the-premises (FTTP) (23% vs. 12%). Furthermore, FTTP remained a less important contributor to NGA coverage than other technologies.
    • Regression Analysis of Key Policy Variables: Regressions built around the mapping data indicate that the US emphasis on facilities-based competition has proven more effective in promoting NGA coverage than the European emphasis on infrastructure sharing and service-based competition.
    • Investment: Other data indicate that the US broadband industry has invested more than twice as much capital per household as the European broadband industry every year from 2007 to 2012. In 2012, for example, the US industry invested US$562 per household, while EU providers invested only US$244 per household.
    • Download Speeds: US download speeds during peak times (weekday evenings) averaged 15 Mbps in 2012, which was below the European average of 19 Mbps. There was also a disparity between the speeds advertised and delivered by broadband providers in the US and Europe. During peak hours, US actual download speeds were 96% of what was advertised, compared to Europe where consumers received only 74% of advertised download speeds. The US also fared better in terms of advertised vs. actual upload speeds, latency, and packet loss.
    • Pricing: The European pricing study reveals that US broadband was cheaper than European broadband for all speed tiers below 12 Mbps. US broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that US Internet users on average consumed 50% more bandwidth than their European counterparts.
    Case studies of eight European countries (Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden, and the United Kingdom) confirm that facilities-based competition has served as the primary driver of investments in upgrading broadband networks. Moreover, the countries that emphasized fiber-to-the-premises had the lowest NGA coverage rates in this study and ranked among the lowest NGA coverage rates in the European Union. In fact, two countries often mentioned as leaders in broadband deployment (Sweden and France) end up being rather disappointing both in terms of national NGA coverage and rural NGA coverage. These case studies emphasize that broadband coverage is best promoted by a flexible approach that does not focus exclusively on any one technology.

    Systems analysis of emerging IPTV entertainment platform : stakeholders, threats and opportunities

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2008. Includes bibliographical references (p. 140-144). Why do certain types of companies, goods and services survive while others do not? Why does one set continuously reinvent themselves while others wither away and die? Why does Cisco continue to provide exciting and innovative networking products, while companies like Cabletron die? Several academics believe that a dominant factor is that winners are able to create robust and effective product platforms. These platforms are able to cater to changing customer needs. On the winning side, the platform leader is effectively able to manage the various conflicts that are present in the platform ecosystem. On the losing team, often there is no platform leader! I believe that effective platform leadership and platform architecture play a key role in product success. In this thesis, I plan to compare two large platforms: the IPTV platform and the conventional cable-based TV platform. Both are competing with each other to provide similar services to the same customer set. I have coined the term "Mega Platform" to describe such large platforms. As part of this comparison I will develop a set of metrics or comparison points which will help compare the two competing platforms. Please note that the purpose of this thesis is not to prove that there is a strong correlation between platform success and market success. by Shantnu Sharma. S.M.