Convergence: the next big step
Recently, web-based multimedia services have gained popularity and proven themselves to be viable means of communication. This has inspired telecommunication service providers and network operators to reinvent themselves and provide value-added, IP-centric services. There was a need for a system that would allow new services to be introduced rapidly, with reduced capital expense (CAPEX) and operational expense (OPEX), through increased efficiency in network utilization. Various organizations and standardization agencies have been working together to establish such a system; the Internet Protocol Multimedia Subsystem (IMS) is a result of these efforts. IMS is an application-level system. It is being developed by 3GPP (3rd Generation Partnership Project) and 3GPP2 (3rd Generation Partnership Project 2) in collaboration with the IETF (Internet Engineering Task Force), ITU-T (International Telecommunication Union – Telecommunication Standardization Sector), ETSI (European Telecommunications Standards Institute), and others. Initially, the main aim of IMS was to bring together the Internet and the cellular world, but it has been extended to include traditional wireline telecommunication systems as well. It reuses existing Internet protocols such as SIP (Session Initiation Protocol), AAA (Authentication, Authorization and Accounting), and COPS (Common Open Policy Service), modifying them to meet the stringent requirements of reliable, real-time communication systems. The advantages of IMS include easier quality of service (QoS) management, mobility management, service control and integration. At present, much attention is being paid to providing bundled services in the home environment. Service providers have been successful in offering traditional telephony, high-speed Internet and cable services in a single package, but there is very little integration among these services.
IMS can provide a way to integrate these services, and it opens the possibility of adding others to allow increased automation in the home environment. This thesis extends the concept of IMS to provide convergence and facilitate internetworking of the various bundled services available in the home environment; these may include, but are not limited to, communications (wired and wireless), entertainment and security. In this thesis, I present a converged home environment comprising a number of elements that provide a variety of communication and entertainment services. The proposed network, based on the IMS architecture, would allow effective interworking of these elements. My aim is to demonstrate the possible advantages of using IMS to provide convergence, automation and integration at the residential level.
Error Behaviour In Optical Networks
Optical fibre communications are now widely used in many applications, including local area computer networks. I postulate that many future optical LANs will be required to operate with limited optical power budgets for a variety of reasons, including increased system complexity and link speed, low cost components and minimal increases in transmit power. Some developers will wish to run links with reduced power budget margins, and the received data in these systems will be more susceptible to errors than has been the case previously.
The errors observed in optical systems are investigated using the particular case of Gigabit Ethernet on fibre as an example. Gigabit Ethernet is one of three popular optical local area interconnects which use 8B/10B line coding, along with Fibre Channel and InfiniBand, and is widely deployed. This line encoding is also used by packet-switched optical LANs currently under development. A probabilistic analysis follows the effects of a single channel error in a frame, through the line coding scheme and the MAC layer frame error detection mechanisms. Empirical data is used to enhance this original analysis, making it directly relevant to deployed systems.
Experiments using Gigabit Ethernet on fibre with reduced power levels at the receiver to simulate the effect of limited power margins are described. It is found that channel bit error rate and packet loss rate have only a weakly deterministic relationship, due to interactions between a number of non-uniform error characteristics at various network sub-layers. Some data payloads suffer from high bit error rates and low packet loss rates, compared to others with lower bit error rates and yet higher packet losses. Experiments using real Internet traffic contribute to the development of a novel model linking packet loss, the payload damage rate, and channel bit error rate. The observed error behaviours at various points in the physical and data link layers are detailed. These include data-dependent channel errors; this error hot-spotting is in contrast to the failure modes observed in a copper-based system. It is also found that both multiple channel errors within a single code-group, and multiple error instances within a frame, occur more frequently than might be expected. The overall effects of these error characteristics on the ability of cyclic redundancy checks (CRCs) to detect errors, and on the performance of higher layers in the network, are considered.
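The CRC behaviour discussed above can be illustrated with a small sketch. This is not the dissertation's experiment: the payload is hypothetical, and Python's `zlib.crc32` stands in for the Ethernet frame check sequence (both use the CRC-32 polynomial). A CRC whose generator polynomial has more than one term detects every single-bit error, so flipping any one bit must change the checksum; it is the multi-bit error patterns described above that can escape detection.

```python
import zlib


def flip_bit(data: bytes, bit_index: int) -> bytes:
    """Return a copy of `data` with one bit inverted."""
    buf = bytearray(data)
    buf[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(buf)


# Hypothetical 64-byte payload standing in for a frame's contents.
payload = bytes(range(64))
fcs = zlib.crc32(payload)

# Flip each bit in turn: every single-bit error changes the CRC-32,
# so each one would be detected by the frame check.
detected = all(
    zlib.crc32(flip_bit(payload, i)) != fcs
    for i in range(len(payload) * 8)
)
print(detected)  # True
```

A two-bit error, by contrast, escapes detection exactly when the two flipped positions differ by a multiple of the polynomial, which is why the frequency of multi-error frames matters for undetected-error rates.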
This dissertation contributes to the discussion of layer interactions, which may lead to unforeseen performance issues at higher levels of the network stack, and extends it by considering the physical and data link layers for a common form of optical link. The increased risk of errors in future optical networks, and my findings for 8B/10B encoded optical links, demonstrate the need for a cross-layer understanding of error characteristics in such systems. The development of these new networks should take error performance into account in light of the particular requirements of the application in question. The UK Engineering and Physical Sciences Research Council and Marconi Corporation supported my work financially through an Industrial CASE studentship.
Towards Connecting Base Stations over Metro Gigabit Ethernets
Abstract — Emerging high-speed metropolitan Ethernets create new opportunities to save costs when converging data and telephony services. However, connecting GSM and UMTS base stations over metropolitan Ethernets requires these networks to meet stringent QoS requirements in the presence of bursty data traffic. To investigate this problem, we have probed ETH's campus network, which spans the metropolitan area of Zurich, for several weeks. From our results, we infer that lightly loaded metropolitan Gigabit Ethernets with average utilizations below 1% presumably have the potential to carry traffic from GSM/UMTS base stations.
DETERMINATION OF END-TO-END DELAYS OF SWITCHED ETHERNET LOCAL AREA NETWORKS
The design of switched local area networks has in practice largely been based on heuristics and experience; in many situations, no network design is carried out at all, only network installation (cabling and node/equipment placement). This has resulted in local area networks that are sluggish and that fail to satisfy their users in terms of upload and download speeds when a user's computer is in a communication session with other computers or host machines attached to the local area network, or with switching devices that connect the local area network to wide area networks. The need to provide deterministic guarantees on packet-flow delays when designing switched local area networks therefore calls for an analytic and formal basis for designing such networks: if the maximum packet delay between any two nodes of a network is not known, it is impossible to provide a deterministic guarantee of the worst-case response time of packet flows. This is the problem that this research work set out to solve. A model of a packet switch was developed with which the maximum delay for a packet to cross any N-port packet switch can be calculated. The maximum packet delay value provided by this model was compared, from the point of view of practical reality, to values obtained from the literature, and was found to be by far the more realistic value. An algorithm was developed with which network design engineers can generate optimum network designs, in terms of installed network switches and the number of attached hosts, while respecting specified maximum end-to-end delay constraints. This work revealed that the widely held notion in the literature regarding the enumeration of origin-destination pairs of hosts for end-to-end delay computation appears to be wrong in the context of switched local area networks.
We have shown for the first time how this enumeration should be done. It has also been empirically shown in this work that the number of hosts that can be attached to any switched local area network is bounded by the number of ports in the switches of which the network is composed. Computed numerical values of maximum end-to-end delays using the developed model and algorithm further revealed that the predominant cause of delay (sluggishness) in switched local area networks is queuing delay, not the number of users (hosts) connected to the network. The fact that a switched local area network becomes slow as more users log on to it is a result of the flow of bursty traffic (uploading and downloading of high-bit-rate, bandwidth-consuming applications). We have also implemented this work's model and algorithms in an interactive, C-based switched local area network design application program. Further studies are recommended on methods for determining the maximum amount of traffic that can arrive at a switch in a burst, on the introduction of weighting functions in the end-to-end delay computation models, and on the introduction of cost variables in determining the optimal Internet access device input and output rate specifications.
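The point that queuing, not host count, dominates end-to-end delay can be sketched with a minimal store-and-forward model. This is not the thesis's model: the per-hop backlog figures and link rates below are hypothetical, and each hop's worst-case delay is taken simply as the frame's own serialization time plus the serialization of the frames assumed queued ahead of it.

```python
def switch_delay(frame_bits: int, link_rate_bps: float, queued_frames: int) -> float:
    """Worst-case delay through one store-and-forward switch:
    serialization of the frame itself plus the frames queued ahead of it."""
    tx = frame_bits / link_rate_bps
    return tx * (1 + queued_frames)


def end_to_end_delay(frame_bits: int, link_rate_bps: float, queue_depths: list) -> float:
    """Sum per-hop worst-case delays over a path of switches.
    `queue_depths[i]` is the assumed worst-case backlog at hop i."""
    return sum(switch_delay(frame_bits, link_rate_bps, q) for q in queue_depths)


# Hypothetical example: a 1518-byte Ethernet frame over Gigabit links,
# crossing 3 switches with assumed backlogs of 0, 4 and 2 frames.
delay = end_to_end_delay(1518 * 8, 1e9, [0, 4, 2])
print(f"{delay * 1e6:.1f} us")  # 109.3 us
```

Even in this toy version, doubling the assumed backlogs roughly doubles the bound while adding idle hosts changes nothing, which mirrors the thesis's observation that bursty traffic, not user count, makes switched LANs sluggish.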
Top 10 technologies and their impact on CPAs
Technology and policy drivers for standardization : consequences for the optical components industry
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2004. Includes bibliographical references. Optical communications promise the delivery of high-bandwidth service to all types of customers. The potential for optical communications is enormous and has generated excitement and anticipation over the last decade. However, the emergence of a growing market has not materialized, and the 1990s communications "bubble" has burst. One result of the bubble burst is that manufacturers of optical components have seen demand for their products plummet and are now struggling to survive. The future of the communications industry depends on its ability to provide better services and higher reliability. At some point, the upward curve of communications demand will require a strong optical components industry to support it. If the current stagnation continues and the manufacturers fail, the economic pillar that is communications will suffer. The MIT Microphotonics Center has initiated a Communications Technology Roadmap study to better understand the technical, economic, and political factors that are inhibiting growth in the optical communications industry. This thesis examines the current state of the optoelectronic manufacturing industry and the causes of the decline. The primary focus is the rampant proliferation of optical transceiver designs resulting from abnormal market conditions during the "boom years" of the 1990s. The transceiver provides send/receive capabilities and is the major component of optical networks. Convergence, or standardization, could potentially allow the industry to reach its full potential. System Dynamics is used to analyze transceiver standardization as a potential solution to the industry's lackluster growth.
To support the findings of the System Dynamics model, historical examples are explored to better understand the behavior of the industry and the potential effects of standardization. The industry currently offers literally hundreds of transceiver varieties. One major challenge to standardization is the development of a reasonable platform for the standard. This thesis also examines the technical requirements of a transceiver platform and provides a basic example of such a platform, before finishing with proposed policy measures that could guide the industry as it takes its first steps down the path to standardization. By Michael James Speerschneider, S.M.