Simple network management protocol co-existence with hydrocarbon process automation communication real-time network
Hydrocarbon Process Automation Applications (HPAA) use a real-time network connecting process instrumentation, controllers, and real-time logic control applications. Conventional practice is to dedicate a real-time network to process automation applications and to prevent other applications from using the same infrastructure. One important application that can help optimize and improve network performance, and provide rapid response in network diagnostics and mitigation, is the Simple Network Management Protocol (SNMP). This paper addresses the co-existence of SNMP traffic with real-time applications. The impact of activating this protocol alongside real-time HPAA on a high-speed Ethernet network design is examined. Empirical data from an implemented hydrocarbon process automation system are used to illustrate the interdependency of application performance, traffic mix, and potential areas of improvement. The outcomes of this effort demonstrate that SNMP can co-exist with HPAA, given special considerations (e.g., bandwidth, number of applications, etc.).
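The bandwidth consideration above can be sketched with a back-of-the-envelope budget check: how much management traffic a given SNMP polling regime adds to a shared segment. All figures below (device count, PDU size, link rate) are illustrative assumptions, not measurements from the paper.

```python
# Rough budget check for adding SNMP polling to a shared real-time network.
# All figures are illustrative assumptions, not measurements from the paper.

def snmp_overhead_bps(devices, oids_per_poll, poll_interval_s,
                      bytes_per_pdu=90):
    """Estimate SNMP management traffic in bits per second.

    Each polled OID is assumed to cost one GetRequest and one GetResponse
    of roughly `bytes_per_pdu` bytes each (UDP/IP headers included).
    """
    pdus_per_s = devices * oids_per_poll * 2 / poll_interval_s
    return pdus_per_s * bytes_per_pdu * 8

# Example: 200 devices, 10 OIDs each, polled every 30 s.
overhead = snmp_overhead_bps(devices=200, oids_per_poll=10, poll_interval_s=30)
link_bps = 100e6  # assumed 100 Mbps Ethernet segment
print(f"SNMP load: {overhead/1e3:.0f} kbps "
      f"({overhead/link_bps:.3%} of the link)")
```

Under these assumptions the management load stays well below 1% of the link, which is the kind of margin that makes co-existence with real-time traffic plausible.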
Analysis of data & computer networks in students' residential area in Universiti Teknologi Petronas
In Universiti Teknologi Petronas (UTP), most students depend on the Internet and the computer network to obtain academic information and share educational resources. Although Internet connections and computer networks are provided, the service frequently experiences interruptions, such as slow Internet access, virus and worm distribution, and network abuse by irresponsible students. As the UTP organization keeps expanding, the need for better service in UTP increases. Several approaches were put into practice to address these problems. Research on the data and computer network was performed to understand the network technology deployed at UTP. Questionnaire forms were distributed among the students to obtain feedback and statistical data about UTP's network in the Students' Residential Area. The study concentrates only on the Students' Residential Area, as it is where most of the users reside. From the survey, it can be observed that 99% of the students access the network almost 24 hours a day. In 2005, the 2 Mbps allocated bandwidth was utilized at 100% almost continuously, but in 2006 the Internet-access bottleneck was reduced significantly after the allocated bandwidth was increased to 8 Mbps. Server degradation due to irresponsible acts by users also adds burden to the main server. In general, if the proposal to the ITMS (Information Technology & Media Services) Department to improve its Quality of Service (QoS) and establish a UTP Computer Emergency Response Team (UCert) is adopted, most of the issues addressed in this report can be solved.
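The effect of the 2 Mbps to 8 Mbps upgrade reported above can be illustrated with a simple transfer-time calculation; the 10 MiB file size is a hypothetical example, not a figure from the survey.

```python
# Illustration of the bandwidth upgrade described above: time to fetch a
# 10 MiB resource when the uplink is fully available to one transfer.
# The file size is an assumed example, not survey data.

def transfer_time_s(size_bytes, link_bps):
    """Ideal transfer time: payload bits divided by link rate."""
    return size_bytes * 8 / link_bps

size = 10 * 1024 * 1024          # 10 MiB download
t_2mbps = transfer_time_s(size, 2e6)
t_8mbps = transfer_time_s(size, 8e6)
print(f"2 Mbps: {t_2mbps:.1f} s, 8 Mbps: {t_8mbps:.1f} s "
      f"(speed-up x{t_2mbps / t_8mbps:.0f})")
```

The fourfold capacity increase translates directly into a fourfold reduction in ideal transfer time, consistent with the reduced bottleneck observed in 2006.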
ACTS 118x Final Report High-Speed TCP Interoperability Testing
With the recent explosion of the Internet and the enormous business opportunities available to communication system providers, great interest has developed in improving the efficiency of data transfer using the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite. Satellite system providers are interested in solving TCP efficiency problems associated with long delays and error-prone links. Similarly, the terrestrial community is interested in solving TCP problems over high-bandwidth links, whereas the wireless community is interested in improving TCP performance over bandwidth-constrained, error-prone links.
NASA realized that solutions had already been proposed for most of the problems associated with efficient data transfer over large bandwidth-delay links (which include satellite links). The solutions are detailed in various Internet Engineering Task Force (IETF) Requests for Comments (RFCs). Unfortunately, most of these solutions had not been tested at high speed (155+ Mbps). Therefore, NASA's ACTS experiments program initiated a series of TCP experiments to demonstrate the scalability of TCP/IP and determine how far the protocol can be optimised over a 622 Mbps satellite link. These experiments were known as the 118i and 118j experiments.
During the 118i and 118j experiments, NASA worked closely with SUN Microsystems and FORE Systems to improve the operating system, TCP stacks, and network interface cards and drivers. We were able to obtain instantaneous data throughput rates of greater than 529 Mbps and average throughput rates of 470 Mbps using TCP over Asynchronous Transfer Mode (ATM) over a 622 Mbps Synchronous Optical Network (SONET) OC-12 link. Following the success of these experiments and the successful government/industry collaboration, a new series of experiments, the 118x experiments, was developed.
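The large bandwidth-delay problem the experiments target can be made concrete with a quick calculation. The ~530 ms round-trip time below is an assumed figure for a geostationary satellite hop, not a number from the report; the point is that the TCP window needed to keep a 622 Mbps pipe full vastly exceeds the classic 64 KB limit, which is why the IETF window-scale option matters at these rates.

```python
# Bandwidth-delay product for a 622 Mbps GEO satellite path.  The 530 ms
# round-trip time is an assumed figure for a geostationary hop.

def required_window_bytes(link_bps, rtt_s):
    """TCP window needed to keep the pipe full: bandwidth x delay, in bytes."""
    return link_bps * rtt_s / 8

bdp = required_window_bytes(622e6, 0.530)
print(f"Required window: {bdp/1e6:.1f} MB "
      f"vs the 64 KB maximum without window scaling")
```

A window hundreds of times larger than 64 KB is required, so unmodified TCP stacks cannot come close to the throughput figures reported above.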
Design, implementation & first run problems of a factory corporate network
In this project, the design of the communications and network infrastructure of a factory that will have production areas and corporate offices has been carried out. The subsequent needs of the different departments' communications resources have been analyzed to determine the necessary network equipment, as well as the topology of the interconnection hierarchy.
Similarly, the wireless infrastructure has been taken into account to provide coverage for corporate devices as well as employees' personal devices and those of external workers.
Once the network topology has been established, the assignment of IP addresses has been carried out, segmenting the network into different VLANs according to a classification of functionalities and needs (number of devices, DHCP server, security levels...). Finally, an economic study has been carried out comparing the budget available for the project with what was ultimately needed to cover all the material, works, and engineering hours necessary to carry it out.
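The VLAN addressing step described above can be sketched with Python's `ipaddress` module. The VLAN names and the 10.0.0.0/16 supernet below are hypothetical examples, not the plan from the project.

```python
# Sketch of VLAN subnet assignment: carve one /24 per VLAN out of a
# supernet.  VLAN names and the 10.0.0.0/16 block are hypothetical.
import ipaddress

vlans = ["production", "offices", "wifi-corporate", "wifi-guest", "servers"]
supernet = ipaddress.ip_network("10.0.0.0/16")

# Assign consecutive /24 subnets to the VLANs, in order.
subnets = dict(zip(vlans, supernet.subnets(new_prefix=24)))
for name, net in subnets.items():
    gateway = next(net.hosts())          # first usable host as the gateway
    print(f"{name:15s} {net}  gateway {gateway}")
```

Keeping one fixed-size prefix per VLAN keeps routing and DHCP scope definitions uniform; in practice subnet sizes would be matched to each VLAN's device count, as the abstract notes.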
Distributed control architecture for multiservice networks
The research focuses on devising a decentralised and distributed control system architecture for the management of internetworking systems to provide improved service delivery and network control. The theoretical basis, results of simulation, and implementation in a real network are presented. It is demonstrated that better performance, utilisation, and fairness can be achieved for network customers as well as network/service operators with a value-based control system.
A decentralised control system framework for analysing networked and shared resources is developed and demonstrated. This fits in with the fundamental principles of the Internet. It is demonstrated that distributed, multiple control loops can be run on shared resources and achieve proportional fairness in their allocation, without central control. Some of the specific characteristic behaviours of the service and network layers are identified. The network and service layers are isolated such that each layer can evolve independently to fulfil its functions better. A common architecture pattern is devised to serve the different layers independently. The decision processes require no co-ordination between peers, which improves the scalability of the solution. The proposed architecture can readily fit into a clearinghouse mechanism for integration with business logic. It can provide improved QoS and better revenue from both reservation-less and reservation-based networks. The limits on resource usage for different types of flows are analysed. A method that can sense and modify user utilities and support dynamic price offers is devised. An optimal control system (within the given conditions), automated provisioning, a packet scheduler to enforce the control, and a measurement system are developed. The model can be extended to enhance the autonomicity of computer communication networks in both client-server and P2P networks and can be introduced on the Internet in an incremental fashion. The ideas presented in the model, built with the model-view-controller and electronic enterprise architecture frameworks, have since been independently developed elsewhere into common service delivery platforms for converged networks.
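The proportional-fairness goal above has a simple closed form in the single-link case, which can serve as a minimal sketch: if each flow i maximises w_i·log(x_i) subject to the capacity constraint, the allocation is x_i = C·w_i / Σw. The weights standing in for per-flow "values" and the figures below are illustrative, not taken from the thesis.

```python
# Minimal single-link sketch of proportional fairness: maximising
# sum_i w_i * log(x_i) subject to sum_i x_i <= C yields the closed-form
# allocation x_i = C * w_i / sum(w).  Weights and capacity are illustrative.

def proportional_fair(capacity, weights):
    """Proportionally fair rates for flows sharing one link of `capacity`."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

# Three flows of value 1, 2, and 5 sharing a 100 Mbps link.
rates = proportional_fair(100.0, [1, 2, 5])
print(rates)   # higher-valued flows receive proportionally larger shares
```

In the multi-link, distributed setting of the thesis the same allocation emerges from independent control loops rather than this central formula; the sketch only shows the fairness criterion being targeted.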
Four US/EU patents were granted based on the work carried out for this thesis, covering the cross-layer architecture, multi-layer scheme, measurement system, and scheduler. Four conference papers were published and presented.
Final report on the evaluation of RRM/CRRM algorithms
Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.
Wireless Bandwidth Aggregation for Internet Traffic
This MQP proposes a new method for bandwidth aggregation usable by the typical home network owner. The methods explained herein aggregate a network of coordinating routers within local WiFi communication range to achieve increased bandwidth at the application layer over the HTTP protocol. Our protocol guarantees content delivery and reliability, as well as non-repudiation measures that hold each participant, rather than the group of routers, accountable for the content they download.
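One common way to aggregate bandwidth at the HTTP layer, consistent with the description above, is to split a download into byte ranges and fetch one range per participating router in parallel. The range-splitting step can be sketched as follows; the router names are hypothetical, and this is a sketch of the general technique, not the MQP's exact protocol.

```python
# Sketch of application-layer aggregation over HTTP: split a download into
# byte ranges, one per participating router, to be fetched in parallel.
# Router names are hypothetical; this illustrates the general technique.

def split_ranges(content_length, workers):
    """Return (start, end) inclusive byte ranges covering the whole file."""
    base, extra = divmod(content_length, workers)
    ranges, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)   # spread the remainder
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# A 10 000 000-byte file split across three cooperating routers.
for router, (lo, hi) in zip(["router-a", "router-b", "router-c"],
                            split_ranges(10_000_000, 3)):
    print(f"{router}: Range: bytes={lo}-{hi}")
```

Each range maps directly onto an HTTP `Range: bytes=lo-hi` request header, and the fetched pieces are reassembled in order at the client.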