
    Predictive performance modeling and simulation

    The purpose of this thesis was to create and simulate a model of an existing campus network with a view to predicting future performance. The thesis also suggests ways in which such a network can be optimized. This kind of simulation and modeling can be referred to as 'Predictive Performance Modeling'. In this research, a model of the Florida International University (University Park campus) High Speed Network was created. Simulation of the model was carried out and an ATM backbone analysis was performed. The results obtained were compared with corresponding results obtained by network performance monitoring and measurement software tools, and a strong correlation between measured and simulated results was observed. A more detailed model was also created for the Engineering and Applied Science (EAS) Local Area Network, and various performance parameter results were collected and analyzed. Based on the simulated results, predictions were made regarding the scalability and optimization of this network in light of expected future requirements.
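
    The abstract does not reproduce the thesis's data; as a rough, self-contained illustration of how measured and simulated results can be compared, the Python sketch below computes a Pearson correlation between two hypothetical utilization series (all values are placeholders, not results from the thesis).

        # Hypothetical comparison of measured vs. simulated link utilization (percent).
        # The values below are illustrative placeholders, not data from the thesis.
        measured  = [42.0, 55.3, 61.8, 70.1, 66.4, 58.9]
        simulated = [40.5, 57.0, 60.2, 72.3, 64.8, 60.1]

        def pearson(xs, ys):
            """Pearson correlation coefficient between two equal-length series."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            return cov / (sx * sy)

        print(f"correlation = {pearson(measured, simulated):.3f}")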

    Implementation of realistic scenarios for ground truth purposes

    Master's dissertation in Electronics and Telecommunications Engineering. Security in telecommunication networks is a topic that has always caused concern to network users (institutions, enterprises and others). New threats, or mutations of existing ones, appear at a very fast rate, and the available solutions do not seem to be enough for a positive detection of these threats. The solutions used nowadays to fight these threats require real-time analysis of the network traffic or have to be previously trained. Most of the time, this training has to be supervised by a human being who, depending on his experience, can create a security breach in the system without knowing it. New techniques have been proposed in order to detect many security attacks or threats more efficiently. However, these techniques need to be tested in order to validate their correct functioning and, to do that, network traffic flows that can be used without compromising users' confidentiality and that satisfy pre-established criteria are needed. This dissertation intends to establish a trustworthy data set, as extensive as possible, from a set of realistic network scenarios. Network emulation techniques are used in a controlled environment, building different network topologies with different services and traffic patterns. Another main objective of this work is to make the obtained data available to the scientific community in order to create a uniform database that will allow the performance evaluation of new anomaly detection methodologies that may be proposed in the future.
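
    The dissertation does not specify its dataset format here; as a minimal sketch of how emulated traffic flows might be tagged with ground-truth labels, assuming a simple CSV flow-record layout and hypothetical scenario names:

        import csv

        # Hypothetical flow records captured on an emulated testbed; the field names
        # and values are illustrative, not the dissertation's actual dataset schema.
        flows = [
            {"src": "10.0.1.5", "dst": "10.0.2.8", "proto": "TCP", "bytes": 18234, "scenario": "web-browsing"},
            {"src": "10.0.3.9", "dst": "10.0.2.8", "proto": "TCP", "bytes": 120,   "scenario": "port-scan"},
        ]

        # Ground-truth labelling rule: scenarios generated as attacks are marked anomalous.
        ATTACK_SCENARIOS = {"port-scan", "syn-flood"}

        with open("ground_truth_flows.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["src", "dst", "proto", "bytes", "scenario", "label"])
            writer.writeheader()
            for flow in flows:
                flow["label"] = "anomalous" if flow["scenario"] in ATTACK_SCENARIOS else "normal"
                writer.writerow(flow)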

    Characterizing, managing and monitoring the networks for the ATLAS data acquisition system

    Particle physics studies the constituents of matter and the interactions between them. Many of the elementary particles do not exist under normal circumstances in nature. However, they can be created and detected during energetic collisions of other particles, as is done in particle accelerators. The Large Hadron Collider (LHC) being built at CERN will be the world's largest circular particle accelerator, colliding protons at energies of 14 TeV. Only a very small fraction of the interactions will give rise to interesting phenomena. The collisions produced inside the accelerator are studied using particle detectors. ATLAS is one of the detectors built around the LHC accelerator ring. During its operation, it will generate a data stream of 64 Terabytes/s. A Trigger and Data Acquisition System (TDAQ) is connected to ATLAS; its function is to acquire digitized data from the detector and apply trigger algorithms to identify the interesting events. Achieving this requires the power of over 2000 computers plus an interconnecting network capable of sustaining a throughput of over 150 Gbit/s with minimal loss and delay. The implementation of this network required a detailed study of the available switching technologies to a high degree of precision in order to choose the appropriate components. We developed an FPGA-based platform (the GETB) for testing network devices. The GETB system proved flexible enough to be used as the basis of three different network-related projects, and it also made possible an analysis of the traffic pattern generated by the ATLAS data-taking applications. Then, while the network was being assembled, parts of the ATLAS detector started commissioning; this task relied on a functional network, so it was imperative to be able to continuously identify existing and usable infrastructure and manage its operations. In addition, monitoring was required to detect any overload conditions, with an indication of where the excess demand was being generated. We developed tools to ease the maintenance of the network and to automatically produce inventory reports. We created a system that discovers the network topology, which permitted us to verify the installation and track its progress. A real-time traffic visualization system has been built, allowing us to see at a glance which network segments are heavily utilized. Later, as the network achieves production status, it will be necessary to extend the monitoring to identify individual applications' use of the available bandwidth. We studied a traffic monitoring technology that will allow us to better understand how the network is used. This technology, based on packet sampling, gives the possibility of having a complete view of the network: not only its total capacity utilization, but also how this capacity is divided among users and software applications. This thesis describes the establishment of a set of tools designed to characterize, monitor and manage complex, large-scale, high-performance networks. We describe in detail how these tools were designed, calibrated, deployed and exploited. The work that led to this thesis spans more than four years and closely follows the development phases of the ATLAS network: its design, its installation and, finally, its current and future operation.
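
    The abstract does not detail how the topology-discovery system works; as a hedged illustration of the general technique (building an adjacency graph from per-device neighbour tables, for example gathered via LLDP or SNMP), with all device names assumed:

        from collections import defaultdict

        # Hypothetical neighbour tables, e.g. as collected from switches via LLDP/SNMP.
        neighbor_tables = {
            "core-sw1": ["dist-sw1", "dist-sw2"],
            "dist-sw1": ["core-sw1", "edge-sw1", "edge-sw2"],
            "dist-sw2": ["core-sw1", "edge-sw3"],
        }

        def build_topology(tables):
            """Build an undirected adjacency map from per-device neighbour lists."""
            topology = defaultdict(set)
            for device, neighbors in tables.items():
                for peer in neighbors:
                    topology[device].add(peer)
                    topology[peer].add(device)   # record the reverse edge as well
            return topology

        for device, peers in sorted(build_topology(neighbor_tables).items()):
            print(f"{device}: {sorted(peers)}")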

    Management, Optimization and Evolution of the LHCb Online Network

    The LHCb experiment is one of the four large particle detectors running at the Large Hadron Collider (LHC) at CERN. It is a forward single-arm spectrometer dedicated to testing the Standard Model through precision measurements of Charge-Parity (CP) violation and rare decays in the b quark sector. The LHCb experiment will operate at a luminosity of 2x10^32 cm^-2 s^-1, and the proton-proton bunch crossing rate will be approximately 10 MHz. To select the interesting events, a two-level trigger scheme is applied: the first level trigger (L0) and the high level trigger (HLT). The L0 trigger is implemented in custom hardware, while the HLT is implemented in software running on the CPUs of the Event Filter Farm (EFF). The L0 trigger rate is defined at about 1 MHz, and the size of each event is about 35 kByte, so handling the resulting data rate (35 GByte/s) is a serious challenge. The Online system is a key part of the LHCb experiment, providing all the IT services. It consists of three major components: the Data Acquisition (DAQ) system, the Timing and Fast Control (TFC) system and the Experiment Control System (ECS). To provide these services, two large dedicated networks based on Gigabit Ethernet are deployed: one for DAQ and another for ECS, together referred to as the Online network. A large network needs sophisticated monitoring for its successful operation. Commercial network management systems are quite expensive and difficult to integrate into the LHCb ECS. A custom network monitoring system has therefore been implemented based on a Supervisory Control And Data Acquisition (SCADA) system called PVSS, which is used by the LHCb ECS, making the monitoring system a homogeneous part of the LHCb ECS. In this thesis, it is demonstrated how a large-scale network can be monitored and managed using tools originally made for industrial supervisory control. The thesis is organized as follows: Chapter 1 gives a brief introduction to the LHC and B physics at the LHC, then describes all sub-detectors and the trigger and DAQ system of LHCb, from structure to performance. Chapter 2 first introduces the LHCb Online system and the dataflow, then focuses on the Online network design and its optimization. In Chapter 3, the SCADA system PVSS is introduced briefly, then the architecture and implementation of the network monitoring system are described in detail, including the front-end processes, the data communication and the supervisory layer. Chapter 4 first discusses packet sampling theory and one of the packet sampling mechanisms, sFlow, then demonstrates the applications of sFlow to network troubleshooting, traffic monitoring and anomaly detection. In Chapter 5, the upgrade of the LHC and LHCb is introduced, the possible architecture of the DAQ is discussed, and two candidate internetworking technologies (high-speed Ethernet and InfiniBand) are compared in different aspects for DAQ; three schemes based on 10 Gigabit Ethernet are presented and studied. Chapter 6 is a general summary of the thesis.
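
    As a minimal sketch of the packet-sampling idea discussed in Chapter 4 (sFlow-style 1-in-N sampling, where totals are estimated by scaling sampled counts by the sampling rate), with all numeric values assumed rather than taken from the thesis:

        # Minimal illustration of sFlow-style 1-in-N packet sampling: totals are
        # estimated by scaling the sampled observations by the sampling rate.
        SAMPLING_RATE = 2048            # one packet sampled out of every 2048 (assumed value)

        sampled_packets = 1500          # packets actually sampled in the interval (hypothetical)
        sampled_bytes = 1_200_000       # bytes carried by those sampled packets (hypothetical)
        interval_s = 60                 # measurement interval in seconds

        est_packets = sampled_packets * SAMPLING_RATE
        est_bytes = sampled_bytes * SAMPLING_RATE
        est_bitrate_mbps = est_bytes * 8 / interval_s / 1e6

        print(f"estimated packets: {est_packets}")
        print(f"estimated traffic: {est_bitrate_mbps:.1f} Mbit/s")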

    ANALYSIS OF DATA & COMPUTER NETWORKS IN STUDENTS' RESIDENTIAL AREA IN UNIVERSITI TEKNOLOGI PETRONAS

    In Universiti Teknologi Petronas (UTP), most of the students depend on the Internet and the computer network to access academic information and share educational resources. Even though Internet connections and computer networks are provided, the service frequently experiences interruptions, such as slow Internet access, virus and worm distribution, and network abuse by irresponsible students. Since the UTP organization keeps expanding, the need for a better service in UTP increases. Several approaches were put into practice to address the problems. Research on data and computer networks was performed to understand the network technology applied in UTP. Questionnaire forms were distributed among the students to obtain feedback and statistical data about UTP's network in the Students' Residential Area. The study concentrates only on the Students' Residential Area, as it is where most of the users reside. From the survey, it can be observed that 99% of the students access the network almost 24 hours a day. In 2005, the 2 Mbps allocated bandwidth was utilized at 100% almost continuously, but in 2006 the Internet access bottleneck was reduced significantly since the allocated bandwidth had been increased to 8 Mbps. Degradation caused by irresponsible acts of users also adds to the burden on the main server. In general, if the proposal to the ITMS (Information Technology & Media Services) Department to improve its Quality of Service (QoS) and establish a UTP Computer Emergency Response Team (UCert) is implemented, most of the issues addressed in this report can be solved.
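
    The report's traffic figures are limited to the allocated bandwidth; as a simple, hypothetical illustration of how average link utilization can be estimated from interface byte counters (for example, polled from a gateway via SNMP):

        # Estimate average link utilization from byte counters polled at two points in
        # time, e.g. from a gateway's interface counters. All numbers are hypothetical.
        LINK_CAPACITY_BPS = 8_000_000      # 8 Mbps allocated Internet bandwidth

        bytes_start = 4_200_000_000        # counter value at the start of the interval
        bytes_end   = 4_620_000_000        # counter value at the end of the interval
        interval_s  = 600                  # polling interval in seconds

        bits_transferred = (bytes_end - bytes_start) * 8
        utilization = bits_transferred / (LINK_CAPACITY_BPS * interval_s)
        print(f"average utilization: {utilization:.1%}")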

    Application of ant based routing and intelligent control to telecommunications network management

    This thesis investigates the use of novel Artificial Intelligence techniques to improve the control of telecommunications networks. The approaches include the use of Ant-Based Routing, software Agents that encapsulate learning mechanisms to improve the performance of the Ant-System, and a highly modular approach to network-node configuration and management into which this routing system can be incorporated. The management system uses intelligent Agents distributed across the nodes of the network to automate the process of network configuration. This is important in the context of increasingly complex network management, which will be accentuated with the introduction of IPv6 and QoS-aware hardware. The proposed novel solution allows an Agent, with a Neural Network based Q-Learning capability, to adapt the response speed of the Ant-System: increasing it to counteract congestion, but reducing it otherwise to improve stability. It has the ability to adapt its strategy and learn new ones for different network topologies. The solution has been shown to improve the performance of the Ant-System, as well as to outperform a simple non-learning strategy that was not able to adapt to different networks. This approach has wide applicability to areas such as road-traffic management and, more generally, to the positioning of learning techniques within complex domains. Both Agent architectures are Subsumption style, blending short-term responses with longer-term goal-driven behaviour. It is predicted that this will be an important approach for the application of AI, as it allows modular design of systems in a similar fashion to the frameworks developed for interoperability of telecommunications systems.
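
    The abstract does not reproduce the routing algorithm; as a minimal sketch of the Ant-System idea it builds on (per-destination pheromone tables that ants reinforce and that evaporate over time, turned into forwarding probabilities), with the topology and parameter values assumed:

        import random

        # Minimal Ant-System-style routing table for a single node: pheromone levels
        # per (destination, next hop), turned into forwarding probabilities.
        # The parameter values and the tiny topology are illustrative assumptions.
        EVAPORATION = 0.1       # fraction of pheromone lost per update step
        REINFORCEMENT = 1.0     # pheromone deposited by a successful ant

        pheromone = {           # destination -> {next_hop: pheromone level}
            "D": {"B": 1.0, "C": 1.0},
        }

        def choose_next_hop(dest):
            """Pick a next hop with probability proportional to its pheromone level."""
            hops = pheromone[dest]
            total = sum(hops.values())
            r = random.uniform(0, total)
            cumulative = 0.0
            for hop, level in hops.items():
                cumulative += level
                if r <= cumulative:
                    return hop
            return hop

        def reinforce(dest, hop_used):
            """Evaporate all entries for dest, then deposit pheromone on the hop a
            successful (fast) ant returned through."""
            for hop in pheromone[dest]:
                pheromone[dest][hop] *= (1.0 - EVAPORATION)
            pheromone[dest][hop_used] += REINFORCEMENT

        # A few ants succeed via next hop "B"; routing probability shifts towards it.
        for _ in range(5):
            reinforce("D", "B")
        print(pheromone["D"])
        print("next hop chosen:", choose_next_hop("D"))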

    New Challenges in Quality of Services Control Architectures in Next Generation Networks

    The steady growth of the Internet and IP networks, and their integration into society and corporations, has brought with it increased expectations of new converged services as well as greater demands on quality in communications. Next Generation Networks (NGNs) respond to these new needs and represent the new Internet paradigm arising from IP convergence. One of the least developed aspects of NGNs is Quality of Service (QoS) control, which is especially critical for multimedia communication across heterogeneous networks and/or different operators. Furthermore, NGNs natively incorporate the IPv6 protocol which, despite the shortcomings of IPv4 and the depletion of its address space, has not yet received its definitive push. This thesis takes a practical approach. In order to carry out research over testbeds that support IPv6 with guarantees of correct operation, an in-depth study was made of the IPv6 protocol, its degree of implementation, and the existing conformance and interoperability tests that evaluate the quality of these implementations. The quality of five operating systems supporting IPv6 was then evaluated through a conformance test, and the basic IPv6 testbed on which the research is carried out was built using the implementation that offers the strongest guarantees. The QoS Broker is the main contribution of this thesis: an integrated framework that includes an automated system for managing QoS control across multi-domain/multi-operator systems following the NGN recommendations. The system automates the mechanisms associated with QoS configuration within a single domain (autonomous system) through policy-based QoS management, and automates dynamic QoS negotiation between peer QoS Brokers belonging to different domains, thereby guaranteeing seamless end-to-end QoS. This architecture is validated over a multi-domain testbed that uses the DiffServ QoS mechanism and supports IPv6. The architecture defined in the NGN recommendations allows QoS to be managed at layer 3 (IP) as well as at layer 2 (e.g. Ethernet, WiFi), so it also facilitates the management of PLC networks. This thesis proposes a theoretical approach for applying this control architecture, through a QoS Broker, to the newly standardized PLC networks, and discusses the possibilities of applying it to the future communication networks of the Smart Grids. Finally, a module for managing traffic engineering, which optimizes network domains through artificial intelligence techniques, is integrated into the QoS Broker. Validation in simulations and over a Cisco router testbed demonstrates that hybrid genetic algorithms are an effective option in this area. Overall, the observations and advances achieved in this thesis help improve our understanding of how QoS works in NGNs and prepare these systems to face highly complex real-world problems, which abound in current industrial and scientific applications.
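
    The thesis validates the QoS Broker over a DiffServ testbed; as a hedged sketch of the basic DiffServ mechanism involved (classifying traffic and marking the DSCP field for a per-hop behaviour), where the traffic classes and the policy table are assumptions while the DSCP code points are the standard IETF values:

        # Minimal sketch of DiffServ classification: map traffic classes to standard
        # DSCP code points. The class names and policy table are assumptions; the
        # DSCP values themselves are the standard IETF-defined code points.
        DSCP = {
            "EF":   46,   # Expedited Forwarding, e.g. voice
            "AF41": 34,   # Assured Forwarding class 4, low drop, e.g. video
            "AF21": 18,   # Assured Forwarding class 2, low drop, e.g. transactional data
            "BE":   0,    # Best Effort (default)
        }

        # Hypothetical policy of an edge node managed by the QoS Broker.
        policy = {"voip": "EF", "video": "AF41", "erp": "AF21"}

        def mark(traffic_class):
            """Return the DSCP value an edge router would set for this traffic class."""
            phb = policy.get(traffic_class, "BE")
            return DSCP[phb]

        for tc in ("voip", "video", "erp", "web"):
            print(f"{tc:6s} -> DSCP {mark(tc)}")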

    Data Communications and Network Technologies

    This open access book is written according to the examination outline for the Huawei HCIA-Routing & Switching V2.5 certification, aiming to help readers master the basics of network communications and use Huawei network devices to set up enterprise LANs and WANs, wired and wireless networks, ensure network security for enterprises, and grasp cutting-edge computer network technologies. The content of this book includes: network communication fundamentals, the TCP/IP protocol suite, the Huawei VRP operating system, IP addresses and subnetting, static and dynamic routing, Ethernet networking technology, ACL and AAA, network address translation, DHCP servers, WLAN, IPv6, the WAN protocols PPP and PPPoE, typical networking architectures and design cases for campus networks, the SNMP protocol used for network management, operation and maintenance, the Network Time Protocol (NTP), SDN and NFV, programming, and automation. As the world's leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei's products range from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.
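
    Among the listed topics, IP addressing and subnetting lends itself to a short worked example; the sketch below (not taken from the book) uses Python's standard ipaddress module to enumerate the /26 subnets of a /24 network:

        import ipaddress

        # Split a /24 network into /26 subnets, a typical subnetting exercise.
        # The example network is arbitrary and not taken from the book.
        network = ipaddress.ip_network("192.168.10.0/24")

        for subnet in network.subnets(new_prefix=26):
            hosts = subnet.num_addresses - 2          # exclude network and broadcast addresses
            print(f"{subnet}  first host: {subnet.network_address + 1}  usable hosts: {hosts}")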