14 research outputs found

    Streamlining Network Operations: Combining Meraki MX with Cisco DNA Center for Automation and Assurance

    This research explores the integration of Meraki MX with Cisco DNA Center to streamline network operations, automate management processes, and assure network performance and reliability. Modern network environments are becoming increasingly complex, and organizations are seeking solutions that enhance automation, reduce manual work, and improve operating efficiency. The study evaluates the level of automation achievable with the Meraki MX device, examines the implementation and assurance provided through Cisco DNA Center, and investigates the impact of combining them on network performance and operational efficiency. This exploratory qualitative study synthesizes case studies drawn from secondary research, expert views, and technical manuals concerning the capabilities and best practices of this integration. The results indicate major network performance improvements, including a 6.5% increase in uptime, a 62.5% reduction in troubleshooting time, and a 20% increase in network health score. Challenges were also identified, including compatibility with legacy systems, initial setup costs, and staff training. The research concludes that the integration offers significant advantages in automation and operational efficiency, but these challenges must be addressed for it to be implemented successfully. The research contributes to a better understanding of how combining Meraki MX with Cisco DNA Center can optimize network management, offering practical insights into overcoming integration hurdles and maximizing performance.
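
    The assurance side of such an integration can be exercised programmatically. Below is a minimal, hedged Python sketch that pulls the overall network health score from Cisco DNA Center's Intent API; the controller address and credentials are placeholders, and the exact endpoints and response shape may vary between DNA Center releases.

        import requests

        DNAC = "https://dnac.example.com"      # placeholder controller address
        USER, PASSWORD = "admin", "password"   # placeholder credentials

        def get_token():
            # Request an API token (basic auth against the system auth endpoint).
            r = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                              auth=(USER, PASSWORD), verify=False)
            r.raise_for_status()
            return r.json()["Token"]

        def network_health(token):
            # Read the overall network health score computed by DNA Center Assurance.
            r = requests.get(f"{DNAC}/dna/intent/api/v1/network-health",
                             headers={"X-Auth-Token": token}, verify=False)
            r.raise_for_status()
            return r.json()

        if __name__ == "__main__":
            print(network_health(get_token()))

    A scheduled job built around calls like these is one way the uptime and health-score trends cited above could be tracked over time.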

    Adopting a cloud-based network: An assessment of Cisco Meraki's ease of implementation

    Software Defined Networking is a new and growing networking architecture in which the network devices' control and data planes are separated. This has made it possible to administer and manage the devices through a single centralized controller rather than doing so individually, greatly simplifying the process. Due to this separation, the network is also made highly programmable, meaning that network administrators are free to use high-level programming languages to create programs and applications based on their needs. In this thesis, we explore deploying a Software Defined network using Cisco Meraki and compare it to a traditional network setup simulated in Cisco Packet Tracer. We do so to gain an understanding of each solution's ease of implementation and whether it could be used by those with limited networking experience. We also explore Cisco Meraki's potential for network automation and programmability. Our results demonstrate that Cisco Meraki's solution is indeed easier to use overall when compared to a traditional network solution. However, it is not without its own set of challenges and possible downsides, such as its heavy reliance on an internet connection to manage the network. Moreover, we find that a Cisco Meraki network has high automation potential and that a variety of tools can be used for this purpose, such as the Meraki dashboard API, the Meraki Python library, and a community where developers can share their solutions with one another.
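
    As a concrete illustration of the automation tools named above, the following sketch uses the Meraki dashboard API through the official Meraki Python library to list an organization's networks. The API key is a placeholder, and the call names assume the v1 release of the SDK.

        import meraki  # official Meraki dashboard API library (pip install meraki)

        # Placeholder API key; a real key is generated from the Meraki dashboard profile page.
        dashboard = meraki.DashboardAPI(api_key="YOUR_API_KEY", suppress_logging=True)

        # List the organizations visible to the key, then the networks in the first one.
        orgs = dashboard.organizations.getOrganizations()
        networks = dashboard.organizations.getOrganizationNetworks(orgs[0]["id"])
        for net in networks:
            print(net["id"], net["name"])

    The same library exposes configuration calls, which is what gives the dashboard API its automation potential compared to per-device administration.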

    Quality and Traffic Flow in Videoconferencing Infrastructures

    The aim of this thesis was to provide insight into the two platforms Microsoft Teams and Cisco Webex from a network perspective. To achieve this, different types of virtual meetings were run and analysed. The virtual meetings differ in the number of participants, the broadband used, the network setup, and more. By collecting data from the Cisco Meraki cloud and other instruments, the data were compared to a basic virtual meeting, referred to as the baseline. All of the virtual meetings/test scenarios were also compared to Webex and Teams documentation describing what constitutes good or poor meeting quality. From these comparisons it was found that meeting quality depends on several factors, including the type of broadband used, the first client to join a meeting, the platform's recovery mechanisms, and general network parameters.
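
    A simple way to reason about the comparison against the baseline meeting is to flag metrics that deviate from it by more than a chosen tolerance. The sketch below is illustrative only: the baseline values and the 1.5x tolerance are assumptions, not figures from the thesis or from Webex/Teams documentation.

        # Flag call metrics that exceed the baseline meeting by more than `tolerance` times.
        BASELINE = {"loss_pct": 0.2, "jitter_ms": 8.0, "rtt_ms": 35.0}  # assumed baseline values

        def compare_to_baseline(sample, baseline=BASELINE, tolerance=1.5):
            flagged = {}
            for metric, base in baseline.items():
                value = sample.get(metric)
                if value is not None and value > base * tolerance:
                    flagged[metric] = {"measured": value, "baseline": base}
            return flagged

        # Example: a congested meeting with elevated packet loss and round-trip time.
        print(compare_to_baseline({"loss_pct": 1.4, "jitter_ms": 9.0, "rtt_ms": 120.0}))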

    Detection and Characterisation of Conductive Objects Using Electromagnetic Induction and a Fluxgate Magnetometer

    Eddy currents induced in electrically conductive objects can be used to locate metallic objects as well as to assess the properties of materials non-destructively and without physical contact. The technique is useful for material identification, such as measuring conductivity and discriminating whether a sample is magnetic or non-magnetic. In this study, we carried out experiments and numerical simulations for the evaluation of conductive objects. We investigated the frequency dependence of the secondary magnetic field generated by induced eddy currents when a conductive object is placed in a primary oscillating magnetic field. According to electromagnetic theory, conductive objects respond differently at different frequencies. Using a table-top setup consisting of a fluxgate magnetometer and a primary coil generating a magnetic field with frequencies up to 1 kHz, we were able to detect aluminium and steel cylinders using the principle of electromagnetic induction. The experimental results were compared to numerical simulations, with good overall agreement. This technique enables the identification and characterisation of objects through their electrical conductivity and magnetic permeability.
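
    The frequency dependence mentioned above follows from the standard skin-depth relation, quoted here as a textbook result rather than a formula taken from the paper: the induced eddy currents are confined to a surface layer of thickness

        \[
          \delta = \sqrt{\frac{2}{\mu\,\sigma\,\omega}} = \frac{1}{\sqrt{\pi\,\mu\,\sigma\,f}} ,
        \]

    so the secondary field produced by a cylinder of given size depends on its conductivity \(\sigma\), permeability \(\mu\), and the excitation frequency \(f\). Sweeping the frequency therefore helps distinguish a high-conductivity, non-magnetic aluminium cylinder from a magnetic steel one.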

    Blocks' Network: Redesign Architecture based on Blockchain Technology

    The Internet is a global network that uses communication protocols. It is considered the most important system humanity has achieved, and one that no one can abandon. However, this technology has become a weapon that threatens the privacy of users, especially in the client-server model, where data is stored and managed privately. Users have no power over data stored on a private server, which means their data may be intercepted by a government or sold by the service provider for profit. Blockchain is a technology that, if used appropriately, can solve issues related to the client-server model. However, blockchain technology uses a consensus protocol, which creates an incontrovertible system of agreement between members across a distributed network. The consensus protocol deliberately slows members down from generating blocks too quickly in order to control the network's creation pattern, which leads to scalability and latency problems. The proposed system presents a platform that leverages a modernized blockchain called the Blocks' Network. The system takes into consideration the privacy and confidentiality issues of the client-server model, and the scalability and latency issues of blockchain technology. The Blocks' Network is a public, permissioned network that uses a multi-dimensional hash to generate multiple chains as part of a systematic approach. The proposed platform is an assembly point for users to create a decentralized network using P2P protocols. The system has a high data flow due to frequent use by participants (for example, use of the Internet). In addition, the system stores all network traffic overtly via the Blocks' Network.
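
    For readers unfamiliar with the underlying mechanics, the sketch below shows a generic hash-linked block in Python. It is a minimal illustration of chaining blocks by hash, not the Blocks' Network's actual multi-dimensional hash or multi-chain design.

        import hashlib, json, time

        def block_hash(block):
            # Deterministic SHA-256 over the block's serialized contents.
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def new_block(prev_hash, data):
            # Minimal hash-linked block; real designs add signatures, consensus metadata, etc.
            block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
            block["hash"] = block_hash(block)
            return block

        genesis = new_block("0" * 64, "genesis")
        nxt = new_block(genesis["hash"], {"traffic_record": "example"})
        print(nxt["prev_hash"] == genesis["hash"])  # True: each block references its predecessor's hash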

    MICRO-FRONTENDS FOR WEB CONTENT MANAGEMENT SYSTEMS

    Content Management Systems are a fundamental part of the modern World Wide Web and are used to create various types of web applications. With the advent of service-oriented architecture (SOA), it is commonplace for content management systems to be separate from the presentation layer that eventually displays the content. However, as the complexity of the system grows, the frontend may become increasingly hard to maintain and scale. This study applies the micro-frontend pattern to the presentation layer of headless web content management systems in order to improve frontend maintainability. A multivocal literature review, combining academic literature with grey literature, is carried out to determine the implementation strategies currently used in research and industry as well as the approaches to evaluating micro-frontend architecture. The work provides a model architecture for applying micro-frontends to general-purpose content management systems, using WordPress as a case study. The success of the micro-frontend implementation is measured using system stability, web performance, and code complexity metrics, compared against a functionally equivalent monolithic implementation. The results of the review show the growing popularity of the micro-frontend approach as well as the different tools and techniques used in implementing the architecture. Client-side rendering and unified single-page applications (SPA) are the dominant rendering and composition approaches for micro-frontends in the literature. The evaluation results show that micro-frontends perform favourably compared to the monolithic implementation: micro-frontends had a maintainability index of 75.48 compared to 74.64 for the monolithic version, and in all the web performance metrics considered, micro-frontends posted a superior score. However, micro-frontends did show a significant increase in the complexity of individual modules compared to the equivalent modules in the monolith.
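
    For context on the maintainability figures cited above, one commonly used normalized maintainability index (the Visual Studio variant, assumed here rather than confirmed as the formulation used in the study) is

        \[
          MI = \max\!\left(0,\; \frac{171 - 5.2\ln V - 0.23\,G - 16.2\ln L}{171}\times 100\right),
        \]

    where \(V\) is the Halstead volume, \(G\) the cyclomatic complexity, and \(L\) the lines of code. On such a 0-100 scale, the reported 75.48 versus 74.64 is a modest but measurable difference.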

    Cooperation in open, decentralized, and heterogeneous computer networks

    Community Networks (CNs) are naturally open and decentralized structures that grow organically with the addition of heterogeneous network devices, contributed and configured as needed by their participants. The continuous growth in popularity and dissemination of CNs in recent years has raised the perception of a mature and sustainable model for the provisioning of networking services. However, because such infrastructures include uncontrolled entities with non-delimited responsibilities, every single network entity represents a potential single point of failure that can stop the entire network from working and that no other entity can prevent or circumvent. Given the open and decentralized nature of CNs, which bring together individuals and organizations with different and even conflicting economic, political, and technical interests, achieving more than a basic consensus on the correctness of all network nodes is challenging. In such an environment, the lack of self-determination for CN participants in terms of control and security of routing can be regarded as an obstacle to growth or even a risk of collapse. To address this problem we first consider deployments of existing wireless CNs and analyze their technology, characteristics, and performance. We perform an experimental evaluation of a production 802.11an wireless CN and compare it to studies of other wireless CN deployments in the literature. We compare experimentally obtained throughput traces with path-capacity calculations based on well-known conflict-graph models, and we observe that in the majority of cases the path chosen by the employed BMX6 routing protocol corresponds with the best path identified in our model. We analyze monitoring and interaction shortcomings of CNs and address them with the Network Characterization Tool (NCT), a novel tool that allows users to assess network state and performance and to improve their quality of experience by individually modifying the routing parameters of their devices. We also evaluate performance outcomes when different routing policies are in use. Routing protocols provide self-management mechanisms that allow the continuous operation of a Community Mesh Network (CMN). We focus on three widely used proactive mesh routing protocols and their implementations: BMX6, OLSR, and Babel. We describe the core idea behind these protocols and study their implications in terms of scalability, performance, and stability by exposing them to typical but challenging network topologies and scenarios. Our results show the relative merits, costs, and limitations of the three protocols. Building upon the studied characteristics of typical CN deployments, their requirements for open and decentralized cooperation, and the potential controversy over the trustworthiness of particular components of a network infrastructure, we propose and evaluate SEMTOR, a novel routing protocol that can satisfy these demands. SEMTOR allows the verifiable and undeniable definition, and distributed application, of individually trusted topologies for routing traffic towards each node. One unique advantage of SEMTOR is that it does not require global consensus on the trustworthiness of any node, and it thus preserves cooperation among nodes even with oppositionally defined trust specifications. This gives each node administrator the freedom to individually define the subset of participating nodes, and the resulting sub-topology, that they consider sufficiently trustworthy to meet their security and data-delivery objectives and concerns. The proposed mechanisms have been realized as a usable, open-source implementation called BMX7, the successor of BMX6, and we have evaluated its scalability, robustness, and security. These results show that using SEMTOR to secure trusted routing topologies is feasible, even when executed on real and very cheap (10 Euro, Linux SoC) routers as commonly used in Community Mesh Networks.
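
    To make the idea of per-node trusted sub-topologies concrete, the sketch below filters a topology down to the nodes one administrator trusts and routes only over that restricted graph. It is a plain hop-count illustration of the concept, not the actual SEMTOR/BMX7 algorithm or its cryptographic verification.

        from collections import deque

        def trusted_subgraph(links, trusted):
            # Keep only links whose endpoints are both in this node's trust set.
            return {n: [m for m in nbrs if m in trusted]
                    for n, nbrs in links.items() if n in trusted}

        def route(links, src, dst):
            # Breadth-first search over the restricted graph (hop-count metric only).
            parent, queue = {src: None}, deque([src])
            while queue:
                node = queue.popleft()
                if node == dst:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                for nbr in links.get(node, []):
                    if nbr not in parent:
                        parent[nbr] = node
                        queue.append(nbr)
            return None  # destination unreachable through trusted nodes

        topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
        print(route(trusted_subgraph(topology, {"A", "B", "D"}), "A", "D"))  # ['A', 'B', 'D']

    Because each administrator applies its own trust set, two nodes may route the same destination over different sub-topologies without requiring any global agreement, which is the cooperative property the paragraph above describes.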

    Propuesta de plan de recuperación para los activos críticos de TI

    Graduation Project (Licentiate in Information Technology Management), Instituto Tecnológico de Costa Rica, Área Académica de Administración de Tecnologías de Información, 2022. This research aims to propose a recovery plan for the organization's critical IT assets in order to manage the impact on operations when incidents and disruptions materialize, during the second half of 2022. The study was based on an applied exploratory methodology combining qualitative and quantitative approaches. For both qualitative and quantitative data collection, documentary review was used to identify the organization's IT assets and define recovery strategies; in addition, questionnaires were applied to the owners of IT assets belonging to IT governance; and finally, observation was used to highlight the behavior of the organization and those in charge during disruption testing. The research concludes that the critical IT assets are cloud or SaaS systems, which implies a transfer of recovery responsibilities to the system or support provider, leaving the organization with the main tasks of managing communication both internally and with providers, providing alternatives, and managing organizational resilience. The implementation and use of this proposal are recommended for managing recovery when a disruption of the organization's critical IT assets materializes.

    Intelligent Secure Trustable Things

    This open access book provides an overview of the results of the InSecTT project. Artificial Intelligence of Things (AIoT) is the natural evolution of both Artificial Intelligence (AI) and the Internet of Things (IoT) because they are mutually beneficial: AI increases the value of the IoT through machine learning by transforming data into useful information, while the IoT increases the value of AI through connectivity and data exchange. InSecTT (Intelligent Secure Trustable Things), a pan-European effort with over 50 key partners from 12 countries (EU and Turkey), provides intelligent, secure, and trustworthy systems for industrial applications, delivering comprehensive, cost-efficient solutions for intelligent, end-to-end secure, trustworthy connectivity and interoperability that bring the Internet of Things and Artificial Intelligence together. InSecTT creates trust in AI-based intelligent systems and solutions as a major part of the AIoT. The project fosters cooperation between big industrial players from various domains, a number of highly innovative SMEs distributed all over Europe, and cutting-edge research organizations and universities. It features a wide variety of industry-driven use cases embedded in application domains where Europe is in a leading position: smart infrastructure, building, manufacturing, automotive, aeronautics, railway, urban public transport, maritime, and health. The demonstration of InSecTT solutions in well-known real-world environments such as airports, trains, ports, and the health sector shows their applicability at both a high and a broad level, from citizens to European stakeholders. The first part of the book introduces the main topics of the InSecTT project: how to bring the Internet of Things and Artificial Intelligence together to form the Artificial Intelligence of Things, a reference architecture for such systems, and how to develop trustworthy, ethical AI systems. The second part shows the development of essential technologies for creating trustworthy AIoT systems. The third part presents a broad variety of examples of how to design, develop, and validate trustworthy AIoT systems for industrial applications (including automotive, avionics, smart infrastructure, health care, manufacturing, and railway).