
    A framework for modelling mobile radio access networks for intelligent fault management


    Multi-protocol correlation : data record analyses and correlator design

    This thesis has two main goals. The first is to design a user-configurable multi-protocol correlator and implement a prototype of that design. The second is to identify and propose a method for matching data records from different protocols. In essence, this thesis is about correlating records that contain information about protocols or services commonly used in telecommunication networks. Reaching these two goals requires combining knowledge from the programming world with knowledge from the networking world. Correlation can be done on multiple levels: individual protocol messages can be correlated, and whole calls or transactions can be correlated, which allows correlation across sections of a network. We approach the problem by gathering protocol signaling data, specifications of how the protocols work, and log files with examples. With this knowledge we identified many of the problem areas related to correlating the main protocols used in this thesis. We designed a configurable correlator that can be set up to overcome these problem areas, provided enough data is available. The prototype correlator was tested for both correctness and performance. Then, to validate the correctness and precision of the developed prototype, we compared its correlation results with those obtained using Utel Systems' STINGA NGN Monitor. The comparison shows that the correlation results from our prototype correlator are satisfactory.
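    The record-matching idea described in the abstract can be sketched as follows. This is a minimal illustration only: the field names (`call_id`, `protocol`, `msg`) and the choice of correlation key are assumptions for the example, not the thesis's actual record formats.

```python
from collections import defaultdict

def correlate(records, key_fields):
    """Group data records from different protocols into calls or
    transactions by a configurable set of key fields. Records that
    lack any key field are returned separately as uncorrelated."""
    groups = defaultdict(list)
    uncorrelated = []
    for rec in records:
        try:
            key = tuple(rec[f] for f in key_fields)
        except KeyError:
            uncorrelated.append(rec)
            continue
        groups[key].append(rec)
    return dict(groups), uncorrelated

# Records from two protocols that happen to share a call identifier
records = [
    {"protocol": "SIP", "call_id": "abc", "msg": "INVITE"},
    {"protocol": "SIP", "call_id": "abc", "msg": "BYE"},
    {"protocol": "RTP", "call_id": "abc", "msg": "stream"},
    {"protocol": "SIP", "call_id": "xyz", "msg": "INVITE"},
]
calls, leftovers = correlate(records, ["call_id"])
```

    Making the key fields a parameter mirrors the "user configurable" aspect: the same correlator can match on a call identifier, a subscriber number, or a timestamp window, depending on configuration.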

    TOWARDS AN EFFICIENT MULTI-CLOUD OBSERVABILITY FRAMEWORK OF CONTAINERIZED MICROSERVICES IN KUBERNETES PLATFORM

    A recent trend in software development adopts the paradigm of distributed microservices architecture (MA). Kubernetes, a container-based virtualization platform, has become the de facto environment in which to run MA applications. Organizations may choose to run microservices at several cloud providers to optimize cost and satisfy security concerns. This increases complexity, due to the need to observe the performance characteristics of distributed MA systems. Following a decision guidance model (DGM) approach, this research proposes a decentralized and scalable framework to monitor containerized microservices that run on the same or distributed Kubernetes clusters. The framework introduces efficient techniques to gather, distribute, and analyze observed runtime telemetry data. It offers extensible and cloud-agnostic modules that exchange data using a multiplexing, reactive, non-blocking data-streaming approach. An experiment observing sample microservices deployed across different cloud platforms was used to evaluate the efficacy and usefulness of the framework. The proposed framework suggests an innovative approach for development and operations (DevOps) practitioners to observe services across different Kubernetes platforms. It could also serve as a reference architecture to guide researchers in further design options and analysis techniques.
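    The multiplexing, non-blocking streaming idea can be sketched with standard asyncio primitives. This is a toy model under stated assumptions: the cluster names, the metric shape, and the queue-based merge are illustrative, not the framework's actual modules.

```python
import asyncio

async def cluster_metrics(name, samples):
    """Simulated per-cluster telemetry source (one stream per
    Kubernetes cluster in the framework's setting)."""
    for value in samples:
        await asyncio.sleep(0)  # yield control; stays non-blocking
        yield {"cluster": name, "cpu": value}

async def multiplex(*sources):
    """Merge several telemetry streams into one queue so a single
    analyzer can consume them reactively."""
    queue = asyncio.Queue()

    async def pump(src):
        async for item in src:
            await queue.put(item)

    await asyncio.gather(*(pump(s) for s in sources))
    results = []
    while not queue.empty():
        results.append(queue.get_nowait())
    return results

merged = asyncio.run(multiplex(
    cluster_metrics("cluster-a", [0.4, 0.6]),
    cluster_metrics("cluster-b", [0.2]),
))
```

    The key property is that no source blocks another: each stream pushes into the shared queue as data arrives, which is the essence of a multiplexed, reactive collection path.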

    Author Retains Full Rights

    Software and systems complexity can have a profound impact on information security. Such complexity is imposed not only by the technical challenges of monitoring heterogeneous and dynamic (IP and VLAN assignments) network infrastructures, but also by advances in exploits and malware distribution mechanisms driven by the underground economy. In addition, operational business constraints (disruptions and consequences, manpower, and end-user satisfaction) increase the complexity of the problem domain... Copyright SANS Institute

    Proactive cybersecurity tailoring through deception techniques

    Scientific dissertation for obtaining the degree of Master in Engenharia Informática e de Computadores. A proactive approach to cybersecurity can supplement a reactive posture by helping businesses handle security incidents in the early phases of an attack. Organizations can actively protect against the inherent asymmetry of cyber warfare by using proactive techniques such as cyber deception. Cyber deception entails the intentional deployment of misleading artifacts to construct an infrastructure that allows real-time investigation of an attacker's patterns and approaches without compromising the organization's principal network. This method can reveal previously undiscovered vulnerabilities, referred to as zero-day vulnerabilities, without interfering with routine corporate activities. Furthermore, it enables enterprises to collect vital information about the attacker that would otherwise be difficult to access. However, putting such concepts into practice in real-world circumstances involves major problems. This study proposes an architecture for a deceptive system, culminating in an implementation that deploys and dynamically customizes a deception grid using Software-Defined Networking (SDN) and network virtualization techniques. The deception grid is a network of virtual assets with a topology and specifications that are pre-planned to coincide with a deception strategy. The system can trace and evaluate the attacker's activity by continuously monitoring the artifacts within the deception grid. Real-time refinement of the deception plan may necessitate changes to the grid's topology and artifacts, which can be assisted by software-defined networking's dynamic modification capabilities. Organizations can maximize their deception capabilities by merging these processes with advanced cyber-attack detection and classification components. The effectiveness of the given solution is assessed using numerous use cases that demonstrate its utility.
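    The monitor-then-refine loop over the deception grid can be sketched as a small in-memory model. Everything here is hypothetical: the class, the decoy names, and the clone-on-probe refinement policy are illustrative stand-ins for the SDN-driven topology changes the thesis describes, not its actual design.

```python
class DeceptionGrid:
    """Minimal model of a deception grid: virtual decoy assets and
    links between them, reconfigurable at runtime."""

    def __init__(self):
        self.decoys = {}   # name -> {"service": ..., "hits": count}
        self.links = set() # undirected pairs of decoy names

    def deploy(self, name, service):
        self.decoys[name] = {"service": service, "hits": 0}

    def link(self, a, b):
        self.links.add(frozenset((a, b)))

    def record_hit(self, name):
        """Continuous monitoring: count attacker interactions."""
        self.decoys[name]["hits"] += 1

    def refine(self, threshold=1):
        """Refine the deception plan: clone heavily probed decoys
        to deepen the illusion (one illustrative policy only)."""
        for name, info in list(self.decoys.items()):
            if info["hits"] >= threshold:
                clone = f"{name}-clone"
                self.deploy(clone, info["service"])
                self.link(name, clone)

grid = DeceptionGrid()
grid.deploy("web-decoy", "http")
grid.deploy("db-decoy", "mysql")
grid.link("web-decoy", "db-decoy")
grid.record_hit("web-decoy")  # attacker probes the web decoy
grid.refine()                 # topology adapts in response
```

    In a real deployment the `refine` step would translate into SDN flow-rule and virtual-network changes rather than dictionary updates, but the control loop (observe artifacts, evaluate activity, reshape topology) is the same.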

    Data-driven conceptual modeling: how some knowledge drivers for the enterprise might be mined from enterprise data

    As organizations perform their business, they analyze, design and manage a variety of processes represented in models with different scopes and scales of complexity. Specifying these processes requires a certain level of modeling competence. However, this requirement does not always match the capability of the person(s) responsible for defining and modeling an organization's or enterprise's operations. On the other hand, an enterprise typically collects records of all events that occur during the operation of its processes. Records such as the start and end of tasks in a process instance, state transitions of objects impacted by process execution, and messages exchanged during process execution are maintained in enterprise repositories as various logs: event logs, process logs, effect logs, message logs, etc. Furthermore, the volume of data generated by enterprise process execution has grown manyfold in just a few years. On top of this, models are often considered the dashboard view of an enterprise: they represent an abstraction of the underlying reality of an enterprise and serve as knowledge drivers through which an enterprise can be managed. Data-driven extraction offers the capability to mine these knowledge drivers from enterprise data and to leverage the mined models to establish the set of enterprise data that conforms with the desired behaviour. This thesis aims to generate models, or knowledge drivers, from enterprise data to enable a dashboard view of the enterprise and to provide support for analysts. The rationale for this is stated as the requirement to improve an existing process or to create a new one. Models can also serve as a collection of effectors through which an organization or an enterprise can be managed.
The enterprise data referred to above have been identified as process logs, effect logs, message logs, and invocation logs. The approach in this thesis is to mine these logs to generate process, requirements, and enterprise architecture models, and to discover how goals are fulfilled based on collected operational data. The research question has been formulated as: is it possible to derive the knowledge drivers from the enterprise data, which represent the running operation of the enterprise; in other words, is it possible to use the available data in the enterprise repository to generate the knowledge drivers? Chapter 2 reviews the literature that provides the background needed to explore this research question. Chapter 3 presents how process semantics can be mined. Chapter 4 suggests a way to extract a requirements model. Chapter 5 presents a way to discover the underlying enterprise architecture, and Chapter 6 presents a way to mine how goals get orchestrated. The overall findings are discussed in Chapter 7 to derive conclusions.
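    A common first step in mining process semantics from the logs mentioned above is building a directly-follows graph. The sketch below is a generic process-discovery illustration, not the thesis's own algorithm; the activity names in the example log are invented.

```python
from collections import Counter

def directly_follows(event_log):
    """Mine a directly-follows graph from an event log: count how
    often activity b immediately follows activity a within a single
    process instance (trace). This relation is the raw material for
    classic process-discovery algorithms."""
    dfg = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Illustrative event log: one list of activities per process instance
log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["register", "check", "approve"],
]
dfg = directly_follows(log)
```

    From such counts one can recover ordering, choice ("check" is followed by either "approve" or "reject"), and frequency information, which is exactly the kind of knowledge driver the thesis proposes to extract from enterprise repositories.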