
    AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges

    Artificial Intelligence for IT Operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes, particularly in cloud infrastructures, to provide actionable insights with the primary goal of maximizing availability. There are a wide variety of problems to address, and multiple use cases, where AI capabilities can be leveraged to enhance operational efficiency. Here we provide a review of the AIOps vision, trends, challenges and opportunities, focusing specifically on the underlying AI techniques. We discuss in depth the key types of data emitted by IT Operations activities, the scale of and challenges in analyzing them, and where they can be helpful. We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis and automated actions. We discuss the problem formulation for each task and then present a taxonomy of techniques to solve these problems. We also identify relatively under-explored topics, especially those that could benefit significantly from advances in the AI literature, and provide insights into the trends in this field and the key investment opportunities.
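    One of the tasks the review categorizes, incident detection, is often posed as anomaly detection over operational telemetry such as metrics and logs. As a hedged illustration only, and not the paper's method, the sketch below flags metric points whose rolling z-score exceeds a threshold; the window size and threshold are arbitrary assumptions.

```python
import numpy as np

def detect_incidents(metric, window=60, z_threshold=3.0):
    """Flag points whose rolling z-score exceeds a threshold.

    A toy stand-in for the incident-detection task described in the
    review; real AIOps pipelines use far richer models and data sources.
    """
    metric = np.asarray(metric, dtype=float)
    flags = np.zeros(len(metric), dtype=bool)
    for i in range(window, len(metric)):
        history = metric[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(metric[i] - mu) / sigma > z_threshold:
            flags[i] = True
    return flags

# Example: a synthetic latency series with one injected spike.
latency = np.random.normal(100, 5, 500)
latency[400] = 200
print(np.flatnonzero(detect_incidents(latency)))
```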

    Container security in a continuous development environment

    The rise of the DevOps movement and the transition from a product economy to a service economy drove significant changes in the software development life cycle paradigm, among them the abandonment of the waterfall model in favor of agile methods. Since DevOps is itself an agile method, it allows teams to monitor current releases, receive constant feedback from clients, and improve subsequent software releases. Despite its extraordinary growth, DevOps still has limitations concerning security, which needs to be included in the Continuous Integration and Continuous Deployment (CI/CD) pipelines used in software development. The massive adoption of cloud services and open-source software, the widespread use of containers and their orchestration, and microservice architectures have broken all conventional models of software development. With these new technologies, packaging and shipping new software now happens in short cycles, and releases become almost instantly available to users worldwide. The usual approach of attaching security at the end of the software development life cycle (SDLC) is becoming obsolete, pushing the adoption of DevSecOps or SecDevOps, which injects security earlier into SDLC processes and prevents security defects or issues from reaching production. This dissertation aims to reduce the impact of microservices' vulnerabilities by examining the respective images and containers with a flexible and adaptable set of analysis tools running in dedicated CI/CD pipelines. This approach is intended to deliver a clean and secure collection of microservices for later release in cloud production environments. To achieve this, we developed a solution that allows a battery of tests to be programmed and orchestrated: a form lets the user select several security analysis tools, and the solution runs this set of tests in a controlled way according to the defined dependencies. To demonstrate the solution's effectiveness, we programmed batteries of tests for different scenarios, defining the security analysis pipeline to incorporate various tools. Finally, we show security tools running locally and, once integrated into our solution, returning the same results. Master's in Computer Engineering (Mestrado em Engenharia Informática).
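    A minimal sketch of the kind of test-battery orchestration the dissertation describes: running selected container security tools in an order that respects declared dependencies. The tool names, commands, and dependency format below are illustrative assumptions, not the dissertation's actual implementation.

```python
import subprocess
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical battery: each tool maps to the tools that must run before it.
BATTERY = {
    "lint-dockerfile": [],
    "image-scan": ["lint-dockerfile"],
    "container-runtime-check": ["image-scan"],
}

# Placeholder commands standing in for whatever scanners the pipeline selects.
COMMANDS = {
    "lint-dockerfile": ["echo", "linting Dockerfile"],
    "image-scan": ["echo", "scanning image for known CVEs"],
    "container-runtime-check": ["echo", "checking the running container"],
}

def run_battery(battery, commands):
    """Run each selected tool once its declared dependencies have completed."""
    for tool in TopologicalSorter(battery).static_order():
        result = subprocess.run(commands[tool], capture_output=True, text=True)
        print(f"{tool}: {result.stdout.strip()}")
        if result.returncode != 0:
            raise RuntimeError(f"{tool} failed, aborting the pipeline")

run_battery(BATTERY, COMMANDS)
```

    In a real pipeline the placeholder commands would invoke image and container scanners, and a failing stage would stop its dependants from running.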

    Service Now: CMDB Research

    The MAPFRE Capstone team has been tasked with reviewing the existing CMDB configuration and recommending a roadmap. The paper discusses the team's overall research on ServiceNow CMDB, the client's deliverables, and an introduction to the latest technological innovations. Based on the given objectives and the team's analysis, we recommend key solutions to help the client better understand the IT environment in the areas of business service impact, asset management, compliance, and configuration management. In addition, our research covers the majority of the technical and functional areas, providing greater visibility and insight into the existing CMDB and IT environment.
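    For context on the visibility goals above, a hedged sketch of pulling configuration items from a ServiceNow CMDB through its REST Table API; the instance URL, credentials, and field selection are placeholder assumptions, not the client's configuration.

```python
import requests

# Placeholder instance and credentials; substitute real values.
INSTANCE = "https://example.service-now.com"
AUTH = ("api_user", "api_password")

def list_configuration_items(limit=10):
    """Fetch a few configuration items (CIs) from the cmdb_ci table."""
    response = requests.get(
        f"{INSTANCE}/api/now/table/cmdb_ci",
        auth=AUTH,
        headers={"Accept": "application/json"},
        params={
            "sysparm_limit": limit,
            "sysparm_fields": "name,sys_class_name,install_status",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]

for ci in list_configuration_items():
    print(ci["name"], ci["sys_class_name"], ci["install_status"])
```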

    Vulnerability detection in device drivers

    Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2017. The constant evolution of electronics allows new equipment/devices to be regularly made available on the market, which has led to a situation where common operating systems (OS) include many device drivers (DD) produced by very diverse manufacturers. Experience has shown that the development of DD is error-prone, as a majority of OS crashes can be attributed to flaws in their implementation. This thesis addresses the challenge of designing methodologies and tools to facilitate the detection of flaws in DD, contributing to a decrease in the errors in this kind of software, their impact on OS stability, and the security threats they cause. This is especially relevant because it can help developers improve the quality of drivers during their implementation or when they are integrated into a system. The thesis work started by assessing how DD flaws can impact the correct execution of the Windows OS. The employed approach used a statistical analysis to obtain the list of kernel functions most used by the DD, and then automatically generated synthetic drivers that introduce parameter errors when calling a kernel function, thus mimicking a faulty interaction. The experimental results showed that most targeted functions were ineffective in defending against the incorrect parameters. A reasonable number of crashes and a small number of hangs were observed, suggesting a poor error containment capability of these OS functions. Then, we produced an architecture and a tool that support the automatic injection of network attacks into mobile equipment (e.g., phones), with the objective of finding security flaws (or vulnerabilities) in Wi-Fi drivers. These DD were selected because they are easily reachable by an external adversary, who simply needs to create malicious traffic to exploit them, and therefore flaws in their implementation can have an important impact. Experiments with the tool uncovered a previously unknown vulnerability that causes OS hangs when a specific value is assigned to the TIM element in the Beacon frame. The experiments also revealed a potential implementation problem in the TCP/IP stack related to the use of disassociation frames when the target device was associated and authenticated with a Wi-Fi access point. Next, we developed a tool capable of registering and instrumenting the interactions between a DD and the OS. The solution uses a wrapper DD around the binary of the driver under test, enabling full control over the function calls and parameters involved in the OS-DD interface. This tool can support very diverse testing operations, including logging system activity and reverse engineering the driver's behaviour. Some experiments were performed with the tool, allowing us to record insights into the interactions between the DD and the OS, the parameter values, and the return values. The results also showed the ability to identify bugs in drivers by executing tests based on the knowledge obtained from the driver's dynamics. Our final contribution is a methodology and framework for the discovery of errors and vulnerabilities in Windows DD by resorting to the execution of the drivers in a fully emulated environment. This approach is capable of testing the drivers without requiring access to the associated hardware or the DD source code, and has granular control over each machine instruction. Experiments performed with off-the-shelf DD confirmed a high dependency on the correctness of the parameters passed by the OS, identified the precise location and cause of memory leaks, and revealed the existence of dormant and vulnerable code.
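    To make the Wi-Fi attack-injection idea concrete, a hedged sketch of crafting a beacon frame with a deliberately malformed TIM information element (element ID 5) using Scapy. The BSSID, SSID, and TIM payload are illustrative assumptions and do not reproduce the specific value that triggered the vulnerability reported in the thesis.

```python
from scapy.all import RadioTap, Dot11, Dot11Beacon, Dot11Elt, sendp

# Illustrative addresses and SSID, not taken from the thesis.
bssid = "02:00:00:00:01:00"

frame = (
    RadioTap()
    / Dot11(type=0, subtype=8, addr1="ff:ff:ff:ff:ff:ff", addr2=bssid, addr3=bssid)
    / Dot11Beacon(cap="ESS")
    / Dot11Elt(ID=0, info=b"test-net")        # SSID element
    / Dot11Elt(ID=5, info=b"\x00\x01\xff")    # TIM element with an odd payload
)

# Sending requires a wireless interface in monitor mode, root privileges,
# and a controlled test environment.
sendp(frame, iface="wlan0mon", count=10, inter=0.1)
```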

    Resilience Strategies for Network Challenge Detection, Identification and Remediation

    The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated, it becomes more vulnerable to attack. There is a pressing need for the future Internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU Resumenet Project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions of survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge by identifying its occurrence and impact in real time and then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible and to mitigate the damage, allowing an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. Our proposed multi-stage, policy-based challenge detection system identifies both existing and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. The multi-stage approach reduces computational complexity compared to the traditional single-stage approach, where one particular managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
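    As a hedged illustration of the multi-stage idea, and not the thesis's managed-object or policy implementation, the sketch below screens flows with a cheap first-stage rate check and passes only suspicious flows to a costlier second stage.

```python
from dataclasses import dataclass

@dataclass
class FlowStats:
    src: str
    packets_per_sec: float
    distinct_dst_ports: int

# Stage 1: a lightweight threshold check applied to every flow.
def stage_one(flow, rate_threshold=1000.0):
    return flow.packets_per_sec > rate_threshold

# Stage 2: a heavier check run only on flows flagged by stage 1
# (a stand-in for deeper, policy-driven analysis).
def stage_two(flow, port_threshold=100):
    return flow.distinct_dst_ports > port_threshold

def detect_challenges(flows):
    suspects = [f for f in flows if stage_one(f)]
    return [f for f in suspects if stage_two(f)]

flows = [
    FlowStats("10.0.0.5", 50.0, 3),
    FlowStats("10.0.0.9", 5000.0, 450),   # worm-like scanning behaviour
]
for flow in detect_challenges(flows):
    print(f"challenge detected from {flow.src}")
```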

    Information Continuity: A Temporal Approach to Assessing Metadata and Organizational Quality in an Institutional Repository

    This paper was presented at the 8th annual Metadata and Semantics Research Conference, held in Karlsruhe, Germany, November 27-29, 2014. Repositories provide a vital infrastructure for an institution to aggregate and disseminate creative output, yet this task is only as successful as the effective organization of its content. The University of Kansas is currently undergoing a systematic review to analyze metadata and content organization in its own repository. This paper argues that for a full assessment to be achieved, it is necessary not to view the repository as a fixed item, but as an entity with its own continuity. This temporal approach has a significant impact on establishing resource provenance for metadata policy adjustments, disciplinary migration, and resource extensibility. For any repository, it is essential for ensuring long-term viability.
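    A hedged sketch of one simple temporal check in the spirit of the assessment described above: measuring how often a set of descriptive fields is populated, grouped by deposit year. The field names and record format are assumptions, not the repository's actual schema.

```python
from collections import defaultdict

# Hypothetical exported records; real repositories expose richer metadata.
records = [
    {"year": 2010, "title": "A", "creator": "X", "subject": ""},
    {"year": 2010, "title": "B", "creator": "",  "subject": "history"},
    {"year": 2014, "title": "C", "creator": "Y", "subject": "physics"},
]
FIELDS = ("title", "creator", "subject")

def completeness_by_year(records, fields=FIELDS):
    """Fraction of populated descriptive fields per deposit year."""
    filled, total = defaultdict(int), defaultdict(int)
    for record in records:
        for field in fields:
            total[record["year"]] += 1
            filled[record["year"]] += bool(record.get(field))
    return {year: filled[year] / total[year] for year in total}

for year, score in sorted(completeness_by_year(records).items()):
    print(f"{year}: {score:.2f}")
```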