
    Development of Incident Response Playbooks and Runbooks for Amazon Web Services Ransomware Scenarios

    In today’s digital landscape, enterprises face a wide range of cybersecurity challenges that jeopardize their critical digital assets. Modern cyber threats have evolved alongside the proliferation of cloud technologies, which push organizations towards platforms such as AWS for their convenience, cost reduction, and reliability. This transition, however, introduces new security risks, as threat actors craft and deploy advanced malware that explicitly targets the cloud. Ransomware remained one of the most impactful and dangerous cyber threats in 2023, encrypting data and demanding payment (usually in hard-to-trace cryptocurrency) for the decryption key. The confidentiality, integrity, and availability of cloud assets are perpetually at risk, and unprepared businesses suddenly hit by ransomware may find no way out. Beyond financial loss and operational disruption, the breach of sensitive information erodes trust, causing reputational damage that is hard to repair. Organizations are therefore urged to develop robust defensive strategies to identify, contain, and recover from ransomware and other cloud-focused attacks. Traditional cybersecurity approaches must be reshaped rapidly to handle these emerging threats, and specialized, well-structured incident response plans must become the bedrock of security strategy. This thesis examines the complexities of designing and implementing accurate incident response Playbooks and Runbooks for handling ransomware, particularly within Amazon Web Services (AWS). The research is closely tied to a real-world context, the result of a six-month internship at Bynder, a leading digital asset management company, and culminated in step-by-step procedures for responding to ransomware incidents in cloud infrastructures, improving communication and coordinating actions during high-pressure situations.
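
    To make the Runbook idea concrete, the sketch below shows what one automated containment step for an EC2-focused ransomware scenario might look like. It assumes Python with boto3, configured credentials, and a pre-provisioned quarantine security group; the region, IDs, and tag names are placeholders, and the fragment is an illustration rather than the procedures developed in the thesis.

        # Illustrative containment step for an AWS ransomware runbook.
        # Assumes boto3 credentials are configured and a quarantine security group exists.
        import boto3

        ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is a placeholder

        def contain_instance(instance_id: str, quarantine_sg_id: str) -> list[str]:
            """Isolate a suspected-compromised instance and snapshot its volumes."""
            # 1. Replace the instance's security groups with the quarantine group
            #    to cut off lateral movement while keeping the instance for forensics.
            ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg_id])

            # 2. Snapshot every attached EBS volume to preserve evidence before recovery.
            snapshot_ids = []
            reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
            for reservation in reservations:
                for instance in reservation["Instances"]:
                    for mapping in instance.get("BlockDeviceMappings", []):
                        volume_id = mapping["Ebs"]["VolumeId"]
                        snapshot = ec2.create_snapshot(
                            VolumeId=volume_id,
                            Description=f"Forensic snapshot of {volume_id} from {instance_id}",
                        )
                        snapshot_ids.append(snapshot["SnapshotId"])

            # 3. Tag the instance so responders can see containment has been applied.
            ec2.create_tags(Resources=[instance_id],
                            Tags=[{"Key": "ir-status", "Value": "contained"}])
            return snapshot_ids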

    IIoT Data Ness: From Streaming to Added Value

    In the emerging Industry 4.0 paradigm, the Internet of Things has been a driver of innovation, providing visibility into and control over the environment through sensor data analysis. However, the data arrives in such volume and at such velocity that conventional architectures cannot assure its quality. Data quality and observability have been argued to be key to a project’s success, allowing users to interact with data more effectively and rapidly, so incorporating data quality mechanisms is essential for extracting the most value from the data; doing so promises substantial financial and innovation gains for industry. To cope with this reality, this work presents a data-mesh-oriented methodology, built on state-of-the-art data management tools, for designing a solution that improves data quality in the Industrial Internet of Things (IIoT) space through data contextualization. Practices such as the FAIR data principles and data observability concepts were incorporated into the solution. The result is an architecture focused on data and metadata management that elevates data context, ownership, and quality.
    The concept of the Internet of Things (IoT) is one of the main success factors for the new Industry 4.0. By analysing the values that sensors collect from their environment, it is possible to build a platform capable of identifying conditions for success, and potential problems before they occur, resulting in relevant monetary gains for companies. However, this use case is not easy to implement, owing to the high volume and velocity of data coming from an IIoT (Industrial Internet of Things) environment.
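
    As a purely illustrative sketch of the kind of data quality mechanism the methodology argues for, the fragment below validates streaming sensor readings against a simple data contract (required fields, value ranges, freshness) and keeps observability counters. The field names, ranges, and thresholds are assumptions for illustration, not elements of the described architecture.

        # Minimal sketch of a streaming data quality check for IIoT readings.
        # The contract (fields, ranges, freshness window) is illustrative only.
        import time
        from collections import Counter

        REQUIRED_FIELDS = {"sensor_id", "timestamp", "temperature_c"}
        TEMP_RANGE_C = (-40.0, 120.0)       # plausible operating range (assumed)
        MAX_AGE_SECONDS = 60                # freshness window (assumed)

        quality_metrics = Counter()         # simple observability counters

        def check_reading(reading: dict) -> bool:
            """Return True if the reading satisfies the data contract."""
            if not REQUIRED_FIELDS.issubset(reading):
                quality_metrics["missing_field"] += 1
                return False
            if not TEMP_RANGE_C[0] <= reading["temperature_c"] <= TEMP_RANGE_C[1]:
                quality_metrics["out_of_range"] += 1
                return False
            if time.time() - reading["timestamp"] > MAX_AGE_SECONDS:
                quality_metrics["stale"] += 1
                return False
            quality_metrics["valid"] += 1
            return True

        # Example:
        # check_reading({"sensor_id": "press-07", "timestamp": time.time(), "temperature_c": 71.3})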

    Road to Microservices

    This document aims to clarify the complexity of microservices decomposition and its inherent implementation process. Developments in technology and design, higher performance, improved reliability, or lower costs are all valid reasons to control a product’s quality by guaranteeing its conformance with established characteristics and standards. Such control is made possible by adding quality control, inspection routines, and trend analysis to a manufacturing process, techniques well established in the Quality field both academically and in industry. Repeat Part Management (RPM) is software that allows users to apply these techniques efficiently and has brought value to the company. However, as RPM has grown to meet customer needs, issues have emerged in the form of accumulated technical debt. Such ever-growing modification pressure is common across business areas, and research on software architecture has evolved alongside it; the demand for highly modifiable, rapidly developed, independent systems has led to the adoption of microservices. Because of the characteristics of microservices, decomposing an existing system is a particular concern, which motivates research into decomposition approaches and techniques. The architecture also calls for legacy system analysis to map current functionalities and the dependencies between components. Critical concepts of a microservices architecture are then researched and implemented in a new system encompassing Repeat Part Management’s functionalities. This thesis thus explores the evolution towards a microservices architecture in an already well-defined, mature quality assessment domain.
    This document aims to specify the complexity of the process of decomposition into microservices and of its implementation. Advances in technology and design, achieving better performance or product reliability, or reducing costs are valid justifications for controlling product quality and, inherently, guaranteeing its conformance with previously defined characteristics and industry standards. Control over the products can be ensured by adding quality control methods, inspection routines, and (deviation) trend analysis to the production process. These techniques are well established academically and, from a market standpoint, in the area of Quality and quality assurance. Repeat Part Management (RPM) is software that allows its users to apply these quality techniques efficiently, which adds value to the company. However, due to growing customer needs, some problems have been identified, related to the accumulation of technical debt. This growing need for change is common across several business areas, and research into architectural solutions has kept pace with it. This architectural solution is characterized by easily modifiable systems, rapid development and deployment, and the independence of the decomposed services. When migrating a mature system to microservices, there is greater concern with decomposing the application and defining the services, given the characteristics of microservices, which encourages detailed research into decomposition techniques. For the same reason, this architecture encourages the mapping and documentation of dependencies between services and components, and the study of the legacy application. Moreover, these concepts and their implementation must be planned, justified, and documented, which explains the complexity of the migration and implementation process, which must take into account the functionalities existing in Repeat Part Management. Accordingly, this thesis explores the implementation of this architecture in a mature application in the Quality Assurance area.
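
    As an illustration of the kind of service such a decomposition could produce, the sketch below exposes a hypothetical inspection-results capability as a small standalone HTTP service using Flask. The endpoint, data model, and in-memory store are invented for illustration and are not taken from Repeat Part Management.

        # Hypothetical "inspection results" microservice carved out of a quality monolith.
        # Flask is used only to illustrate an independently deployable service boundary.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        # In-memory store standing in for the service's own database (one per microservice).
        inspections: dict[str, list[dict]] = {}

        @app.post("/parts/<part_id>/inspections")
        def record_inspection(part_id: str):
            """Record a quality inspection result for a part."""
            result = request.get_json()
            inspections.setdefault(part_id, []).append(result)
            return jsonify({"part_id": part_id, "count": len(inspections[part_id])}), 201

        @app.get("/parts/<part_id>/inspections")
        def list_inspections(part_id: str):
            """Return all inspection results recorded for a part."""
            return jsonify(inspections.get(part_id, []))

        if __name__ == "__main__":
            app.run(port=8080)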

    Utilizing AI/ML methods for measuring data quality

    High-quality data is crucial for trusted data-based decisions. A considerable part of current data quality measuring approaches involves expensive, expert, and time-consuming work that relies on manual effort to achieve adequate results. Furthermore, these approaches are prone to error and do not take full advantage of the potential of AI. A possible solution is to explore state-of-the-art ML-based methods that use this potential to overcome these issues. A significant part of the thesis deals with data quality theory, providing a comprehensive insight into the field. Four ML-based state-of-the-art methods were identified in the existing literature, and one novel method based on Autoencoders (AE) was proposed. Experiments were conducted with the AE and with Association Rule Mining supported by NLP. The proposed AE-based method proved able to detect potential data quality defects in real-world datasets. The Association Rule Mining approach was able to extract business rules for a given business question, but required significant preprocessing effort. Alternative non-AI methods were also analyzed but relied on expert and domain knowledge.
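
    As a rough illustration of the autoencoder idea (not the exact model or datasets from the thesis), the sketch below trains a small reconstruction network with scikit-learn and flags rows whose reconstruction error is unusually high as potential data quality defects. The layer size, iteration count, and percentile threshold are assumptions.

        # Illustrative autoencoder-style data quality check using scikit-learn.
        # Rows that reconstruct poorly are flagged as potential quality defects.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        def flag_suspect_rows(X: np.ndarray, percentile: float = 99.0) -> np.ndarray:
            """Return indices of rows with unusually high reconstruction error."""
            X_scaled = StandardScaler().fit_transform(X)

            # A narrow hidden layer forces a compressed representation of "normal" rows.
            autoencoder = MLPRegressor(hidden_layer_sizes=(X.shape[1] // 2 or 1,),
                                       max_iter=2000, random_state=0)
            autoencoder.fit(X_scaled, X_scaled)            # learn to reproduce the input

            reconstruction = autoencoder.predict(X_scaled)
            errors = np.mean((X_scaled - reconstruction) ** 2, axis=1)

            threshold = np.percentile(errors, percentile)  # assumed cut-off
            return np.where(errors > threshold)[0]

        # Example with synthetic data containing one corrupted row:
        # X = np.random.normal(size=(500, 6)); X[42] = 50.0
        # print(flag_suspect_rows(X))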

    Industrial Applications: New Solutions for the New Era

    This book reprints articles from the Special Issue "Industrial Applications: New Solutions for the New Age", published online in the open-access journal Machines (ISSN 2075-1702). The book consists of twelve published articles. The Special Issue belongs to the "Mechatronic and Intelligent Machines" section.

    Methodological Framework to Collect, Process, Analyze and Visualize Cyber Threat Intelligence Data

    Cyber attacks have increased in frequency in recent years, affecting small, medium, and large companies and creating an urgent need for tools capable of helping to mitigate such threats. This growth produces a large amount of threat data from heterogeneous sources that must be ingested, processed, and analyzed to obtain useful insights for mitigation. This study proposes a methodological framework to collect, organize, filter, share, and visualize cyber-threat data in order to mitigate attacks and fix vulnerabilities, based on an eight-step cyber threat intelligence model with timeline visualization of threat information and analytic data insights. We developed a tool in which the cyber security analyst can insert threat data, analyze it, and build a timeline to obtain insights and a better contextualization of a threat. Results show that the framework makes it easier to understand the context in which threats occur, rendering the mitigation of vulnerabilities more effective.
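
    A minimal sketch of the timeline-building step is shown below, assuming threat events already normalized into a common structure. The field names, sample indicators, and the pandas-based approach are assumptions for illustration and do not reproduce the tool described in the study.

        # Minimal sketch: normalize threat events from heterogeneous sources and
        # order them into a timeline for analyst review. Field names are assumed.
        import pandas as pd

        raw_events = [
            {"source": "ids",     "observed_at": "2023-03-01T09:12:00", "indicator": "198.51.100.7",       "type": "ip"},
            {"source": "osint",   "observed_at": "2023-03-01T10:40:00", "indicator": "bad-domain.example", "type": "domain"},
            {"source": "sandbox", "observed_at": "2023-02-28T22:05:00", "indicator": "9f86d081884c7d65",   "type": "hash"},
        ]

        def build_timeline(events: list[dict]) -> pd.DataFrame:
            """Return events sorted chronologically, ready for timeline visualization."""
            df = pd.DataFrame(events)
            df["observed_at"] = pd.to_datetime(df["observed_at"])
            return df.sort_values("observed_at").reset_index(drop=True)

        timeline = build_timeline(raw_events)
        print(timeline[["observed_at", "source", "type", "indicator"]])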

    Proceedings Ocean Biodiversity Informatics: International Conference on Marine Biodiversity Data Management, Hamburg, Germany 29 November to 1 December, 2004

    The International Conference on Marine Biodiversity Data Management ‘Ocean Biodiversity Informatics’ was held in Hamburg, Germany, from 29 November to 1 December 2004. Its objective was to offer marine biological data managers a forum to discuss the state of the field and to exchange ideas on how to further develop marine biological data systems. Many marine biologists are actively gathering knowledge, as they have been doing for a long time. What is new is that many of these scientists are willing to share their knowledge, including basic data, with others over the Internet. Our challenge now is to manage this trend, avoid confusing users with a multitude of contradictory sources of information, and make sure different data systems can be, and are, effectively integrated.