
    Configuration Management of Distributed Systems over Unreliable and Hostile Networks

    Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems. This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which grants full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of the minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration. Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high-load and low-memory conditions, and against packet loss and corruption. Due to the asynchronous nature of the Hidden Master architecture, only the research prototype was adaptable to a network without a stable topology. The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of the organization deploying the configuration.
The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, both for defining new configurations in unexpected situations using the base resources and for abstracting those configurations using standard language features, and that such a system seems easy to learn. Potential business benefits were identified and evaluated in individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
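As a concrete illustration of the idempotency idea, the sketch below defines a minimal "file" base resource in an imperative general-purpose language: re-applying it after the first run is a no-op because the resource revalidates actual system state before acting. Class and method names here are illustrative assumptions, not the research prototype's actual API.

```python
# Minimal sketch of an idempotent "file" base resource written in an
# imperative general-purpose language, in the spirit the thesis describes.
# Names (FileResource, apply, is_current) are illustrative, not the prototype's.
import os
import tempfile

class FileResource:
    """Ensure a file exists with the given content; re-applying is a no-op."""
    def __init__(self, path, content):
        self.path = path
        self.content = content

    def is_current(self):
        # Revalidation step: inspect the actual system state before acting.
        if not os.path.exists(self.path):
            return False
        with open(self.path) as f:
            return f.read() == self.content

    def apply(self):
        # Act only when the observed state differs from the desired state.
        if self.is_current():
            return "unchanged"
        with open(self.path, "w") as f:
            f.write(self.content)
        return "changed"

path = os.path.join(tempfile.mkdtemp(), "motd.txt")
r = FileResource(path, "welcome\n")
first, second = r.apply(), r.apply()  # second run detects the state is current
```

Abstractions over such base resources can then use ordinary language features (functions, classes, loops) instead of a separate declarative DSL.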

    A Survey on Enterprise Network Security: Asset Behavioral Monitoring and Distributed Attack Detection

    Enterprise networks that host valuable assets and services are popular and frequent targets of distributed network attacks. In order to cope with the ever-increasing threats, industrial and research communities develop systems and methods to monitor the behaviors of their assets and protect them from critical attacks. In this paper, we systematically survey related research articles and industrial systems to highlight the current status of this arms race in enterprise network security. First, we discuss the taxonomy of distributed network attacks on enterprise assets, including distributed denial-of-service (DDoS) and reconnaissance attacks. Second, we review existing methods for monitoring and classifying the network behavior of enterprise hosts to verify their benign activities and isolate potential anomalies. Third, state-of-the-art detection methods for distributed network attacks sourced from external attackers are elaborated, highlighting their merits and bottlenecks. Fourth, as programmable networks and machine learning (ML) techniques are increasingly being adopted by the community, their current applications in network security are discussed. Finally, we highlight several research gaps in enterprise network security to inspire future research. Comment: Journal paper submitted to Elsevier.
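One classic detection signal discussed in the DDoS-detection literature that such surveys cover (this sketch is illustrative and not a method from the paper itself) is the Shannon entropy of source addresses per time window: a flood from many distinct, possibly spoofed, sources drives the entropy sharply upward compared with normal traffic dominated by a few talkative hosts.

```python
# Illustrative entropy-based DDoS signal: compute Shannon entropy over the
# source addresses observed in one time window. Addresses below are made up.
from collections import Counter
from math import log2

def source_entropy(packets):
    """packets: iterable of source-address strings seen in one window."""
    counts = Counter(packets)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

normal = ["10.0.0.1"] * 80 + ["10.0.0.2"] * 20       # few talkative hosts
attack = [f"198.51.100.{i}" for i in range(100)]     # many distinct sources

# A detector would alarm when the windowed entropy exceeds a learned baseline.
```

A real system would combine this with other features (destination entropy, flow rates) rather than rely on one signal alone.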

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and internet broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today’s home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote access technologies such as VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services; hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges underline the importance of home-based Unified Threat Management (UTM) systems. There is a need for a unified threat management framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS provides 99.99% accuracy with a 56.83% memory improvement. IPS stands out as the most resource-intensive UTM service; SUTMS reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified in SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS).
SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with the cloud for real-time updates.
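The "smart logging (utilizing Apriori algorithms)" mentioned above suggests frequent-itemset mining over log records. The sketch below is a generic Apriori-style miner over toy firewall-log attributes, purely to illustrate the technique; the field names, log format, and support threshold are assumptions, not SUTMS internals.

```python
# Generic Apriori-style frequent-itemset mining over log-event attributes,
# as "smart logging" might use to surface attribute sets that co-occur often.
# Field names and the support threshold are illustrative, not SUTMS's.
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Return {itemset: count} for itemsets in >= min_support transactions."""
    items = {i for t in transactions for i in t}
    frequent = {}
    k_sets = [frozenset([i]) for i in sorted(items)]
    while k_sets:
        counts = Counter()
        for t in transactions:
            for s in k_sets:
                if s <= t:
                    counts[s] += 1
        current = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(current)
        # Candidate generation: join surviving k-itemsets into (k+1)-itemsets.
        survivors = list(current)
        k_sets = list({a | b for a, b in combinations(survivors, 2)
                       if len(a | b) == len(a) + 1})
    return frequent

logs = [frozenset(t) for t in [
    {"proto=tcp", "port=22", "action=drop"},
    {"proto=tcp", "port=22", "action=drop"},
    {"proto=udp", "port=53", "action=allow"},
    {"proto=tcp", "port=22", "action=allow"},
]]
freq = frequent_itemsets(logs, min_support=3)
```

Frequent attribute combinations like `{proto=tcp, port=22}` can then be summarized once instead of logged verbatim, which is one way such mining reduces log volume.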

    How Organizations Sustain and Navigate Between (De)centralization Equilibria: A Process Model

    Finding the ‘right’ balance between centralization and decentralization in organizational processes, governance, and IT can be difficult. To navigate this tension, organizations need to find (de)centralization equilibria that are often dynamic and depend on organizational strategy and context. However, little is known about how organizations should respond once an old equilibrium is punctuated or breaks down. In this paper, we therefore conduct an inductive multiple-case study to investigate how organizations sustain and transition between (de)centralization equilibria. We synthesize our insights into a process model that frames the transition as an iterative recalibration process subject to centralization and decentralization tensions. Often, this process will require local and temporary compromises. Our work contributes a much-needed process perspective to the IS literature on (de)centralization.

    Evaluation of the digital security process in companies of the electricity sector in Colombia

    Generators and operators of Colombia's national electrical system are integrating advanced digital technologies to automate and control the physical functions of critical infrastructure, improving performance by interconnecting digital control and measurement devices over a data network that may be exposed to cyber threats.
Companies in the electricity sector in Colombia have developed information security and cybersecurity strategies that seek to build capabilities in cybersecurity management, situational awareness, incident response and cyber defense, identity and access control management, the definition of new cybersecurity strategies, and a new risk-oriented cybersecurity management. The digital security process must therefore be constantly evaluated, and it currently lacks a defined plan to guarantee its continuous improvement. The purpose of this project is to evaluate the digital security process defined in the comprehensive management system of a company in the electricity sector in Colombia. Initially, the current state of the process will be identified by verifying the treatment plans, operation mechanisms, and digital security controls. The C2M2 cybersecurity capability maturity model will then be applied to evaluate the digital security process and, finally, to define an appropriate plan to ensure its continuous improvement.

    SOC ATTACKER CENTRIC - Analysis of a prevention oriented SOC

    This thesis will explain what a Security Operations Center (SOC) is and how it works, analyzing all the different phases and modules that make up the final product. Typically, a SOC centralizes all of the company’s information in one place where it can constantly keep an eye on the data and monitor the system. The IT infrastructure is analyzed in real time for anomalies, malicious activities, or intrusion attempts. Not only the data sent from one machine to another, but also the physical state and resources (e.g., memory and CPU) are constantly monitored. Through the creation and use of multiple detection rules, various alerts are generated and then reviewed by the SOC analyst team, which promptly informs customers when needed. The state of the art will be explored to study current SOCs and the best practices they adopt. Then the innovative SOC Attacker Centric (SOC-AC) developed by the company Wuerth Phoenix will be analyzed. The functioning of the SOC-AC will be studied and explained, highlighting how it adds to the classic suite of SOC services an extra component focused on the attacker’s point of view. The SOC-AC is capable of covering the reconnaissance phase, usually neglected by SOCs, in which attackers gather information about a target in order to find the best strategy to break in and successfully carry out the attack. In the last part of the thesis, the design and implementation of an automatic SOC reporting functionality will be shown. An important feature is to have an efficient communication channel with the customer and to provide them with data on the status of the SOC they are paying for. Initially, this procedure was a static, manually executed, error-prone process.
The procedure was improved by creating a semi-automatic system for report generation and delivery using the Elastic SIEM and several languages, including Python, Bash, Lucene, and the Elastic and Kibana query languages, leaving the reporter with fewer parts to analyze and document and saving time and resources.
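The query-building step of such a pipeline can be sketched as constructing an Elasticsearch query-DSL body that selects one customer's alerts for the reporting period and aggregates them by severity. The index schema and field names below are assumptions for illustration, not the company's actual setup.

```python
# Hypothetical report-pipeline step: build the Elasticsearch query-DSL body
# that would be POSTed to <index>/_search. Field names are assumed examples.
import json

def alert_report_query(customer, start, end):
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"customer.keyword": customer}},
                    {"range": {"@timestamp": {"gte": start, "lt": end}}},
                ]
            }
        },
        # One bucket per alert severity, for the report's summary table.
        "aggs": {"by_severity": {"terms": {"field": "severity.keyword"}}},
        "size": 0,  # aggregations only; individual hits are not needed
    }

body = alert_report_query("acme", "2023-01-01", "2023-02-01")
payload = json.dumps(body)  # serialized request body for the _search endpoint
```

Generating the body programmatically, rather than editing a saved query by hand, is what removes the error-prone manual step described above.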

    Compliance analysis for cyber security marine standards: Evaluation of compliance using application lifecycle management tools

    The aim of this thesis is to analyse cyber security requirements and notations from marine classification societies and other entities, in order to understand how to meet compliance with current cyber security requirements from maritime class societies and other maritime organizations. The methods used in this research involved a desk review of cyber security requirements from IACS members (IACS UR E27 and IEC 62443), a survey questionnaire on cyber security standards pertinent to maritime product development, and Polarion, an application lifecycle management solution used to synthesize the cyber security requirements from the maritime class societies and determine their correlation to IEC 62443 as a baseline. Results indicate that IEC 62443 correlates with the standards from DNV and IACS (UR E27), and the majority of the requirements were deemed compliant in compliance gap assessments of a maritime product. The conclusion is that IEC 62443, combined with a requirements management tool like Polarion, can be utilised as a baseline cyber requirement to analyse and satisfy compliance requirements from maritime class societies and maritime organizations that base their cyber security requirements on IACS UR E27 and IEC 62443-3-3, and that this approach should be adopted in future compliance analysis of cyber requirements focusing on autonomous shipping.

    Uncertainty in runtime verification: a survey

    Runtime verification can be defined as a collection of formal methods for studying the dynamic evaluation of execution traces against formal specifications. Aside from creating a monitor from specifications and building algorithms for the evaluation of the trace, gathering events, making them available to the monitor, and the communication between the system under analysis and the monitor are critical steps in the runtime verification process. In many situations, and for a variety of reasons, the event trace may be incomplete or may contain imprecise events. When a missing or ambiguous event is detected, the monitor may be unable to deliver a sound verdict. In this survey, we review the literature dealing with the problem of monitoring with incomplete traces. We list the different causes of uncertainty that have been identified and analyze their effect on the monitoring process. We identify and compare the different methods that have been proposed to perform monitoring on such traces, highlighting the advantages and drawbacks of each method.
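The soundness problem the survey analyzes can be made concrete with a toy monitor. The sketch below (illustrative only, not taken from any surveyed tool) checks the safety property "no 'write' may follow 'close'" over traces where a lost event is marked `None`; rather than risk an unsound verdict, the monitor degrades to "inconclusive" whenever a gap could hide the property-relevant event.

```python
# Toy three-valued monitor for "no 'write' after 'close'" on lossy traces.
# A None entry marks a missing/ambiguous event. Illustrative, not a real tool.
def monitor(trace):
    closed, gap_seen = False, False
    for event in trace:
        if event is None:          # lost event: record the uncertainty
            gap_seen = True
            continue
        if event == "close":
            closed = True
        elif event == "write" and closed:
            return "violation"     # definite: property broken regardless of gaps
        elif event == "write" and gap_seen:
            return "inconclusive"  # the lost event might have been a 'close'
    return "inconclusive" if gap_seen else "ok"
```

The methods compared in the survey differ mainly in how aggressively they shrink this "inconclusive" region, e.g. by modeling which event types the gap could contain.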

    Reinforcing Digital Trust for Cloud Manufacturing Through Data Provenance Using Ethereum Smart Contracts

    Cloud Manufacturing (CMfg) is an advanced manufacturing model that caters to fast-paced agile requirements (Putnik, 2012). For manufacturing complex products that require extensive resources, manufacturers explore advanced manufacturing techniques like CMfg, as it becomes infeasible to achieve high standards through complete ownership of manufacturing artifacts (Kuan et al., 2011). CMfg, also known as Manufacturing as a Service (MaaS) and Cyber Manufacturing (NSF, 2020), addresses the shortcomings of traditional manufacturing by building a virtual cyber enterprise of geographically distributed entities that manufacture custom products through collaboration. With manufacturing venturing into cyberspace, digital trust issues concerning product quality, data, and intellectual property security become significant concerns (R. Li et al., 2019). This study establishes a trust mechanism through data provenance for ensuring digital trust between the various stakeholders involved in CMfg. A trust model with smart contracts built on the Ethereum blockchain implements data provenance in CMfg. The study covers three data provenance models using Ethereum smart contracts for establishing digital trust in CMfg: Product Provenance, Order Provenance, and Operational Provenance. Together, these provenance models address the most important questions regarding CMfg: what goes into the product, who manufactures the product, who transports the products, under what conditions the products are manufactured, and whether regulatory constraints and requisites are met.
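The core data structure behind such provenance models can be illustrated off-chain: a hash-linked chain of records where each entry commits to its predecessor, so tampering with any step is detectable by re-verification. The sketch below is a conceptual Python analogue (field names assumed), not the study's actual Solidity contracts.

```python
# Conceptual hash-linked provenance chain, as a smart contract might store:
# each record commits to the digest of the previous one. Fields are assumed.
import hashlib
import json

GENESIS = "0" * 64

def record(prev_hash, actor, action):
    entry = {"prev": prev_hash, "actor": actor, "action": action}
    digest = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry, digest

def verify(chain):
    """chain: list of (entry, digest); recompute and check every link."""
    prev = GENESIS
    for entry, digest in chain:
        if entry["prev"] != prev:
            return False  # broken back-link
        recomputed = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        if recomputed != digest:
            return False  # entry was altered after recording
        prev = digest
    return True

e1, h1 = record(GENESIS, "supplier", "raw material shipped")
e2, h2 = record(h1, "manufacturer", "part machined")
chain = [(e1, h1), (e2, h2)]
```

On Ethereum, the digests would live in contract storage, so the chain inherits the blockchain's immutability rather than relying on a trusted database.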

    Validation and Verification of Safety-Critical Systems in Avionics

    This research addresses the issues of safety-critical systems verification and validation. Safety-critical systems such as avionics systems are complex embedded systems. They are composed of several hardware and software components whose integration requires verification and testing in compliance with the Radio Technical Commission for Aeronautics standards and their supplements (RTCA DO-178C). Avionics software requires certification before its deployment into an aircraft system, and testing is mandatory for certification. Until now, the avionics industry has relied on expensive manual testing, and it is searching for better (quicker and less costly) solutions. This research investigates formal verification and automatic test case generation approaches to enhance the quality of avionics software systems, ensure their conformity to the standard, and provide artifacts that support their certification. The contributions of this thesis are model-based automatic test case generation approaches that satisfy the MC/DC criterion, and bidirectional requirement traceability between low-level requirements (LLRs) and test cases. In the first contribution, we integrate model-based verification of properties and automatic test case generation in a single framework. The system is modeled as an extended finite state machine (EFSM) that supports both the verification of properties and automatic test case generation. The EFSM models the control and dataflow aspects of the system. For verification, we model the system and some properties and ensure that the properties are correctly propagated to the implementation via mandatory testing. For testing, we extend an existing test case generation approach with the MC/DC criterion to satisfy RTCA DO-178C requirements. Both local test cases for each component and global test cases for their integration are generated. The second contribution is a model checking-based approach for automatic test case generation.
In the third contribution, we developed an EFSM-based approach that uses constraint solving to handle test case feasibility and addresses bidirectional requirements traceability between LLRs and test cases. Traceability elements are determined at a low level of granularity, and then identified, linked to their source artifact, created, stored, and retrieved for several purposes. Requirements traceability has been extensively studied, but not at the proposed low level of granularity.
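The MC/DC criterion targeted above requires, for each condition in a decision, a pair of test vectors where only that condition changes and the decision's outcome flips. For small decisions this can be found by brute force, as the hedged sketch below shows; the example decision is arbitrary and not from the thesis.

```python
# Brute-force search for MC/DC independence pairs: for each condition i, find
# two input vectors differing only at i whose decision outcomes differ.
# The decision under test here is an arbitrary example, not from the thesis.
from itertools import product

def mcdc_pairs(decision, n_conditions):
    """Map each condition index to one independence pair of input vectors."""
    pairs = {}
    vectors = list(product([False, True], repeat=n_conditions))
    for i in range(n_conditions):
        for v in vectors:
            w = list(v)
            w[i] = not w[i]          # flip only condition i
            w = tuple(w)
            if decision(*v) != decision(*w):
                pairs[i] = (v, w)    # outcome flips: independence shown
                break
    return pairs

decision = lambda a, b, c: a and (b or c)  # example decision, 3 conditions
pairs = mcdc_pairs(decision, 3)
```

Real MC/DC generators work symbolically (e.g. via constraint solving over the EFSM guards) rather than enumerating, but the independence-pair obligation they discharge is exactly this one.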