
    Securing critical utility systems & network infrastructures

    Master's thesis, Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2009. Modern critical utility IT infrastructures are supported by numerous complex systems. These systems allow real-time management and information collection, which is the basis of efficient service management operations. The use of commercial off-the-shelf (COTS) hardware and software in SCADA systems brought major financial advantages in purchasing and developing technical solutions. On the other hand, the generalized use of COTS hardware and software in SCADA systems exposed critical infrastructures to the security problems of a corporate IT infrastructure. A significant challenge for IT teams is managing critical utility IT infrastructures effectively, even after adopting security best practices that guide the management, operation, and configuration of the systems and network components that comprise those infrastructures. This research project proposes to survey integrated management software that can address the specific security constraints of a SCADA infrastructure supported by COTS software. Additionally, it proposes to investigate techniques for creating operational profiles of the Operating Systems that support critical utility IT infrastructures, and it will describe how strategic management decisions impact the tactical operations management of an IT environment. We will investigate desirable technical management elements in support of operational management.
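
    The notion of an operational profile of an Operating System invites a small illustration. Below is a minimal sketch of what building such a profile could look like, assuming the third-party psutil library is available and treating a profile simply as mean/deviation summaries of periodically sampled resource metrics; the metric choice and sampling interval are illustrative assumptions, not the thesis's method.

```python
import time
import statistics

import psutil  # third-party library for cross-platform system metrics


def sample_metrics():
    """Take one snapshot of basic OS resource indicators."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=None),
        "mem_percent": psutil.virtual_memory().percent,
        "num_procs": len(psutil.pids()),
    }


def build_profile(samples):
    """Summarize a series of snapshots into (mean, stdev) per metric."""
    profile = {}
    for key in samples[0]:
        values = [s[key] for s in samples]
        profile[key] = (statistics.mean(values), statistics.pstdev(values))
    return profile


if __name__ == "__main__":
    observations = []
    for _ in range(10):          # ten snapshots, one second apart
        observations.append(sample_metrics())
        time.sleep(1)
    for metric, (mean, stdev) in build_profile(observations).items():
        print(f"{metric}: mean={mean:.1f} stdev={stdev:.1f}")
```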

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) within the scientific community, because any attack or anomaly in the network can greatly affect many domains, such as national security, private data storage, social welfare, and economic issues. Anomaly detection is therefore a broad research area, and many different techniques and approaches for this purpose have emerged through the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile, called Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, identifying disparities in the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets, and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information needed to solve them. Through evaluation metrics, the addition of a second anomaly detection approach, and comparisons with other methods, all performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false-alarm generation and detection accuracy in the detection scheme. Moreover, the low complexity and agility of the proposed system make it suitable for real-time detection. These results seek to contribute to the advance of the state of the art in anomaly detection methods and strategies, aiming to surpass some of the challenges that emerge from the constant growth in complexity, speed, and size of today's large-scale networks.
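
    As a rough illustration of the profile-based idea, the sketch below builds a "normal" traffic signature from historical per-bin volume vectors using PCA and flags a day whose reconstruction error exceeds a threshold learned from the training residuals. This is a minimal sketch assuming NumPy; the bin width, component count, and threshold rule are placeholder choices, not the thesis's exact DSNSF formulation.

```python
import numpy as np


def build_signature(history, k=2):
    """Fit a k-component PCA model to historical traffic.

    history: (days, bins) matrix, e.g. daily bit counts per 5-minute bin.
    Returns the mean day and the top-k principal directions.
    """
    mean = history.mean(axis=0)
    centered = history - mean
    # SVD of the centered history; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]


def residual(day, mean, components):
    """Reconstruction error of one day against the signature."""
    centered = day - mean
    projection = components.T @ (components @ centered)
    return np.linalg.norm(centered - projection)


# Toy usage: 30 synthetic "normal" days of 288 five-minute bins each.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 2 * np.pi, 288)) + 2.0
normal_days = base + 0.05 * rng.standard_normal((30, 288))
mean, comps = build_signature(normal_days, k=2)

train_res = [residual(d, mean, comps) for d in normal_days]
threshold = np.mean(train_res) + 3 * np.std(train_res)

anomalous = base + 0.05 * rng.standard_normal(288)
anomalous[100:120] += 1.5          # injected volume anomaly
print("anomaly detected:", residual(anomalous, mean, comps) > threshold)
```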

    Website Performance Evaluation and Estimation in an E-business Environment

    This thesis introduces a new Predictus model for performance evaluation and estimation in a multi-layer website environment. The model is based on soft-computing ideas, i.e. simulation and statistical analysis. The aim is to improve the energy consumption of the website's hardware and the efficiency of the investment, and to avoid loss of availability. Optimised exploitation reduces energy and maintenance costs on the one hand and increases end-user satisfaction through robust and stable web services on the other. A method based on simulating user requests is described. Instead of an ordinary static parameter set, the parameters are extracted dynamically from previous log files: the distribution of past requests is exploited to generate a natural load based on actual usage. Because the server system is loaded with valid and well-known requests, its behaviour remains natural, and a feedback loop on workload generation ensures the validity of the workload in the long term. A method for identifying the actual performance of the website is also described. Simulating usage by a large number of virtual users issuing the well-known load, while observing the utilisation rate of server resources, yields the best information about the internal state of the system. Disruption of the live website can be avoided by using mathematical extrapolation to estimate the saturation point of each individual server resource.
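
    The log-driven load generation can be illustrated compactly: instead of a fixed parameter set, requests are drawn according to their empirical frequency in past access logs, so the synthetic load mirrors real usage. A minimal sketch under the simplifying assumption that the log holds one request path per line; the paths and counts below are invented for the toy example.

```python
import random
from collections import Counter


def empirical_distribution(log_lines):
    """Count how often each request appears in the access log."""
    counts = Counter(line.strip() for line in log_lines if line.strip())
    urls = list(counts)
    weights = [counts[u] for u in urls]
    return urls, weights


def generate_load(urls, weights, n):
    """Draw n requests that follow the observed distribution."""
    return random.choices(urls, weights=weights, k=n)


# Toy log: /index is requested far more often than /report.
log = ["/index"] * 80 + ["/search?q=x"] * 15 + ["/report"] * 5
urls, weights = empirical_distribution(log)
synthetic = generate_load(urls, weights, 1000)
print(Counter(synthetic))   # roughly preserves the 80/15/5 mix
```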

    Network Performance Management Using Application-centric Key Performance Indicators

    The Internet and intranets are viewed as capable of supplying Anything, Anywhere, Anytime, and e-commerce, e-government, e-community, and military C4I are now deploying many and varied applications to serve their needs. Network management is currently centralized in operations centers. To assure customer satisfaction with network performance, operators typically plan, configure, and monitor the network devices to ensure an excess of bandwidth, that is, they overprovision. If this proves uneconomical, or if complex and poorly understood interactions of equipment, protocols, and application traffic degrade performance and create customer dissatisfaction, another, more application-centric way of managing the network will be needed. This research investigates a new qualitative class of network performance measures derived from the current quantitative metrics known as quality of service (QOS) parameters. The proposed class of qualitative indicators focuses on utilizing current network performance measures (QOS values) to derive abstract quality of experience (QOE) indicators by application class. These measures may provide a more user- or application-centric means of assessing network performance, even when some individual QOS parameters approach or exceed specified levels. The mathematics of functional analysis suggests treating QOS performance values as a vector; by mapping the degradation of the application performance to a characteristic lp-norm curve, a qualitative QOE value (good/poor) can be calculated for each application class. A similar procedure could calculate a QOE node value (satisfactory/unsatisfactory) to represent the service level of the switch or router for the current mix of application traffic. To demonstrate the utility of this approach, a discrete event simulation (DES) test-bed was created in the OPNET telecommunications simulation environment, modeling the topology and traffic of three semi-autonomous networks connected by a backbone. Scenarios designed to degrade performance by under-provisioning links or nodes are run to evaluate QOE for an access network; the application classes and traffic load are held constant. Future research would include refinement of the mathematics, many additional simulations, and scenarios varying other independent variables. Finally, collaboration with researchers in areas as diverse as human-computer interaction (HCI), software engineering, teletraffic engineering, and network management would enhance the concepts modeled.
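
    The vector treatment of QOS values can be made concrete. Below is a minimal sketch assuming each QOS parameter is first normalized against its acceptable limit, so 1.0 marks the degradation boundary; the p value, the cut-off, and the per-class limits are illustrative choices, not the dissertation's calibrated curves.

```python
def qoe_indicator(qos_values, limits, p=2.0, cutoff=1.0):
    """Map a vector of QOS measurements to a qualitative QOE value.

    qos_values: measured parameters, e.g. {"delay_ms": 120, "loss_pct": 0.2}
    limits:     per-parameter acceptable limits on the same keys.
    Each parameter is normalized by its limit, the lp-norm of the
    normalized vector is computed, and the result is compared to a cutoff.
    """
    normalized = [qos_values[k] / limits[k] for k in limits]
    norm = sum(abs(x) ** p for x in normalized) ** (1.0 / p)
    return ("good" if norm <= cutoff else "poor"), norm


# Voice-like application class: tight delay/jitter/loss limits (invented).
limits = {"delay_ms": 150.0, "jitter_ms": 30.0, "loss_pct": 1.0}
measured = {"delay_ms": 120.0, "jitter_ms": 10.0, "loss_pct": 0.2}
print(qoe_indicator(measured, limits))   # -> ('good', ~0.89)
```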

    Network traffic management for the next generation Internet

    Measurement-based performance evaluation of network traffic is a fundamental prerequisite for the provisioning of managed and controlled services on short timescales, as well as for enabling accountability of network resources. The steady introduction and deployment of the Internet Protocol Next Generation (IPNG-IPv6) promises a network address space that can accommodate any device capable of generating a digital heartbeat. In such a ubiquitous communication environment, Internet traffic measurement becomes particularly important, especially for the assured provisioning of differentiated levels of service quality to the different application flows. The non-identical response of flows to the different types of network-imposed performance degradation, and the foreseeable expansion of networked devices, raise the need for ubiquitous measurement mechanisms that are equally applicable to different applications and transports. This thesis introduces a new measurement technique that exploits native features of IPv6 to become an integral part of the Internet's operation and to provide intrinsic support for performance measurements at the universally present network layer. IPv6 Extension Headers are used to carry both the triggers that invoke the measurement activity and the instantaneous measurement indicators in-line with the payload data itself, providing a high level of confidence that the behaviour of the real user traffic flows is observed. The in-line measurement mechanism has been critically compared and contrasted with existing measurement techniques, and its design and a software-based prototype implementation have been documented. The developed system has been used to provisionally evaluate numerous performance properties of a diverse set of application flows over IPv6 experimental configurations of different capacities. Through experimentation and theoretical argumentation, it has been shown that IPv6-based in-line measurements can form the basis for accurate and low-overhead performance assessment of network traffic flows on short timescales, by being dynamically deployed where and when required in a multi-service Internet environment.
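
    To give a flavour of the in-line idea, the sketch below hand-packs an IPv6 Destination Options extension header carrying a send timestamp as an option TLV, so a receiver could derive a delay sample from the very packet that carries the payload. This is a minimal sketch using only the Python standard library; the option type 0x1E (an experimental value) and the payload layout are assumptions for illustration, not the thesis's exact encoding.

```python
import struct
import time

NEXT_HEADER_UDP = 17
EXPERIMENTAL_OPT = 0x1E   # experimental option type, illustrative only


def dest_options_with_timestamp(next_header=NEXT_HEADER_UDP):
    """Build an IPv6 Destination Options header holding a send timestamp.

    Layout per RFC 8200: Next Header (1 byte) | Hdr Ext Len (1 byte, in
    8-octet units minus one) | options. The option TLV is type, data
    length, then a 64-bit nanosecond timestamp; a PadN option brings the
    header to a multiple of 8 octets.
    """
    ts = time.time_ns()
    option = struct.pack("!BBQ", EXPERIMENTAL_OPT, 8, ts)  # type, len, data
    # Header so far: 2 fixed bytes + 10 option bytes = 12; pad to 16.
    padn = struct.pack("!BB", 1, 2) + b"\x00\x00"          # PadN, 2 bytes
    hdr_ext_len = (2 + len(option) + len(padn)) // 8 - 1
    return struct.pack("!BB", next_header, hdr_ext_len) + option + padn


def read_timestamp(header):
    """Extract the timestamp a sender placed in the header."""
    opt_type, opt_len, ts = struct.unpack_from("!BBQ", header, 2)
    assert opt_type == EXPERIMENTAL_OPT and opt_len == 8
    return ts


hdr = dest_options_with_timestamp()
delay_ns = time.time_ns() - read_timestamp(hdr)
print(f"header: {hdr.hex()}  delay sample: {delay_ns} ns")
```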

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and internet broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today's home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote-access technologies such as VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services, and hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges show the importance of home-based Unified Threat Management (UTM) systems: a unified threat management framework developed specifically for home and small networks is needed to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS is able to provide 99.99% accuracy with 56.83% memory improvements. IPS stands out as the most resource-intensive UTM service; SUTMS successfully reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, e.g., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified in SUTMS through a second artifact, the Secure Centralized Management System (SCMS): a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
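
    The flow-analysis integration can be sketched with a toy classifier: each flow record is reduced to a handful of coarse features, usable even for encrypted traffic, and checked against per-feature bounds learned from benign traffic, which is far cheaper than running a full IPS over every packet. A minimal sketch; the feature set, bounds, and Flow record layout are invented for illustration and are not the SUTMS engine.

```python
from dataclasses import dataclass


@dataclass
class Flow:
    """One aggregated flow record (e.g. exported by the router)."""
    src: str
    dst: str
    dst_port: int
    packets: int
    bytes: int
    duration_s: float


def features(flow):
    """Reduce a flow to coarse features that work for encrypted traffic."""
    pps = flow.packets / max(flow.duration_s, 1e-6)
    avg_size = flow.bytes / max(flow.packets, 1)
    return {"pps": pps, "avg_pkt_size": avg_size}


# Illustrative benign bounds; a real system would learn these from traffic.
BOUNDS = {"pps": (0.1, 500.0), "avg_pkt_size": (40.0, 1500.0)}


def is_anomalous(flow):
    """Flag the flow if any feature leaves its benign range."""
    return any(not (BOUNDS[k][0] <= v <= BOUNDS[k][1])
               for k, v in features(flow).items())


probe = Flow("10.0.0.7", "192.0.2.9", 443, packets=90000,
             bytes=5400000, duration_s=3.0)   # 30000 pps burst
print(is_anomalous(probe))   # True: packet rate far outside benign bounds
```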

    Overcoming Data Breaches and Human Factors in Minimizing Threats to Cyber-Security Ecosystems

    This mixed-methods study focused on the internal human factors responsible for data breaches that could cause adverse impacts on organizations. Based on the Swiss cheese theory, the study was designed to examine preventative measures that managers could implement to minimize potential data breaches resulting from internal employees' behaviors. The purpose of this study was to give managers insight into developing strategies that could prevent data breaches from cyber-threats, by focusing on the specific internal human factors responsible for data breaches, their root causes, and the preventive measures that could minimize threats from internal employees. Data were collected from 10 managers and 12 employees from the business sector, and 5 government managers, in Ivory Coast, Africa. The mixed methodology focused on the why and who using a phenomenological approach consisting of a survey, face-to-face interviews with open-ended questions, and a questionnaire, to extract the participants' experiences and perceptions about preventing the adverse consequences of cyber-threats. The results indicated the importance of top managers being committed to a coordinated, continuous effort throughout the organization to ensure cyber-security awareness, training, and compliance with security policies and procedures, as well as implementing and upgrading software designed to detect and prevent data breaches both internally and externally. The findings of this study could contribute to social change by educating managers about preventing data breaches; those managers may in turn implement information accessibility without retribution. Protecting confidential data is a major concern, because a single data breach can impact many people and jeopardize the viability of an entire organization.

    ANALYSIS OF DATA & COMPUTER NETWORKS IN STUDENTS' RESIDENTIAL AREA IN UNIVERSITI TEKNOLOGI PETRONAS

    In Universiti Teknologi Petronas (UTP), most students depend on the Internet and the computer network to obtain academic information and share educational resources. Even though Internet connections and computer networks are provided, the service frequently experiences interruptions, such as slow Internet access, virus and worm distribution, and network abuse by irresponsible students. As the UTP organization keeps expanding, the need for better service in UTP increases. Several approaches were put into practice to address the problems. Research on data and computer networks was performed to understand the network technology applied in UTP, and questionnaire forms were distributed among the students to obtain feedback and statistical data about UTP's network in the Students' Residential Area. The studies concentrate only on the Students' Residential Area, as it is where most of the users reside. From the survey, it can be observed that 99% of the students access the network almost 24 hours a day. In 2005, the 2 Mbps allocated bandwidth was utilized at 100% almost continuously, but in 2006 the Internet access bottleneck was reduced significantly after the allocated bandwidth was increased to 8 Mbps. Server degradation due to irresponsible acts by users also adds burden to the main server. In general, if the proposal to the ITMS (Information Technology & Media Services) Department to improve its Quality of Service (QoS) and to establish a UTP Computer Emergency Response Team (UCert) is accepted, most of the issues addressed in this report can be solved.

    Automated IT Service Fault Diagnosis Based on Event Correlation Techniques

    In previous years, a paradigm shift in the area of IT service management could be witnessed. IT management no longer deals only with the network, end systems, or applications, but is more and more concerned with IT services. This is caused by the need for organizations to monitor the efficiency of internal IT departments and to have the possibility of subscribing to IT services from external providers. This trend has raised new challenges in the area of IT service management, especially with respect to service level agreements laying down the quality of service to be guaranteed by a service provider. Fault management is also facing new challenges related to ensuring compliance with these service level agreements. For example, high utilization of network links in the infrastructure can imply a delay increase in the delivery of services with respect to agreed time constraints. Such relationships have to be detected and treated in a service-oriented fault diagnosis, which therefore does not deal with faults in a narrow sense, but with service quality degradations. This thesis aims at providing a concept for service fault diagnosis, an important part of IT service fault management. First, the need for further examination of this issue is motivated, based on an analysis of the services offered by a large IT service provider. A generalization of the scenario forms the basis for the specification of requirements, which are used for a review of related research work and commercial products. Even though some solutions for particular challenges have already been provided, a general approach for service fault diagnosis is still missing. To address this issue, a framework is presented in the main part of this thesis, with an event correlation component as its central part. Event correlation techniques that have been successfully applied to fault management in the area of network and systems management are adapted and extended accordingly. Guidelines for applying the framework to a given scenario are provided afterwards. To show their feasibility in a real-world scenario, they are applied to both example services referenced earlier.
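
    The central idea, correlating low-level resource events with the services whose quality they can degrade through a dependency model, can be shown in miniature. A minimal sketch, assuming services and resources form a simple dependency mapping; the service names, resources, and events below are invented for illustration, not the thesis's framework.

```python
from collections import defaultdict

# Illustrative dependency model: service -> resources it relies on.
DEPENDENCIES = {
    "web_shop":  {"db_cluster", "link_A", "app_server_1"},
    "email":     {"mail_server", "link_A"},
    "reporting": {"db_cluster", "app_server_2"},
}


def correlate(events):
    """Group resource events by the services they can degrade.

    events: iterable of (resource, description) pairs, e.g. from
    monitoring. Returns service -> list of candidate root-cause events.
    """
    suspects = defaultdict(list)
    for resource, description in events:
        for service, resources in DEPENDENCIES.items():
            if resource in resources:
                suspects[service].append((resource, description))
    return dict(suspects)


incoming = [
    ("link_A", "utilization above 95%"),
    ("db_cluster", "replication lag 40s"),
]
for service, causes in correlate(incoming).items():
    print(f"{service}: possible degradation from {causes}")
```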

    Service-oriented architecture for device lifecycle support in industrial automation

    Dissertation for the degree of Doctor in Electrical and Computer Engineering, specialty: Robotics and Integrated Manufacturing. This thesis addresses device lifecycle support within the service-oriented industrial automation domain. This domain is known for its plethora of heterogeneous equipment encompassing distinct functions, form factors, network interfaces, and I/O specifications, supported by dissimilar software and hardware platforms. There is thus an evident and growing need to take every device into account and to improve agility during the setup, control, management, monitoring, and diagnosis phases. The Service-Oriented Architecture (SOA) paradigm is currently a widely endorsed approach for both business and enterprise systems integration. SOA concepts and technology are continuously spreading through the layers of the enterprise organization, envisioning a unified interoperability solution. SOA promotes discoverability, loose coupling, abstraction, autonomy, and composition of services relying on open web standards – features that can make an important contribution to the industrial automation domain. The present work examined industrial automation device-level requirements, constraints, and needs to determine how and where SOA can be employed to solve some of the existing difficulties. Supported by these outcomes, a reference architecture shaped by distributed, adaptive, and composable modules is proposed. This architecture will assist and ease the role of systems integrators during reengineering-related interventions throughout the system lifecycle. In a converging direction, the present work also proposes a service-oriented device model that supports the architecture's vision and goals by embedding added value in terms of service-oriented peer-to-peer discovery and identification, configuration, management, and agile customization of device resources. In this context, the implementation and validation work proved not only the feasibility and fitness of the proposed solution on two distinct test benches but also its relevance to the expanding domain of SOA applications supporting the device lifecycle in industrial automation.
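
    A flavour of the service-oriented device model can be given with a toy registry: each device announces its capabilities as named services, and integrators discover devices by the service they need rather than through vendor-specific interfaces. A minimal sketch; the registry API, device identifiers, and service names are illustrative assumptions, not the dissertation's reference architecture.

```python
class DeviceRegistry:
    """Toy service registry for discovery by capability, not by vendor."""

    def __init__(self):
        self._devices = {}   # device id -> set of offered service names

    def register(self, device_id, services):
        """A device announces the services it offers (e.g. at startup)."""
        self._devices[device_id] = set(services)

    def discover(self, service):
        """Return all devices currently offering the named service."""
        return [d for d, offered in self._devices.items()
                if service in offered]


registry = DeviceRegistry()
registry.register("conveyor-01", {"start", "stop", "speed.set"})
registry.register("gripper-07",  {"open", "close", "diagnose"})
registry.register("conveyor-02", {"start", "stop", "diagnose"})

# An integrator asks "who can run diagnostics?" instead of hard-coding
# device addresses and vendor APIs.
print(registry.discover("diagnose"))   # ['gripper-07', 'conveyor-02']
```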