
    A Macroscopic Study of Network Security Threats at the Organizational Level.

    Defenders of today's networks are confronted with a large number of malicious activities such as spam, malware, and denial-of-service attacks. Although many studies have been performed on how to mitigate security threats, the interaction between attackers and defenders resembles a game of Whac-a-Mole, in which the security community is chasing after attackers rather than helping defenders build systematic defensive solutions. As a complement to these studies that focus on attackers or end hosts, this thesis studies security threats from the perspective of the organization, the central authority that manages and defends a group of end hosts. This perspective provides a balanced position from which to understand security problems and to deploy and evaluate defensive solutions. This thesis explores how a macroscopic view of network security from an organization's perspective can be formed to help measure, understand, and mitigate security threats. To realize this goal, we bring together a broad collection of reputation blacklists. We first measure the properties of the malicious sources identified by these blacklists and their impact on an organization. We then aggregate the malicious sources to Internet organizations and characterize the maliciousness of organizations and their evolution over a period of two and a half years. Next, we aim to understand the cause of different maliciousness levels in different organizations. By examining the relationship between eight security mismanagement symptoms and the maliciousness of organizations, we find a strong positive correlation between mismanagement and maliciousness. Lastly, motivated by the observation that some organizations have a significant fraction of their IP addresses involved in malicious activities, we evaluate the tradeoff of one type of mitigation solution at the organization level: network takedowns.
    PhD. Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116714/1/jingzj_1.pd
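    The aggregation step described above can be illustrated with a minimal Python sketch; the blacklist entries, the IP-to-organization mapping, and the address-space sizes below are hypothetical placeholders rather than data from the thesis:

        from collections import defaultdict

        # Hypothetical blacklisted IPs merged from several reputation feeds, a lookup
        # table mapping each IP to the organization that announces it, and the size of
        # each organization's advertised address space.
        blacklisted_ips = {"198.51.100.7", "198.51.100.9", "203.0.113.42"}
        ip_to_org = {
            "198.51.100.7": "AS64500",
            "198.51.100.9": "AS64500",
            "203.0.113.42": "AS64501",
        }
        org_address_space = {"AS64500": 4096, "AS64501": 65536}

        def maliciousness_by_org(ips, ip_to_org, org_sizes):
            """Fraction of each organization's address space that appears on the blacklists."""
            hits = defaultdict(int)
            for ip in ips:
                org = ip_to_org.get(ip)
                if org is not None:
                    hits[org] += 1
            return {org: count / org_sizes[org] for org, count in hits.items()}

        print(maliciousness_by_org(blacklisted_ips, ip_to_org, org_address_space))

    Tracking this per-organization fraction over time is one simple way to follow how an organization's maliciousness evolves.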

    Cyber resilience meta-modelling: The railway communication case study

    Recent times have demonstrated how much modern critical infrastructures (e.g., energy, essential services, transportation of people and goods) depend on global communication networks. However, in the current Cyber-Physical World convergence, sophisticated attacks on the cyber layer can cause severe damage to both the physical structures and the operations of the infrastructure, affecting not only its functionality and safety but also triggering cascading effects in other systems because of the tight interdependence of the systems that characterises modern society. Hence, critical infrastructure must integrate the current cyber-security approach, based on risk avoidance, with the broader perspective provided by the emerging cyber-resilience paradigm. Cyber resilience aims to absorb the consequences of these attacks and to recover functionality quickly and safely through adaptation. Several high-level frameworks and conceptualisations have been proposed, but a formal definition capable of translating cyber resilience into an operational tool for decision makers, one that considers all aspects of such a multifaceted concept, is still missing. To this end, the present paper aims to provide an operational formalisation of cyber resilience, starting from the Cyber Resilience Ontology presented in a previous work and using model-driven principles. A domain model is defined to cope with the different aspects and “resilience-assurance” processes so that it can be valid in various application domains. In this respect, an application case based on critical transportation communication systems, namely the railway communication system, is provided to prove the feasibility of the proposed approach and to identify future improvements.

    UNIX Administrator Information Security Policy Compliance: The Influence of a Focused SETA Workshop and Interactive Security Challenges on Heuristics and Biases

    Information Security Policy (ISP) compliance is crucial to the success of healthcare organizations due to security threats and the potential for security breaches. UNIX Administrators (UXAs) in healthcare Information Technology (IT) maintain critical servers that house Protected Health Information (PHI). Their compliance with ISP is crucial to the confidentiality, integrity, and availability of PHI data housed on or accessed by their servers. The use of cognitive heuristics and biases may negatively influence threat appraisal, coping appraisal, and ultimately ISP compliance behavior. These failures may result in insufficiently protected servers and put organizations at greater risk of data breaches and financial loss. The goal was to empirically assess the effect of a focused Security Education, Training, and Awareness (SETA) workshop, an Interactive Security Challenge (ISC), and periodic security update emails on UXAs' knowledge sharing, use of cognitive heuristics and biases, and ISP compliance behavior. This quantitative study employed a pretest and posttest experimental design to evaluate the effectiveness of a SETA workshop and an ISC on the ISP compliance of UXAs. The survey instrument was developed based on prior validated instrument questions and augmented with newly designed questions related to the use of cognitive heuristics and biases. Forty-two participants completed the survey prior to and following the SETA workshop, ISC, and security update emails. Actual compliance (AC) behavior was assessed by comparing the results of security scans on administrators' servers prior to and 90 days following the SETA workshop and ISC. SmartPLS was used to analyze the pre-workshop data, post-workshop data, and combined data to evaluate the proposed structural and measurement models. The results indicated that Confirmation Bias (CB) and the Availability Heuristic (AH) were significantly influenced by Information Security Knowledge Sharing (ISKS). Optimism Bias (OB) did not reach statistically significant levels relating to ISKS. OB did, however, significantly influence perceived severity (TA-PS), perceived vulnerability (TA-PV), response-efficacy (CA-RE), and self-efficacy (CA-SE). Also, all five security implementation data points collected to assess pre- and post-workshop compliance showed statistically significant change. In total, eight hypotheses were accepted and nine were rejected.

    A Deep Learning-based Approach to Identifying and Mitigating Network Attacks Within SDN Environments Using Non-standard Data Sources

    Modern society is increasingly dependent on computer networks, which are essential to delivering a growing number of key services. With this increasing dependence comes a corresponding increase in global traffic and users. One of the tools administrators are using to deal with this growth is Software Defined Networking (SDN). SDN changes the traditional distributed networking design to a more programmable, centralised solution based around the SDN controller. This allows administrators to respond more quickly to changing network conditions. However, this change in paradigm, along with the growing use of encryption, can cause other issues. For many years, security administrators have used techniques such as deep packet inspection and signature analysis to detect malicious activity. These methods are becoming less common as artificial intelligence (AI) and deep learning technologies mature. AI and deep learning have the advantage of being able to cope with 0-day attacks and to detect malicious activity despite the use of encryption and obfuscation techniques. However, SDN reduces the volume of data that is available for analysis with these machine learning techniques. Rather than packet information, SDN relies on flows, which are abstract representations of network activity. Security researchers have been slow to move to this new method of networking, in part because of this reduction in data; however, doing so could have advantages in responding quickly to malicious activity. This research project seeks to reconcile this apparent contradiction by building a deep learning model that can achieve comparable results to other state-of-the-art models while using 70% fewer features. This is achieved through the creation of new data from logs, as well as a new risk-based sampling method that prioritises suspect flows for analysis and can successfully prioritise over 90% of malicious flows from leading datasets. Additionally, a mitigation method is provided that can work with an SDN solution to automatically mitigate attacks after they are found, showcasing the advantages of closer integration with SDN.
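    The risk-based prioritisation idea can be sketched as follows; the flow features, thresholds, and weights are assumptions made for illustration and are not the features or scores used in the thesis:

        from dataclasses import dataclass

        @dataclass
        class Flow:
            src_ip: str
            dst_port: int
            byte_count: int
            duration_s: float
            failed_log_entries: int  # example of extra evidence derived from logs

        def risk_score(f: Flow) -> float:
            """Heuristic score: higher means the flow is more worth analysing first."""
            score = 0.0
            if f.dst_port in (23, 445, 3389):              # commonly abused services
                score += 2.0
            if f.duration_s < 0.5 and f.byte_count < 100:  # short, probe-like flows
                score += 1.0
            score += 0.5 * f.failed_log_entries            # corroborating log evidence
            return score

        def prioritise(flows, budget=0.1):
            """Return the top `budget` fraction of flows, highest risk first."""
            ranked = sorted(flows, key=risk_score, reverse=True)
            return ranked[: max(1, int(len(ranked) * budget))]

    In a deployment, only the prioritised flows would be forwarded to the deep learning model, keeping the analysis load manageable as traffic grows.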

    Anomaly Detection Algorithms and Techniques for Network Intrusion Detection Systems

    In recent years, many deep learning-based models have been proposed for anomaly detection. This thesis presents a comparison of selected deep autoencoding models and classical anomaly detection methods on three modern network intrusion detection datasets. We experiment with different configurations and architectures of the selected models, as well as aggregation techniques for input preprocessing and output postprocessing. We propose a methodology for creating benchmark datasets for the evaluation of the methods in different settings. We provide a statistical comparison of the performance of the selected techniques. We conclude that the deep autoencoding models, in particular the AE and VAE, systematically outperform the classical methods. Furthermore, we show that aggregating input network flow data improves the overall performance. In general, the tested techniques are promising regarding their application in network intrusion detection systems. However, secondary techniques must be employed to reduce the high number of false alarms generated.
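    The reconstruction-error principle behind the autoencoder (AE) results can be shown in a short PyTorch sketch; the layer sizes, training data, and threshold are illustrative assumptions, not the configurations evaluated in the thesis:

        import torch
        from torch import nn

        n_features = 20
        autoencoder = nn.Sequential(
            nn.Linear(n_features, 8), nn.ReLU(),   # encoder compresses the flow features
            nn.Linear(8, n_features),              # decoder reconstructs them
        )
        optimiser = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        benign = torch.randn(1000, n_features)     # stand-in for preprocessed benign flows
        for _ in range(50):
            optimiser.zero_grad()
            loss = loss_fn(autoencoder(benign), benign)
            loss.backward()
            optimiser.step()

        def is_anomalous(x, threshold=1.5):
            """Flag records that the trained autoencoder reconstructs poorly."""
            with torch.no_grad():
                err = ((autoencoder(x) - x) ** 2).mean(dim=1)
            return err > threshold

    A variational autoencoder (VAE) follows the same idea but scores samples with a probabilistic reconstruction term; in both cases the threshold controls the trade-off between detection rate and the false alarms mentioned above.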

    Current Safety Nets Within the U.S. National Airspace System

    There are over 70,000 flights managed per day in the National Airspace System (NAS), with approximately 7,000 aircraft in the air over the United States at any given time. Operators of each of these flights would prefer to fly a user-defined 4D trajectory (4DT), which includes arrival and departure times; preferred gates and runways at the airport; efficient, wind-optimal routes for the departure, cruise, and arrival phases of flight; and fuel-efficient altitude profiles. To demonstrate the magnitude of this achievement: a single flight from Los Angeles to Baltimore accesses over 35 shared or constrained resources that are managed by roughly 30 air traffic controllers (at towers, approach control, and en route sectors), along with traffic managers at 12 facilities, using over 22 different, independent automation systems (including TBFM, ERAM, STARS, ASDE-X, FSM, TSD, GPWS, TCAS, etc.). In addition, dispatchers, ramp controllers, and others utilize even more systems to manage each flight's access to operator-managed resources. Flying an ideal 4DT requires successful coordination of all flight constraints among all flights, facilities, operators, pilots, and controllers. Additionally, when conditions in the NAS change, the trajectories of one or more aircraft may need to be revised to avoid loss of flight efficiency, predictability, separation, or system throughput. The Aviation Safety Network (ASN) has released the 2016 airliner accident statistics, showing a very low total of 19 fatal airliner accidents [1], resulting in 325 fatalities. Despite several high-profile accidents, 2016 turned out to be a very safe year for commercial aviation: it was the second safest year ever, both by number of fatal accidents and in terms of fatalities. In 2015 ASN recorded 16 accidents, while in 2013 a total of 265 lives were lost. How can we keep it that way and not upset the apple cart by premature insertion of innovative technologies, functions, and procedures? In aviation, safety nets function as the last system defense against incidents and accidents. Current ground-based and airborne safety nets are well established, and development to make them more efficient and reliable continues. Additionally, future air traffic control safety nets may emerge from new operational concepts.

    Design and Deployment of an Access Control Module for Data Lakes

    Nowadays big data is considered an extremely valuable asset for companies, which are discovering new avenues to use it for their business profit. However, an organization's ability to effectively extract valuable information from data depends on its knowledge management infrastructure. Thus, most organizations are transitioning from data warehouse (DW) storage to data lake (DL) infrastructures, from which further insights are derived. The present work is carried out as part of a cybersecurity project in a financial institution that manages a vast volume and variety of data kept in a data lake. Although the DL is presented as the answer to the current big data scenario, this infrastructure presents certain flaws in authentication and access control. Preceding work on DL access control points out that the main goal is to avoid fraudulent behaviors derived from user access, such as secondary use, that could result in business data being exposed to third parties. To overcome this risk, traditional mechanisms attempt to identify these behaviors based on rules; however, they cannot reveal all the different kinds of fraud because they only look for known patterns of misuse. The present work proposes a novel access control system for data lakes, assisted by Oracle's database audit trail and based on anomaly detection mechanisms, that automatically looks for events that do not conform to the normal or expected behavior. Thus, the overall aim of this project is to develop and deploy an automated system for identifying abnormal accesses to the DL, which can be separated into four subgoals: explore the different technologies that could be applied in the domain of anomaly detection, design the solution, deploy it, and evaluate the results. For this purpose, feature engineering is performed, and four different unsupervised ML models are built and evaluated. Based on the quality of the results, the best model is finally productionized with Docker. To conclude, although anomaly detection has been a long-standing yet active research area for several decades, there are still some unique problem complexities and challenges that leave the way open for the proposed solution to be further improved.
    Doble Grado en Ingeniería Informática y Administración de Empresa
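    The abstract does not name the four unsupervised models, so the following sketch uses an Isolation Forest purely as one plausible example of flagging abnormal DL accesses; the engineered features and values are hypothetical:

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Each row is one historical access derived from the audit trail:
        # [hour_of_day, rows_read, distinct_tables, off_hours_flag]
        historical_accesses = np.array([
            [10, 120, 3, 0],
            [11,  90, 2, 0],
            [14, 200, 4, 0],
            [ 9,  80, 2, 0],
        ])
        detector = IsolationForest(contamination=0.05, random_state=0)
        detector.fit(historical_accesses)

        new_access = np.array([[3, 50000, 40, 1]])   # bulk read of many tables at 3 a.m.
        print(detector.predict(new_access))          # -1 marks the access as anomalous

    Such a detector learns what normal access looks like from history and flags deviations, rather than matching accesses against a fixed list of known misuse patterns.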

    Three Empirical Essays on Health Informatics and Analytics

    Health Information Technology (HIT) has an important and widely acknowledged role in enhancing healthcare performance in the healthcare industry today. A great amount of literature has focused on the impact of HIT implementation, yet the studies provide mixed and inconclusive results on whether HIT implementation actually helps healthcare providers enhance healthcare performance. Here, we identify three possible research gaps that lead to these mixed and inconclusive results. First, prior IS research has examined HIT complementarity only simultaneously, ignoring the temporal perspective. Second, extant HIT research has primarily examined the relationship between HIT implementation and healthcare performance in a static framework, which may neglect the dynamic relationship between HIT and healthcare performance. Third, prior HIT value studies have typically examined HIT's impact on hospital-level outcomes, but no extant studies consider HIT's impact on transition-level outcomes as disease progresses over time. This dissertation addresses these gaps in three essays that draw upon three different lenses to study HIT implementation's impact on healthcare performance using three analytics methods. The first essay applies econometrics to study how various types of HIT complementarities simultaneously and sequentially impact diverse healthcare outcomes. In so doing, we find evidence of simultaneous and sequential complementarity wherein HIT applications are synergistic, not only within the same time period but also across periods. The second essay uses advanced latent growth modeling to explore the dynamic, longitudinal relationship between HIT and healthcare outcomes after incorporating the nonlinear trajectory change of different HIT functions and the various dimensions of hospital performance. The third essay applies multi-state and hidden Markov models to examine how the implementation levels of HIT functions impact a finer, more granular healthcare outcome. This approach includes the dynamics of the transitions, including observable transitions (chronic to acute, acute to chronic, chronic to death, and acute to death) and underlying, unobservable transitions (minor to major disease and major disease to death). This essay examines how different types of HIT can improve different transition types as diseases progress over time.
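    The observable-transition part of the third essay's multi-state modelling can be illustrated with a small sketch that estimates a transition probability matrix from patient state sequences; the states and sequences are invented for illustration, not study data:

        from collections import Counter
        from itertools import pairwise

        states = ["chronic", "acute", "death"]
        patients = [
            ["chronic", "chronic", "acute", "chronic"],
            ["chronic", "acute", "acute", "death"],
            ["acute", "chronic", "chronic"],
        ]

        # Count observed one-step transitions across all patients.
        counts = Counter(t for seq in patients for t in pairwise(seq))
        totals = Counter()
        for (src, _dst), n in counts.items():
            totals[src] += n

        # Maximum-likelihood estimate of P(next state | current state).
        P = {s: {t: counts[(s, t)] / totals[s] for t in states}
             for s in states if totals[s]}
        print(P["chronic"])

    A hidden Markov model extends this by treating some states (for example minor versus major disease) as unobserved and inferring them from the recorded outcomes.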