An Insider Misuse Threat Detection and Prediction Language
Numerous studies indicate that amongst the various types of security threats, the
problem of insider misuse of IT systems can have serious consequences for the health
of computing infrastructures. Although incidents of external origin are also dangerous,
the insider IT misuse problem is difficult to address for a number of reasons. A
fundamental reason that makes the problem mitigation difficult relates to the level of
trust legitimate users possess inside the organization. The trust factor makes it difficult
to detect threats originating from the actions and credentials of individual users. An
equally important difficulty in mitigating insider IT threats stems from the
variability of the problem. The nature of insider IT misuse varies amongst
organizations. Hence, expressing what constitutes a threat, as well as detecting
and predicting it, are non-trivial tasks that compound the multi-factorial nature
of insider IT misuse.
This thesis is concerned with the process of systematizing the specification of insider
threats, focusing on their system-level detection and prediction. It designs suitable
user audit mechanisms and semantics that together form a Domain Specific Language
for detecting and predicting insider misuse incidents. Accordingly, the thesis
proposes in detail ways to construct standardized descriptions (signatures) of insider
threat incidents, as a means of helping researchers and IT system experts mitigate
the problem of insider IT misuse.
The produced audit engine (LUARM – Logging User Actions in Relational Mode) and
the Insider Threat Prediction and Specification Language (ITPSL) are two utilities that
can be added to the IT insider misuse mitigation arsenal. LUARM is a novel audit
engine designed specifically to address the needs of monitoring insider actions. These
needs cannot be met by traditional open source audit utilities. ITPSL is an XML-based
markup language that can standardize the description of incidents and threats and thus
make use of the LUARM audit data. Its novelty lies in the fact that it can be used both
to detect and to predict instances of threats, a capability that no threat-oriented
domain specific language has offered to date.
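The abstract does not include an example signature, but the idea of an XML-based threat description can be sketched. The element and attribute names below are invented for illustration only and do not reflect actual ITPSL syntax:

```python
import xml.etree.ElementTree as ET

# Hypothetical ITPSL-style signature: all element and attribute names here
# are invented for illustration and are NOT the real ITPSL vocabulary.
sig = ET.Element("signature", id="unauth-usb-copy")
ET.SubElement(sig, "actor").text = "any-local-user"
action = ET.SubElement(sig, "action", type="file-copy")
ET.SubElement(action, "destination").text = "removable-media"
# e.g. flag more than 50 copies within a 600-second window
ET.SubElement(sig, "threshold", count="50", window="600")

xml_text = ET.tostring(sig, encoding="unicode")
print(xml_text)
```

A detection engine could match such a signature against relational audit records of the kind LUARM collects, while a prediction rule might fire on partial matches.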
The research project evaluated the produced language using a cyber-misuse
experiment approach derived from real world misuse incident data. The results of the
experiment showed that the ITPSL and its associated audit engine LUARM
provide a good foundation for insider threat specification and prediction. Some
language deficiencies relate to the fact that the insider threat specification process
requires a good knowledge of the software applications used in a computer system. As
the language is easily expandable, future developments to improve it in this direction
are suggested.
Reachability-based impact as a measure for insiderness
Insider threats pose a difficult problem for many organisations. While organisations would in principle like to judge the risk posed by a specific insider threat, this is in general not possible. This limitation is caused partly by the lack of models for human behaviour, partly by restrictions on how much and what may be monitored, and partly by our inability to identify relevant features in large amounts of logged data. To overcome this, the notion of insiderness has been proposed, which measures the degree of access an actor has to a certain resource. We extend this notion with the concept of the impact of an insider, and present different realisations of impact. The suggested approach results in readily usable techniques that allow one to get a quick overview of potential insider threats based on the locations and assets reachable by employees. We present several variations, ranging from pure reachability to the potential damage an insider can cause to assets.
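The reachability-based measure described above lends itself to a small illustration. The following is a hypothetical sketch, not the paper's implementation: the building layout, asset values, and names are all invented. Impact is computed as the total value of assets reachable from an actor's starting location:

```python
from collections import deque

# Hypothetical office layout as a directed graph of locations.
edges = {
    "lobby": ["hallway"],
    "hallway": ["server_room", "office"],
    "server_room": [],
    "office": [],
}
# Assumed damage values for assets at each location.
damage = {"server_room": 100, "office": 10}

def reachable(start):
    """Breadth-first search: all locations reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def impact(start):
    # Damage-weighted variant: sum the value of every reachable asset.
    # (Pure reachability would instead just count reachable locations.)
    return sum(damage.get(n, 0) for n in reachable(start))

print(impact("lobby"))   # an actor starting in the lobby
```

Ranking employees by such an impact score gives the "quick overview" the abstract mentions: an actor who can reach the server room scores far higher than one confined to a single office.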
Mitigating Insider Threat Risks in Cyber-physical Manufacturing Systems
A Cyber-Physical Manufacturing System (CPMS), a next-generation manufacturing system, seamlessly integrates digital and physical domains via the internet or computer networks. It will enable drastic improvements in production flexibility, capacity, and cost-efficiency. However, the enlarged connectivity and accessibility arising from this integration can yield unintended security concerns. The major concern is cyber-physical attacks, which originate in the digital domain but can cause damage in the physical domain. In particular, such attacks can be performed easily, and with more critical consequences, by insiders: insider threats.
Insiders can be defined as anyone who is or has been affiliated with a system. Insiders have knowledge of, and access credentials for, the system's properties and can therefore perform more serious attacks than outsiders. Furthermore, it is hard to detect or prevent insider threats in a CPMS in a timely manner, since insiders can easily bypass or incapacitate the general defensive mechanisms of the system by exploiting their physical access, security clearance, and knowledge of the system's vulnerabilities.
This thesis seeks to address the above issues by developing an insider-threat-tolerant CPMS enhanced by a service-oriented blockchain augmentation, and by conducting experiments and analysis. The aim of the research is to identify insider threat vulnerabilities and improve the security of CPMS.
Blockchain's unique distributed system approach is adopted to mitigate the insider threat risks in CPMS. However, the blockchain limits system performance due to the arbitrary block generation time and block occurrence frequency. The service-oriented blockchain augmentation provides physical and digital entities with the blockchain communication protocol through a service layer. In this way, multiple entities are integrated by the service layer, which enables services with less arbitrary delays while retaining the strong security of the blockchain. Also, multiple independent service applications in the service layer can ensure the flexibility and productivity of the CPMS.
To study the effectiveness of the blockchain augmentation against insider threats, two example models of the proposed system have been developed: a Layer Image Auditing System (LIAS) and a Secure Programmable Logic Controller (SPLC). In addition, four case studies based on the two models are designed, presented, and evaluated by an Insider Attack Scenario Assessment Framework. The framework investigates the system's security vulnerabilities and practically evaluates the insider attack scenarios.
The research contributes to the understanding of insider threats and blockchain implementations in CPMS by addressing key issues that have been identified in the literature. The issues are addressed through an EBIS (Establish, Build, Identify, Simulation) validation process with numerical experiments, whose results are in turn used towards mitigating insider threat risks in CPMS.
Risk Assessment Framework for Evaluation of Cybersecurity Threats and Vulnerabilities in Medical Devices
Medical devices are vulnerable to cybersecurity exploitation and, while they can provide improvements to clinical care, they can put healthcare organizations and their patients at risk of adverse impacts. Evidence has shown that the proliferation of devices on medical networks presents cybersecurity challenges for healthcare organizations, due to the devices' lack of built-in cybersecurity controls and the inability of organizations to implement security controls on them. The negative impacts of cybersecurity exploitation in healthcare can include the loss of patient confidentiality, risk to patient safety, negative financial consequences for the organization, and loss of business reputation. Assessing the risk of vulnerabilities and threats to medical devices can inform healthcare organizations in prioritizing resources to reduce risk most effectively. In this research, we build upon a database-driven approach to risk assessment that is based on the elements of threat, vulnerability, asset, and control (TVA-C). We contribute a novel framework for the cybersecurity risk assessment of medical devices. Using a series of papers, we answer questions related to the risk assessment of networked medical devices. We first conducted a case study empirical analysis that determined the scope of security vulnerabilities in a typical computerized medical environment. We then created a cybersecurity risk framework to identify threats and vulnerabilities to medical devices and produce a quantified risk assessment. These results supported actionable decision making at the managerial and operational levels of a typical healthcare organization. Finally, we applied the framework using a data set of medical devices received from a partnering healthcare organization. We compare the assessment results of our framework to those of a commercial risk assessment vulnerability management system used to analyze the same assets.
The study also compares our framework results to the NIST Common Vulnerability Scoring System (CVSS) scores related to identified vulnerabilities reported through the Common Vulnerabilities and Exposures (CVE) program. As a result of these studies, we recognize several contributions to the area of healthcare cybersecurity. To begin with, we provide the first comprehensive vulnerability assessment of a robotic surgical environment, using a da Vinci surgical robot along with its supporting computing assets. This assessment supports the assertion that networked computer environments in healthcare facilities are at risk of being compromised. Next, our framework, known as MedDevRisk, provides a novel method for risk quantification. In addition, our assessment approach uniquely considers the assets that are of value to a medical organization, going beyond the medical device itself. Finally, our incorporation of risk scenarios into the framework represents a novel approach to medical device risk assessment, synthesized from other well-known standards. To our knowledge, our research is the first to apply a quantified assessment framework to the problem area of healthcare cybersecurity and networked medical devices. We conclude that a reduction in the uncertainty about the riskiness of the cybersecurity status of medical devices can be achieved using this framework.
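The TVA-style quantification described above can be illustrated with a toy example. This is not the MedDevRisk formula; the scoring rule (likelihood times impact) and every number below are invented for illustration:

```python
# Illustrative TVA-style risk scoring (NOT the MedDevRisk method): score each
# (threat, vulnerability, asset) triple as likelihood x impact, then rank the
# triples so mitigation resources go to the highest risk first.
triples = [
    {"threat": "ransomware", "vuln": "unpatched OS",
     "asset": "infusion pump", "likelihood": 0.6, "impact": 9},
    {"threat": "data theft", "vuln": "default credentials",
     "asset": "imaging server", "likelihood": 0.4, "impact": 7},
]

for t in triples:
    t["risk"] = t["likelihood"] * t["impact"]   # simple multiplicative score

ranked = sorted(triples, key=lambda t: t["risk"], reverse=True)
print([(t["asset"], round(t["risk"], 2)) for t in ranked])
```

A real framework would draw likelihoods from threat intelligence and impacts from asset valuation, and could calibrate its scores against CVSS, as the study does.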
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
Deep Learning has recently become hugely popular in machine learning,
providing significant improvements in classification accuracy in the presence
of highly-structured and large databases.
Researchers have also considered privacy implications of deep learning.
Models are typically trained in a centralized manner with all the data being
processed by the same training algorithm. If the data is a collection of users'
private data, including habits, personal pictures, geographical positions,
interests, and more, the centralized server will have access to sensitive
information that could potentially be mishandled. To tackle this problem,
collaborative deep learning models have recently been proposed where parties
locally train their deep learning structures and only share a subset of the
parameters in the attempt to keep their respective training sets private.
Parameters can also be obfuscated via differential privacy (DP) to make
information extraction even more challenging, as proposed by Shokri and
Shmatikov at CCS'15.
Unfortunately, we show that any privacy-preserving collaborative deep
learning is susceptible to a powerful attack that we devise in this paper. In
particular, we show that a distributed, federated, or decentralized deep
learning approach is fundamentally broken and does not protect the training
sets of honest participants. The attack we developed exploits the real-time
nature of the learning process that allows the adversary to train a Generative
Adversarial Network (GAN) that generates prototypical samples of the targeted
training set that was meant to be private (the samples generated by the GAN are
intended to come from the same distribution as the training data).
Interestingly, we show that record-level DP applied to the shared parameters of
the model, as suggested in previous work, is ineffective (i.e., record-level DP
is not designed to address our attack).
Comment: ACM CCS'17, 16 pages, 18 figures
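The collaborative setting this attack targets can be sketched in a few lines. The fraction shared, the noise scale, and all values below are invented simplifications, not the CCS'15 protocol or the paper's attack code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified selective parameter sharing: each party uploads only a fraction
# of its locally updated parameters, optionally perturbed with Laplace noise
# as a stand-in for record-level DP obfuscation. Values are illustrative only.
def share_subset(params, fraction=0.1, dp_scale=None):
    k = max(1, int(len(params) * fraction))
    idx = np.argsort(-np.abs(params))[:k]   # share largest-magnitude entries
    values = params[idx].copy()
    if dp_scale is not None:
        values += rng.laplace(0.0, dp_scale, size=k)  # DP-style noise
    return idx, values

global_params = np.zeros(100)
local = global_params + rng.normal(0, 1, 100)  # stand-in for a local update
idx, vals = share_subset(local, fraction=0.1, dp_scale=0.5)
global_params[idx] = vals                      # server merges the shared subset
print(len(idx))
```

The attack exploits exactly this loop: because the shared parameters still reflect the private training data at every round, an adversarial participant can train a GAN against them and recover prototypical samples, noise or not.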
Insider Threats in Emerging Mobility-as-a-Service Scenarios
Mobility as a Service (MaaS) applies the everything-as-a-service paradigm of Cloud Computing to transportation: a MaaS provider offers its users the dynamic composition of solutions from different travel agencies into a single, consistent interface. Traditionally, transits and data on mobility belong to a scattered plethora of operators. Thus, we argue that the economic model of MaaS is that of federations of providers, each trading its resources to coordinate multi-modal solutions for mobility. Such flexibility comes with many security and privacy concerns, of which insider threat is one of the most prominent. In this paper, we follow a tiered structure, from individual operators to markets of federated MaaS providers, to classify the potential threats of each tier and propose appropriate countermeasures, in an effort to mitigate the problems.
A Privacy-Preserving, Context-Aware, Insider Threat prevention and prediction model (PPCAITPP)
The insider threat problem is extremely challenging to address, as it is committed by insiders who are
trusted and authorized to access the information resources of the organization. The problem is further
complicated by the multifaceted nature of insiders, as human beings have various motivations and
fluctuating behaviours. Additionally, typical monitoring systems may violate the privacy of insiders.
Consequently, there is a need to consider a comprehensive approach to mitigate insider threats. This
research presents a novel insider threat prevention and prediction model, combining several approaches,
techniques and tools from the fields of computer science and criminology. The model is a Privacy-
Preserving, Context-Aware, Insider Threat Prevention and Prediction model (PPCAITPP). The model is
predicated on the Fraud Diamond (a theory from Criminology) which assumes there must be four elements
present in order for a criminal to commit maleficence. The basic elements are pressure (i.e. motive),
opportunity, ability (i.e. capability) and rationalization. According to the Fraud Diamond, malicious
employees need to have a motive, opportunity and the capability to commit fraud. Additionally, criminals
tend to rationalize their malicious actions in order to ease their cognitive dissonance towards
maleficence. To mitigate the insider threat comprehensively, all the elements of the Fraud
Diamond need to be considered, because insider threat crime relates to these elements in the
same way as crimes committed in the physical world.
The model is intended to act within context, which implies that when the model predicts a
threat, it also reacts instantaneously to prevent that threat from materialising. To collect information
about insiders for the purposes of prediction, there is a need to collect current information, as the motives
and behaviours of humans are transient. Context-aware systems are used in the model to collect current
information about insiders related to motive and ability as well as to determine whether insiders exploit any
opportunity to commit a crime (i.e. entrapment). Furthermore, they are used to neutralize any
rationalizations the insider may have via neutralization mitigation, thus preventing the insider from
committing a future crime. However, the model collects private information and involves
entrapment, which may be deemed unethical. A model that does not preserve the privacy of insiders may cause them to feel
they are not trusted, which in turn may affect their productivity in the workplace negatively. Hence, this
thesis argues that an insider prediction model must be privacy-preserving in order to prevent further
cybercrime. The model is not intended to be punitive but rather a strategy to prevent current insiders from
being tempted to commit a crime in future.
The model involves four major components: context awareness, opportunity facilitation, neutralization
mitigation and privacy preservation. The model implements a context analyser to collect information related
to an insider who may be motivated to commit a crime and his or her ability to implement an attack plan.
The context analyser only collects meta-data such as search behaviour, file access, logins, use of keystrokes
and linguistic features, excluding the content to preserve the privacy of insiders. The model also employs
keystroke and linguistic features based on typing patterns to collect information about any change in an
insider's emotional and stress levels. This is indirectly related to the motivation to commit a cybercrime.
Research demonstrates that most of the insiders who have committed a crime have experienced a negative
emotion/pressure resulting from dissatisfaction with employment measures such as terminations, transfers
without their consent or denial of a wage increase. However, there may also be personal problems such as a
divorce. The typing pattern analyser and other resource usage behaviours aid in identifying an insider who
may be motivated to commit a cybercrime based on his or her stress levels and emotions as well as the
change in resource usage behaviour. The model does not identify the motive itself, but rather identifies those
individuals who may be motivated to commit a crime by reviewing their computer-based actions. The model
also assesses the capability of insiders to commit a planned attack based on their usage of computer
applications and measuring their sophistication in terms of the range of knowledge, depth of knowledge and
skill as well as assessing the number of systems errors and warnings generated while using the applications.
The model will facilitate an opportunity to commit a crime by using honeypots to determine whether a
motivated and capable insider will exploit any opportunity in the organization involving a criminal act.
Based on the insider's reaction to the opportunity presented via a honeypot, the model will deploy an
implementation strategy based on neutralization mitigation. Neutralization mitigation is the process of
nullifying the rationalizations that the insider may have had for committing the crime. All information about
insiders will be anonymized to remove any identifiers for the purpose of preserving the privacy of insiders.
The model also intends to identify any new behaviour that may result during the course of implementation.
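The metadata-only, privacy-preserving monitoring described above can be illustrated with a toy sketch. This is not the thesis's implementation: the hashing scheme, the timing data, and the stress threshold are all invented for illustration:

```python
import hashlib
import statistics

# Illustrative only: anonymize an insider's identifier and flag a change in
# keystroke timing as a crude stand-in for metadata-based stress detection.
def anonymize(user_id, salt="site-secret"):
    """Replace the identifier with a salted hash to preserve privacy."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def stress_flag(baseline_ms, recent_ms, z_threshold=2.0):
    """Flag when recent inter-keystroke intervals deviate from baseline."""
    mu = statistics.mean(baseline_ms)
    sd = statistics.stdev(baseline_ms)
    z = (statistics.mean(recent_ms) - mu) / sd
    return z > z_threshold   # unusually slow typing -> possible stress

baseline = [110, 105, 115, 108, 112]   # inter-keystroke intervals (ms)
recent = [160, 158, 165]               # hypothetical recent sample
record = {"user": anonymize("alice"),
          "flagged": stress_flag(baseline, recent)}
print(record)
```

Only the pseudonymized identifier and the flag leave the analyser; the raw keystrokes, like the content excluded by the model's context analyser, are never stored.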
This research contributes to existing scientific knowledge in the insider threat domain and can be used as a
point of departure for future researchers in the area. Organizations could use the model as a framework to
design and develop a comprehensive security solution for insider threat problems. The model concept can
also be integrated into existing information security systems that address the insider threat problem.
Information Science, D. Phil. (Information Systems)