Buffalo Habitat for Humanity: The Challenges and Prospects of Green Building
Habitat for Humanity Buffalo has operated since 1985 and in that time has rehabilitated or built more than 150 homes in the cities of Buffalo and Lackawanna. An affiliate of Habitat for Humanity International (HFHI), Habitat builds affordable housing for qualified low-income people. Once approved, homeowners must put 500 hours of “sweat equity” into Habitat projects, including their homeowner education. In return, they receive a zero-interest mortgage; the payments on this mortgage cover their property taxes and homeowner’s insurance and support the rehabilitation or construction of more Habitat homes in the Buffalo area.
Matching Possible Mitigations to Cyber Threats: A Document-Driven Decision Support Systems Approach
Cyber systems are ubiquitous in all aspects of society. At the same time, breaches of cyber systems continue to be front-page news (Calfas, 2018; Equifax, 2017) and, despite more than a decade of heightened focus on cybersecurity, the threat continues to evolve and grow, costing up to $575 billion globally each year (Center for Strategic and International Studies, 2014; Gosler & Von Thaer, 2013; Microsoft, 2016; Verizon, 2017). To address possible impacts due to cyber threats, information system (IS) stakeholders must assess the risks they face. Following a risk assessment, the next step is to determine mitigations to counter the threats that pose unacceptably high risks. The literature contains a robust collection of studies on optimizing mitigation selections, but these studies universally assume that a starting list of appropriate mitigations for specific threats already exists from which to down-select. In current practice, producing this starting list is largely a manual process, and it is challenging because it requires detailed cybersecurity knowledge from highly decentralized sources, is often deeply technical in nature, and is primarily described in textual form, leading to dependence on human experts to interpret the knowledge for each specific context. At the same time, cybersecurity experts remain in short supply relative to the demand, and the gap between supply and demand continues to grow (Center for Cyber Safety and Education, 2017; Kauflin, 2017; Libicki, Senty, & Pollak, 2014). Thus, an approach is needed to help cybersecurity experts (CSE) cut through the volume of available mitigations to select those that are potentially viable to offset specific threats.
This dissertation explores the application of machine learning and text retrieval techniques to automate the matching of relevant mitigations to cyber threats, where both are expressed as unstructured or semi-structured English-language text. Using the Design Science Research Methodology (Hevner & March, 2004; Peffers, Tuunanen, Rothenberger, & Chatterjee, 2007), we consider a number of possible designs for the matcher, ultimately selecting a supervised machine learning approach that combines two techniques: support vector machine classification and latent semantic analysis. The selected approach demonstrates high recall for mitigation documents in the relevant class, bolstering confidence that potentially viable mitigations will not be overlooked. It also has a strong ability to discern documents in the non-relevant class, allowing approximately 97% of non-relevant mitigations to be excluded automatically, greatly reducing the CSE’s workload over purely manual matching. A false positive rate of up to 3% prevents totally automated mitigation selection and requires the CSE to reject a few false positives.
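As a rough illustration of how these two techniques compose, the sketch below builds a text-classification pipeline that applies latent semantic analysis (TF-IDF followed by truncated SVD) ahead of a linear support vector machine, using scikit-learn. The documents, labels, and parameter values are hypothetical placeholders; the dissertation’s actual corpus, features, and tuning are not shown here.

```python
# Hedged sketch: LSA + SVM relevance classification with scikit-learn.
# Training texts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical (threat; mitigation) descriptions with relevance labels.
train_texts = [
    "SQL injection against web login form; input validation and parameterized queries",
    "SQL injection against web login form; upgrade HVAC firmware",
    "phishing email credential theft; multi-factor authentication and user training",
    "phishing email credential theft; RAID disk mirroring",
]
train_labels = [1, 0, 1, 0]  # 1 = mitigation relevant to the threat, 0 = not

pipeline = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2),  # LSA: project TF-IDF into a latent space
    LinearSVC(),                   # SVM separates relevant from non-relevant
)
pipeline.fit(train_texts, train_labels)

print(pipeline.predict(
    ["SQL injection against web login form; parameterized queries"]
))
```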
To theory, this research contributes a method for automatically mapping mitigations to threats when both are expressed as English-language text documents. This artifact represents a novel machine learning approach to threat-mitigation mapping. The research also contributes an instantiation of the artifact for demonstration and evaluation. From a practical perspective, the artifact benefits all threat-informed cyber risk assessment approaches, whether formal or ad hoc, by aiding decision-making for the cybersecurity experts whose job it is to mitigate the identified cyber threats. In addition, an automated approach makes mitigation selection more repeatable, facilitates knowledge reuse, extends the reach of cybersecurity experts, and is extensible to accommodate the continued evolution of both cyber threats and mitigations. Moreover, the mitigations selected as applicable to each threat can serve as inputs to multifactor analyses of alternatives, both automated and manual, thereby bridging the gap between cyber risk assessment and final mitigation selection.
Estimating Software Vulnerability Counts in the Context of Cyber Risk Assessments
Stakeholders often conduct cyber risk assessments as a first step towards understanding and managing the risks arising from their use of cyber systems. Many risk assessment methods in use today include some form of vulnerability analysis. Building on prior research and combining data from several sources, this paper develops a metric for estimating the proportion of latent vulnerabilities to total vulnerabilities in a software system and applies it to five scenarios involving software on the scale of operating systems. The findings suggest caution in interpreting the results of cyber risk methodologies that depend on enumerating known software vulnerabilities, because the number of unknown vulnerabilities in large-scale software tends to exceed the number of known ones.
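To make the idea of a latent-vulnerability proportion concrete, here is a toy illustration, not the paper’s actual metric or data: if a defect-density model gives a rough estimate of total vulnerabilities in a codebase, the latent share is what remains after subtracting the publicly known ones. Every number below is hypothetical.

```python
# Toy illustration (not the paper's metric): latent share of vulnerabilities
# from a defect-density estimate of the total versus the known count.
sloc_millions = 50.0    # hypothetical size of an OS-scale codebase (MSLOC)
vulns_per_msloc = 30.0  # assumed vulnerability density per MSLOC
known_vulns = 900       # hypothetical count of published vulnerabilities

estimated_total = sloc_millions * vulns_per_msloc
latent = max(estimated_total - known_vulns, 0)
latent_ratio = latent / estimated_total

print(f"Estimated latent share: {latent_ratio:.1%}")  # 40.0% for these inputs
```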
Towards an Organizationally-Relevant Quantification of Cyber Resilience
Given the difficulty of fully securing complex cyber systems, there is growing interest in making cyber systems resilient to the cyber threat. However, quantifying the resilience of a system in an organizationally relevant manner remains a challenge. This paper describes initial research into a novel metric for quantifying the resilience of a system to cyber threats, called the Resilience Index (RI). We calculate the RI via an effects-based, discrete-event stochastic simulation that runs a large number of trials over a designated mission timeline. During the trials, adverse cyber events (ACEs) occur against cyber assets in a target system. We consider a trial a failure if an ACE causes the performance of any of the target system’s mission essential functions (MEFs) to fall below its assigned threshold level. Once all trials have completed, the simulator computes the ratio of successful trials to the total number of trials, yielding the RI. The linkage of ACEs to MEFs provides the organizational tie.
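The following minimal sketch shows the shape of that computation: many stochastic trials, a failure whenever MEF performance drops below its threshold, and RI as the success ratio. The ACE probability, degradation model, and threshold are invented stand-ins, not values from the paper.

```python
import random

# Minimal sketch of the Resilience Index (RI) computation described above.
# ACE probability, degradation sizes, and the MEF threshold are hypothetical.

def run_trial(rng, timeline_steps=100, ace_prob=0.02, mef_threshold=0.6):
    """One mission timeline; True if the MEF stays above threshold throughout."""
    performance = 1.0
    for _ in range(timeline_steps):
        if rng.random() < ace_prob:                # an adverse cyber event
            performance -= rng.uniform(0.05, 0.3)  # assumed degradation
        if performance < mef_threshold:            # mission essential function fails
            return False
    return True

def resilience_index(trials=10_000, seed=1):
    rng = random.Random(seed)
    successes = sum(run_trial(rng) for _ in range(trials))
    return successes / trials                      # RI = successful / total trials

print(f"RI = {resilience_index():.3f}")
```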
Multi-Criteria Selection of Capability-Based Cybersecurity Solutions
Given the increasing frequency and severity of cyber attacks on information systems of all kinds, there is interest in rationalized approaches for selecting the “best” set of cybersecurity mitigations. However, what is best for one target environment is not necessarily best for another. This paper examines an approach to this selection that uses a set of weighted criteria, where the security engineer sets the weights based on organizational priorities and constraints. The approach is built on a capability-based representation of defensive solutions. The paper discusses the group of artifacts that compose the approach through the lens of Design Science research and reports performance results for an instantiation artifact.
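A minimal sketch of weighted multi-criteria scoring follows, assuming a simple weighted sum over pre-normalized scores. The criterion names, weights, and candidate scores are hypothetical; the paper’s actual criteria and capability representation are richer than this.

```python
# Hedged sketch: weighted-sum scoring of candidate mitigations.
# Criteria, weights, and scores below are hypothetical placeholders.
weights = {"coverage": 0.4, "cost": 0.3, "operational_impact": 0.3}

candidates = {
    "mitigation_A": {"coverage": 0.9, "cost": 0.4, "operational_impact": 0.7},
    "mitigation_B": {"coverage": 0.6, "cost": 0.9, "operational_impact": 0.8},
}

def weighted_score(scores):
    # Scores are assumed normalized to [0, 1], where higher is better.
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores):.2f}")

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print("selected:", best)  # mitigation_B (0.75) beats mitigation_A (0.69)
```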
Rigorous Validation of Systems Security Engineering Analytics
In response to the asymmetric advantage that attackers enjoy over defenders in cyber systems, the cyber community has generated a steady stream of cybersecurity-related frameworks, methodologies, analytics, and “best practices” lists. However, these artifacts almost never undergo rigorous validation of their efficacy; instead they tend to be accepted on faith, to, we suggest, our collective detriment, given the evidence of continued attacker success. But what would rigorous validation look like, and can we afford it? This paper describes the design, and estimates the cost, of a controlled experiment whose goal is to determine the effectiveness of an exemplar systems security analytic. Given the significant role that humans play in cyber systems (e.g., in their design, use, attack, and defense), any such experiment must take into account and control for variable human behavior. Thus, the paper reinforces the argument that cybersecurity can be understood as a hybrid discipline with strong technical and human dimensions.
BluGen: An Analytic Framework for Mission-Cyber Risk Assessment and Mitigation Recommendation
Systems security engineering (SSE) is a complex, manually intensive process, with implications for cost, time required, and repeatability/reproducibility. This paper describes BluGen, an analytic framework that generates risk plots and recommends prioritized mitigations for a target mission/system environment based on a stated level of threat and risk tolerance. The goal is to give working systems security engineers a head start in their analysis. We describe BluGen in the context of Design Science Research and evaluate it accordingly.
Matching Possible Mitigations to Cyber Threats: A Document-Driven Decision Support Systems Approach
Despite more than a decade of heightened focus on cybersecurity, the threat continues. To address possible impacts, identified cyber threats must be countered with appropriate mitigations. Mitigation catalogs exist in practice today, but they do not map mitigations to the specific threats they counter. Currently, mitigations are selected manually by cybersecurity experts (CSEs), who are in short supply. To reduce labor and improve repeatability, an automated approach is needed for matching mitigations to cyber threats. This research explores the application of supervised machine learning and text retrieval techniques to automate the matching of relevant mitigations to cyber threats where both are expressed as text, resulting in a novel method that combines two techniques: support vector machine classification and latent semantic analysis. In five test cases, the approach demonstrates high recall for known relevant mitigation documents, bolstering confidence that potentially relevant mitigations will not be overlooked. It automatically excludes 97% of non-relevant mitigations, greatly reducing the CSE’s workload over purely manual matching.
J8955
ABSTRACT: Walnuts (Juglans regia L.) were collected during the 1997 harvest from 13 different cultivars of trees grown in a replicated trial in an experimental orchard at Lincoln University. Two U.S. commercial cultivars (Tehama and Vina), three European commercial cultivars (Esterhazy, G139, G120), and eight New Zealand selections (Rex, Dublin's Glory, Meyric, Stanley, Mckinster, 150, 151, 153) were evaluated. Total lipids were analyzed for fatty acids by capillary gas chromatography, for tocopherols by high-performance liquid chromatography, and for oxidation stability by Rancimat. The total oil content of the nuts ranged from 64.2 to 68.9%, while the stability of the oil ranged from 3.9 to 7.8 h. The oleic acid content of the oils ranged from 12.7 to 20.4% of the total fatty acids, while the 18:2 content ranged from 57.0 to 62.5% and the 18:3 content ranged from 10.7 to 16.2%. Reduced stability of the oil as measured by the Rancimat method appears to be correlated with higher levels of 18:2 in the extracted oil. The total tocopherol content of these nuts ranged from 268.5 to 436.0 µg/g oil. γ-Tocopherol dominated the profile, while α-tocopherol was only 6% of the total content. Peroxide values of the fresh oil were measured spectrophotometrically to give an indication of overall stability. In a multiple regression analysis, the level of total tocopherols combined with the level of unsaturation in the oil had a significant relationship (R² = 45.2%, P < 0.001) with the peroxide value of the oil.
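Read as a regression model, the reported relationship has the general form below; the coefficient symbols are placeholders, since the abstract gives only the fit statistics, not the fitted coefficients.

```latex
\mathrm{PV} = \beta_0 + \beta_1\,\mathrm{Toc}_{\text{total}} + \beta_2\,\mathrm{Unsat} + \varepsilon,
\qquad R^2 = 0.452,\; P < 0.001
```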