
    An Empirical Analysis of Cyber Deception Systems


    Herding Vulnerable Cats: A Statistical Approach to Disentangle Joint Responsibility for Web Security in Shared Hosting

    Hosting providers play a key role in fighting web compromise, but their ability to prevent abuse is constrained by the security practices of their own customers. Shared hosting offers a unique perspective since customers operate under restricted privileges and providers retain more control over configurations. We present the first empirical analysis of the distribution of web security features and software patching practices in shared hosting providers, the influence of providers on these security practices, and their impact on web compromise rates. We construct provider-level features on the global market for shared hosting -- containing 1,259 providers -- by gathering indicators from 442,684 domains. Exploratory factor analysis of 15 indicators identifies four main latent factors that capture security efforts: content security, webmaster security, web infrastructure security, and web application security. We confirm, via a fixed-effect regression model, that providers exert significant influence over the latter two factors, which are both related to the software stack in their hosting environment. Finally, by means of GLM regression analysis of these factors on phishing and malware abuse, we show that the four security and software patching factors explain between 10% and 19% of the variance in abuse at providers, after controlling for size. For web application security, for instance, we find that a provider moving from the bottom 10% to the best-performing 10% would experience four times fewer phishing incidents. We show that providers have influence over patch levels -- even higher in the stack, where CMSes can run as client-side software -- and that this influence is tied to a substantial reduction in abuse levels.
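    The two-stage analysis described above (latent security factors extracted from provider-level indicators, then a regression of abuse counts on those factors) can be sketched roughly as follows. This is a minimal illustration on simulated data; the indicator names, the factor extraction method, and the negative binomial GLM specification are assumptions made for the sketch, not the authors' actual pipeline.

```python
# Minimal sketch of the two-stage analysis described in the abstract:
# (1) exploratory factor analysis on provider-level security indicators,
# (2) a count-data GLM of abuse incidents on the extracted factors.
# All data and column names here are simulated/hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_providers = 500

# Simulated security indicators (stand-ins for the paper's 15 indicators).
indicators = pd.DataFrame(
    rng.normal(size=(n_providers, 15)),
    columns=[f"indicator_{i}" for i in range(15)],
)
provider_size = rng.integers(100, 10_000, size=n_providers)  # domains hosted

# Stage 1: extract four latent security factors.
fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(indicators)

# Stage 2: GLM (negative binomial) of abuse counts on the factors,
# controlling for provider size via a log-exposure offset.
abuse_counts = rng.poisson(
    lam=np.exp(0.5 - 0.3 * factors[:, 0]) * provider_size / 1000)
X = sm.add_constant(factors)
model = sm.GLM(
    abuse_counts,
    X,
    family=sm.families.NegativeBinomial(),
    offset=np.log(provider_size),
)
print(model.fit().summary())
```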

    Acquisition and diffusion of technology innovation

    In the first essay, I examine value created through external acquisition of nascent technology innovation. External acquisition of new technology is a growing trend in the innovation process, particularly in high technology industries, as firms complement internal efforts with aggressive acquisition programs. Yet, despite its importance, there is little empirical research on the timing of acquisition decisions in high technology environments. I examine the impact of target age on value created for the buyer. Applying an event study methodology to technology acquisitions in the telecommunications industry from 1995 to 2001, empirical evidence supports acquiring early in the face of uncertainty. The equity markets reward the acquisition of younger companies. In sharp contrast to the first essay, the second essay examines the diffusion of negative innovations. While destruction can be creative, certainly not all destruction is creative. Some is just destruction. I examine two fundamentally different paths to information security compromise: an opportunistic path and a deliberate path. Through a grounded approach using interviews, observations, and secondary data, I advance a model of the information security compromise process. Using one year of alert data from intrusion detection devices, empirical analysis provides evidence that these paths follow two distinct, but interrelated, diffusion patterns. Although distinct, I find empirical evidence that these paths both converge and escalate. Beyond the specific findings in the Internet security context, the study leads to a richer understanding of the diffusion of negative technological innovation. In the third essay, I build on the second essay by examining the effectiveness of reward-based mechanisms in restricting the diffusion of negative innovations. Concerns have been raised that reward-based private infomediaries introduce information leakage which decreases social welfare. Using two years of alert data, I find evidence of their effectiveness despite any leakage which may be occurring. While reward-based disclosures are just as likely to be exploited as non-reward-based disclosures, exploits from reward-based disclosures are less likely to occur in the first week after disclosure. Further, the overall volume of alerts is reduced. This research helps determine the effectiveness of reward mechanisms and provides guidance for security policy makers.
    Ph.D. Committee Chair: Sabyasachi Mitra; Committee Member: Frank Rothaermel; Committee Member: Sandra Slaughter; Committee Member: Sridhar Narasimhan; Committee Member: Vivek Ghosa
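    The first essay's event study approach, measuring the equity market's reaction to acquisition announcements, follows the standard cumulative abnormal return (CAR) calculation. A minimal sketch on simulated returns is below; the market model, window lengths, and variable names are illustrative assumptions, not the dissertation's specification.

```python
# Sketch of a market-model event study: estimate expected returns from a
# pre-event window, then cumulate abnormal returns around the announcement.
# Returns are simulated; window sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Daily returns for the acquirer and the market index (simulated).
n_days = 250
market_ret = rng.normal(0.0005, 0.01, n_days)
firm_ret = 0.9 * market_ret + rng.normal(0.0002, 0.008, n_days)
event_day = 200                      # announcement date index

# Estimation window: fit the market model r_firm = alpha + beta * r_market.
est = slice(event_day - 180, event_day - 30)
beta, alpha = np.polyfit(market_ret[est], firm_ret[est], 1)

# Event window: abnormal return = actual - expected, cumulated over [-1, +1].
event = slice(event_day - 1, event_day + 2)
expected = alpha + beta * market_ret[event]
car = np.sum(firm_ret[event] - expected)
print(f"alpha={alpha:.5f}, beta={beta:.3f}, CAR[-1,+1]={car:.4f}")
```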

    The Technologization of Insurance: An Empirical Analysis of Big Data and Artificial Intelligence’s Impact on Cybersecurity and Privacy

    This Article engages one of the biggest issues debated among privacy and technology scholars by offering an empirical examination of how big data and emerging technologies influence society. Although scholars explore the ways that code, technology, and information regulate society, existing research primarily focuses on the theoretical and normative challenges of big data and emerging technologies. To our knowledge, there has been very little empirical analysis of precisely how big data and technology influence society. This is not due to a lack of interest but rather a lack of disclosure by data providers and corporations that collect and use these technologies. Specifically, we focus on one of the biggest problems for businesses and individuals in society: cybersecurity risks and data breach events. Due to the lack of stringent legal regulations and preparation by organizations, insurance companies are stepping in and offering not only cyber insurance but also risk management services aimed at trying to improve organizations’ cybersecurity profile and reduce their risk. Drawing from sixty interviews in the cyber insurance field, a quantitative analysis of a “big data” set we obtained from a data provider, and observations at cyber insurance conferences, we explore the effects of what we refer to as the “technologization of insurance,” the process whereby technology influences and shapes the delivery of insurance. Our study makes two primary findings. First, we show how big data, artificial intelligence, and emerging technologies are transforming the way insurers underwrite, price insurance, and engage in risk management. Second, we show how the impact of these technological interventions is largely symbolic. Insurtech innovations are ineffective at enhancing organizations’ cybersecurity, promoting the role of insurers as regulators, and helping insurers manage uncertainty. We conclude by offering recommendations on how society can help technology to assure algorithmic justice and greater security of consumer information, as opposed to greater efficiency and profit.

    Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-source Data

    Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and that users install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate the predictive power of these approaches. We replicate key portions of the prior work, compare their approaches, and show how the selection of training and test data critically affects the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.
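    The methodological point about training/test selection can be made concrete with a small sketch: a random split lets vulnerabilities disclosed in the "future" leak into training, whereas a time-ordered split trains only on earlier disclosures. The data, feature names, and classifier below are simulated, hypothetical stand-ins rather than the paper's dataset or models; on i.i.d. simulated data the two AUC figures will be similar, so the sketch only illustrates the mechanics of the split.

```python
# Sketch contrasting a random train/test split with a time-ordered split
# for predicting whether a disclosed vulnerability will be exploited.
# Features and labels are simulated; they are not the paper's dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5_000

vulns = pd.DataFrame({
    "disclosure_date": pd.Timestamp("2015-01-01")
        + pd.to_timedelta(np.sort(rng.integers(0, 1_500, n)), unit="D"),
    "cvss_score": rng.uniform(1, 10, n),
    "has_public_poc": rng.integers(0, 2, n),
    "mention_count": rng.poisson(3, n),
})
# Simulated label: exploitation loosely driven by severity and a public PoC.
p = 1 / (1 + np.exp(-(0.4 * vulns.cvss_score + 1.5 * vulns.has_public_poc - 4)))
vulns["exploited"] = rng.binomial(1, p)

features = ["cvss_score", "has_public_poc", "mention_count"]

# (a) Random split: vulnerabilities from the "future" leak into training.
X_tr, X_te, y_tr, y_te = train_test_split(
    vulns[features], vulns.exploited, test_size=0.3, random_state=0)
auc_random = roc_auc_score(
    y_te,
    RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
        .predict_proba(X_te)[:, 1])

# (b) Temporal split: train only on vulnerabilities disclosed before a cutoff.
cutoff = vulns.disclosure_date.quantile(0.7)
train = vulns[vulns.disclosure_date <= cutoff]
test = vulns[vulns.disclosure_date > cutoff]
auc_temporal = roc_auc_score(
    test.exploited,
    RandomForestClassifier(random_state=0).fit(train[features], train.exploited)
        .predict_proba(test[features])[:, 1])

print(f"AUC random split:   {auc_random:.3f}")
print(f"AUC temporal split: {auc_temporal:.3f}")
```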

    The Federal Information Security Management Act of 2002: A Potemkin Village

    Due to the daunting possibilities of cyberwarfare, and the ease with which cyberattacks may be conducted, the United Nations has warned that the next world war could be initiated through worldwide cyberattacks between countries. In response to the growing threat of cyberwarfare and the increasing importance of information security, Congress passed the Federal Information Security Management Act of 2002 (FISMA). FISMA recognizes the importance of information security to the national economic and security interests of the United States. However, this Note argues that FISMA has failed to significantly bolster information security, primarily because FISMA treats information security as a technological problem rather than an economic problem. This Note analyzes existing proposals to incentivize heightened software quality assurance and proposes a new solution designed to strengthen federal information security in light of the failings of FISMA and the trappings of Congress’s 2001 amendment to the Computer Fraud and Abuse Act.

    The Software Vulnerability Ecosystem: Software Development In The Context Of Adversarial Behavior

    Software vulnerabilities are the root cause of many computer system security failures. This dissertation addresses software vulnerabilities in the context of a software lifecycle, with a particular focus on three stages: (1) improving software quality during development; (2) pre-release bug discovery and repair; and (3) revising software as vulnerabilities are found. The question I pose regarding software quality during development is whether long-standing software engineering principles and practices such as code reuse help or hurt with respect to vulnerabilities. Using a novel data-driven analysis of large databases of vulnerabilities, I show the surprising result that software quality and software security are distinct. Most notably, the analysis uncovered a counterintuitive phenomenon, namely that newly introduced software enjoys a period with no vulnerability discoveries, and further that this “Honeymoon Effect” (a term I coined) is well-explained by the unfamiliarity of the code to malicious actors. An important consequence for code reuse, intended to raise software quality, is that the protection inherent in the delay in vulnerability discovery for new code is reduced. The second question I pose concerns the predictive power of this effect. My experimental design exploited a large-scale open source software system, Mozilla Firefox, in which two development methodologies are pursued in parallel, making the methodology the sole variable in outcomes. Comparing the methodologies using a novel synthesis of data from vulnerability databases, I find that the rapid-release cycles used in agile software development (in which new software is introduced frequently) have a vulnerability discovery rate equivalent to that of conventional development. Finally, I pose the question of the relationship between the intrinsic security of software, stemming from design and development, and the ecosystem into which the software is embedded and in which it operates. I use the early development lifecycle to examine this question, and again use vulnerability data as the means of answering it. Defect discovery rates should decrease in a purely intrinsic model, with software maturity making vulnerabilities increasingly rare. The data, which show that vulnerability rates increase after a delay, contradict this. Software security therefore must be modeled including extrinsic factors, thus comprising an ecosystem.
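    The closing argument contrasts the shape of observed vulnerability discovery rates with what a purely intrinsic, reliability-growth view would predict. A rough sketch of that comparison on simulated monthly discovery counts is below; both rate models and all parameters are illustrative assumptions, not the dissertation's data or estimates.

```python
# Sketch contrasting two models of vulnerability discovery rates over time:
# an intrinsic reliability-growth view (rate decays as software matures)
# versus a delayed ramp-up consistent with a "Honeymoon Effect".
# Monthly counts here are simulated, not the dissertation's data.
import numpy as np
from scipy.optimize import curve_fit

months = np.arange(1, 61)

def decaying_rate(t, a, b):
    """Intrinsic model: discoveries become rarer as the code matures."""
    return a * np.exp(-b * t)

def delayed_rate(t, a, c, d):
    """Extrinsic model: discoveries ramp up after attackers learn the code."""
    return a / (1 + np.exp(-(t - c) / d))

# Simulated observations that rise after an initial quiet period.
rng = np.random.default_rng(3)
observed = rng.poisson(delayed_rate(months, a=8, c=18, d=5))

# Fit both models and compare residual error.
p_decay, _ = curve_fit(decaying_rate, months, observed, p0=[8, 0.05], maxfev=10_000)
p_delay, _ = curve_fit(delayed_rate, months, observed, p0=[8, 20, 5], maxfev=10_000)

sse_decay = np.sum((observed - decaying_rate(months, *p_decay)) ** 2)
sse_delay = np.sum((observed - delayed_rate(months, *p_delay)) ** 2)
print(f"SSE intrinsic (decaying) model: {sse_decay:.1f}")
print(f"SSE delayed ramp-up model:      {sse_delay:.1f}")
```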

    The Economic Impact of Cyber-Attacks
