286 research outputs found

    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    This paper presents the current state of the art on attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, there exist more than 30 DAG-based methodologies, each having different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. This article also supports the selection of an adequate modeling technique depending on user requirements.
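
    The survey's central idea, hierarchical decomposition into quantifiable actions over a DAG, can be illustrated with a small sketch. The Python snippet below is not taken from any of the surveyed formalisms; the AND/OR gate semantics, node names, and cost values are illustrative assumptions only. It shows how a cost metric can be propagated bottom-up through a DAG of attack (sub)goals, with memoization handling subgoals shared between branches, which is what makes the structure a DAG rather than a tree.

    # Minimal sketch of bottom-up cost propagation on a DAG-based attack model.
    # All node names, costs, and the AND/OR semantics here are illustrative
    # assumptions, not taken from the surveyed formalisms.

    def attack_cost(node, model, leaf_costs, memo=None):
        """Return the minimum attacker cost to achieve `node`.

        OR nodes: the attacker picks the cheapest child.
        AND nodes: the attacker must achieve every child, so costs add up.
        Memoization handles subgoals shared between branches.
        """
        if memo is None:
            memo = {}
        if node in memo:
            return memo[node]
        if node in leaf_costs:                      # basic action (leaf)
            memo[node] = leaf_costs[node]
            return memo[node]
        gate, children = model[node]
        child_costs = [attack_cost(c, model, leaf_costs, memo) for c in children]
        memo[node] = sum(child_costs) if gate == "AND" else min(child_costs)
        return memo[node]

    # Illustrative scenario: stealing data requires gaining access AND exfiltrating.
    model = {
        "steal_data":  ("AND", ["gain_access", "exfiltrate"]),
        "gain_access": ("OR",  ["phish_admin", "exploit_vpn"]),
    }
    leaf_costs = {"phish_admin": 100, "exploit_vpn": 400, "exfiltrate": 50}

    print(attack_cost("steal_data", model, leaf_costs))   # -> 150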

    Faculty Publications and Creative Works 2002

    Introduction: One of the ways in which we recognize our faculty at the University of New Mexico is through Faculty Publications & Creative Works. An annual publication, it highlights our faculty's scholarly and creative activities and achievements and serves as a compendium of UNM faculty efforts during the 2001 calendar year. Faculty Publications & Creative Works strives to illustrate the depth and breadth of research activities performed throughout our University's laboratories, studios, and classrooms. We believe that the communication of individual research is a significant method of sharing concepts and thoughts and, ultimately, inspiring the birth of new ideas. In support of this, UNM faculty during 2002 produced over 2,278 works, including 1,735 scholarly papers and articles, 64 books, 195 book chapters, 174 reviews, 84 creative works, and 26 patented works. We are proud of the accomplishments of our faculty, which are in part reflected in this book and which illustrate the diversity of intellectual pursuits in support of research and education at the University of New Mexico. Terry Yates, Vice Provost for Research

    Annual Research Report, 2009-2010

    Annual report of collaborative research projects of Old Dominion University faculty and students in partnership with business, industry, and government.

    Diversification and obfuscation techniques for software security: A systematic literature review

    Context: Diversification and obfuscation are promising techniques for securing software and protecting computers from harmful malware. The goal of these techniques is not to remove security holes, but to make it difficult for the attacker to exploit security vulnerabilities and perform successful attacks. Objective: There is an increasing body of research on the use of diversification and obfuscation techniques for improving software security; however, the overall view is scattered and the terminology is unstructured. A coherent review therefore gives a clear statement of the state of the art, normalizes the ongoing discussion, and provides baselines for future research. Method: In this paper, a systematic literature review is used to select the studies that discuss diversification and obfuscation techniques for improving software security. We present the process of data collection, the analysis of the data, and the results. Results: As a result of the systematic search, we collected 357 articles relevant to the topic of interest, published between 1993 and 2017. We studied the collected articles, analyzed the data extracted from them, presented a classification of the data, and identified the research gaps. Conclusion: The two techniques have been used extensively for various security purposes and for impeding various types of security attacks. There exist many different techniques to obfuscate or diversify programs, each of which targets different parts of a program and is applied at a different phase of the software development life cycle. Moreover, we pinpoint the research gaps in this field; for instance, there are still various execution environments that could benefit from these two techniques, including cloud computing, the Internet of Things (IoT), and trusted computing. We also present some potential ideas for applying these techniques in these environments.
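
    As a toy illustration of the kind of technique covered by the review (here, literal obfuscation combined with per-build diversification), the sketch below XOR-encodes a string so it does not appear verbatim in the shipped program. The scheme, key size, and function names are illustrative assumptions made here, not techniques drawn from the review's classification; real obfuscators apply much stronger, layered transformations.

    # Toy illustration of one obfuscation idea: hiding string literals so they
    # do not appear verbatim in the shipped program. The XOR scheme and key
    # below are illustrative only.

    import os

    def encode(plaintext: str, key: bytes) -> bytes:
        """Build-time step: XOR-encode a literal with a per-build random key."""
        data = plaintext.encode("utf-8")
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def decode(blob: bytes, key: bytes) -> str:
        """Run-time step: recover the literal just before it is used."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob)).decode("utf-8")

    # Diversification angle: a fresh key per build means every distributed copy
    # carries different byte patterns for the same logical string.
    key = os.urandom(8)
    secret = encode("admin_api_token", key)

    assert decode(secret, key) == "admin_api_token"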

    Faculty Publications & Presentations, 2003-2004

    Doctor of Philosophy

    Stochastic methods, dense free-form mapping, atlas construction, and total variation are examples of advanced image processing techniques that are robust but computationally demanding. These algorithms often require a large amount of computational power as well as massive memory bandwidth. These requirements used to be fulfilled only by supercomputers. The development of heterogeneous parallel subsystems and computation-specialized devices such as Graphics Processing Units (GPUs) has brought the requisite power to commodity hardware, opening up opportunities for scientists to experiment and evaluate the influence of these techniques on their research and practical applications. However, harnessing the processing power of modern hardware is challenging. The differences between multicore parallel processing systems and conventional models are significant, often requiring algorithms and data structures to be redesigned substantially for efficiency. It also demands in-depth knowledge of modern hardware architectures to optimize these implementations, sometimes on a per-architecture basis. The goal of this dissertation is to introduce a solution to this problem based on a 3D image processing framework, using high-performance APIs at the core level to utilize the parallel processing power of GPUs. The design of the framework facilitates an efficient application development process, which does not require scientists to have extensive knowledge about GPU systems, and encourages them to harness this power to solve their computationally challenging problems. To present the development of this framework, four main problems are described, and the solutions are discussed and evaluated: (1) essential components of a general 3D image processing library, namely data structures and algorithms, as well as how to implement these building blocks on the GPU architecture for optimal performance; (2) an implementation of unbiased atlas construction algorithms, an illustration of how to solve a highly complex and computationally expensive algorithm using this framework; (3) an extension of the framework to account for geometry descriptors to solve registration challenges with large-scale shape changes and high intensity-contrast differences; and (4) an out-of-core streaming model, which enables developers to implement multi-image processing techniques on commodity hardware.
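
    The dissertation's framework itself is GPU-based; as a rough, CPU-side illustration of the kind of per-voxel 3D operation such a framework parallelizes, the NumPy sketch below computes a finite-difference gradient magnitude over a small synthetic volume. The function name and the example operation are assumptions made here for illustration and are not the dissertation's actual API.

    # Illustrative CPU-side sketch (NumPy) of the kind of per-voxel 3D operation
    # a GPU image-processing framework would parallelize. The function name and
    # the gradient-magnitude example are assumptions for illustration only.

    import numpy as np

    def gradient_magnitude_3d(volume: np.ndarray) -> np.ndarray:
        """Finite-difference gradient magnitude over a 3D volume.

        Each output voxel depends only on a small neighborhood, so on a GPU
        each voxel maps naturally to one thread; here NumPy vectorization
        stands in for that data parallelism.
        """
        gz, gy, gx = np.gradient(volume.astype(np.float64))
        return np.sqrt(gx**2 + gy**2 + gz**2)

    # Small synthetic volume: a bright cube in a dark background.
    vol = np.zeros((64, 64, 64))
    vol[24:40, 24:40, 24:40] = 1.0

    edges = gradient_magnitude_3d(vol)
    print(edges.shape, float(edges.max()))   # response concentrated at the cube boundary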