    Developing Secure Systems: A Comparative Study of Existing Methodologies

    With the increasing demand for high-quality, reliable systems, developing trustworthy computer software has become a challenging task. In this paper, we review various approaches to producing more secure systems. This includes established general principles for designing secure systems. It also provides an introduction to general software quality measurements, including existing software security metrics. The paper then compares the various security metrics for developing secure systems (i.e., architectural, design, and code-level metrics). Lastly, the paper examines the approach of refactoring, illustrates its objectives, and shows how refactoring is generally used to enhance the quality of existing programs from an information-security perspective. We conclude with a discussion of these three approaches and how they can guide future secure software development processes.
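
    As a hypothetical illustration of the code-level security metrics the paper compares, the sketch below scores a C source string by a weighted count of calls to commonly flagged unsafe library functions. The function list, the weights, and the insecurity_score name are assumptions made for illustration, not metrics taken from the paper.

        import re

        # Illustrative set of C library calls often flagged by code-level
        # security metrics; the list and weights are assumptions.
        UNSAFE_CALLS = {"gets": 3, "strcpy": 2, "strcat": 2, "sprintf": 2, "scanf": 1}

        def insecurity_score(source: str) -> int:
            """Weighted count of unsafe calls; higher suggests riskier code."""
            score = 0
            for name, weight in UNSAFE_CALLS.items():
                # Match the identifier followed by an opening parenthesis.
                score += weight * len(re.findall(r"\b%s\s*\(" % name, source))
            return score

        print(insecurity_score("strcpy(dst, src); gets(buf);"))  # -> 5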

    Fuzzy based component reusability evaluation approach to support component based software development

    One of the contributions of Component Based Software Development (CBSD) is the reuse of software components across multiple systems. However, developers often find it difficult to determine the reusability of components during the component selection process, and component developers likewise struggle to measure reusability during component development. Although many studies have been conducted in this field and researchers have suggested many approaches and metrics, these still lack empirical confirmation and evidence. The aim of this study is therefore to investigate and develop a component reusability evaluation approach to support CBSD. The proposed approach, called the Component Reusability Evaluation Approach (CREA), is supported by an automated tool (CREATool) that automates the reusability evaluation. CREA was evaluated by applying it, together with CREATool, to five selected Java components; the results were then validated against results from a controlled experiment using statistical analysis. The results indicated that CREA provides an acceptable reusability measure, confirmed by the similarity between the CREATool results and the statistically analyzed results of the controlled experiment. This shows that the proposed approach can be used as an alternative approach to component reusability evaluation. Although the approach is not intended to make a holistic, ultimate decision on whether a component can be reused, it is useful as a guide for both component users and developers when making decisions about reusable components.
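
    A minimal sketch of how a fuzzy-based reusability score might be computed, in the spirit of CREA. The metric names, membership functions, and rules below are illustrative assumptions, since the abstract does not specify CREA's actual inputs or rule base.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b over [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def reusability(coupling, cohesion, documentation):
            """Fuzzy reusability score in [0, 1] from three normalized metrics."""
            # Membership degrees for the "favourable" fuzzy sets (assumed shapes).
            low_coupling = tri(coupling, -0.5, 0.0, 0.6)
            high_cohesion = tri(cohesion, 0.4, 1.0, 1.5)
            well_documented = tri(documentation, 0.4, 1.0, 1.5)
            # Rule 1: all favourable -> reusability high (AND = min).
            fire_high = min(low_coupling, high_cohesion, well_documented)
            # Rule 2: coupling not low OR cohesion not high -> reusability low.
            fire_low = max(1 - low_coupling, 1 - high_cohesion)
            # Sugeno-style weighted average over representative outputs 1.0 and 0.2.
            total = fire_high + fire_low
            return (fire_high * 1.0 + fire_low * 0.2) / total if total else 0.5

        print(round(reusability(0.2, 0.8, 0.9), 2))  # ~0.73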

    The Southeastern Librarian v 64, no. 4 (Winter 2017) Complete Issue

    Advanced Methods for Botnet Intrusion Detection Systems

    Annual Report, 2013-2014

    Beginning in 2004/2005, issued in online format only.

    Security and Privacy Issues in Wireless Mesh Networks: A Survey

    This book chapter identifies various security threats in wireless mesh networks (WMNs). Keeping in mind the critical requirements of security and user privacy in WMNs, the chapter provides a comprehensive overview of possible attacks on the different layers of the communication protocol stack for WMNs and their corresponding defense mechanisms. First, it identifies the security vulnerabilities in the physical, link, network, transport, and application layers. It then presents possible attacks on key management protocols, user authentication and access control protocols, and user privacy preservation protocols. After enumerating these attacks, the chapter discusses in detail existing security mechanisms and protocols that defend against, and wherever possible prevent, them. Comparative analyses of the security schemes are also presented with regard to the cryptographic schemes used, the key management strategies deployed, the use of any trusted third party, and the computation and communication overhead involved. The chapter then briefly discusses various trust management approaches for WMNs, since trust and reputation-based schemes are increasingly popular for enforcing security in wireless networks. A number of open problems in security and privacy for WMNs are discussed before the chapter concludes. Comment: 62 pages, 12 figures, 6 tables. This chapter is an extension of the author's previous arXiv submission, arXiv:1102.1226, and has some text overlap with it.
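
    The trust and reputation-based schemes surveyed in the chapter can be made concrete with a minimal sketch of a beta reputation update, a common building block in such systems. The class and parameter names here are assumptions for illustration, not the chapter's notation.

        class BetaReputation:
            """Beta reputation: trust = E[Beta(good + 1, bad + 1)]."""

            def __init__(self):
                self.good = 0  # positive observations (e.g., packets forwarded)
                self.bad = 0   # negative observations (e.g., packets dropped)

            def observe(self, cooperative: bool):
                if cooperative:
                    self.good += 1
                else:
                    self.bad += 1

            def trust(self) -> float:
                # Expected value of the Beta posterior; 0.5 with no evidence.
                return (self.good + 1) / (self.good + self.bad + 2)

        node = BetaReputation()
        for outcome in [True, True, False, True]:
            node.observe(outcome)
        print(round(node.trust(), 2))  # 4/6 -> 0.67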

    Quality of service based data-aware scheduling

    Distributed supercomputers have been widely used for solving complex computational problems and modeling complex phenomena such as black holes, the environment, and supply-chain economics. In this work we analyze the use of these distributed supercomputers for time-sensitive data-driven applications. We present the scheduling challenges involved in running deadline-sensitive applications on shared distributed supercomputers running large parallel jobs, and introduce a "data-aware" scheduling paradigm that overcomes these challenges by making use of Quality of Service classes for running applications on shared resources. We evaluate the new data-aware scheduling paradigm using an event-driven hurricane simulation framework, which attempts to run various simulations modeling storm surge, wave height, and related phenomena in a timely fashion for use by first responders and emergency officials. We further generalize the work and demonstrate with examples how data-aware computing can be used in other applications with similar requirements.
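
    A minimal sketch of the general idea behind QoS-class, deadline-sensitive scheduling, assuming jobs carry a QoS class and a deadline: higher-priority classes run first, ties broken by earliest deadline. The class names and job names are invented for illustration and are not the paper's actual service classes.

        import heapq
        from dataclasses import dataclass, field

        # QoS classes ordered by priority (0 = highest); names are assumptions.
        QOS_PRIORITY = {"urgent": 0, "deadline": 1, "best_effort": 2}

        @dataclass(order=True)
        class Job:
            sort_key: tuple = field(init=False, repr=False)
            name: str = field(compare=False)
            qos: str = field(compare=False)
            deadline: float = field(compare=False)  # seconds from now

            def __post_init__(self):
                # Higher QoS class first, then earliest deadline first.
                self.sort_key = (QOS_PRIORITY[self.qos], self.deadline)

        queue = []
        for job in [Job("surge-model", "urgent", 3600),
                    Job("wave-height", "deadline", 1800),
                    Job("archive-run", "best_effort", 86400)]:
            heapq.heappush(queue, job)

        while queue:
            print(heapq.heappop(queue).name)  # surge-model, wave-height, archive-run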

    Identifying Human Trafficking Networks in Louisiana by Using Authorship Attribution and Network Modeling

    Human trafficking, or modern slavery, is a problem that has plagued every U.S. state, in both urban and rural areas. During the past decades, online advertisements for sex trafficking have increased rapidly in number. Advances in the Internet and smartphones have made it easier for sex traffickers to contact and recruit their victims and to advertise and sell them online, and have made it more difficult for law enforcement to trace the victims and identify the traffickers. Sadly, more than fifty percent of the victims of sex trafficking are children, many of whom are exploited through the Internet. The first step in preventing and fighting human trafficking is to identify the traffickers. The primary goal of this study is to identify potential organized sex trafficking networks in Louisiana by analyzing the ads posted online in Louisiana and its five neighboring states. The secondary goal is to examine the possibility of using authorship attribution techniques (in addition to phone numbers and ad IDs) to group together online advertisements that may have been posted by the same entity. The data used in this study was collected from the website Backpage over a period of ten months. After cleaning the data set, we were left with 123,436 ads from 47 cities in the specified area. Through the application of network analysis, we found many entities that potentially represent such networks, all of which posted a large number of ads with many phone numbers in different cities. We also identified the time period in which each phone number was used and the cities and states that each entity posted ads for, which shows how these entities moved between different cities and states. The four supervised machine learning methods that we used to classify the collected advertisements are Support Vector Machines (SVMs), the Naïve Bayesian classifier, Logistic Regression, and Neural Networks. We calculated 40 accuracy rates, 35 of which were over 90% for classifying any number of ads per entity, as long as each entity (or author) posted more than 10 ads.
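
    A minimal sketch of the network-modeling step, assuming ads are linked when they share a phone number (the study additionally links ads via ad IDs and authorship attribution). Ads connected through shared numbers are merged into one candidate entity via union-find; the sample data and field names are made up for illustration.

        from collections import defaultdict

        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        ads = [
            ("ad1", ["555-0101"]),
            ("ad2", ["555-0101", "555-0199"]),
            ("ad3", ["555-0199"]),
            ("ad4", ["555-0777"]),
        ]

        for ad_id, phones in ads:
            for phone in phones:
                union(ad_id, phone)  # link the ad to every number it lists

        entities = defaultdict(list)
        for ad_id, _ in ads:
            entities[find(ad_id)].append(ad_id)
        print(list(entities.values()))  # [['ad1', 'ad2', 'ad3'], ['ad4']]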