70 research outputs found

    Forensic Evidence Identification and Modeling for Attacks against a Simulated Online Business Information System

    Forensic readiness of business information systems can support a future forensic investigation or audit of external/internal attacks, internal sabotage and espionage, and business fraud. To establish forensic readiness, an organization must identify which fingerprints are relevant and where they can be located, determine whether they are logged in a forensically sound way, and verify that all the fingerprints needed to reconstruct events are available. A fingerprint identification and locating mechanism should also be provided to guide potential forensic investigations, and mechanisms should be established to automate the security incident tracking and reconstruction processes. In this research, external and internal attacks are first modeled as augmented attack trees based on the vulnerabilities of business information systems. The modeled attacks are then conducted against a honeynet that simulates an online business information system, and a forensic investigation follows each attack. Finally, an evidence tree, expected to provide the contextual information necessary to automate attack tracking and reconstruction in the future, is built for each attack from the fingerprints identified and located within the system.
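The pairing of attack steps with the fingerprints that evidence them can be sketched as a small data structure. This is an illustrative reading of the abstract, not the paper's actual model: node names, gate semantics, and fingerprint labels below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    """One step in an augmented attack tree (all names are illustrative)."""
    goal: str
    gate: str = "OR"   # "AND": all children required; "OR": any child suffices
    fingerprints: list = field(default_factory=list)  # where evidence of this step is logged
    children: list = field(default_factory=list)

def evidence_tree(node):
    """Derive an evidence tree: per attack step, the fingerprints an investigator needs."""
    return {
        "goal": node.goal,
        "fingerprints": list(node.fingerprints),
        "children": [evidence_tree(c) for c in node.children],
    }

# Hypothetical fragment: data theft via SQL injection against an online store.
root = AttackNode(
    "obtain customer records", "OR",
    fingerprints=["webserver access log"],
    children=[AttackNode("SQL injection on search form", "AND",
                         fingerprints=["app error log", "DB query log"])],
)
tree = evidence_tree(root)
```

Walking the attack tree and collecting fingerprints per node yields the contextual map that the abstract says is needed to automate event reconstruction.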

    Security Testing with Misuse Case Modeling

    Having a comprehensive model of security requirements is a crucial step towards developing a reliable software system. A model of security requirements that describes the scenarios that may affect the security of the system under development is an effective basis for subsequent generation of security test cases. The misuse case was first proposed by Sindre and Opdahl as an approach to extract the security requirements of the system under development [1]. A misuse case is a use case representing scenarios that might be followed by a system adversary in order to compromise the system; that is, behavior that should not happen in a system. Because misuse cases model potential threats to the system under development, they also suggest mitigation mechanisms. A mitigation use case is a use case that represents the countermeasure requirements of a misuse case. By describing exploitable security threats from the adversary's point of view, a misuse case provides a basis for security testing that addresses the interactions between the adversary and the system under development. Security testing also needs to verify the system's security mechanisms against misuse cases; thus, by representing the security requirements of the system, mitigation use cases are likewise a good basis for security testing. Misuse cases and mitigation use cases are ordinarily described in natural language. Unfortunately, this informality limits the ability to generate security test cases from them. This thesis presents a new, structured approach to generating security test cases based on a security test model extracted from the textual description of misuse cases and their accompanying mitigation use cases, represented as a Predicate/Transition (PrT) net.
    This approach enables system developers to model misuse cases with their accompanying mitigation use cases and then generate security test cases from the resulting security test models, ensuring that potential attacks are mitigated appropriately during software development. The approach has been applied to two real-world applications: FileZilla Server, a popular FTP server [19] written in C++, and a Grant Proposal Management System (GPMS) written in Java. Experimental results show that the generated security test cases are efficient: they revealed many security vulnerabilities during the development of GPMS and killed the majority of the FileZilla Server mutants with seeded vulnerabilities.
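The core idea of a Predicate/Transition net as a security test model can be sketched minimally: places hold tokens (predicates that currently hold), transitions model attack or mitigation steps that fire when their input predicates hold, and a firing sequence becomes a test sequence. This is a toy illustration of the formalism, not the thesis's actual tool or notation; the login/lockout scenario is hypothetical.

```python
class PrTNet:
    """Toy Predicate/Transition-net skeleton (token colors omitted for brevity)."""
    def __init__(self):
        self.marking = set()      # predicates that currently hold
        self.transitions = {}     # name -> (preset, postset)

    def add(self, name, preset, postset):
        self.transitions[name] = (frozenset(preset), frozenset(postset))

    def enabled(self):
        return [t for t, (pre, _) in self.transitions.items() if pre <= self.marking]

    def fire(self, name):
        pre, post = self.transitions[name]
        assert pre <= self.marking, f"{name} not enabled"
        self.marking = (self.marking - pre) | post

# Hypothetical misuse-case fragment: an adversary guesses passwords, and the
# mitigation (account lockout) blocks the follow-up step.
net = PrTNet()
net.marking = {"login_page"}
net.add("guess_password", {"login_page"}, {"failed_attempt"})
net.add("lockout", {"failed_attempt"}, {"account_locked"})

test_sequence = []
while net.enabled():
    t = net.enabled()[0]
    net.fire(t)
    test_sequence.append(t)
```

Each firing sequence through the net exercises one threat/mitigation path; a coverage criterion over transitions then determines how many such sequences become security test cases.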

    SIGL: Securing Software Installations Through Deep Graph Learning

    Many users implicitly assume that software can only be exploited after it is installed. However, recent supply-chain attacks demonstrate that application integrity must be ensured during installation itself. We introduce SIGL, a new tool for detecting malicious behavior during software installation. SIGL collects traces of system call activity, building a data provenance graph that it analyzes using a novel autoencoder architecture with a graph long short-term memory network (graph LSTM) for the encoder and a standard multilayer perceptron for the decoder. SIGL flags suspicious installations as well as the specific installation-time processes that are likely to be malicious. Using a test corpus of 625 malicious installers containing real-world malware, we demonstrate that SIGL has a detection accuracy of 96%, outperforming similar systems from industry and academia by up to 87% in precision and recall and 45% in accuracy. We also demonstrate that SIGL can pinpoint the processes most likely to have triggered malicious behavior, works on different audit platforms and operating systems, and is robust to training data contamination and adversarial attack. It can be used with application-specific models, even in the presence of new software versions, as well as application-agnostic meta-models that encompass a wide range of applications and installers. (To appear in the 30th USENIX Security Symposium, USENIX Security '21.)
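The overall pipeline shape (trace → provenance graph → per-process anomaly score → flagged processes) can be sketched as follows. To stay self-contained, the scoring function below is a crude stand-in for SIGL's graph-LSTM autoencoder, whose reconstruction error plays the same role; the events, paths, and threshold are all hypothetical.

```python
from collections import defaultdict

def build_provenance_graph(events):
    """events: (process, syscall, target) tuples observed during an installation."""
    graph = defaultdict(list)
    for proc, syscall, target in events:
        graph[proc].append((syscall, target))
    return graph

def anomaly_score(edges):
    # Stand-in for the autoencoder's reconstruction error: here we simply
    # count edges touching locations an installer rarely needs.
    suspicious = {"/etc/passwd", "/dev/mem"}
    return sum(1 for _, target in edges if target in suspicious)

def flag_processes(graph, threshold=1):
    """Return the installation-time processes whose score meets the threshold."""
    return sorted(p for p, e in graph.items() if anomaly_score(e) >= threshold)

events = [
    ("installer.exe", "write", "/opt/app/bin"),
    ("installer.exe", "exec", "helper.exe"),
    ("helper.exe", "read", "/etc/passwd"),   # hypothetical malicious edge
]
graph = build_provenance_graph(events)
print(flag_processes(graph))   # ['helper.exe']
```

The value of the learned model over a hand-written rule like this one is precisely that "normal" installation behavior is learned per application rather than enumerated.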

    ISTA: A tool for automated test code generation from high-level Petri nets

    Automated software testing has gained much attention because it is expected to improve testing productivity and reduce testing cost. Automated generation and execution of tests, however, are still very limited. This paper presents a tool, ISTA (Integration and System Test Automation), for automated test generation and execution using high-level Petri nets as finite state test models. ISTA has several unique features. It allows executable test code to be generated automatically from a MID (Model-Implementation Description) specification, which includes a high-level Petri net as the test model and a mapping from the Petri net elements to implementation constructs. The test code can be executed immediately against the system under test. ISTA supports a variety of test code languages, including Java, C/C++, C#, VB, and HTML/Selenium IDE (for web applications), as well as automated test generation for various coverage criteria of Petri nets. It is useful not only for function testing but also for security testing, using Petri nets as threat models, and has been applied to several industry-strength systems.

    Digital Forensic Acquisition of Virtual Private Servers Hosted in Cloud Providers that Use KVM as a Hypervisor

    Kernel-based Virtual Machine (KVM) is one of the most popular hypervisors used by cloud providers to offer virtual private servers (VPSs) to their customers. A VPS is a virtual machine (VM) hired and controlled by a customer but hosted in the cloud provider's infrastructure. Although the usage of VPSs is popular and will continue to grow, technical publications in the digital forensic field related to the acquisition of a VPS involved in a crime are rare. For this research, four VMs were created in a KVM virtualization node, simulating four independent VPSs running different operating systems and applications. The utilities virsh and tcpdump were used at the hypervisor level to collect digital data from the four VPSs: virsh was employed to take snapshots of the hard drives and to create images of the RAM content, while tcpdump was employed to capture the network traffic in real time. The results generated by these utilities were evaluated in terms of efficiency, integrity, and completeness. The analysis suggested both utilities were capable of collecting digital data from the VPSs in an efficient manner, respecting the integrity and completeness of the data acquired. Therefore, these tools can be used to acquire forensically sound digital evidence from a VPS hosted in a cloud provider's virtualization node that uses KVM as a hypervisor.
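The hypervisor-level acquisition described above can be sketched as the command lines an investigator would issue on the KVM node. The domain name `vps1`, interface `vnet0`, and `/evidence` paths are illustrative, not taken from the paper; only the `virsh` subcommands and `tcpdump` flags shown are standard.

```python
import shlex

def snapshot_cmd(domain):
    # virsh snapshot of the guest's disk state
    return ["virsh", "snapshot-create-as", domain, f"{domain}-forensic"]

def ram_dump_cmd(domain, outfile):
    # virsh dump of guest memory only (no disk state), suitable for RAM analysis
    return ["virsh", "dump", "--memory-only", domain, outfile]

def capture_cmd(interface, outfile):
    # live capture on the guest's virtual network interface
    return ["tcpdump", "-i", interface, "-w", outfile]

cmds = [snapshot_cmd("vps1"),
        ram_dump_cmd("vps1", "/evidence/vps1.mem"),
        capture_cmd("vnet0", "/evidence/vps1.pcap")]
print(" && ".join(shlex.join(c) for c in cmds))
```

Building the argument lists explicitly (rather than shelling out a formatted string) keeps the acquisition commands auditable, which matters when the chain of custody must be documented.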

    A SYSTEMATIC LITERATURE SURVEY: INTERNET OF THINGS

    IoT has given rise to a novel form of communication called machine-to-machine communication, which has a promising future in India. According to a WATconsult report, IoT will impact all businesses in India by 2020. The growth of IoT is driven by the adoption of the internet, smartphones, and social networks. Agriculture and healthcare are the major sectors where IoT can play a vital role in change and better quality of service. IoT faces many challenges in India, including lack of consumer awareness, poor high-speed data connectivity, and high manufacturing cost. The Internet of Things has the ability to transform real-world objects into smart objects. The main aim of this systematic study is to provide an overview of the Internet of Things, its applications, security, architecture, and vital technologies for future research in the area of the Internet of Things.

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols that allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations. Robustness has not previously been considered for dynamic RDC schemes in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness while supporting dynamic data updates and requiring only small, constant client storage. The main challenge is to reduce the client-server communication during updates in an adversarial setting. A security analysis for R-DPDP is provided. Single-server RDC schemes are useful to detect server misbehavior, but have no provisions to recover damaged data. Thus, in practice, they should be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, repair it by retrieving data from healthy servers, so that the reliability level can be maintained. Previously, RDC has been investigated for replication-based and erasure coding-based distributed storage systems, but not for network coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network coding-based distributed storage systems, ensuring that data remain intact in the face of data corruption, replay, and pollution attacks.
    Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers. The settings considered so far outsource the storage of the data, but the data owner is still heavily involved in data management, especially during the repair of damaged data. A new paradigm is therefore proposed, in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant computation and communication burden on the client, making it very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption while the client acts as a lightweight coordinator. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable server-side repair and minimize the load on the client. Version control systems (VCSs) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can address concerns about the untrusted VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure all data versions remain retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest decrease in performance compared to a regular (non-secure) SVN system.
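The basic RDC primitive, spot-checking outsourced blocks via a challenge-response audit, can be sketched with keyed tags. This is a minimal illustration of the idea only: the schemes above use more compact homomorphic tags, support dynamic updates, and add erasure/network coding, none of which this toy HMAC-per-block version captures.

```python
import hmac, hashlib, random

KEY = b"client-secret"   # kept by the client; the server never sees it

def tag(index, block):
    """Keyed integrity tag binding a block to its index."""
    return hmac.new(KEY, index.to_bytes(4, "big") + block, hashlib.sha256).digest()

# Setup: client tags each block, then outsources blocks + tags and keeps only KEY.
blocks = [f"block-{i}".encode() for i in range(8)]
server = {i: (blocks[i], tag(i, blocks[i])) for i in range(8)}

def audit(server, sample=4, seed=1):
    """Challenge a random subset of indices; verify each returned (block, tag) pair."""
    rng = random.Random(seed)
    for i in rng.sample(range(8), sample):
        block, t = server[i]
        if not hmac.compare_digest(t, tag(i, block)):
            return False
    return True

ok = audit(server)                      # honest server passes the audit
server[3] = (b"corrupted", server[3][1])
caught = not audit(server, sample=8)    # a full audit detects the corruption
```

Sampling only a subset per challenge is what keeps the audit cheap; corruption of a fixed fraction of blocks is then caught with high probability over repeated audits, which is the probabilistic guarantee RDC schemes formalize.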

    A Perception of the Practice of Software Security and Performance Verification

    Security and performance are critical nonfunctional requirements for software systems. Thus, it is crucial to include verification activities during software development to identify defects related to such requirements, avoiding their occurrence after release. Software verification, including testing and reviews, encompasses a set of activities whose purpose is to analyze the software in search of defects. Security and performance verification are the activities that target defects related to these specific quality attributes. Few empirical studies have focused on the state of the practice in security and performance verification. This paper presents the results of a case study performed in the context of Brazilian organizations, aiming to characterize security and performance verification practices. Additionally, it provides a set of conjectures indicating recommendations to improve security and performance verification activities.