
    Application to Security Testing

    In a world where software increasingly plays a key role in everyday activities, a failure can have unpleasant consequences for its users. One example of a serious failure was the 2014 Apple iCloud security exploit, in which several private photos of celebrities were accessed without permission [icl14a][icl14b]. Beyond the economic and commercial repercussions, such failures erode users' trust in the software, prompting a search for alternatives and possibly the outright abandonment of the old software. To address these shortcomings, the software industry increasingly relies on software testing to ensure that software contains as few failures as possible before deployment. Software tests analyse a program, in particular to find bugs. This analysis can be done without executing the program (static analysis) or during its execution (dynamic analysis). Static analysis tools check for potential executions of the program that could fail at runtime due to unexpected events, causing the program to produce an incorrect result or even crash. We studied several static analysis tools that analyse JavaScript code: JSFlow, JSPrime, and TAJS. These tools were modified so they could be integrated into the Nibiru framework. Nibiru is a modular framework that aims to help with the implementation of software testing. It uses a micro-services architecture, which allows its modules to be written in multiple programming languages and to run on multiple machines. So far Nibiru has three operational modules and is ready to grow with the community, which can contribute new modules or make small adjustments to existing testing software so that it integrates with the Nibiru framework.
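    The abstract does not spell out Nibiru's module interface, so the sketch below is only a guess at the general shape such a module could take: a static-analysis module exposed as a small HTTP micro-service that accepts JavaScript source and returns findings as JSON. The /analyze endpoint, the port, the payload format, and the run_analyzer stub are all hypothetical, not Nibiru's real API.

```python
# Hypothetical sketch of a Nibiru-style static-analysis micro-service.
# Nothing here is Nibiru's real API: the /analyze endpoint, port, and
# JSON shape are assumptions made for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_analyzer(source: str) -> list[str]:
    """Placeholder for invoking a real analyzer such as JSFlow or TAJS.
    Here we only flag a trivially suspicious pattern to keep the sketch runnable."""
    return [f"line {i}: use of eval" for i, line in enumerate(source.splitlines(), 1)
            if "eval(" in line]

class AnalyzeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/analyze":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        source = self.rfile.read(length).decode("utf-8")
        body = json.dumps({"findings": run_analyzer(source)}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Running each module as its own service is what lets modules live on
    # different machines and be written in different languages.
    HTTPServer(("", 8000), AnalyzeHandler).serve_forever()
```

    Because each module only speaks HTTP and JSON, a wrapper like this is all that is needed to bolt an existing analyzer onto such a framework, which matches the abstract's point about integrating third-party testing software with small adjustments.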

    Man-machine partial program analysis for malware detection

    With the meteoric rise in popularity of the Android platform, there is an urgent need to combat the accompanying proliferation of malware. Existing work addresses the area of consumer malware detection, but cannot detect novel, sophisticated, domain-specific malware that is targeted specifically at one aspect of an organization (e.g., ground operations of the US Military). Adversaries can exploit domain knowledge to camouflage malice within the legitimate behaviors of an app and behind a domain-specific trigger, rendering traditional approaches such as signature matching, machine learning, and dynamic monitoring ineffective. Manual code inspections are also inadequate, scaling poorly and introducing human error. Yet there is a dire need to detect this kind of malware before it causes catastrophic loss of life and property. This dissertation presents the Security Toolbox, our novel solution to this challenging new problem posed by DARPA's Automated Program Analysis for Cybersecurity (APAC) program. We employ a human-in-the-loop approach to amplify the natural intelligence of our analysts. Our automation detects interesting program behaviors and exposes them in an analysis Dashboard, allowing the analyst to brainstorm flaw hypotheses and ask new questions, which in turn can be answered by our automated analysis primitives. The Security Toolbox is built on top of Atlas, a novel program analysis platform made by EnSoft. Atlas uses a graph-based mathematical abstraction of software to produce a unified property multigraph, exposes a powerful API for writing analyzers using graph traversals, and provides both automated and interactive capabilities to facilitate program comprehension. The Security Toolbox is also powered by FlowMiner, a novel solution for mining fine-grained, compact data-flow summaries of Java libraries. FlowMiner allows the Security Toolbox to complete a scalable and accurate partial program analysis of an application without including all of the libraries that it uses (e.g., Android). This dissertation presents the Security Toolbox, Atlas, and FlowMiner. We provide empirical evidence of the effectiveness of the Security Toolbox for detecting novel, sophisticated, domain-specific Android malware, demonstrating that our approach outperforms other cutting-edge research tools and state-of-the-art commercial programs in both time and accuracy metrics. We also evaluate the effectiveness of Atlas as a program analysis platform and FlowMiner as a library summary tool.
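    Atlas's actual schema and API are not shown in the abstract. As a rough illustration of the underlying idea, treating a program as a property multigraph and phrasing analyzer questions as graph traversals, here is a minimal sketch using networkx; the nodes, attributes, and toy taint query are all invented for illustration.

```python
# Conceptual sketch of a graph-based analyzer in the spirit of Atlas:
# the program is a property multigraph and a "suspicious behavior" question
# becomes a graph traversal. Node names, attributes, and the taint query
# are invented; Atlas's real schema and API differ.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("getDeviceId()", kind="call", taint="source")   # reads an identifier
g.add_node("buildPayload()", kind="method")
g.add_node("socket.send()", kind="call", taint="sink")     # data leaves the device
g.add_edge("getDeviceId()", "buildPayload()", kind="dataflow")
g.add_edge("buildPayload()", "socket.send()", kind="dataflow")

def tainted_flows(graph):
    """Report dataflow paths from any taint source to any taint sink --
    the kind of 'interesting behavior' surfaced for a human analyst."""
    sources = [n for n, d in graph.nodes(data=True) if d.get("taint") == "source"]
    sinks = [n for n, d in graph.nodes(data=True) if d.get("taint") == "sink"]
    # Restrict the traversal to dataflow edges only.
    flow = nx.subgraph_view(
        graph, filter_edge=lambda u, v, k: graph[u][v][k].get("kind") == "dataflow")
    for s in sources:
        for t in sinks:
            yield from nx.all_simple_paths(flow, s, t)

for path in tainted_flows(g):
    print(" -> ".join(path))   # getDeviceId() -> buildPayload() -> socket.send()
```

    The human-in-the-loop aspect would sit on top of queries like this one: the machine enumerates candidate flows, and the analyst decides which of them supports a flaw hypothesis worth pursuing.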

    Evidence-enabled verification for the Linux kernel

    Formal verification of large software has been an elusive target, riddled with problems of low accuracy and high computational complexity. With growing dependence on software in embedded and cyber-physical systems, where vulnerabilities and malware can lead to disasters, efficient and accurate verification has become a crucial need. The verification should be rigorous, computationally efficient, and automated enough to keep the human effort within reasonable limits, but it does not have to be completely automated. The automation should actually enable and simplify human cross-checking, which is especially important when the stakes are high. Unfortunately, formal verification methods work mostly as automated black boxes with very little support for cross-checking. This thesis is about a different way to approach the software verification problem. It is about creating a powerful fusion of automation and human intelligence, incorporating algorithmic innovations that address the major challenges and advance the state of the art for accurate and scalable software verification, where complete automation has remained intractable. The key is a mathematically rigorous notion of verification-critical evidence that the machine abstracts from software to empower humans to reason with. The algorithmic innovation is to discover the patterns the developers have applied to manage complexity and to leverage them. A pattern-based verification is crucial because the problem is intractable otherwise. We call the overall approach Evidence-Enabled Verification (EEV). This thesis presents EEV with two challenging applications: (1) EEV for Lock/Unlock Pairing, to verify the correct pairing of mutex locks and spin locks with their corresponding unlocks on all feasible execution paths, and (2) EEV for Allocation/Deallocation Pairing, to verify the correct pairing of each memory allocation with its corresponding deallocations on all feasible execution paths. We applied the EEV approach to verify recent versions of the Linux kernel. The results include a comparison with the state-of-the-art Linux Driver Verification (LDV) tool, the effectiveness of the proposed visual models as verification-critical evidence, representative examples of verification, the discovered bugs, and the limitations of the proposed approach.
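    To make the verified property concrete, the toy sketch below checks lock/unlock pairing on a hand-made control-flow graph: it reports any entry-to-exit path on which a lock is acquired but never released. This illustrates only the property, not EEV's algorithms or evidence models, and it ignores path feasibility, which EEV must handle to avoid false positives.

```python
# Toy illustration of the lock/unlock pairing property EEV verifies:
# on every path from a lock acquisition to function exit there must be a
# matching release. The CFG below is hand-made; real kernel CFGs are
# extracted from source, and paths must also be checked for feasibility.
import networkx as nx

cfg = nx.DiGraph()
cfg.add_edges_from([
    ("entry", "lock"), ("lock", "if"),
    ("if", "work"), ("work", "unlock"), ("unlock", "exit"),
    ("if", "error"), ("error", "exit"),   # early return that skips the unlock
])
ops = {"lock": "lock", "unlock": "unlock"}  # node -> mutex operation

def unreleased_paths(cfg, ops, entry="entry", exit_="exit"):
    """Yield entry->exit paths on which a lock is still held at exit."""
    for path in nx.all_simple_paths(cfg, entry, exit_):
        held = 0
        for node in path:
            if ops.get(node) == "lock":
                held += 1
            elif ops.get(node) == "unlock":
                held -= 1
        if held > 0:
            yield path

for path in unreleased_paths(cfg, ops):
    print("lock not released on:", " -> ".join(path))
# prints: lock not released on: entry -> lock -> if -> error -> exit
```

    In the EEV setting, a path like the one reported would not be flagged silently; it would be presented as visual, verification-critical evidence for a human to confirm or refute.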

    HaskHOL: A Haskell Hosted Domain Specific Language for Higher-Order Logic Theorem Proving

    HaskHOL is an implementation of a HOL theorem-proving capability in Haskell. Motivated by a need to integrate theorem-proving capabilities into a Haskell-based tool suite, HaskHOL began as a simple port of HOL Light to Haskell. However, Haskell's laziness, immutable data, and monadic extensions both complicate an implementation and enable a new class of features. This thesis describes HaskHOL, its motivation, and its implementation. Its use to implement a primitive, interactive theorem prover is explored, and its performance is evaluated using a collection of intuitionistically valid problems.
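    HaskHOL's kernel is written in Haskell and is far richer than can be shown here, but the LCF architecture it inherits from HOL Light can be sketched in a few lines: a theorem type whose values can only be produced by a small set of trusted inference rules, so anything of that type is sound by construction. The Python below is a conceptual sketch of that idea only; the names and the single rule shown are invented, not HaskHOL's API.

```python
# Minimal sketch of the LCF kernel idea behind HOL provers such as HaskHOL:
# only trusted kernel rules may construct values of the theorem type.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Imp:
    antecedent: "Term"
    consequent: "Term"

Term = Union[Var, Imp]

class Thm:
    """A theorem; by convention only the kernel rules below construct one."""
    _key = object()
    def __init__(self, concl: Term, key):
        if key is not Thm._key:
            raise ValueError("theorems can only come from kernel rules")
        self.concl = concl

def axiom(t: Term) -> Thm:
    # Trusted entry point, standing in for HOL's axioms and basic rules.
    return Thm(t, Thm._key)

def mp(imp_thm: Thm, ant_thm: Thm) -> Thm:
    # Modus ponens: from |- p ==> q and |- p, derive |- q.
    c = imp_thm.concl
    if not (isinstance(c, Imp) and c.antecedent == ant_thm.concl):
        raise ValueError("mp: theorems do not match")
    return Thm(c.consequent, Thm._key)

p, q = Var("p"), Var("q")
th = mp(axiom(Imp(p, q)), axiom(p))
print(th.concl)   # Var(name='q')
```

    In Haskell the same guarantee falls out of the module system (exporting the theorem type abstractly), which is one reason the LCF style transfers so naturally, while laziness and monadic state are where the porting effort the abstract mentions comes in.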

    Mining complex trees for hidden fruit : a graph–based computational solution to detect latent criminal networks : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand.

    The detection of crime is a complex and difficult endeavour. Public and private organisations – focusing on law enforcement, intelligence, and compliance – commonly apply the rational isolated actor approach, premised on observability and materiality. This manifests largely as entity-level risk management, sourcing 'leads' reactively from covert human intelligence sources and/or proactively by applying simple rules-based models. Focusing on discrete, observable, and material actors simply ignores that criminal activity exists within a complex system deriving its fundamental structural fabric from the complex interactions between actors, with the least observable actors likely to be both the most criminally proficient and the most influential. The graph-based computational solution developed to detect latent criminal networks is a response to the inadequacy of the rational isolated actor approach, which ignores the connectedness and complexity of criminality. The core computational solution, written in the R language, consists of novel entity resolution, link discovery, and knowledge discovery technology. Entity resolution enables the fusion of multiple datasets with high accuracy (mean F-measure of 0.986 versus competitors' 0.872), generating a graph-based, expressive view of the problem. Link discovery comprises link prediction and link inference, enabling the high-performance detection (accuracy of ~0.8 versus ~0.45 for relevant published models) of unobserved relationships such as identity fraud. Knowledge discovery applies the "GraphExtract" algorithm to the fused graph to create a set of subgraphs representing latent functional criminal groups, and a mesoscopic graph representing how this set of criminal groups is interconnected. Latent knowledge is generated from a range of metrics, including the "Super-broker" metric and attitude prediction. The computational solution has been evaluated on a range of datasets that mimic an applied setting, demonstrating a scalable (tested on ~18-million-node graphs) and performant (~33 hours runtime on a non-distributed platform) solution that successfully detects relevant latent functional criminal groups in around 90% of the cases sampled and enables contextual understanding of the broader criminal system through the mesoscopic graph and associated metadata. The augmented data assets generated provide a multi-perspective systems view of criminal activity that enables advanced, informed decision making across the microscopic, mesoscopic, and macroscopic spectrum.
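    The thesis's entity resolution, link inference, and "GraphExtract" models cannot be reconstructed from the abstract. The sketch below shows only the generic shape of graph-based link prediction that such work builds on: score unconnected entity pairs by neighbourhood overlap and flag high-scoring pairs as candidate hidden relationships, such as two identities used by one person. The toy graph, the Jaccard score, and the threshold are illustrative assumptions, not the thesis's models.

```python
# Generic sketch of graph-based link discovery: score unconnected pairs of
# entities by how much structure they share, and flag high-scoring pairs as
# candidate hidden relationships (e.g., identity fraud). The toy graph and
# threshold are invented; the thesis's models are considerably richer.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("id_A", "phone_1"), ("id_A", "addr_1"), ("id_A", "acct_1"),
    ("id_B", "phone_1"), ("id_B", "addr_1"),        # shares phone and address
    ("id_C", "addr_2"),
])

# Candidate pairs: identity nodes with no observed edge between them.
candidates = [(u, v) for u in g for v in g
              if u < v and not g.has_edge(u, v)
              and u.startswith("id_") and v.startswith("id_")]

# Jaccard similarity of neighbourhoods as a simple link-prediction score.
for u, v, score in nx.jaccard_coefficient(g, candidates):
    if score >= 0.5:
        print(f"possible hidden link: {u} ~ {v} (score {score:.2f})")
# prints: possible hidden link: id_A ~ id_B (score 0.67)
```

    On a fused multi-source graph, predicted links like this one would then feed the group-extraction and mesoscopic views described above, rather than being treated as findings in themselves.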