
    FuzzTheREST - Intelligent Automated Blackbox RESTful API Fuzzer

    In recent years, the pervasive influence of technology has become deeply intertwined with human life, impacting diverse fields. This relationship has evolved into a dependency, with software systems playing a pivotal role and therefore demanding a high level of trust. Today, a substantial portion of software is accessed through Application Programming Interfaces, particularly web APIs, which predominantly adhere to the Representational State Transfer architecture. This architectural choice, however, exposes a wide range of potential vulnerabilities at the network level. The significance of software testing becomes evident when considering the widespread use of software in daily tasks that affect personal safety and security, making the identification and assessment of faulty software of paramount importance. In this thesis, FuzzTheREST, a black-box RESTful API fuzz testing framework, is introduced with the primary aim of addressing the challenges of understanding the context of each system under test and conducting comprehensive automated testing with diverse inputs. Operating from a black-box perspective, the fuzzer leverages Reinforcement Learning to efficiently uncover vulnerabilities in RESTful APIs by optimizing input values and combinations, relying on mutation methods for input exploration. The system's value is further enhanced by providing the user with a thoroughly documented vulnerability discovery process. The proposal stands out for its emphasis on explainability and for applying RL to learn the context of each API, eliminating the need for source code knowledge and expediting the testing process. The developed solution adheres to software engineering best practices and incorporates a novel Reinforcement Learning algorithm comprising a customized environment for API fuzz testing and a multi-table Q-Learning agent. The quality and applicability of the tool are assessed on two case studies, involving the Petstore API and an Emotion Detection module that was part of the CyberFactory#1 European research project. The results demonstrate the tool's effectiveness in discovering vulnerabilities, having found 7 distinct vulnerabilities, and the agents' ability to learn different API contexts from API responses while maintaining reasonable code coverage levels.
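
    A minimal sketch of the multi-table Q-Learning idea described above: one Q-table per API parameter, mutation methods as actions, and the HTTP status class of the response as the reward signal, so a fuzzing loop would call choose() per parameter, send the mutated request, and feed the outcome back through update(). The mutation operators, reward values, and status-code mapping are illustrative assumptions, not FuzzTheREST's actual implementation.

    import random
    from collections import defaultdict

    # Hypothetical mutation operators used as the agent's action space
    MUTATIONS = ["flip_char", "insert_special", "extreme_number", "empty_value"]

    class MultiTableQAgent:
        """One Q-table per API parameter; actions are mutation methods."""

        def __init__(self, parameters, alpha=0.1, gamma=0.9, epsilon=0.2):
            self.parameters = parameters
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            # q[param][state][action] -> learned value of mutating `param` with `action`
            self.q = {p: defaultdict(lambda: {m: 0.0 for m in MUTATIONS})
                      for p in parameters}

        def choose(self, param, state):
            if random.random() < self.epsilon:           # explore
                return random.choice(MUTATIONS)
            table = self.q[param][state]                  # exploit best known mutation
            return max(table, key=table.get)

        def update(self, param, state, action, reward, next_state):
            table = self.q[param][state]
            best_next = max(self.q[param][next_state].values())
            table[action] += self.alpha * (reward + self.gamma * best_next - table[action])

    def reward_from_status(status_code):
        """Hypothetical reward shaping: 5xx responses (likely faults) pay the most."""
        if status_code >= 500:
            return 10.0
        if status_code == 400:
            return -1.0   # input rejected, little new information gained
        return 1.0        # input accepted, a new API state was reached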

    Enabling self organisation for future cellular networks.

    The rapid growth in mobile communications driven by the exponential demand for wireless access is causing the distribution and maintenance of cellular networks to become more complex, expensive and time consuming. Lately, extensive research and standardisation work has focused on the novel paradigm of the self-organising network (SON). SON is an automated technology that makes the planning, deployment, operation, optimisation and healing of the network faster and easier by reducing human involvement in network operational tasks, while optimising network coverage, capacity and quality of service. However, these autonomous SON features cannot be achieved with the current drive-test coverage assessment approach because of its lack of automation, which results in large delays and costs. Minimization of Drive Tests (MDT) has recently been standardised by 3GPP as a key SON feature. MDT allows coverage to be estimated at the base station from user equipment (UE) measurement reports, with the objective of eliminating the need for drive tests. However, most MDT-based coverage estimation methods recently proposed in the literature assume that the UE position is known at the base station with 100% accuracy, an assumption that does not hold in reality. In this work, we develop a novel and accurate analytical model that quantifies the error in MDT-based autonomous coverage estimation (ACE) as a function of the error in UE as well as base station (user-deployed cell) positioning. We first consider a circular cell with an omnidirectional antenna and then a three-sectored cell, and examine how the system is affected by errors in the geographical location information of the UE and the base station (user-deployed cell). Our model also characterises the error in ACE as a function of the standard deviation of shadowing, in addition to the path loss.
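
    As a rough illustration of the effect being modelled, the Monte Carlo sketch below estimates per-ring coverage of a circular omnidirectional cell from measurements tagged with Gaussian-perturbed UE positions, under log-normal shadowing and a log-distance path-loss model. All parameter values are illustrative assumptions; the thesis derives this error analytically rather than by simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumptions, not the thesis values)
    CELL_RADIUS = 500.0      # m, circular cell with omnidirectional antenna
    TX_POWER = 43.0          # dBm
    PATHLOSS_EXP = 3.5       # path-loss exponent
    SHADOW_SIGMA = 8.0       # dB, log-normal shadowing standard deviation
    POS_ERROR_SIGMA = 50.0   # m, standard deviation of UE positioning error
    RX_THRESHOLD = -100.0    # dBm, coverage threshold
    N_UE = 100_000

    # Drop UEs uniformly over the circular cell
    r = CELL_RADIUS * np.sqrt(rng.random(N_UE))
    theta = 2.0 * np.pi * rng.random(N_UE)
    x, y = r * np.cos(theta), r * np.sin(theta)

    # Received power under a log-distance path-loss model with log-normal shadowing
    d_true = np.hypot(x, y).clip(min=1.0)
    rx = TX_POWER - (10.0 * PATHLOSS_EXP * np.log10(d_true)
                     + rng.normal(0.0, SHADOW_SIGMA, N_UE))
    covered = rx > RX_THRESHOLD

    # MDT reports tag each measurement with an erroneous UE position
    d_rep = np.hypot(x + rng.normal(0.0, POS_ERROR_SIGMA, N_UE),
                     y + rng.normal(0.0, POS_ERROR_SIGMA, N_UE))

    # Coverage probability per radial ring, from true vs reported positions
    rings = np.linspace(0.0, CELL_RADIUS, 11)
    true_map = [covered[(d_true >= lo) & (d_true < hi)].mean()
                for lo, hi in zip(rings[:-1], rings[1:])]
    est_map = [covered[(d_rep >= lo) & (d_rep < hi)].mean()
               for lo, hi in zip(rings[:-1], rings[1:])]
    print("mean absolute coverage-map error:",
          np.mean(np.abs(np.array(true_map) - np.array(est_map))))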

    Optimising non-destructive examination of newbuilding ship hull structures by developing a data-centric risk and reliability framework based on fracture mechanics

    This thesis was previously held under moratorium from 18/11/19 to 18/11/21. Ship structures are made of steel members that are joined with welds. Welded connections may contain various imperfections, which are inherent to this joining technology. Design rules and standards are based on the assumption that welds are made to a good workmanship level. Hence, a ship is inspected during construction to make sure it is reasonably defect-free. However, since 100% inspection coverage is not feasible, only partial inspection has been required by classification societies, which have developed rules, standards, and guidelines specifying the extent to which inspection should be performed. In this research, a review of rules and standards from classification bodies showed some limitations in current practices. One key limitation is that the rules favour a “one-size-fits-all” approach. In addition, a significant discrepancy exists between the rules of different classification societies. In this thesis, an innovative framework is proposed, which combines a risk and reliability approach with a statistical sampling scheme to achieve targeted and cost-effective inspections. The developed reliability model predicts the failure probability of the structure based on probabilistic fracture mechanics. Various uncertain variables influencing the predictive reliability model are identified, and their effects are considered. The data for two key variables, namely defect statistics and material toughness, are gathered and analysed using appropriate statistical methods. A reliability code based on the Convolution Integral (CI) is developed, which estimates the predictive reliability using the analysed data. Statistical sampling principles are then used to specify the number of NDT checkpoints required to achieve a given statistical confidence about the reliability of the structure and the limits set by statistical process control (SPC). The framework allows the predictive reliability estimate to be updated with inspection findings using a Bayesian updating method. The applicability of the framework is demonstrated on a case study structure.
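
    A minimal Monte Carlo sketch of the underlying fracture-mechanics idea: a welded joint fails when the applied stress intensity factor exceeds the material's fracture toughness, with defect size and toughness treated as random variables. The distributions and parameter values are illustrative assumptions, and the thesis evaluates this reliability with a Convolution Integral code rather than by simulation.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 1_000_000

    # Illustrative random variables (not calibrated to the thesis data)
    defect_mm = rng.lognormal(mean=np.log(3.0), sigma=0.8, size=N)   # weld defect depth, mm
    k_ic = 40.0 + 60.0 * rng.weibull(2.0, size=N)                    # fracture toughness, MPa*sqrt(m)
    stress = rng.normal(200.0, 40.0, size=N)                         # applied stress, MPa

    # Linear-elastic fracture mechanics: K_applied = Y * stress * sqrt(pi * a)
    Y = 1.12                                                         # geometry factor, surface crack
    a_m = defect_mm / 1000.0                                         # defect depth in metres
    k_applied = Y * stress * np.sqrt(np.pi * a_m)

    pof = np.mean(k_applied > k_ic)                                  # probability of failure
    print(f"estimated probability of failure: {pof:.2e}")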

    A Comprehensive Empirical Investigation on Failure Clustering in Parallel Debugging

    Clustering has attracted a lot of attention as a promising strategy for parallel debugging in multi-fault scenarios. This heuristic approach (i.e., failure indexing or fault isolation) enables developers to perform multiple debugging tasks simultaneously by dividing failed test cases into several disjoint groups. When statement ranking representation is used to model failures for better clustering, several factors influence clustering effectiveness, including the risk evaluation formula (REF), the number of faults (NOF), the fault type (FT), and the number of successful test cases paired with one individual failed test case (NSP1F). In this paper, we present the first comprehensive empirical study of how these four factors influence clustering effectiveness. We conduct extensive controlled experiments on 1060 faulty versions of 228 simulated faults and 141 real faults, and the results reveal that: 1) GP19 is highly competitive across all REFs, 2) clustering effectiveness decreases as NOF increases, 3) higher clustering effectiveness is easier to achieve when a program contains only predicate faults, and 4) clustering effectiveness is maintained even when the scale of NSP1F is reduced to 20%.
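
    A minimal sketch of the failure-indexing pipeline studied here: each failed test case is represented by a statement suspiciousness ranking computed with a risk evaluation formula, and the ranking vectors are clustered so that failures in the same group are likely to share a root cause. Ochiai is used below as a stand-in REF and the coverage data are synthetic; the paper's experiments use formulas such as GP19, with the number of clusters corresponding to the suspected NOF.

    import numpy as np
    from sklearn.cluster import KMeans

    def suspiciousness_ranking(failed_cov, passed_cov):
        """Statement ranking for one failed test paired with its passing tests.

        failed_cov : (n_stmts,) 0/1 coverage vector of the failed test
        passed_cov : (n_pass, n_stmts) 0/1 coverage matrix of the paired passing tests
        """
        ef = failed_cov.astype(float)               # 1 if the failed test covers the statement
        nf = 1.0 - ef                               # 1 if it does not
        ep = passed_cov.sum(axis=0).astype(float)   # passing tests covering the statement
        denom = np.sqrt((ef + nf) * (ef + ep))      # Ochiai, used here as a stand-in REF
        score = np.divide(ef, denom, out=np.zeros_like(ef), where=denom > 0)
        return score.argsort().argsort().astype(float)   # ranks make failures comparable

    def cluster_failures(failed_covs, passed_cov, n_clusters):
        """Group failed tests whose rankings are similar (proxy for sharing a fault)."""
        vectors = np.array([suspiciousness_ranking(f, passed_cov) for f in failed_covs])
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)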

    Models, Techniques, and Metrics for Managing Risk in Software Engineering

    The field of Software Engineering (SE) is the study of systematic and quantifiable approaches to software development, operation, and maintenance. This thesis presents a set of scalable and easily implemented techniques for quantifying and mitigating risks associated with the SE process. The thesis comprises six papers corresponding to SE knowledge areas such as software requirements, testing, and management. The techniques for risk management are drawn from stochastic modeling and operational research. The first two papers relate to software testing and maintenance. The first paper describes and validates a novel iterative-unfolding technique for filtering a set of execution traces relevant to a specific task. The second paper analyzes and validates the applicability of several entropy measures to the trace classification described in the first paper. The techniques in these two papers can speed up problem determination for defects encountered by customers, leading to faster organizational response, increased customer satisfaction, and eased resource constraints. The third and fourth papers are applicable to maintenance, overall software quality, and SE management. The third paper uses Extreme Value Theory and Queuing Theory tools to derive and validate metrics based on defect rediscovery data. The metrics can aid the allocation of resources to service and maintenance teams, highlight gaps in quality assurance processes, and help assess the risk of using a given software product. The fourth paper characterizes and validates a technique for automatic selection and prioritization of a minimal set of customers for profiling. The minimal set is obtained using Binary Integer Programming and prioritized using a greedy heuristic. Profiling the resulting customer set leads to better comprehension of user behaviour, which in turn yields improved test specifications and clearer quality assurance policies, reducing the risks associated with unsatisfactory product quality. The fifth and sixth papers pertain to software requirements. The fifth paper models the relation between requirements and their underlying assumptions and measures the risk associated with failure of those assumptions using Boolean networks and stochastic modeling. The sixth paper models the risk associated with injecting requirements late in the development cycle, with the help of stochastic processes.
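
    For the customer-selection idea in the fourth paper, the sketch below greedily picks customers whose combined usage covers a target set of product features; the greedy set-cover approximation stands in for the paper's Binary Integer Programming formulation, and the customer and feature names are hypothetical.

    from typing import Dict, List, Set

    def select_customers(usage: Dict[str, Set[str]], features: Set[str]) -> List[str]:
        """Greedily pick customers until their combined usage covers all target features.

        usage    : customer name -> set of product features that customer exercises
        features : features whose usage we want represented in the profiling set
        """
        uncovered = set(features)
        selected: List[str] = []
        while uncovered:
            # Pick the customer covering the most still-uncovered features
            best = max(usage, key=lambda c: len(usage[c] & uncovered))
            gain = usage[best] & uncovered
            if not gain:                  # remaining features are not used by anyone
                break
            selected.append(best)
            uncovered -= gain
        return selected

    # Hypothetical example
    usage = {
        "acme": {"search", "export", "sso"},
        "globex": {"export", "reports"},
        "initech": {"sso"},
    }
    print(select_customers(usage, {"search", "export", "reports", "sso"}))
    # -> ['acme', 'globex']  (covers all four features with two customers)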