610 research outputs found
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which lies at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, `programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
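To make the contrast between the two paradigms concrete, here is a minimal sketch, not drawn from the report itself; the event fields, rules, and threshold values are hypothetical illustrations of signature-based screening, profile-based anomaly detection, and the hybrid combination the report favours.

```python
from collections import Counter

# Signature-based screening: encode known misuses as rules. Anything the
# rules do not cover slips through (false negatives).
KNOWN_MISUSE_RULES = [
    lambda e: e["type"] == "login_failure" and e.get("count", 0) > 5,   # brute force
    lambda e: e["type"] == "call_forward" and e.get("dest") == "premium",  # toll fraud
]

def signature_detect(event):
    return any(rule(event) for rule in KNOWN_MISUSE_RULES)

class AnomalyDetector:
    """Profile-based detection: learn the frequency of normal event types and
    flag rare ones, so innocent novelty is also flagged (false positives)."""
    def __init__(self):
        self.profile = Counter()
        self.total = 0

    def train(self, events):
        for e in events:
            self.profile[e["type"]] += 1
            self.total += 1

    def is_anomalous(self, event, min_freq=0.01):
        freq = self.profile[event["type"]] / max(self.total, 1)
        return freq < min_freq

# A hybrid, as the report recommends: known misuses are caught by the rules,
# unknown ones surface as anomalies.
def hybrid_detect(event, anomaly_detector):
    return signature_detect(event) or anomaly_detector.is_anomalous(event)
```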
Genetic Improvement of Software: a Comprehensive Survey
Genetic improvement (GI) uses automated search to find improved versions of existing software. We present a comprehensive survey of this nascent field of research with a focus on the core papers in the area published between 1995 and 2015. We identified core publications including empirical studies, 96% of which use evolutionary algorithms (genetic programming in particular). Although we can trace the foundations of GI back to the origins of computer science itself, our analysis reveals a significant upsurge in activity since 2012. GI has resulted in dramatic performance improvements for a diverse set of properties such as execution time, energy and memory consumption, as well as results for fixing and extending existing system functionality. Moreover, we present examples of research work that lies on the boundary between GI and other areas, such as program transformation, approximate computing, and software repair, with the intention of encouraging further exchange of ideas between researchers in these fields.
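As a rough illustration of the GI loop the survey describes, here is a minimal sketch under stated assumptions: candidate programs are represented as lists of source lines, fitness combines tests passed with a crude size penalty standing in for a non-functional cost, and the edit set is limited to statement deletion, duplication, and swapping. None of this is taken from any particular surveyed system.

```python
import random

def evaluate(source, tests):
    """Fitness of a variant: tests passed minus a small penalty per source
    line (a crude stand-in for a cost such as execution time)."""
    env = {}
    try:
        exec(source, env)
        f = env["target"]
    except Exception:
        return float("-inf")  # variant does not even compile
    passed = 0
    for inp, expected in tests:
        try:
            if f(inp) == expected:
                passed += 1
        except Exception:
            pass  # runtime error on this test: no credit
    return passed - 0.01 * source.count("\n")

def mutate(lines):
    """One random statement-level edit; the `def` header is never touched."""
    lines = list(lines)
    i = random.randrange(1, len(lines))
    op = random.choice(["delete", "copy", "swap"])
    if op == "delete" and len(lines) > 2:
        del lines[i]
    elif op == "copy":
        lines.insert(i, lines[i])
    else:
        j = random.randrange(1, len(lines))
        lines[i], lines[j] = lines[j], lines[i]
    return lines

def improve(seed_lines, tests, generations=50, pop=20):
    population = [seed_lines] + [mutate(seed_lines) for _ in range(pop - 1)]
    for _ in range(generations):
        population.sort(key=lambda v: evaluate("\n".join(v), tests), reverse=True)
        survivors = population[: pop // 2]       # truncation selection
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=lambda v: evaluate("\n".join(v), tests))

# A redundant seed the search can shrink while keeping all tests green.
seed = [
    "def target(x):",
    "    y = x * 2",
    "    y = y + 0  # redundant: a profitable deletion",
    "    return y",
]
best = improve(seed, tests=[(1, 2), (3, 6), (10, 20)])
print("\n".join(best))
```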
Search-based Unit Test Generation for Evolving Software
Search-based software testing has been successfully applied to generate unit test cases for object-oriented software. Typically, in search-based test generation approaches, evolutionary search algorithms are guided by code coverage criteria, such as branch coverage, to generate tests for individual coverage objectives. Although this approach has been shown to be effective, fundamental open questions remain. In particular, which criteria should test generation use in order to produce the best test suites? Which evolutionary algorithms are more effective at generating test cases with high coverage? How can search-based unit test generation be scaled up to software projects consisting of large numbers of components that evolve and change frequently over time? As a result of these open questions, the applicability of search-based test generation techniques in practice is still fundamentally limited. In order to answer them, we investigate the following improvements to search-based testing. First, we propose the simultaneous optimisation of several coverage criteria using an evolutionary algorithm, rather than optimising for individual criteria. We then perform an empirical evaluation of different evolutionary algorithms to understand the influence of each one on the test optimisation problem. We then extend coverage-based test generation with a non-functional criterion to increase the likelihood of detecting faults, as well as to help developers identify the locations of the faults. Finally, we propose several strategies and tools to efficiently apply search-based test generation techniques in large and evolving software projects. Our results show that, overall, the optimisation of several coverage criteria is efficient; there is indeed an evolutionary algorithm that clearly works better for the test generation problem than the others; the extended coverage-based test generation is effective at revealing and localising faults; and our proposed strategies, specifically designed to test entire software projects in a continuous way, improve efficiency and lead to higher code coverage. Consequently, the techniques and toolset presented in this thesis, the latter providing support for all the contributions described here, bring search-based software testing one step closer to practical usage by equipping software engineers with the state of the art in automated test generation.
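The first contribution, simultaneous optimisation of several coverage criteria, can be sketched as follows. This is a minimal illustration, not the thesis's tooling: each candidate test is modelled as a set of covered goals per criterion, the goal counts are hypothetical, and a simple (1+1) EA stands in for the real evolutionary algorithm.

```python
import random

# Each candidate test is a dict: criterion -> set of goal ids it covers.
# A real tool would obtain these sets by instrumenting the code under test.
TOTAL_GOALS = {"branch": 40, "line": 120, "method": 10}  # hypothetical counts

def fitness(suite):
    """Sum of coverage ratios across all criteria: the whole suite is the
    individual being optimised, not one test per coverage objective."""
    score = 0.0
    for criterion, n in TOTAL_GOALS.items():
        covered = set().union(*(t[criterion] for t in suite)) if suite else set()
        score += len(covered) / n
    return score

def mutate(suite, pool):
    """Insert or remove a test: suite size itself is subject to search."""
    suite = list(suite)
    if len(suite) > 1 and random.random() < 0.5:
        suite.pop(random.randrange(len(suite)))
    else:
        suite.append(random.choice(pool))
    return suite

def search(pool, iterations=500):
    """(1+1) EA: keep the mutant whenever it is at least as fit."""
    best = [random.choice(pool)]
    for _ in range(iterations):
        candidate = mutate(best, pool)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

# Hypothetical pool of pre-generated tests with random coverage footprints.
pool = [{c: {random.randrange(n) for _ in range(5)} for c, n in TOTAL_GOALS.items()}
        for _ in range(50)]
print(f"combined coverage score: {fitness(search(pool)):.2f} / {len(TOTAL_GOALS)}")
```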
Fundamental Approaches to Software Engineering
computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification
Towards Automated Performance Analysis of Programs by Runtime Verification
This thesis makes a contribution to the field of Runtime Verification, a formal method for the analysis of computational systems. The contribution is made in multiple parts. First, a new language is introduced for the specification of properties at the source code level of programs; these properties typically concern program performance. Second, automatic monitoring and instrumentation techniques are introduced for the specification language. Third, an approach for explaining violations of these properties by program runs is introduced. Finally, the resulting body of theoretical work is implemented in an extensive ecosystem of tools for program analysis. This ecosystem is described in detail, along with its application to a real-world system at CERN. The work presented in this thesis diverges from past work in the Runtime Verification community. Instead of focusing on maximising the expressiveness of the specification formalism and solving the resulting monitoring and instrumentation problems, it focuses on introducing a language in which properties that often need to be checked over real-world programs can easily be expressed. In the direction of instrumentation, the source-code level of abstraction of our specification language allows an approach to instrumentation that diverges from much previous work. Many previous approaches have treated instrumentation as a problem separate from specification, usually providing a language in which one can describe how instrumentation should be performed. With our specification language, instrumentation can be performed automatically with respect to a specification. Further, an area that has received little attention in the Runtime Verification community is the analysis of the verdicts that result from monitoring programs with respect to specifications. The contributions to this area described in this thesis take the form of tools in the ecosystem. These tools enable detailed exploration of monitoring information, and mark a step towards the automated generation of explanations of verdicts. Following the description of this extensive set of tools, the thesis concludes with an in-depth discussion of their application to perform significant analyses of software used at CERN. Ultimately, the work described, including the theoretical foundations and implementations, forms the beginnings of a program analysis project whose aim, through continued development at CERN, is to enable detailed analysis of the performance of programs by software engineers with minimal effort.
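As a rough illustration of the kind of source-level performance property involved, here is a minimal sketch. The decorator-based syntax, the function names, and the 50 ms bound are invented for illustration; they are not the thesis's specification language.

```python
# Monitor the property "every call to `process` completes within 50 ms" and
# record violating runs for later exploration and explanation.
import functools
import time

violations = []  # verdict log, available for post-run analysis

def time_bound(ms):
    """Instrument a function so each call is checked against a time bound."""
    def decorate(fn):
        @functools.wraps(fn)
        def monitored(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = (time.perf_counter() - start) * 1000
            if elapsed > ms:
                violations.append((fn.__name__, args, elapsed))  # verdict: violation
            return result
        return monitored
    return decorate

@time_bound(ms=50)
def process(batch):
    time.sleep(0.001 * len(batch))  # stand-in for real work
    return len(batch)

process(list(range(10)))   # ~10 ms: satisfies the bound
process(list(range(100)))  # ~100 ms: recorded as a violation
print(violations)
```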
A distributed framework for the control and cooperation of heterogeneous mobile robots in smart factories.
Doctoral Degree. University of KwaZulu-Natal, Durban.
The present consumer market is driven by the mass customisation of products. Manufacturers are now challenged with the problem of not being able to capture market share and gain higher profits by producing large volumes of the same product for a mass market. Some businesses have implemented mass customisation manufacturing (MCM) techniques as a solution to this problem, whereby customised products are produced rapidly while costs are kept at a mass-production level. In addition, the arrival of the fourth industrial revolution (Industry 4.0) makes it possible to establish the decentralised intelligence of embedded devices to detect and respond to real-time variations in the MCM factory.
One of the key pillars of the Industry 4.0 smart factory concept is Advanced Robotics. This includes cooperation and control within networks of multiple heterogeneous robots, which increases flexibility in the smart factory and enables systems to be rapidly reconfigured to adapt to variations in consumer product demand. A further benefit of these systems is the reduction of production bottleneck conditions, where robot services must be coordinated efficiently so that high levels of productivity are maintained.
This study focuses on the research, design and development of a distributed framework that would aid researchers in implementing algorithms for controlling the task goals of heterogeneous mobile robots, to achieve robot cooperation and reduce bottlenecks in a production environment (a minimal sketch of one such cooperation strategy follows the keyword list below). The framework can be used as a toolkit by the end user for developing advanced algorithms that can be simulated before being deployed in an actual system, thereby accelerating the prototyping of the system integration process.
Keywords: Cooperation, heterogeneity, multiple mobile robots, Industry 4.0, smart factory, manufacturing, middleware, ROS, OPC, framework
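As referenced above, here is a minimal sketch of one cooperation strategy such a framework could host: auction-based task allocation across heterogeneous robots. The robot capabilities, costs, and task names are hypothetical, and a real deployment would exchange these messages over middleware such as ROS or OPC rather than through direct calls.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    capabilities: set
    busy_until: float = 0.0

    def bid(self, task):
        """Return a cost bid, or None if the robot cannot perform the task."""
        if task.kind not in self.capabilities:
            return None
        return self.busy_until + task.duration  # earliest completion time

@dataclass
class Task:
    kind: str
    duration: float

def allocate(tasks, robots):
    """Greedy auction: each task goes to the cheapest capable robot, which
    spreads load and reduces bottlenecks at busy stations."""
    plan = []
    for task in tasks:
        bids = [(r.bid(task), r) for r in robots if r.bid(task) is not None]
        if not bids:
            continue  # no capable robot; a real system would queue or replan
        cost, winner = min(bids, key=lambda b: b[0])
        winner.busy_until = cost
        plan.append((task.kind, winner.name, cost))
    return plan

robots = [Robot("agv1", {"transport"}), Robot("arm1", {"pick", "place"}),
          Robot("agv2", {"transport", "pick"})]
tasks = [Task("transport", 5), Task("pick", 2), Task("transport", 5)]
print(allocate(tasks, robots))  # the second transport goes to the idle agv2
```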
The interlocutory tool box: techniques for curtailing coincidental correctness
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Eliminating faults in software systems is important, because they can have catastrophic consequences. This can be achieved by testing and debugging. Testing involves executing the system with a test case to obtain an output. The output is evaluated against the tester’s expectations; deviation from these expectations indicates that a fault has been detected. Debugging involves using information about the fault, gleaned during testing, to isolate the fault in the system. Coincidental correctness is a widespread phenomenon in which a fault corrupts a program state and yet, despite this, the system produces an output that satisfies the tester’s expectations. Coincidental correctness can compromise the effectiveness of testing and debugging techniques.
This thesis investigated methods for alleviating coincidental correctness in testing and debugging. The investigation culminated in four techniques. The first technique is called Interlocutory Testing. Interlocutory Testing is a framework for the development of test oracles that are referred to as Interlocutory Relations. Interlocutory Relations are the first type of oracle that has been specifically designed to operate effectively in the presence of coincidental correctness. Metamorphic Testing was pioneered for testing non-testable systems. However, the effectiveness of this technique can be compromised by coincidental correctness. The second technique, Interlocutory Metamorphic Testing, is a version of Metamorphic Testing that has been integrated with Interlocutory Testing, to alleviate the impact of coincidental correctness on Metamorphic Testing. Interlocutory Mutation Testing is the third technique. This technique uses similar principles to Interlocutory Testing to alleviate the Equivalent Mutant Problem in the presence of coincidental correctness and non-determinism. Finally, the fourth technique is Interlocutory Spectrum-based Fault Localisation. This technique uses Interlocutory Relations to ameliorate the effects of coincidental correctness on fault localisation.
Each technique was empirically evaluated. The results were promising, and indicated that these techniques were capable of mitigating the impact of coincidental correctness.
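To illustrate the underlying problem, here is a minimal sketch of how coincidental correctness can defeat an output-only metamorphic relation, and how a state-level check in the spirit of (though far simpler than) the thesis's Interlocutory Relations exposes the fault. The function, the bug, and both checks are invented for illustration.

```python
def area(width, height):
    w = max(width, 0)  # BUG: silently clamps negatives; corrupts program state
    return w * height

def scaling_relation_holds(w, h, k=2):
    """Metamorphic relation (output only): scaling the width by k should
    scale the area by k."""
    return area(k * w, h) == k * area(w, h)

def width_preserved(width, height):
    """State-level check (simplified interlocutory idea): assert a property
    of the intermediate state, not just the output."""
    w = max(width, 0)  # mirrors the internal computation under scrutiny
    return w == width

# Coincidental correctness: the corrupted state (clamped width) is invisible
# to the output-only relation, because 0 scaled by k is still 0.
assert scaling_relation_holds(-3, 4)  # passes despite the fault
assert not width_preserved(-3, 4)     # the state-level check exposes it
print("output-only relation passed; state-level check caught the fault")
```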