Capture-based Automated Test Input Generation
Testing object-oriented software is critical because object-oriented languages are commonly used in developing modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. Therefore, one important and yet challenging problem is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise new parts of the code. However, generating desirable objects has been a significant challenge for automated test input generation tools, partly because the search space for such objects is huge.
To address this significant challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this proposed research are the following.
First, CAPTIG enhances method-sequence generation techniques. Our approach introduces a set of new algorithms for guided input and method selection that increase code coverage. In addition, CAPTIG efficiently reduces the amount of generated input.
Second, CAPTIG captures objects dynamically from program execution during either system testing or real use. These captured inputs can seed existing automated test input generation tools, such as the random testing tool Randoop, helping them achieve higher code coverage.
Third, CAPTIG statically analyzes observed branches that have not been covered and attempts to exercise them by mutating existing inputs, based on weakest precondition analysis. This technique also contributes to achieving higher code coverage.
Fourth, CAPTIG can reproduce software crashes from crash stack traces. This feature can considerably reduce the cost of analyzing and removing the causes of crashes.
In addition, each CAPTIG technique can be independently applied to leverage existing testing techniques. We anticipate that our approach can achieve higher code coverage in less time and with a smaller amount of test input. To evaluate this new approach, we performed experiments with well-known large-scale open-source software and found that our approach helps achieve higher code coverage with less time and fewer test inputs.
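The capture step described above can be illustrated with a small sketch. This is not the CAPTIG implementation (which targets Java and tools such as Randoop); it is a minimal Python illustration of the underlying idea, with hypothetical names, showing how argument objects observed during normal execution can be pooled and later drawn on as test inputs:

```python
import random

class ObjectCapturer:
    """Record concrete argument objects observed during normal program
    execution, then reuse them as seed inputs for randomized testing.
    A toy illustration of the capture idea; all names are hypothetical."""

    def __init__(self):
        self.pool = {}  # maps a type to the list of captured instances

    def capture(self, value):
        self.pool.setdefault(type(value), []).append(value)

    def wrap(self, func):
        # Decorator: capture every positional argument the function receives.
        def wrapper(*args, **kwargs):
            for arg in args:
                self.capture(arg)
            return func(*args, **kwargs)
        return wrapper

    def sample(self, typ, rng):
        # Draw a previously observed instance of the requested type, if any.
        candidates = self.pool.get(typ)
        return rng.choice(candidates) if candidates else None

capturer = ObjectCapturer()

@capturer.wrap
def normalize(text):
    return text.strip().lower()

# Simulated "real use" populates the pool with realistic objects ...
normalize("  Hello World  ")
normalize("CAPTIG")

# ... which a random test generator can later draw on as inputs.
rng = random.Random(0)
seed_input = capturer.sample(str, rng)
assert seed_input in ("  Hello World  ", "CAPTIG")
```

The point of the sketch is that realistic objects harvested from execution sidestep the huge search space of constructing such objects from scratch.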
The role of communication systems in smart grids: Architectures, technical solutions and research challenges
The purpose of this survey is to present a critical overview of smart grid concepts, with a special focus on the role that communication, networking and middleware technologies will have in the transformation of existing electric power systems into smart grids. First of all, we elaborate on the key technological, economic and societal drivers for the development of smart grids. By adopting a data-centric perspective, we present a conceptual model of communication systems for smart grids, and we identify the functional components, technologies, network topologies and communication services that are needed to support smart grid communications. Then, we introduce the fundamental research challenges in this field, including communication reliability and timeliness, QoS support, data management services, and autonomic behaviors. Finally, we discuss the main solutions proposed in the literature for each of them, and we identify possible future research directions.
Discourses on social software
Can computer scientists contribute to the solution of societal problems? Can logic help to model social interactions? Are there recipes for making groups with diverging preferences arrive at reasonable decisions? Why is common knowledge important for social interaction? Does the rational pursuit of individual interests put the public interest in danger, and if so, why? Discourses on Social Software sheds light on these and similar questions. This book offers the reader an ideal introduction to the exciting new field of social software. It shows in detail the many ways in which the seemingly abstract sciences of logic and computer science can be put to use to analyse and solve contemporary social problems. The unusual format of a series of discussions among a logician, a computer scientist, a philosopher and some researchers from other disciplines encourages the reader to develop his own point of view. The only requirements for reading this book are a nodding familiarity with logic, a curious mind, and a taste for spicy debate.
Generation of model-based safety arguments from automatically allocated safety integrity levels
To certify safety-critical systems, assurance arguments linking evidence of safety to appropriate requirements must be constructed. However, modern safety-critical systems feature increasing complexity and integration, which render manual approaches impractical to apply. This thesis addresses this problem by introducing a model-based method, with an exemplary application based on the aerospace domain.
Previous work has partially addressed this problem for slightly different applications, including verification-based, COTS, product-line and process-based assurance. Each of these approaches is applicable to a specialised case and does not deliver a solution applicable to a generic system in a top-down process. This thesis argues that such a solution is feasible and can be achieved based on the automatic allocation of safety requirements onto a system's architecture. This automatic allocation is a recent development which combines model-based safety analysis and optimisation techniques. The proposed approach emphasises the use of model-based safety analysis, such as HiP-HOPS, to maximise the benefits towards the system development lifecycle.
The thesis investigates the background and earlier work regarding the construction of safety arguments, safety requirements allocation and optimisation. A method for addressing the problem of optimal safety requirements allocation is first introduced, using the Tabu Search optimisation metaheuristic. The method delivers satisfactory results that are further exploited for the construction of safety arguments. Using the produced requirements allocation, an instantiation algorithm is applied to a generic, standards-compliant safety argument pattern to automatically construct an argument establishing the claim that a system's safety requirements have been met. This argument is hierarchically decomposed and shows how system and subsystem safety requirements are satisfied by architectures and analyses at low levels of decomposition.
Evaluation on two abstract case studies demonstrates the feasibility and scalability of the method and indicates good performance of the proposed algorithms. Limitations and potential areas of further investigation are identified.
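The automatic allocation step can be illustrated with a small sketch. The following is not the thesis's HiP-HOPS-based tooling; it is a minimal Python illustration, under assumed cost figures and a deliberately simplified feasibility rule (the integrity levels allocated across each minimal cut set must sum to the required top-level level), of how a Tabu Search might explore allocations:

```python
# Hypothetical per-component cost of implementing each integrity level 0..4.
LEVEL_COST = [0, 10, 20, 40, 80]

def allocation_cost(alloc, cut_sets, required):
    """Cost of an allocation; infeasible allocations (a cut set whose
    summed levels miss the required top-level value) are heavily penalised."""
    cost = sum(LEVEL_COST[level] for level in alloc)
    shortfall = sum(max(0, required - sum(alloc[i] for i in cs))
                    for cs in cut_sets)
    return cost + 1000 * shortfall

def tabu_search(n_components, cut_sets, required, iters=200, tabu_len=2):
    current = [4] * n_components            # start fully over-allocated
    best = current[:]
    best_cost = allocation_cost(current, cut_sets, required)
    tabu = []                               # recently moved component indices
    for _ in range(iters):
        move, move_cost = None, float("inf")
        for i in range(n_components):
            if i in tabu:
                continue
            for delta in (-1, 1):           # raise or lower one level
                if 0 <= current[i] + delta <= 4:
                    cand = current[:]
                    cand[i] += delta
                    c = allocation_cost(cand, cut_sets, required)
                    if c < move_cost:
                        move, move_cost = (i, cand), c
        if move is None:
            break
        i, current = move                   # accept best non-tabu neighbour,
        tabu.append(i)                      # even if it worsens the cost
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if move_cost < best_cost:
            best, best_cost = current[:], move_cost
    return best, best_cost

# Three components; each minimal cut set must sum to the top-level value 4.
best, cost = tabu_search(3, cut_sets=[[0, 1], [1, 2]], required=4)
assert cost == 60  # reached by allocating level 2 to every component
```

Accepting the best non-tabu neighbour even when it worsens the cost is what lets Tabu Search escape the local optima that trap plain descent.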
Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design
The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across the particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.
No Optimisation Without Representation: A Knowledge Based Systems View of Evolutionary/Neighbourhood Search Optimisation
Centre for Intelligent Systems and their Applications
In recent years, research into ‘neighbourhood search’ optimisation techniques such as simulated annealing, tabu search, and evolutionary algorithms has increased apace, resulting in a number of useful heuristic solution procedures for real-world and research combinatorial and function optimisation problems. Unfortunately, their selection and design remains a somewhat ad hoc procedure and very much an art. Needless to say, this shortcoming presents real difficulties for the future development and deployment of these methods.
This thesis presents work aimed at resolving this issue of principled optimiser design. Driven by the needs of both the end-user and designer, and their knowledge of the problem domain and the search dynamics of these techniques, a semi-formal, structured design methodology that makes full use of the available knowledge will be proposed, justified, and evaluated. This methodology is centred around a Knowledge Based System (KBS) view of neighbourhood search, with a number of well-defined knowledge sources that relate to specific hypotheses about the problem domain. This viewpoint is complemented by a number of design heuristics that suggest a structured series of hillclimbing experiments which allow these results to be empirically evaluated and then transferred to other optimisation techniques if desired.
First of all, this thesis reviews the techniques under consideration. The case for the exploitation of problem-specific knowledge in optimiser design is then made. Optimiser knowledge is shown to be derived from either the problem domain theory or the optimiser search dynamics theory. From this, it will be argued that the design process should be primarily driven by the problem domain theory knowledge, as this makes best use of the available knowledge and results in a system whose behaviour is more likely to be justifiable to the end-user.
The encoding and neighbourhood operators are shown to embody the main source of problem domain knowledge, and it will be shown how forma analysis can be used to formalise the hypotheses about the problem domain that they represent. Therefore it should be possible for the designer to experimentally evaluate hypotheses about the problem domain. To this end, proposed design heuristics that allow the transfer of results across optimisers based on a common hillclimbing class, and that can be used to inform the choice of evolutionary algorithm recombination operators, will be justified. In fact, the above approach bears some similarity to that of KBS design. Additional knowledge sources and roles will therefore be described and discussed, and it will be shown how forma analysis again plays a key part in their formalisation. Design heuristics for many of these knowledge sources will then be proposed and justified.
This methodology will be evaluated by testing the validity of the proposed design heuristics in the context of two sequencing case studies. The first case study is a well-studied problem from operational research, the flowshop sequencing problem, which will provide a thorough test of many of the design heuristics proposed here. Also, an idle-time move preference heuristic will be proposed and demonstrated on both directed mutation and candidate list methods.
The second case study applies the above methodology to design a prototype system for resource redistribution in the developing world, a problem that can be modelled as a very large transportation problem with non-linear constraints and objective function. The system, combining neighbourhood search with a constructive algorithm which reformulates the problem to one of sequencing, was able to produce feasible shipment plans, for problems derived from data from the World Health Organisation’s TB programme in China, that are much larger than those problems tackled by the current ‘state-of-the-art’ for transportation problems.
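The kind of hillclimbing experiment the methodology builds on can be sketched briefly. The following is not the thesis's system; it is a minimal Python illustration of first-improvement hillclimbing over a swap neighbourhood for permutation flowshop sequencing, with made-up processing times:

```python
import random

def makespan(sequence, proc_times):
    """Completion time of the last job on the last machine for a
    permutation flowshop (jobs visit the machines in the same order)."""
    n_machines = len(proc_times[0])
    finish = [0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            prev = finish[m - 1] if m > 0 else 0
            finish[m] = max(finish[m], prev) + proc_times[job][m]
    return finish[-1]

def hillclimb(proc_times, iters=500, seed=0):
    """First-improvement hillclimbing: try a random swap of two jobs,
    keep it if the makespan improves, undo it otherwise."""
    rng = random.Random(seed)
    current = list(range(len(proc_times)))   # start from the identity order
    cost = makespan(current, proc_times)
    for _ in range(iters):
        i, j = rng.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]
        new_cost = makespan(current, proc_times)
        if new_cost < cost:
            cost = new_cost
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
    return current, cost

pt = [[3, 2], [1, 4], [2, 1]]   # processing times: 3 jobs x 2 machines
seq, cost = hillclimb(pt)
assert sorted(seq) == [0, 1, 2]         # still a permutation of the jobs
assert cost <= makespan([0, 1, 2], pt)  # no worse than the starting order
```

In the thesis's terms, the swap operator embodies a hypothesis about the problem domain (that good schedules are reachable by exchanging job positions); running such a hillclimber is a cheap empirical test of that hypothesis before committing to a more elaborate optimiser.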
Combining SOA and BPM Technologies for Cross-System Process Automation
This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.