The interlocutory tool box: techniques for curtailing coincidental correctness
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Eliminating faults in software systems is important because they can have catastrophic consequences. Faults are eliminated through testing and debugging. Testing involves executing the system with a test case to obtain an output; the output is evaluated against the tester's expectations, and any deviation from these expectations indicates that a fault has been detected. Debugging uses information about the fault, gleaned during testing, to isolate it in the system. Coincidental correctness is a widespread phenomenon in which a fault corrupts a program state, and yet the system produces an output that satisfies the tester's expectations. Coincidental correctness can therefore compromise the effectiveness of both testing and debugging techniques.
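Coincidental correctness is easiest to see in a concrete program. The `mid` function below, a classic illustration from the testing literature (not an example taken from the thesis), returns the middle of three values and contains a seeded fault:

```python
def mid(x, y, z):
    """Return the middle of three values. Contains a seeded fault."""
    m = z
    if y < z:
        if x < y:
            m = y
        elif x < z:
            m = y  # FAULT: should be `m = x`
    else:
        if x > y:
            m = y
        elif x > z:
            m = x
    return m

# Coincidentally correct run: the faulty assignment executes (x < z with
# x == y), yet the output equals the true median, so an output check passes.
assert mid(3, 3, 5) == 3

# The same faulty statement on a failure-revealing input:
assert mid(3, 2, 5) == 2  # wrong: the true median is 3
```

On the input (3, 3, 5) the faulty statement executes but the output is still correct, so an output-based oracle cannot distinguish this run from a healthy one; this is the situation the thesis's Interlocutory Relations are designed to address.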
This thesis investigated methods for alleviating coincidental correctness in testing and debugging. The investigation culminated in four techniques. The first, Interlocutory Testing, is a framework for developing test oracles referred to as Interlocutory Relations; these are the first type of oracle specifically designed to operate effectively in the presence of coincidental correctness. Metamorphic Testing was pioneered for testing non-testable systems, but its effectiveness can be compromised by coincidental correctness. The second technique, Interlocutory Metamorphic Testing, integrates Metamorphic Testing with Interlocutory Testing to alleviate this impact. The third technique, Interlocutory Mutation Testing, uses similar principles to alleviate the Equivalent Mutant Problem in the presence of coincidental correctness and non-determinism. Finally, the fourth technique, Interlocutory Spectrum-based Fault Localisation, uses Interlocutory Relations to ameliorate the effects of coincidental correctness on fault localisation.
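Metamorphic Testing, mentioned above, sidesteps the oracle problem by checking a relation between the outputs of related executions rather than comparing a single output to an expected value. A minimal sketch (the sine implementation and the relation are illustrative, not drawn from the thesis):

```python
import math

# System under test: a truncated Taylor series for sin(x).
def sut_sin(x, terms=10):
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# Metamorphic relation: sin(x) == sin(pi - x). The source test case and its
# follow-up must agree even though neither expected value is known exactly.
def check_mr(x, tol=1e-6):
    return abs(sut_sin(x) - sut_sin(math.pi - x)) <= tol

assert check_mr(0.3) and check_mr(1.0)
```

A coincidentally correct execution can satisfy such a relation even though the fault was exercised, which is precisely the weakness Interlocutory Metamorphic Testing is described as targeting.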
Each technique was empirically evaluated. The results were promising, indicating that these techniques can mitigate the impact of coincidental correctness.
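The fourth technique builds on Spectrum-based Fault Localisation, which ranks program statements by how strongly their execution correlates with failing tests. A minimal sketch using the standard Ochiai metric (the coverage data is invented for illustration; the thesis's Interlocutory variant is not reproduced here):

```python
import math

# coverage[test] = set of statement ids executed by that test (invented data)
coverage = {
    "t1": {1, 2, 3},
    "t2": {1, 3, 4},
    "t3": {1, 2, 4},
}
failed = {"t2"}  # tests whose output deviated from the tester's expectations

def ochiai(stmt):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep))."""
    ef = sum(1 for t in failed if stmt in coverage[t])               # executed by failing tests
    ep = sum(1 for t in coverage if t not in failed and stmt in coverage[t])
    nf = len(failed) - ef                                            # failing tests that missed stmt
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

statements = sorted(set().union(*coverage.values()))
ranking = sorted(statements, key=ochiai, reverse=True)  # most suspicious first
```

A coincidentally correct test executes the faulty statement yet is recorded as passing, inflating ep and diluting the faulty statement's rank; this is the distortion an Interlocutory variant would aim to counter.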
Deep Learning-Based Machinery Fault Diagnostics
This book compiles contributions from experts, scholars, and researchers, presenting the most recent advancements, from theoretical methods to applications of sophisticated fault diagnosis techniques. Deep learning methods for analyzing and testing complex mechanical systems are of particular interest. Special attention is given to the representation and analysis of system information, operating condition monitoring, the establishment of technical standards, and scientific support for machinery fault diagnosis.
Computational Intelligence in Healthcare
This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronics.
Computational Intelligence in Healthcare
The volume of patient health data was estimated to reach 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such a vast quantity of data, so intelligent data analysis methods combining human expertise and computational models for accurate and in-depth analysis are necessary. The technological revolution and medical advances made possible by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for developing and applying intelligent systems in healthcare practice. CI-based systems can learn from data and evolve in response to changes in their environment, taking into account the uncertainty characterizing health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. This Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.
Enhancing Usability and Explainability of Data Systems
The recent growth of data science has expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people with different skills and backgrounds alike, leading to the democratization of data systems. Furthermore, a proper understanding of data and data-driven systems is necessary for users to trust the function of systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: these systems are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, to aid all sorts of users, including expert ones, in data and system understanding, and to provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has three goals: (1) enhancing the usability of data systems for nonexperts, (2) enabling data understanding that can assist users in a variety of tasks, such as achieving trust in data-driven machine learning, gaining data understanding, and data cleaning, and (3) explaining the causes of unexpected outcomes involving data and data systems.
To enhance usability, we focus on example-driven user intent discovery, developing systems based on example-driven interactions in two settings: querying relational databases and personalized document summarization. Toward data understanding, we develop a new data-profiling primitive that characterizes tuples for which a machine-learned model is likely to produce untrustworthy predictions, along with an explanation framework that explains the causes of such untrustworthy predictions; this new primitive also enables interactive data cleaning. Finally, we develop two explanation frameworks tailored to debugging data system components, including the data itself: one explains the root cause of a concurrent application's intermittent failure, and the other exposes issues in the data that cause a data-driven system to malfunction.
Software-based gradient nonlinearity distortion correction
The primary purpose of this thesis is to discuss the use of Magnetic Resonance Imaging (MRI) in functional proton radiosurgery. The methods presented were specifically designed to correct gradient nonlinearity distortion, the single greatest hurdle limiting the deployment of MRI-based functional proton radiosurgery systems. The new system central to the thesis fully utilizes MRI to localize anatomical targets with submillimeter accuracy. The thesis provides analysis of, and solutions to, the problems related to gradient nonlinearity distortion. The characteristics of proton radiosurgery are introduced, together with a discussion of its advantages over other current methods of radiation oncology. A historical background for proton radiosurgery is also presented, along with a description of its implementation at Loma Linda University Medical Center (LLUMC), where a new system for functional proton radiosurgery has been proposed and is currently under development.
Recent Applications in Graph Theory
Graph theory, a rigorously investigated field of combinatorial mathematics, is adopted by a wide variety of disciplines addressing a plethora of real-world applications. Advances in graph algorithms and software implementations have made graph theory accessible to a larger community of interest. Ever-increasing interest in machine learning and model deployment for network data demands a coherent selection of topics and rewards a fresh, up-to-date summary of the theory and its fruitful applications. This volume is a small yet unique contribution to graph theory applications and modeling with graphs. The subjects discussed include information hiding using graphs, dynamic graph-based systems to model and control cyber-physical systems, graph reconstruction, average distance neighborhood graphs, and pure and mixed-integer linear programming formulations for clustering networks.
Brain-Computer Interface
Brain-computer interfacing (BCI), combined with advanced artificial intelligence for signal identification, is a rapidly growing technology that allows the brain to silently command devices ranging from smartphones to advanced articulated robotic arms when physical control is not possible. BCI can be viewed as a collaboration between the brain and a device via the direct passage of electrical signals from neurons to an external system. The book provides a comprehensive summary of conventional and novel methods for processing brain signals. The chapters cover a range of topics, including noninvasive and invasive signal acquisition, signal processing methods, deep learning approaches, and the implementation of BCI in experimental problems.