
    Cyber-physical business systems modelling : advancing Industry 4.0

    The dynamic digital age drives contemporary multinationals to deliver world-class business solutions using advanced technology. These complex, profit-driven enterprises create value through the manufacture, sale, and management of products and services. Disruptive operational strategies driven by emerging technological innovation demand continuous business improvement, with opportunities spanning operations, enterprise systems, engineering management, and research. Business sustainability is a strategic priority for delivering exceptional digital solutions. The Fourth Industrial Revolution (4IR) offers significant technological advancement towards total business sustainability, and its underlying technologies include Cyber-Physical Systems (CPS). The collective challenges of a large global business are difficult to predict, yet CPS offers the means to integrate and model physical systems in real time under 4IR implementations. The goal of this thesis is to develop a CPS model capable of self-prediction and of determining ideal operational practice driven by 4IR technologies. The model is intended as a novel tool for comprehensive business evaluation and optimisation, able to work alongside current operations and predict the impact of change on a complex business. D.Phil. (Engineering Management)

    Delayed failure of software components using stochastic testing

    The present research investigates the delayed failure of software components and addresses the problem that the conventional approach to software testing is unlikely to reveal this type of failure. Delayed failure is defined as a failure that occurs some time after the condition that causes the failure, and is a consequence of long-latency error propagation. This research seeks to close a perceived gap between academic research into software testing and industrial software testing practice by showing that stochastic testing can reveal delayed failure, and supporting this conclusion by a model of error propagation and failure that has been validated by experiment. The focus of the present research is on software components described by a request-response model. Within this conceptual framework, a Markov chain model of error propagation and failure is used to derive the expected delayed failure behaviour of software components. Results from an experimental study of delayed failure of DBMS software components MySQL and Oracle XE using stochastic testing with random generation of SQL are consistent with expected behaviour based on the Markov chain model. Metrics for failure delay and reliability are shown to depend on the characteristics of the chosen experimental profile. SQL mutation is used to generate negative as well as positive test profiles. There appear to be few systematic studies of delayed failure in the software engineering literature, and no studies of stochastic testing related to delayed failure of software components, or specifically to delayed failure of DBMS. Stochastic testing is shown to be an effective technique for revealing delayed failure of software components, as well as a suitable technique for reliability and robustness testing of software components. These results provide a deeper insight into the testing technique and should lead to further research. Stochastic testing could provide a dependability benchmark for component-based software engineering
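    As a rough illustration of the kind of model the abstract describes, the sketch below simulates a small Markov chain of error propagation and delayed failure for a request-response component. The three states and the transition probabilities are assumptions chosen for demonstration; they are not the model or parameters used in the thesis.

```python
import random

# Illustrative three-state Markov chain for delayed failure of a
# request-response component: each request either leaves the component
# correct, plants/keeps a latent error, or surfaces an observable failure.
# States and transition probabilities are assumptions for demonstration only.
STATES = ("correct", "latent_error", "failed")
P = {
    "correct":      {"correct": 0.98, "latent_error": 0.02, "failed": 0.00},
    "latent_error": {"correct": 0.00, "latent_error": 0.95, "failed": 0.05},
    "failed":       {"correct": 0.00, "latent_error": 0.00, "failed": 1.00},
}

def requests_until_failure(rng: random.Random, max_requests: int = 100_000) -> int:
    """Simulate one stochastic test run; return the request index at which
    the component fails (or max_requests if it never fails)."""
    state = "correct"
    for n in range(1, max_requests + 1):
        state = rng.choices(STATES, weights=[P[state][s] for s in STATES])[0]
        if state == "failed":
            return n
    return max_requests

def mean_failure_delay(runs: int = 1000, seed: int = 42) -> float:
    """Average number of requests before a failure is observed."""
    rng = random.Random(seed)
    return sum(requests_until_failure(rng) for _ in range(runs)) / runs

if __name__ == "__main__":
    # With these assumed probabilities an error stays latent for roughly 20
    # requests on average before it propagates to an observable failure,
    # so failures appear well after the request that caused them.
    print(f"mean requests until observed failure: {mean_failure_delay():.1f}")
```

    Changing the transition probabilities corresponds to changing the test profile, which is why, as the abstract notes, the measured failure delay and reliability metrics depend on the chosen experimental profile.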

    Automated Software Testing of Relational Database Schemas

    Relational databases are critical for many software systems, holding the most valuable data for organisations. Data engineers build relational databases using schemas that specify the structure of the data within a database and define integrity constraints. These constraints protect the data's consistency and coherency, leading industry experts to recommend testing them. Since manual schema testing is labour-intensive and error-prone, automated techniques enable the generation of test data. Although these generators are well established and effective, they use default values and often produce many long, similar tests, which decreases fault detection and increases regression testing time and testers' inspection effort. This raises the following questions: How effective is an optimised random generator at generating tests, and how does its fault detection compare with prior methods? What factors make tests understandable for testers? How can tests be reduced while maintaining effectiveness? How effectively do testers inspect differently reduced tests? To answer these questions, this thesis first empirically evaluates a new optimised random generator against well-established methods; second, it identifies understandability factors of schema tests using a human study; third, it evaluates a novel approach that reduces and merges tests against traditional reduction methods; and finally, it studies testers' inspection efforts with differently reduced tests using a human study. The results show that the optimised random method efficiently generates effective tests compared to well-established methods. Testers reported that many NULLs and negative numbers are confusing, and that they prefer simple repetition of unimportant values and readable strings. The reduction technique with merging is the most effective at minimising tests, producing efficient tests while maintaining effectiveness compared to traditional methods. The merged tests showed an increase in inspection efficiency with a slight decrease in accuracy compared to reduced-only tests. These techniques and investigations can help practitioners adopt such generators in practice
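    As a rough illustration of the kind of test data such generators aim to produce, the sketch below exercises the integrity constraints of a hypothetical person table in SQLite: one row that satisfies every constraint and several rows that each violate exactly one constraint. The schema, the rows and the pass/fail bookkeeping are assumptions for demonstration and are not taken from the thesis's experiments or tooling.

```python
import sqlite3

# A hypothetical schema with the kinds of integrity constraints the thesis
# targets: PRIMARY KEY, NOT NULL, UNIQUE and CHECK.
SCHEMA = """
CREATE TABLE person (
    id     INTEGER PRIMARY KEY,
    email  TEXT NOT NULL UNIQUE,
    age    INTEGER CHECK (age >= 0)
);
"""

# Each test case is (description, row, should_be_accepted): a "positive" row
# plus "negative" rows that each violate one constraint.  A mismatch between
# the expected and actual outcome points to a faulty or missing constraint.
TEST_CASES = [
    ("satisfies all constraints", (1, "a@example.com", 30), True),
    ("violates NOT NULL on email", (2, None, 25), False),
    ("violates UNIQUE on email",   (3, "a@example.com", 40), False),
    ("violates CHECK on age",      (4, "b@example.com", -1), False),
]

def run_schema_tests() -> None:
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    for description, row, should_be_accepted in TEST_CASES:
        try:
            conn.execute("INSERT INTO person VALUES (?, ?, ?)", row)
            accepted = True
        except sqlite3.IntegrityError:
            accepted = False
        verdict = "PASS" if accepted == should_be_accepted else "FAIL"
        print(f"{verdict}: {description}")
    conn.close()

if __name__ == "__main__":
    run_schema_tests()
```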

    Biological network analysis: from topological indexes to biological applications towards personalised medicine.

    Systems Biology encompasses different research areas, sharing graph theory as a common conceptual framework. Its main focus is the modelling and investigation of molecular interactions as complex networks. Notably, although experimental datasets allow the construction of context-specific molecular networks, the effect of quantitative variations of molecular states, i.e. the biochemical status, is not incorporated into current network topologies. This poses great limitations in terms of predictive power. To overcome these limitations we have developed a novel methodology that incorporates experimental quantitative data into the graph topology, leading to a potentiated network representation. It is now possible to model, at graph level, the outcome of a specific experimental analysis. The mathematical approach, based on a demonstrated theorem, was validated in four different pathological contexts: B-Cell Lymphocytic Leukaemia, Amyloidosis, Pancreatic Endocrine Tumours and Myocardial Infarction. Reconstructing disease-specific, potentiated networks coupled with topological analysis and machine learning techniques allowed the automatic discrimination of healthy versus unhealthy subjects in every context. Our methodology takes advantage of the topological information extracted from protein-protein interaction networks, integrating experimental data into their topology. Incorporating quantitative data on molecular state into graphs yields enriched representations that are tailored to a specific experimental condition, or to a subject, leading to an effective personalised approach. Moreover, in order to validate the biological results, we have developed an app for the Cytoscape platform that allows the creation of randomised networks and the randomisation of existing, real networks. Since there is a lack of tools for generating and randomising networks, our app helps researchers to exploit different, well-known random network models that can be used as a benchmark for validating the outcomes from real datasets. We also propose three possible approaches for creating randomly weighted networks starting from the experimental, quantitative data. Finally, some of the functionalities of our app, plus some other functions, were developed in R to exploit the potential of this language and to perform network analysis using our multiplication model. In summary, we developed a workflow that starts from the creation of a set of personalised networks able to integrate numerical information, gave directions that guide researchers in performing the network analysis, and developed a Java app and some R functions that allow all the findings to be validated using a random-network-based approach
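    The sketch below illustrates the general idea of a potentiated network in Python with networkx: a toy protein-protein interaction graph is weighted with subject-specific measurements and weighted topological indexes are extracted as classifier features. The toy interactome, the abundance values and the edge-weighting rule (multiplying the two endpoint measurements) are assumptions for illustration only and do not reproduce the thesis's multiplication model or its validation pipeline.

```python
import networkx as nx

# Toy protein-protein interaction network; nodes and edges are illustrative,
# not a real interactome.
EDGES = [("TP53", "MDM2"), ("TP53", "ATM"), ("MDM2", "AKT1"), ("ATM", "BRCA1")]

# Hypothetical quantitative measurements (e.g. protein abundance) for one
# subject/condition -- the data that a plain topology cannot capture.
ABUNDANCE = {"TP53": 2.5, "MDM2": 0.8, "ATM": 1.6, "AKT1": 3.1, "BRCA1": 0.4}

def potentiated_network(edges, abundance):
    """Build a weighted ('potentiated') network in which each edge weight is
    the product of its two endpoint measurements.  This multiplication rule
    is an assumed stand-in for the thesis's integration model."""
    g = nx.Graph()
    for u, v in edges:
        g.add_edge(u, v, weight=abundance[u] * abundance[v])
    return g

def topological_features(g):
    """Weighted topological indexes that could serve as input features for a
    classifier separating healthy from unhealthy subjects."""
    strength = dict(g.degree(weight="weight"))              # weighted degree
    betweenness = nx.betweenness_centrality(g, weight="weight")
    return {n: (strength[n], betweenness[n]) for n in g.nodes}

if __name__ == "__main__":
    g = potentiated_network(EDGES, ABUNDANCE)
    for node, (s, b) in topological_features(g).items():
        print(f"{node:6s} strength={s:5.2f} betweenness={b:.3f}")
```

    Repeating this construction per subject yields one feature vector per person, which is the point at which machine learning and comparison against randomised networks, as described in the abstract, would come in.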

    Sensitivity Analysis

    Sensitivity analysis (SA), in particular global sensitivity analysis (GSA), is now regarded as a discipline coming of age, primarily for understanding and quantifying how model results and associated inferences depend on the model's parameters and assumptions. Indeed, GSA is seen as a key part of good modelling practice. However, inappropriate SA, such as insufficient convergence of sensitivity metrics, can lead to untrustworthy results and associated inferences. Good practice SA should also consider the robustness of results and inferences to choices in methods and assumptions relating to the procedure. Moreover, computationally expensive models are common in various fields including environmental domains, where model runtimes are long due to the nature of the model itself, and/or software platform and legacy issues. To extract the most accurate information from a computationally expensive model using GSA, there may be a need for increased computational efficiency. Primary considerations here are sampling methods that provide efficient but adequate coverage of parameter space and estimation algorithms for sensitivity indices that are computationally efficient. An essential aspect in the procedure is adopting methods that monitor and assess the convergence of sensitivity metrics. The thesis reviews the different categories of GSA methods, and then it lays out the various factors and choices therein that can impact the robustness of a GSA exercise. It argues that the overall level of assurance, or practical trustworthiness, of results obtained is engendered from consideration of robustness with respect to the individual choices made for each impact factor. Such consideration would minimally involve transparent justification of individual choices made in the GSA exercise but, wherever feasible, include assessment of the impacts on results of plausible alternative choices. Satisfactory convergence plays a key role in contributing to the level of assurance, and hence the ultimate effectiveness of the GSA can be enhanced if choices are made to achieve that convergence. The thesis examines several of these impact factors, primary ones being the GSA method/estimator, the sampling method, and the convergence monitoring method, the latter being essential for ensuring robustness. The motivation of the thesis is to gain a further understanding and quantitative appreciation of elements that shape and guide the results and computational efficiency of a GSA exercise. This is undertaken through comparative analysis of estimators of GSA sensitivity measures, sampling methods and error estimation of sensitivity metrics in various settings using well-established test functions. Although quasi-Monte Carlo Sobol' sampling can be a good choice computationally, it has error-spike issues, which are addressed here through a new Column Shift resampling method. We also explore an Active Subspace-based GSA method, which is demonstrated to be more informative and computationally efficient than those based on the variance-based Sobol' method. Given that GSA can be computationally demanding, the thesis aims to explore ways that GSA can be more computationally efficient by: addressing how convergence can be monitored and assessed; analysing and improving sampling methods that provide a high convergence rate with low error in sensitivity measures; and analysing and comparing GSA methods, including their algorithm settings
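    As a minimal, self-contained illustration of variance-based GSA on a well-established test function, the sketch below estimates first-order and total Sobol' indices for the Ishigami function using the Saltelli sampling design with the standard Saltelli (first-order) and Jansen (total-order) estimators. Plain Monte Carlo sampling and the sample size are illustrative choices; the thesis's specific contributions (Column Shift resampling, Active Subspace GSA, convergence monitoring) are not reproduced here.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function, a standard GSA benchmark; inputs ~ U(-pi, pi)."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

def sobol_indices(func, d, n, rng):
    """First-order and total Sobol' indices from the Saltelli design:
    two independent sample matrices A and B, plus matrices AB_i in which
    column i of A is replaced by column i of B."""
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = func(A), func(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                               # swap in column i of B
        fABi = func(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var           # Saltelli first-order estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var     # Jansen total-order estimator
    return S1, ST

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S1, ST = sobol_indices(ishigami, d=3, n=200_000, rng=rng)
    for i in range(3):
        print(f"x{i + 1}: S1 = {S1[i]:5.2f}   ST = {ST[i]:5.2f}")
    # Analytical values for comparison: S1 ~ (0.31, 0.44, 0.00),
    # ST ~ (0.56, 0.44, 0.24).
```

    Replacing the plain uniform draws with a quasi-Monte Carlo Sobol' sequence, and repeating the estimation at increasing n to monitor convergence of the indices, correspond to the sampling and convergence choices the abstract identifies as impact factors.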

    Evaluating Software Testing Techniques: A Systematic Mapping Study

    Software testing techniques are crucial for detecting faults in software and reducing the risk of using it. As such, it is important that we have a good understanding of how to evaluate these techniques for their efficiency, scalability, applicability, and effectiveness at finding faults. This thesis enhances our understanding of testing technique evaluations by providing an overview of the state of the art in research. To accomplish this we utilize a systematic mapping study, structuring the field and identifying research gaps and publication trends. We then present a small case study demonstrating how our mapping study can be used to assist researchers in evaluating their own software testing techniques. We find that the majority of evaluations are empirical, in the form of case studies and experiments; that most of these evaluations are of low quality when judged against established methodology guidelines; and that relatively few papers in the field discuss how testing techniques should be evaluated

    Proceedings of MathSport International 2017 Conference

    Proceedings of MathSport International 2017 Conference, held in the Botanical Garden of the University of Padua, June 26-28, 2017. MathSport International organizes biennial conferences dedicated to all topics where mathematics and sport meet. Topics include: performance measures, optimization of sports performance, statistics and probability models, mathematical and physical models in sports, competitive strategies, statistics and probability match outcome models, optimal tournament design and scheduling, decision support systems, analysis of rules and adjudication, econometrics in sport, analysis of sporting technologies, financial valuation in sport, e-sports (gaming), betting and sports

    Timely and reliable evaluation of the effects of interventions: a framework for adaptive meta-analysis (FAME)

    Most systematic reviews are retrospective and use aggregate data (AD) from publications, meaning they can be unreliable, lag behind therapeutic developments and fail to influence ongoing or new trials. Commonly, the potential influence of unpublished or ongoing trials is overlooked when interpreting results, or when determining the value of updating the meta-analysis or the need to collect individual participant data (IPD). Therefore, we developed a Framework for Adaptive Meta-analysis (FAME) to determine prospectively the earliest opportunity for reliable AD meta-analysis. We illustrate FAME using two systematic reviews in men with metastatic (M1) and non-metastatic (M0) hormone-sensitive prostate cancer (HSPC)
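    FAME itself is a planning framework rather than an algorithm, but the aggregate-data meta-analysis it schedules can be sketched. The example below pools log hazard ratios from invented trial summaries with a fixed-effect, inverse-variance model; the trial data are synthetic and are not results from the HSPC reviews described above.

```python
import math

# Synthetic aggregate data: (trial, hazard ratio, lower and upper 95% CI).
# Illustrative only, not results from the prostate cancer reviews.
TRIALS = [
    ("Trial A", 0.80, 0.65, 0.98),
    ("Trial B", 0.72, 0.55, 0.94),
    ("Trial C", 0.91, 0.70, 1.18),
]

def pooled_hazard_ratio(trials):
    """Fixed-effect, inverse-variance pooling of log hazard ratios from
    aggregate data; returns the pooled HR and its 95% CI."""
    log_hrs, weights = [], []
    for _, hr, lo, hi in trials:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE back-calculated from the CI
        log_hrs.append(log_hr)
        weights.append(1.0 / se ** 2)
    pooled = sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)
    se_pooled = 1.0 / math.sqrt(sum(weights))
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

if __name__ == "__main__":
    hr, (lo, hi) = pooled_hazard_ratio(TRIALS)
    print(f"Pooled HR = {hr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

    In FAME terms, the question is when enough of the eventual evidence (published plus unpublished and ongoing trials) is available for a pooled estimate like this to be considered reliable, rather than waiting for a retrospective review or defaulting to IPD collection.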