20 research outputs found

    Novel analysis and modelling methodologies applied to pultrusion and other processes

    Often a manufacturing process may be a bottleneck or critical to a business. This thesis focuses on the analysis and modelling of such processes, both to better understand them and to support the enhancement of the quality or output capability of the process. The main thrusts of this thesis are therefore: to model inter-process physics, inter-relationships, and complex processes in a manner that enables re-exploitation, re-interpretation, and reuse of this knowledge and its generic elements, e.g. using Object Oriented (OO) and Qualitative Modelling (QM) techniques, which involves the development of superior process models to capture process complexity and reuse any generic elements; to demonstrate advanced modelling and simulation techniques (e.g. Artificial Neural Networks (ANN), Rule-Based Systems (RBS), and statistical modelling) on a number of complex manufacturing case studies; and to gain a better understanding of the physics and process inter-relationships exhibited in a number of complex manufacturing processes (e.g. pultrusion, bioprocess, and logistics) using analysis and modelling. To these ends, both a novel Object Oriented Qualitative (Problem) Analysis (OOQA) methodology and a novel Artificial Neural Network Process Modelling (ANNPM) methodology were developed and applied to a number of complex manufacturing case studies: thermoset and thermoplastic pultrusion, a bioprocess reactor, and a logistics supply chain. It has been shown that these methodologies and the models developed support the capture of complex process inter-relationships, enable reuse of generic elements, support effective variable selection for ANN models, and perform well as predictors of process properties. In particular, the ANN pultrusion models, using laboratory data from IKV, Aachen and Pera, Melton Mowbray, predicted product properties very well.
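    A minimal sketch of the kind of ANN process model described above, assuming (for illustration only) two scaled process inputs and one product property. The data, network architecture, and variable choices are invented stand-ins, not the thesis's actual models or laboratory data:

```python
import numpy as np

# Toy synthetic process data standing in for laboratory measurements.
# Inputs: [pull_speed, die_temperature] (scaled to [0, 1]); output: a
# hypothetical product-property score. All values are invented.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.7 * X[:, 0] - 0.4 * X[:, 1] + 0.2

# One hidden layer of tanh units, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    return H, (H @ W2 + b2).ravel()   # predicted property

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, pred0 = forward(X)
loss0 = mse(pred0, y)                 # loss before training

lr = 0.1
for _ in range(500):
    H, pred = forward(X)
    err = (pred - y)[:, None] / len(X)        # gradient of loss w.r.t. pred
    gW2 = H.T @ err; gb2 = err.sum(0)
    dH = err @ W2.T * (1 - H ** 2)            # backprop through tanh
    gW1 = X.T @ dH; gb1 = dH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
loss1 = mse(pred1, y)                 # loss after training
```

    After training, the fitted network can be queried at new process settings, which is the role the thesis's ANN models play as predictors of product properties.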

    Coalition Formation and Execution in Multi-robot Tasks

    In this research, I explore several related problems in distributed robot systems that must be addressed in order to achieve multi-robot tasks in which individual robots may not possess all the required capabilities. While most previous research on multi-robot cooperation concentrates on loosely-coupled multi-robot tasks, a more challenging problem is to also address tightly-coupled multi-robot tasks involving close robot interactions, which often require capability sharing. Three related topics towards addressing these tasks are discussed, as follows: Forming coalitions, which determines how robots should form into subgroups (i.e., coalitions) to address individual tasks. To achieve system autonomy, the ability to identify the feasibility of potential solutions is critical for forming coalitions. A general IQ-ASyMTRe architecture, which is formally proven to be sound and complete in this research, is introduced to incorporate this capability based on the ASyMTRe architecture. Executing coalitions, which coordinates different robots within the same coalition during physical execution to accomplish individual tasks. For executing coalitions, the IQ-ASyMTRe+ approach is presented. An information quality measure is introduced to control the robots to maintain the required constraints for task execution in dynamic environments. Redundancies at the sensory and computational levels are utilized to enable execution that is robust to internal and external influences. Task allocation, which optimizes the overall performance of the system when multiple tasks need to be addressed. In this research, this problem is analyzed and its formulation is extended. A new greedy heuristic is introduced, which considers inter-task resource constraints to approximate the influence between different assignments in task allocation. By combining the above approaches, a framework that achieves system autonomy can be created for addressing multi-robot tasks.
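    The greedy, capability-driven flavour of coalition formation and task allocation described above can be sketched as follows. The robot names, capability sets, and scoring rule are hypothetical stand-ins for illustration, not the thesis's IQ-ASyMTRe formulation or its heuristic:

```python
# Invented example: robots advertise capability sets; tasks require sets of
# capabilities; a robot may serve at most one task (inter-task constraint).
robots = {"r1": {"camera", "gripper"}, "r2": {"laser"},
          "r3": {"camera"}, "r4": {"gripper", "laser"}}
tasks = {"push_box": {"camera", "gripper"}, "map_room": {"laser", "camera"}}

def allocate(robots, tasks):
    free = dict(robots)
    assignment = {}
    # Greedy: handle the most demanding task first, then grow the smallest
    # coalition of free robots that jointly covers the required capabilities.
    for task, needed in sorted(tasks.items(), key=lambda kv: -len(kv[1])):
        coalition, covered = [], set()
        for rid, caps in sorted(free.items(), key=lambda kv: -len(kv[1] & needed)):
            if covered >= needed:
                break
            if caps & (needed - covered):      # robot contributes something new
                coalition.append(rid)
                covered |= caps
        if covered >= needed:                  # feasible coalition found
            assignment[task] = coalition
            for rid in coalition:              # each robot serves one task only
                free.pop(rid)
    return assignment

assignment = allocate(robots, tasks)
```

    Feasibility checking (does the coalition's pooled capability set cover the task?) is the toy analogue of the soundness property the thesis proves formally for IQ-ASyMTRe.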

    A framework for the analysis and evaluation of enterprise models

    The purpose of this study is the development and validation of a comprehensive framework for the analysis and evaluation of enterprise models. The study starts with an extensive literature review of modelling concepts and an overview of the various reference disciplines concerned with enterprise modelling. This overview is more extensive than usual in order to accommodate readers from different backgrounds. The proposed framework is based on the distinction between the syntactic, semantic and pragmatic model aspects and is populated with evaluation criteria drawn from an extensive literature survey. In order to operationalize and empirically validate the framework, an exhaustive survey of enterprise models was conducted. From this survey, an XML database of more than twenty relatively large, publicly available enterprise models was constructed. A strong emphasis was placed on the interdisciplinary nature of this database, and models were drawn from ontology research, linguistics and analysis patterns as well as the traditional fields of data modelling, data warehousing and enterprise systems. The resultant database forms the test bed for the detailed framework-based analysis, and its public availability should constitute a useful contribution to the modelling research community. The bulk of the research is dedicated to implementing and validating specific analysis techniques to quantify the various model evaluation criteria of the framework. The aim for each of the analysis techniques is that it can, where possible, be automated and generalised to other modelling domains. The syntactic measures and analysis techniques originate largely from the disciplines of systems engineering, graph theory and computer science. Various metrics to measure model hierarchy, architecture and complexity are tested and discussed. It is found that many are not particularly useful or valid for enterprise models.
    Hence some new measures are proposed to assist with model visualization, and an original "model signature" consisting of three key metrics is proposed. Perhaps the most significant contribution of the research lies in the development and validation of a significant number of semantic analysis techniques, drawing heavily on current developments in lexicography, linguistics and ontology research. Some novel and interesting techniques are proposed to measure, inter alia, domain coverage, model genericity, quality of documentation, perspicuity and model similarity. Model similarity in particular is explored in depth by means of various similarity and clustering algorithms, as well as ways to visualize the similarity between models. Finally, a number of pragmatic analysis techniques are applied to the models. These include face validity, degree of use, authority of the model author, availability, cost, flexibility, adaptability, model currency, maturity and degree of support. This analysis relies mostly on searching for and ranking certain specific information details, often involving a degree of subjective interpretation, although more specific quantitative procedures are suggested for some of the criteria. To aid future researchers, a separate chapter lists some promising analysis techniques that were investigated but found to be problematic from a methodological perspective. More interestingly, this chapter also presents a very strong conceptual case for how the proposed framework and the analysis techniques associated with its various criteria can be applied to many other information systems research areas. The case is presented on the grounds of the underlying isomorphism between the various research areas and illustrated by suggesting the application of the framework to evaluate web sites, algorithms, software applications, programming languages, system development methodologies and user interfaces.
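    One simple stand-in for the model-similarity measures discussed above is Jaccard similarity over the element names of two models. The models, their list-of-names representation, and the example figures below are invented for illustration; the thesis's actual similarity and clustering algorithms are richer than this:

```python
# Hypothetical representation: an enterprise model as a list of entity names.
def model_terms(model):
    """Lower-cased set of element names in a model (assumed representation)."""
    return {name.lower() for name in model}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of the two term sets."""
    a, b = model_terms(a), model_terms(b)
    return len(a & b) / len(a | b) if a | b else 1.0

erp = ["Customer", "Order", "Invoice", "Product"]
crm = ["Customer", "Lead", "Opportunity", "Product"]
sim = jaccard(erp, crm)   # 2 shared names out of 6 distinct ones
```

    A matrix of such pairwise similarities is exactly the kind of input the clustering and visualization steps mentioned above would consume.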

    On risk-based decision-making for structural health monitoring

    Structural health monitoring (SHM) technologies seek to detect, localise, and characterise damage present within structures and infrastructure. Arguably, the foremost incentive for developing and implementing SHM systems is to improve the quality of operation and maintenance (O&M) strategies for structures, such that safety can be enhanced, or greater economic benefits can be realised. Given this motivation, SHM systems can be considered primarily as decision-support tools. Although much research has been conducted into damage identification and characterisation approaches, there has been relatively little that has explicitly considered the decision-making applications of SHM systems. In light of this fact, the current thesis seeks to consider decision-making for SHM with respect to risk. Risk, defined as a product of probability and cost, can be interpreted as an expected utility. The keystone of the current thesis is a general framework for conducting risk-based SHM, generated by combining aspects of probabilistic risk assessment (PRA) with the existing statistical pattern recognition paradigm for SHM. The framework, founded on probabilistic graphical models (PGMs), utilises Bayesian network representations of fault trees to facilitate the flow of information from observations of discriminative features to the failure states of structures of interest. Using estimations of failure probabilities in conjunction with utility functions that capture the severity of consequences enables risk assessments; these risks can be minimised with respect to candidate maintenance actions to determine optimal strategies. Key elements of the decision framework are examined; in particular, a physics-based methodology for initialising a structural degradation model defining health-state transition probabilities is presented. The risk-based framework allows aspects of SHM systems to be developed with explicit consideration for their decision-support applications.
    In relation to this aim, the current thesis proposes a novel approach to learn statistical classification models within an online SHM system. The approach adopts an active learning framework in which descriptive labels, corresponding to salient health states of a structure, are obtained via structural inspections. To account for the decision processes associated with SHM, structural inspections are mandated according to the expected value of information for data labels. The resulting risk-based active learning algorithm is shown to yield cost-effective improvements in the performance of decision-making agents, in addition to reducing the number of manual inspections made over the course of a monitoring campaign. Characteristics of the risk-based active learning algorithm are further investigated, with particular focus on the effects of sampling bias. Sampling bias is known to degrade decision-making performance over time, thus engineers have a vested interest in mitigating its negative effects. On this theme, two approaches are considered for improving risk-based active learning: semi-supervised learning and discriminative classification models. Semi-supervised learning yielded mixed results, with performance highly dependent on the base distributions being representative of the underlying data. On the other hand, discriminative classifiers performed strongly across the board. It is shown that by mitigating the negative effects of sampling bias via classifier and algorithm design, decision-support systems can be enhanced, resulting in more cost-effective O&M strategies. Finally, the future of risk-based decision-making is considered. Particular attention is given to population-based structural health monitoring (PBSHM), and the management of fleets of assets. The hierarchical representation of structures used to develop the risk-based SHM framework is extended to populations of structures.
    Initial research into PBSHM shows promising results with respect to the transfer of information between individual structures comprising a population. The significance of these results in the context of decision-making is discussed. To summarise, by framing SHM systems as decision-support tools, risk-informed O&M strategies can be developed for structures and infrastructure such that safety is improved and costs are reduced.
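    The core risk calculus described above (risk as the product of failure probability and consequence cost, minimised over candidate maintenance actions) can be sketched with invented numbers; the action names, probabilities, and costs are purely illustrative:

```python
# Hypothetical decision problem: each candidate action changes the failure
# probability of the structure; each has its own direct cost. All figures
# are invented for illustration.
p_fail = {"do_nothing": 0.15, "inspect_and_repair": 0.02, "replace": 0.001}
action_cost = {"do_nothing": 0.0, "inspect_and_repair": 5.0, "replace": 40.0}
failure_cost = 1000.0   # consequence of structural failure (same units)

def expected_cost(action):
    """Risk-based objective: direct cost + P(failure | action) x consequence."""
    return action_cost[action] + p_fail[action] * failure_cost

best = min(p_fail, key=expected_cost)
```

    Here "do_nothing" carries an expected cost of 150, "replace" of 41, and "inspect_and_repair" of 25, so the risk-minimising choice is the moderate intervention. The expected-value-of-information criterion for mandating inspections follows the same logic, comparing expected costs with and without the extra label.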

    Fundamental Approaches to Software Engineering

    computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items that are to be performed for every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Statistical Methodologies of Functional Data Analysis for Industrial Applications

    This thesis stands as one of the first attempts to connect statistical object oriented data analysis (OODA) methodologies with the industrial field. Indeed, the aim of this thesis is to develop statistical methods to tackle industrial problems through the paradigm of OODA. The new framework of Industry 4.0 requires factories equipped with sensors and advanced acquisition systems that acquire data with a high degree of complexity. OODA can be particularly suitable for dealing with this increasing complexity, as it considers each statistical unit as an atom or a data object assumed to be a point in a well-defined mathematical space. This idea allows one to deal with complex data structures by changing the resolution of the analysis. Indeed, whereas in standard methods the atom is a vector of numbers, the focus now is on methodologies where the objects of the analysis are whole complex objects. In particular, this thesis focuses on functional data analysis (FDA), a branch of OODA that considers functions defined on compact domains as the atoms of the analysis. The cross-fertilization of FDA methods into industrial applications is developed in three parts in this dissertation. The first part presents methodologies developed to solve specific applicative problems. A first consistent portion of this part is focused on profile monitoring methods applied to ship CO₂ emissions. A second portion deals with the problem of predicting the mechanical properties of an additively manufactured artifact given the particle size distribution of the powder used for its production. A third portion addresses cluster analysis for the quality assessment of metal sheet spot welds in the automotive industry, based on observations of dynamic resistance curves.
    Stimulated by these challenges, the second part of this dissertation turns towards a more methodological line that addresses the notion of interpretability for functional data. In particular, two new interpretable estimators of the coefficient function of the function-on-function linear regression model are proposed, named S-LASSO and AdaSS, respectively. Moreover, a new method, referred to as SaS-Funclust, is presented for sparse clustering of functional data; it aims to classify a sample of curves into homogeneous groups while jointly detecting the most informative portions of the domain. In the last part, two ongoing research efforts on FDA methods for industrial applications are presented. The first concerns the definition of a new robust nonparametric functional ANOVA method (Ro-FANOVA) to test differences among group functional means, being robust against the presence of outliers, with an application to additive manufacturing. The second sketches a new methodological framework for real-time profile monitoring.
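    A pointwise control-chart view of profile monitoring, a simple precursor to the functional methods above, can be sketched as follows. The emission-like profiles, the fault, and the 3-sigma rule are simulated assumptions for illustration, not the thesis's actual monitoring scheme:

```python
import numpy as np

# Simulated stand-in for profile monitoring: in-control reference profiles
# define a mean curve and pointwise 3-sigma limits; a new profile is flagged
# wherever it leaves those limits. All signals are invented.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)                       # common functional domain

# Phase I: in-control reference profiles (smooth signal + small noise).
reference = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((30, t.size))
mean = reference.mean(axis=0)
sd = reference.std(axis=0, ddof=1)
lower, upper = mean - 3 * sd, mean + 3 * sd     # pointwise control limits

# Phase II: a new profile with a localised shift on part of the domain.
new = np.sin(2 * np.pi * t)
new[20:30] += 0.5                               # fault confined to a sub-interval
out_of_control = (new < lower) | (new > upper)  # pointwise alarm indicator
```

    The localisation of the alarm to a sub-interval of the domain is the kind of information the sparse and interpretable functional methods above are designed to deliver in a principled way.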

    Simulating Humans: Computer Graphics, Animation, and Control

    People are all around us. They inhabit our homes, workplaces, entertainment, and environment. Their presence and actions are noted or ignored, enjoyed or disdained, analyzed or prescribed. The very ubiquitousness of other people in our lives poses a tantalizing challenge to the computational modeler: people are at once the most common object of interest and yet the most structurally complex. Their everyday movements are amazingly fluid yet demanding to reproduce, with actions driven not just mechanically by muscles and bones but also cognitively by beliefs and intentions. Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it. Likewise we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language.