Risk of employing an evolvable production system
Manufacturing companies nowadays face an increasingly challenging environment because of market unpredictability. To survive, enterprises need to keep innovating and to deliver products with new internal or external characteristics. Strategies and solutions exist at different organisational levels, from strategic to operational, while technology is evolving fastest at the operational level, specifically in manufacturing systems. Companies therefore have to deal with the changes brought by emergent manufacturing systems, even though these can be expensive and difficult to implement.
An agile manufacturing system can help to cope with market changeability. Evolvable Production Systems (EPS) is an emergent paradigm that aims to bring new solutions for dealing with changeability. The paradigm is characterised by modularity and intends to introduce high flexibility and dynamism at the shop-floor level by exploiting the evolution of new computational devices and technology. This approach gives enterprises the ability to plug and unplug devices, allowing fast reconfiguration of the production line without reprogramming. There is little doubt about the advantages and benefits of this emerging technology, but its feasibility and applicability are still questioned. Most research in this area focuses on the technical side, explaining the advantages of such systems, while there is insufficient work discussing the implementation risks from different perspectives, including that of the business owner.
The main objective of this work is to propose a methodology and model to identify, classify and measure the potential risks associated with implementing this emergent paradigm. To quantify the proposed comprehensive risk model, an intelligent decision system is developed employing a Fuzzy Inference System to capture expert knowledge, since there are no historical data and little prior research in this area. The result is a vulnerability assessment for implementing EPS technology in manufacturing companies, with a particular focus on SMEs.
The present dissertation draws on the knowledge and experience of experts involved in the FP7 project IDEAS, one of the leading projects in this area.
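The kind of Fuzzy Inference System described can be sketched in a few lines. Everything below (the 0-10 scales, the triangular membership functions, and the three-rule base) is an illustrative assumption, not the dissertation's actual model:

```python
def tri(x, a, b, c):
    """Triangular membership; a degenerate edge (a == b or b == c) acts as a shoulder."""
    if x < a or x > c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x > b:
        return (c - x) / (c - b)
    return 1.0

# Linguistic terms for each risk factor on an assumed 0-10 scale.
LOW, MED, HIGH = (0, 0, 5), (0, 5, 10), (5, 10, 10)

def terms(x):
    return {"low": tri(x, *LOW), "med": tri(x, *MED), "high": tri(x, *HIGH)}

def infer_risk(likelihood, impact):
    """Crisp risk score in [0, 10] from two crisp inputs (Mamdani-style)."""
    l, i = terms(likelihood), terms(impact)
    # Illustrative rule base: risk follows the worse of the two factors.
    fire = {
        "low": min(l["low"], i["low"]),
        "med": max(min(l["med"], i["med"]),
                   min(l["low"], i["med"]),
                   min(l["med"], i["low"])),
        "high": max(l["high"], i["high"]),
    }
    # Clip each output term by its firing strength, then take the centroid.
    num = den = 0.0
    for step in range(101):
        x = step / 10.0
        mu = max(min(fire[t], m) for t, m in terms(x).items())
        num += x * mu
        den += mu
    return num / den if den > 0 else 0.0
```

A real model would elicit the membership functions and rule base from the IDEAS experts rather than hard-coding them as above.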
Students' language in computer-assisted tutoring of mathematical proofs
Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet, doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often do not understand or cannot apply basic formal reasoning techniques and do not know how to use formal mathematical language, but, at a far more fundamental level, they also do not understand what it means to prove a statement or even do not see the purpose of proof at all. Since insight into the importance of proof and doing proofs as such cannot be learnt other than by practice, learning support through individualised tutoring is in demand.
This volume presents part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated issues involved in providing computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here focuses on the language that students use while interacting with such a system: its linguistic properties and computational modelling. Contributions are made at three levels: first, an analysis of language phenomena found in students' input to a (simulated) proof tutoring system is conducted and the variety of students' verbalisations is quantitatively assessed; second, a general computational processing strategy for informal mathematical language and methods of modelling prominent language phenomena are proposed; and third, the prospects for natural language as an input modality for proof tutoring systems are evaluated based on the collected corpora.
Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design
The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.
Analyzing decision making in software design
A model is given for the analysis of rationality in design decision making. We define a formal means for answering the query: to what extent has a designer, on a particular occasion, using an explicit definition of 'good', decided rationally?

A decision rationality classification scheme is proposed. This scheme incorporates non-compensatory decision analysis techniques (dominance and conjunctive cut-off) as well as compensatory techniques (simple and hierarchical additive weighting, linear assignment, concordance, and displaced ideal). A formal definition of design decision is derived by extending the Lehman, Stenning, Turski transformational model of the software design process. Their view of artifact specification mappings between linguistic systems is extended to include the concomitant effect of the mapping on resource expenditure.

A formal specification for decision control knowledge is defined. This representation is the union of the knowledge required to support the various decision analysis techniques. Presumed to operationalize a designer's goals, the knowledge representation scheme includes five levels:

1. Each objective expresses some relevant design concern for an artifact and/or resource characteristic.
2. Each criterion expresses some relevant decomposition of a superior objective or criterion.
3. Each attribute expresses the bottom-most decomposition for a superior criterion. Each attribute may have a weight indicating its relative contribution to its superior criterion.
4. For each attribute, a value function expresses the designer's preference ordering over observed performance for that attribute.
5. For each attribute, an observation channel describes an observer-independent metric over some specification (either resource or artifact) rendered in some linguistic system, and a procedure for application of that metric.

Our model is applied to problems in Structured Design and conceptual data modeling.
We argue that a comprehensive design history must include not only the transformations applied but also the rationale used in deciding their application. This rationale must include decision control knowledge governing both the artifact (product) and resource (process) facets of design decision making. The principal contribution of this work is that the opacity of the decision-intensive aspects of design is reduced, a necessary step towards increasing the efficiency and effectiveness of software development.
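Two of the techniques in the classification scheme, dominance screening (non-compensatory) and simple additive weighting (compensatory), can be illustrated concretely. The candidate designs, attributes, and weights below are invented for the example:

```python
def dominates(a, b):
    """a dominates b: at least as good on every attribute, strictly better on
    at least one (higher scores assumed better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def additive_score(scores, weights):
    """Simple additive weighting: weighted sum of normalised attribute scores."""
    return sum(w * s for w, s in zip(weights, scores))

# Three candidate designs scored on (low coupling, cohesion, low cost) in [0, 1].
designs = {"A": (0.9, 0.8, 0.4), "B": (0.6, 0.8, 0.4), "C": (0.5, 0.9, 0.7)}
weights = (0.5, 0.3, 0.2)

# Dominance screening eliminates B (A is at least as good on every attribute).
survivors = {n: s for n, s in designs.items()
             if not any(dominates(o, s) for o in designs.values())}
# Additive weighting then ranks the survivors.
best = max(survivors, key=lambda n: additive_score(survivors[n], weights))
```

Note how the two stages differ in character: dominance never trades one attribute off against another, while the additive score lets strength on a heavily weighted attribute compensate for weakness elsewhere.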
Towards Managing and Understanding the Risk of Underwater Terrorism
This dissertation proposes a methodology to manage and understand the risk of underwater terrorism to critical infrastructure using the parameters of the risk equation. Current methods frequently rely on statistical techniques, which suffer from a lack of appropriate historical data for producing distributions and do not integrate epistemic uncertainty. Other methods rely on locating subject-matter experts who can provide judgments and then undertaking an associated validation of those judgments.
Using experimentation, data from unclassified successful, or near-successful, underwater attacks are analyzed and instantiated as a network graph, with the key characteristics of the risk of terrorism represented as nodes and the relationships between those characteristics forming the edges. The values of the key characteristics, instantiated as the lengths of the edges, default to absolute uncertainty, the state in which there is no information for, or against, a particular causal factor. To facilitate obtaining the values of the nodes, the Malice spectrum is formally defined, providing a dimensionless, methodology-independent model for determining the value of any given parameter. The methodology produces a meta-model, constructed from the relationships between the parameters of the risk equation, which determines a relative risk value.
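The graph construction above can be caricatured in a few lines. The node names, the [0, 1] support scale (with 0.5 standing in for "absolute uncertainty"), and the mean-based aggregation are all placeholder assumptions; the dissertation's actual meta-model is richer:

```python
# 0.5 encodes "absolute uncertainty": no information for or against.
ABSOLUTE_UNCERTAINTY = 0.5

# Edges link key characteristics of the risk of terrorism (the nodes);
# every edge value starts at the uncertainty default.
edges = {
    ("intent", "capability"): ABSOLUTE_UNCERTAINTY,
    ("capability", "vulnerability"): ABSOLUTE_UNCERTAINTY,
    ("vulnerability", "consequence"): ABSOLUTE_UNCERTAINTY,
}

# Where analyzed attack data exists, it replaces the uncertainty default.
edges[("capability", "vulnerability")] = 0.8

def relative_risk(graph):
    """Aggregate edge supports into a single relative risk value (mean here)."""
    return sum(graph.values()) / len(graph)
```

The point of the default is visible even in this toy: evidence moves individual edges away from 0.5, and the aggregate shifts only as far as the evidence warrants.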
Fine-grained emotion detection in microblog text
Automatic emotion detection in text is concerned with using natural language processing techniques to recognize emotions expressed in written discourse. Endowing computers with the ability to recognize emotions in a particular kind of text, microblogs, has important applications in sentiment analysis and affective computing. In order to build computational models that can recognize the emotions represented in tweets, we need to identify a set of suitable emotion categories. Prior work has mainly focused on building computational models for only a small set of six basic emotions (happiness, sadness, fear, anger, disgust, and surprise). This thesis describes a taxonomy of 28 emotion categories, an expansion of these six basic emotions, developed inductively from data. This set of 28 fine-grained emotion categories is representative of the range of emotions expressed in tweets (microblog posts on Twitter).
The ability of humans to recognize these fine-grained emotion categories is characterized using inter-annotator reliability measures based on annotations provided by expert and novice annotators. A set of 15,553 human-annotated tweets forms a gold-standard corpus, EmoTweet-28. For each emotion category, we extracted a set of linguistic cues (i.e., punctuation marks, emoticons, emojis, abbreviated forms, interjections, lemmas, hashtags and collocations) that can serve as salient indicators for that emotion category.
We evaluated the performance of automatic classification techniques on the set of 28 emotion categories through a series of experiments using several classifier and feature combinations. Our results show that it is feasible to extend machine learning classification to fine-grained emotion detection in tweets (i.e., as many as 28 emotion categories) with results comparable to those of state-of-the-art classifiers that detect six to eight basic emotions in text. Classifiers using features extracted from the linguistic cues associated with each category match or exceed the performance of conventional corpus-based and lexicon-based features for fine-grained emotion classification.
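As a toy illustration of cue-based features, consider a two-category sketch. The cue lexicon and the "neutral" fallback are invented here and are far simpler than the 28-category cue sets described above:

```python
import re

# Invented mini-lexicon: emoticons, interjections and hashtags as cues.
CUES = {
    "happiness": {":)", "yay", "#blessed"},
    "anger": {">:(", "furious", "wtf"},
}

def cue_features(tweet):
    """Binary feature per category: does the tweet contain any of its cues?"""
    tokens = set(re.findall(r"[#:>()\w'-]+", tweet.lower()))
    return {cat: int(bool(tokens & cues)) for cat, cues in CUES.items()}

def predict(tweet):
    """Pick a category with a matching cue, else fall back to 'neutral'."""
    feats = cue_features(tweet)
    best = max(feats, key=feats.get)
    return best if feats[best] else "neutral"
```

In the thesis's setting these binary cue features would feed a trained classifier alongside corpus-based and lexicon-based features, rather than deciding the label on their own.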
This thesis makes an important theoretical contribution in the development of a taxonomy of emotion in text. In addition, this research also makes several practical contributions, particularly in the creation of language resources (i.e., a corpus and lexicon) and machine learning models for fine-grained emotion detection in text.
Linguistic probability theory
In recent years probabilistic knowledge-based systems such as Bayesian networks and influence diagrams have come to the fore as a means of representing and reasoning about complex real-world situations. Although some of the probabilities used in these models may be obtained statistically, where this is impossible or simply inconvenient, modellers rely on expert knowledge. Experts, however, typically find it difficult to specify exact probabilities, and conventional representations cannot reflect any uncertainty they may have. In this way, the use of conventional point probabilities can damage the accuracy, robustness and interpretability of acquired models. With these concerns in mind, psychometric researchers have demonstrated that fuzzy numbers are good candidates for representing the inherent vagueness of probability estimates, and the fuzzy community has responded with two distinct theories of fuzzy probabilities.

This thesis, however, identifies formal and presentational problems with these theories which render them unable to represent even very simple scenarios. This analysis leads to the development of a novel and intuitively appealing alternative: a theory of linguistic probabilities patterned after the standard Kolmogorov axioms of probability theory. Since fuzzy numbers lack algebraic inverses, the resulting theory is weaker than, but generalises, its classical counterpart. Nevertheless, it is demonstrated that analogues of classical probabilistic concepts such as conditional probability and random variables can be constructed. In the classical theory, representation theorems mean that most of the time the distinction between mass/density distributions and probability measures can be ignored. Similar results are proven for linguistic probabilities.

From these results it is shown that directed acyclic graphs annotated with linguistic probabilities (under certain identified conditions) represent systems of linguistic random variables. It is then demonstrated that these linguistic Bayesian networks can utilise adapted best-of-breed Bayesian network algorithms (junction-tree-based inference and Bayes' ball irrelevancy calculation). These algorithms are implemented in ARBOR, an interactive design, editing and querying tool for linguistic Bayesian networks.

To explore the applications of these techniques, a realistic example drawn from the domain of forensic statistics is developed. In this domain the knowledge-engineering problems cited above are especially pronounced and expert estimates are commonplace. Moreover, robust conclusions are of unusually critical importance. An analysis of the resulting linguistic Bayesian network for assessing evidential support in glass-transfer scenarios highlights the potential utility of the approach.
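The central obstacle named above, that fuzzy numbers lack algebraic inverses, can be seen directly with triangular fuzzy numbers under standard extension-principle arithmetic:

```python
# A triangular fuzzy number is (left, peak, right). Under the extension
# principle, addition adds componentwise and negation reflects the support.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2])

def neg(x):
    return (-x[2], -x[1], -x[0])

about_half = (0.4, 0.5, 0.6)                # "about 0.5"
residue = add(about_half, neg(about_half))  # roughly (-0.2, 0.0, 0.2)
# The result peaks at zero but is not the crisp number 0: the spread grows
# rather than cancelling, so addition has no inverse, which is why the
# linguistic theory must be weaker than its classical counterpart.
```

The same effect blocks naive fuzzy analogues of identities like P(A) = 1 - P(not A) holding with equality, which motivates rebuilding the theory from the Kolmogorov axioms rather than porting classical results wholesale.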
Design and implementation of fuzzy logic controller for a process control application
Many industrial applications of fuzzy logic control have been reported. This thesis studies and reports the problems associated with heat-exchanger temperature control via conventional PID control implemented with Programmable Logic Controllers (PLCs), and provides an example of the design and implementation of fuzzy logic controllers (FLCs) for a heat exchanger in a Water for Injection (WFI) system.
After a basic FLC is designed and tested, it is shown how its rule base evolved to achieve superior performance by utilizing additional low-cost sensing information from the process and its environment. A method for implementing FLCs in the existing PLC is discussed. The system performance of the five designed FLC rule-base strategies is compared with that of the existing PID controller, and it is concluded that better performance can be achieved using fuzzy logic control technology.
Finally, this thesis discusses some problems blocking the widespread industrial application of FLCs, and possible solutions to them.
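For a flavour of the rule-base approach, here is a minimal Sugeno-style controller sketch. The membership functions, the nine-rule table, the output singletons, and the sign convention (error = setpoint minus measured temperature, positive output opens the valve further) are invented for illustration and are not the thesis's actual WFI heat-exchanger design:

```python
# Membership functions for error (and error rate) on an assumed +/-5 span.
def mu_neg(e):  return max(0.0, min(1.0, -e / 5.0))
def mu_zero(e): return max(0.0, 1.0 - abs(e) / 5.0)
def mu_pos(e):  return max(0.0, min(1.0, e / 5.0))

MU = {"neg": mu_neg, "zero": mu_zero, "pos": mu_pos}

# (error term, error-rate term) -> valve adjustment (singleton output).
RULES = {
    ("neg", "neg"): -2.0, ("neg", "zero"): -1.0, ("neg", "pos"): 0.0,
    ("zero", "neg"): -1.0, ("zero", "zero"): 0.0, ("zero", "pos"): 1.0,
    ("pos", "neg"): 0.0, ("pos", "zero"): 1.0, ("pos", "pos"): 2.0,
}

def flc(error, d_error):
    """Weighted-average (Sugeno) defuzzification over the rule table."""
    num = den = 0.0
    for (et, dt), out in RULES.items():
        w = min(MU[et](error), MU[dt](d_error))  # rule firing strength
        num += w * out
        den += w
    return num / den if den > 0 else 0.0
```

Evolving the rule base, as the thesis describes, amounts to editing the table and membership functions in place, which is exactly what makes such controllers easy to refine with extra sensing information and to host on a PLC.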