44 research outputs found

    Semantics of Higraphs for Process Modeling and Analysis

    No full text
    The knowledge and experience of a case manager remain a key success factor for Case Management Processes (CMPs). When the number of influential parameters is high, the number of possible scenarios grows significantly, so automated guidance in scenario evaluation and activity planning would be of great help. In our previous work, we defined a statecharts semantics for the visualisation and simulation of CMP scenarios. In this work, we formalise the state-oriented models with higraphs: higraphs provide the mathematical foundation for statecharts and eventually enable a wide panoply of algorithms for process analysis and optimisation. We show how a statecharts diagram can be transformed into a higraph and analysed at run time with graph algorithms. In particular, we take the Shortest Path algorithm as an example and show how it can be used to guide the case manager by suggesting the best process scenario. Compared to BPM approaches, a state-oriented process scenario does not specify concrete activities but only the objectives and constraints to be met. Thus, our approach does not prescribe but describes the activity to be executed next. The manager can define an activity that fits the description "on the fly", based on her experience and intuition
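    The abstract's shortest-path guidance can be illustrated with a small sketch. This is not the authors' implementation: the scenario states, transition costs, and the plain Dijkstra search below are all illustrative assumptions about how a statechart-derived graph might be queried for the cheapest path to a goal state.

```python
import heapq

def shortest_scenario(transitions, start, goal):
    """Dijkstra over a state graph: transitions maps state -> [(next_state, cost)].
    Returns the cheapest sequence of states from start to goal, or None."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, state, path = heapq.heappop(queue)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt, weight in transitions.get(state, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

# Hypothetical CMP scenario states, with effort estimates as edge weights.
transitions = {
    "claim_received": [("assessed", 2), ("escalated", 5)],
    "assessed":       [("approved", 3), ("escalated", 1)],
    "escalated":      [("approved", 4)],
}
print(shortest_scenario(transitions, "claim_received", "approved"))
# → ['claim_received', 'assessed', 'approved']
```

    The suggested path is descriptive in the sense of the abstract: it names the states (objectives) to reach next, not the concrete activities that realise each transition.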

    Simplification of rules extracted from neural networks

    Get PDF
    Artificial neural networks (ANNs) have proven to be successful general machine learning techniques for, amongst others, pattern recognition and classification. Real-world problems in agriculture (soybean, tea), medicine (cancer, cardiology, mammograms) and finance (credit rating, stock market) are successfully solved using ANNs. ANNs model biological neural systems. A biological neural system consists of neurons interconnected through neural synapses. These neurons serve as information processing units. Synapses carry information to the neurons, which then process or respond to the data by sending a signal to the next level of neurons. Information is strengthened or lessened according to the sign and magnitude of the weight associated with the connection. An ANN consists of cell-like entities called units (also called artificial neurons) and weighted connections between these units, referred to as links. An ANN can be viewed as a directed graph with weighted connections. A unit belongs to one of three groups: input, hidden or output. Input units receive the initial training patterns, which consist of input attributes and the associated target attributes, from the environment. Hidden units do not interact with the environment, whereas output units present the results to the environment. Hidden and output units compute an output a_i which is a function f of the sum of the outputs x_j of the units j in the preceding layer, each multiplied by the weight w_ij of its connection, together with a bias term θ_i that acts as a threshold for the unit. The output a_i for unit i with n input units is calculated as a_i = f(Σ_{j=1..n} x_j·w_ij − θ_i). Training of the ANN is done by adapting the weight values for each unit via a gradient search. Given a set of input-target pairs, the ANN learns the functional relationship between the input and the target. A serious drawback of the neural network approach is the difficulty of determining why a particular conclusion was reached. 
This is due to the inherent 'black box' nature of the neural network approach. Neural networks rely on 'raw' training data to learn the relationships between the initial inputs and target outputs. Knowledge is encoded in a set of numeric weights and biases. Although this data-driven aspect of neural networks allows easy adjustment when changes in the environment or events occur, the numeric weights are difficult to interpret, making the network difficult for humans to understand. Concepts represented by symbolic learning algorithms are intuitive and therefore easily understood by humans [Wnek 1994]. One approach to understanding the representations formed by neural networks is to extract symbolic rules from the networks. Over the last few years, a number of rule extraction methods have been reported [Craven 1993, Fu 1994]. There are some general assumptions that these algorithms adhere to. The first assumption that most rule extraction algorithms make is that non-input units are either maximally active (activation near 1) or inactive (activation near 0). This Boolean-valued activation is approximated by using the standard logistic activation function f(z) = 1/(1 + e^(−s·z)) and setting s = 5.0. The use of these function parameters guarantees that non-input units always have non-negative activations in the range [0, 1]. The second underlying premise of rule extraction is that each hidden and output unit implements a symbolic rule. The concept associated with each unit is the consequent of the rule, and certain subsets of the input units represent the antecedent of the rule. Rule extraction algorithms search for those combinations of input values to a particular hidden or output unit that result in it having an optimal (near-one) activation. Here, rule extraction methods exploit a very basic principle of biological neural networks: if the sum of its weighted inputs exceeds a certain threshold, then the biological neuron fires [Fu 1994]. 
This condition is satisfied when the sum of the weighted inputs exceeds the bias, i.e. Σ_{j: x_j = 1} w_ij > θ_i. It has been shown that most concepts described by humans can usually be expressed as production rules in disjunctive normal form (DNF) notation. Rules expressed in this notation are therefore highly comprehensible and intuitive. In addition, the number of production rules may be reduced and their structure simplified by using propositional logic. A method that extracts production rules in DNF is presented [Viktor 1995]. The basic idea of the method is the use of equivalence classes: similarly weighted links are grouped into a cluster, the assumption being that individual weights do not have unique importance. Clustering considerably reduces the combinatorics of the method compared with previously reported approaches. Since the rules are in a logically manipulable form, significant simplifications in their structure can be obtained, yielding a highly reduced and comprehensible set of rules. Experimental results have shown that the accuracy of the extracted rules compares favourably with the CN2 [Clark 1989] and C4.5 [Quinlan 1993] symbolic rule extraction methods. The extracted rules are highly comprehensible and similar to those extracted by traditional symbolic methods
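    The firing condition and the search for rule antecedents described above can be sketched in a few lines. This is a minimal illustration, not the method of [Viktor 1995]: the example unit, its weights, and the brute-force subset search are assumptions made for demonstration (the cited method uses weight clustering precisely to avoid this combinatorial search).

```python
import math
from itertools import combinations

def activation(inputs, weights, bias, s=5.0):
    """Logistic unit output f(z) = 1/(1 + e^(-s*z)) with z = sum(x_j*w_j) - bias."""
    z = sum(x * w for x, w in zip(inputs, weights)) - bias
    return 1.0 / (1.0 + math.exp(-s * z))

def extract_dnf(weights, bias):
    """Find minimal sets of active (x_j = 1) inputs whose weight sum exceeds
    the bias, i.e. the antecedents of the unit's rule in DNF."""
    n = len(weights)
    rules = []
    for size in range(1, n + 1):
        for combo in combinations(range(n), size):
            if sum(weights[j] for j in combo) > bias:
                # Keep only minimal antecedents: skip supersets of known rules.
                if not any(set(rule) <= set(combo) for rule in rules):
                    rules.append(combo)
    return rules

# Hypothetical unit: fires when x0 alone is active, or x1 and x2 together.
weights, bias = [1.2, 0.7, 0.6], 1.0
print(extract_dnf(weights, bias))  # → [(0,), (1, 2)]
```

    Each returned tuple is one disjunct of the unit's rule: "x0" OR "x1 AND x2" implies the unit's concept.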

    Higraphs: an overview of theory and application

    Get PDF
    This paper presents an overview of the established concepts of David Harel's higraphs, to increase their visibility. Higraphs are a union of extended graph theory and extended set theory which allows the understandable definition of complex semantics, having a powerful, intuitive cognitive nature. Viewing 'the big picture' is cited as an example of this understandability. A number of other applications of higraphs are given. Some novel applications are suggested, including the use of higraphs in analyzing business processes, graphical user interface specification, graph domain specification and executable graphs. A proposition is made that process graphs and data-entity state-transition higraphs are duals. Finally, a case is made for 'informal' higraphs in group communications

    System Modeling and Traceability Applications of the Higraph Formalism

    Get PDF
    One of the most important tools for a systems engineer is their system model. From this model, engineering decisions can be made without costly integration, fabrication, or installations. Existing system modeling languages used to create the system model are detailed and comprehensive, but lack a true ability to unify the system model by showing all relationships among all components in the model. Higraphs, a type of mathematical graph, allow systems engineers to not only represent all required information in a system model, but to formally show all relationships in the model through hierarchies, edges, and orthogonalities. With a higraph system model, all relationships between system requirements, components, and behaviors are formalized allowing for a "smart" model that can be queried for custom sets of information that, when presented to the systems engineer, will aid in engineering decisions
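    The "smart", queryable higraph model described above can be sketched with a toy data structure. Everything here is an illustrative assumption: the `Higraph` class, the `satisfies` edge label, and the example subsystem are invented for demonstration, capturing only the two core higraph ingredients the abstract names, containment hierarchy and edges.

```python
class Higraph:
    """Toy higraph: blobs with a containment hierarchy plus labeled edges."""

    def __init__(self):
        self.children = {}   # blob -> set of directly contained blobs
        self.edges = []      # (source, label, target) triples

    def contain(self, parent, child):
        self.children.setdefault(parent, set()).add(child)

    def connect(self, source, label, target):
        self.edges.append((source, label, target))

    def descendants(self, blob):
        """All blobs transitively contained in `blob`."""
        found = set()
        stack = [blob]
        while stack:
            for child in self.children.get(stack.pop(), ()):
                if child not in found:
                    found.add(child)
                    stack.append(child)
        return found

    def traces_to(self, requirement):
        """Query: components satisfying a requirement. An edge drawn from an
        enclosing blob applies to everything it contains (higraph 'zooming')."""
        result = set()
        for src, label, tgt in self.edges:
            if label == "satisfies" and tgt == requirement:
                result.add(src)
                result |= self.descendants(src)
        return result

h = Higraph()
h.contain("power_subsystem", "battery")
h.contain("power_subsystem", "solar_panel")
h.connect("power_subsystem", "satisfies", "REQ-42")
print(sorted(h.traces_to("REQ-42")))
# → ['battery', 'power_subsystem', 'solar_panel']
```

    A single edge at the subsystem level answers a traceability query for every contained component, which is the kind of formalised relationship the abstract argues existing modeling languages lack.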

    Proceedings of the Graduate Student Symposium of the 7th International Conference on the Theory and Application of Diagrams, July 5 2012

    Get PDF
    Proceedings of the Graduate Student Symposium held at the 7th International Conference on the Theory and Application of Diagrams (Diagrams 2012), held at the University of Kent on July 5, 2012. Dr. Nathaniel Miller, professor in the School of Mathematical Sciences at UNC, served on the symposium organizing committee

    HAZOP: Our Primary Guide in the Land of Process Risks: How can we improve it and do more with its results?

    Get PDF
    All risk management starts with determining what can happen, and reliable predictive analysis is key. We therefore perform process hazard analysis, which should result in scenario identification and definition. Apart from material/substance properties, process conditions and possible deviations and mishaps form the inputs. Over the years HAZOP has been the most important tool to identify potential process risks by systematically considering deviations in observables, determining possible causes and consequences, and, where necessary, suggesting improvements. The drawbacks of HAZOP are known: it is effort-intensive, while the results are used only once. The exercise must be repeated at several stages of process build-up, and once the process is operational it must be re-conducted periodically. There have been many past attempts to semi-automate the HAZOP procedure to ease the effort of conducting it, but lately promising new developments have been realized that also enable the use of the results for facilitating operational fault diagnosis. This paper reviews the directions in which improved automation of HAZOP is progressing and how the results, besides risk analysis and the design of preventive and protective measures, can also be used during operations for early warning of upcoming abnormal process situations
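    The systematic part of HAZOP that semi-automation targets, crossing process parameters with guidewords to enumerate candidate deviations, can be sketched briefly. The guideword list is the classic one, but the parameter names and the exclusion set below are illustrative assumptions, not part of the paper.

```python
from itertools import product

# Classic HAZOP guidewords applied to each process parameter.
GUIDEWORDS = ["no", "more", "less", "reverse", "other than"]

def deviations(parameters, guidewords=GUIDEWORDS):
    """Cross every parameter with every guideword to enumerate candidate
    deviations, filtering combinations that make no physical sense."""
    nonsensical = {("temperature", "no"), ("temperature", "reverse")}
    return [f"{gw} {param}"
            for param, gw in product(parameters, guidewords)
            if (param, gw) not in nonsensical]

devs = deviations(["flow", "temperature"])
print(devs)
```

    Each generated deviation ("no flow", "more temperature", ...) then seeds the manual part of the study: identifying causes, consequences and safeguards, which is where the effort-intensity the abstract mentions actually lies.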

    Reusable abstractions for modeling languages

    Full text link
    This is the author’s version of a work that was accepted for publication in Information Systems. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Systems, 38, 8, (2013), DOI: 10.1016/j.is.2013.06.001.

    Model-driven engineering proposes the use of models to describe the relevant aspects of the system to be built and to synthesize the final application from them. Models are normally described using Domain-Specific Modeling Languages (DSMLs), which provide the primitives and constructs of the domain. Still, the increasing complexity of systems has raised the need for abstraction techniques able to produce simpler versions of the models while retaining some properties of interest. The problem is that developing such abstractions for each DSML from scratch is time- and resource-consuming. In this paper, our goal is to reduce the effort of providing modeling languages with abstraction mechanisms. For this purpose, we have devised some techniques, based on generic programming and domain-specific meta-modeling, to define generic abstraction operations that can be reused over families of modeling languages sharing certain characteristics. Abstractions can make use of clustering algorithms as similarity criteria for model elements. These algorithms can be made generic as well, and customized for particular languages by means of annotation models. As a result, we have developed a catalog of reusable abstractions using the proposed techniques, together with a working implementation in the MetaDepth multi-level meta-modeling tool. 
Our techniques and prototypes demonstrate that it is feasible to build reusable and adaptable abstractions, so that similar abstractions need not be developed from scratch, and their integration into new or existing modeling languages is less costly.

    Work funded by the Spanish Ministry of Economy and Competitiveness with project “Go Lite” (TIN2011-24139), and the R&D programme of the Madrid Region with project “eMadrid” (S2009/TIC-1650)
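    The idea of a generic, clustering-based abstraction operation can be sketched independently of any particular DSML. This is not the MetaDepth implementation: the similarity criterion (a key function standing in for the annotation models the abstract mentions), the task model, and the `Abstract<...>` naming are all illustrative assumptions.

```python
from collections import defaultdict

def abstract_model(elements, similarity_key):
    """Generic abstraction operation: cluster model elements that share a
    similarity key, then collapse each cluster into one abstract node.
    The key function is the language-specific customization point."""
    clusters = defaultdict(list)
    for element in elements:
        clusters[similarity_key(element)].append(element)
    return {f"Abstract<{key}>": members for key, members in clusters.items()}

# Hypothetical DSML model: process tasks annotated with the performing role.
model = [
    {"name": "review",  "role": "clerk"},
    {"name": "stamp",   "role": "clerk"},
    {"name": "approve", "role": "manager"},
]
abstracted = abstract_model(model, lambda e: e["role"])
print(sorted(abstracted))  # → ['Abstract<clerk>', 'Abstract<manager>']
```

    Reuse comes from the fact that only the key function changes per language; the clustering and collapsing logic stays the same across the family of DSMLs.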