
    Semantic process mining tools: core building blocks

    Process mining aims at discovering new knowledge based on information hidden in event logs. Two important enablers for such analysis are powerful process mining techniques and the omnipresence of event logs in today's information systems. Most information systems supporting (structured) business processes (e.g. ERP, CRM, and workflow systems) record events in some form (e.g. transaction logs, audit trails, and database tables). Process mining techniques use event logs for all kinds of analysis, e.g., auditing, performance analysis, process discovery, etc. Although current process mining techniques/tools are quite mature, the analysis they support is somewhat limited because it is purely based on labels in logs. This means that these techniques cannot benefit from the actual semantics behind these labels, which could cater for more accurate and robust analysis techniques. Existing analysis techniques are purely syntax oriented, i.e., much time is spent on filtering, translating, interpreting, and modifying event logs given a particular question. This paper presents the core building blocks necessary to enable semantic process mining techniques/tools. Although the approach is highly generic, we focus on a particular process mining technique and show how this technique can be extended and implemented in the ProM framework tool.
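The label-based analysis the abstract contrasts with semantic analysis can be illustrated with one of the simplest discovery steps: counting the directly-follows relation between activity labels in an event log. The sketch below uses a made-up three-trace log and plain Python; it illustrates the general technique only, not the ProM extension the paper describes.

```python
from collections import Counter

# Hypothetical event log: one list of activity labels per case (trace).
event_log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "check", "approve", "archive"],
]

def directly_follows(log):
    """Count how often activity a is directly followed by activity b."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

df = directly_follows(event_log)
print(df[("register", "check")])  # 3
```

A purely syntactic miner treats "approve" and "reject" as opaque strings; a semantic miner could additionally exploit that both are instances of a "decision" concept.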

    Requirements analysis of the VoD application using the tools in TRADE

    This report contains a specification of requirements for a video-on-demand (VoD) application developed at Belgacom, used as a trial application in the 2RARE project. The specification contains three parts: an informal specification in natural language; a semiformal specification consisting of a number of diagrams intended to illustrate the informal specification; and a formal specification that makes the requirements on the desired software system precise. The informal specification is structured in such a way that it resembles official specification documents conforming to standards such as those of IEEE or ESA. The semiformal specification uses some of the tools from a requirements engineering toolkit called TRADE (Toolkit for Requirements And Design Engineering). The purpose of TRADE is to combine the best ideas in current structured and object-oriented analysis and design methods within a traditional systems engineering framework. In the case of the VoD system, the systems engineering framework is useful because it provides techniques for allocation and flowdown of system functions to components. TRADE consists of semiformal techniques taken from structured and object-oriented analysis as well as a formal specification language, which provides constructs that correspond to the semiformal constructs. The formal specification language used in TRADE is LCM (Language for Conceptual Modeling), which is a syntactically sugared version of order-sorted dynamic logic with equality. The purpose of this report is to illustrate and validate the TRADE/LCM approach in the specification of distributed, communication-intensive systems.

    Knowledge Acquisition for Content Selection

    An important part of building a natural-language generation (NLG) system is knowledge acquisition (KA), that is, deciding on the specific schemas, plans, grammar rules, and so forth that should be used in the NLG system. We discuss some experiments we have performed with KA for content-selection rules, in the context of building an NLG system which generates health-related material. These experiments suggest that it is useful to supplement corpus analysis with KA techniques developed for building expert systems, such as structured group discussions and think-aloud protocols. They also raise the point that KA issues may influence architectural design issues, in particular the decision on whether a planning approach is used for content selection. We suspect that in some cases, KA may be easier if other constructive expert-system techniques (such as production rules, or case-based reasoning) are used to determine the content of a generated text. Comment: To appear in the 1997 European NLG workshop. 10 pages, postscript
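The production-rule style of content selection the abstract mentions can be sketched as rules that test a patient record and, when they fire, select a content item for the generated health text. This is a hedged, minimal illustration of the general technique; the rule conditions, record fields, and message identifiers below are invented, not the authors' actual rules.

```python
# Hypothetical content-selection rules for a health-information generator:
# each rule pairs a condition on the patient record with a message to include.
rules = [
    (lambda p: p["smoker"], "smoking_cessation_advice"),
    (lambda p: p["age"] >= 50, "screening_reminder"),
    (lambda p: p["bmi"] >= 30, "weight_management_info"),
]

def select_content(patient):
    """Fire every rule whose condition holds; collect the selected messages."""
    return [msg for cond, msg in rules if cond(patient)]

patient = {"smoker": True, "age": 62, "bmi": 24}
print(select_content(patient))  # ['smoking_cessation_advice', 'screening_reminder']
```

Because each rule is an independent condition-action pair, domain experts can add or revise rules one at a time, which is one reason KA for such rules can be easier than KA for a monolithic text plan.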

    A Comparative Examination of Systems Analysis Techniques

    The systems industry has now experienced almost three decades of growth and development. In that period, a large number of analysis tools and techniques have been proposed to aid the development process. Early systems were supported by analysis techniques which had been used for some time in precomputer systems. Next, the precomputer techniques were modified to meet some of the unique requirements of computer-based systems. Succeeding generations of analysis tools continued to provide improved support to the analysis process. In recent years, a series of structured analysis tools and techniques has been introduced to the industry. At this point, a large number of competing analysis techniques exist and are widely used. However, they are not clearly understood by many practicing professionals. They tend to be incomplete, requiring careful evaluation and integration to result in coherent analysis processes. Unfortunately, the literature on the subject tends to concentrate on the strengths of individual tools, often implying that a single analysis process can address all needs. In reality, all analysis tools and techniques are incomplete. While specific approaches provide support for specific analysis problems, none cover all of the system issues of interest. Traditional techniques tended to provide good coverage of input and output detail. In addition, traditional analysis approaches clarified flows of information through the organization. Later approaches considered data storage and provided tools to represent procedural system aspects. Structured techniques concentrate on the structure of data flows, data, and control. Unfortunately, modern analysis approaches exhibit improvements in some areas of analysis while neglecting some of the strengths of older techniques. This paper presents a comparative examination of analysis techniques to aid practicing professionals in the choice of tools for development efforts.
The comparison is supported by a set of dimensions which represent the various system aspects of interest during analysis. These dimensions include considerations of system structure, functions, procedure, input detail, output detail, and mechanisms responsible for functions. In addition, analysis techniques may be compared in terms of their ability to support high- and low-level analysis and to support effective communication between systems professionals and their customers. The comparison of analysis techniques clearly shows that traditional approaches failed to consider system structure issues. However, modern tools fail to consider some of the traditional issues of interest. For example, most of the structured analysis methods fail to provide any support for I/O detail. In addition, almost all of the currently popular analysis techniques assume that all functions will be implemented in software. Only SADT and some of the older techniques support the analysis of mechanisms responsible for functions. Current man-machine concerns make mechanism analysis critical. The comparison of techniques indicates a need for the combination of multiple tools to provide complete coverage of the issues of interest during analysis. In the comparison, the strongest approaches were those which explicitly required the use of multiple tools. For example, HIPO is a package which is quite complete, despite its age. The comparison process provides sufficient detail to support the choice of techniques which can be combined into complete packages.

    Minimizing synchronizations in sparse iterative solvers for distributed supercomputers

    Eliminating synchronizations is one of the important techniques for minimizing communications in modern high-performance computing. This paper discusses principles for reducing the communications due to global synchronizations in sparse iterative solvers on distributed supercomputers. We demonstrate how to minimize global synchronizations by rescheduling a typical Krylov subspace method. The benefit of minimizing synchronizations is shown in a theoretical analysis and is verified by numerical experiments using up to 900 processors. The experiments also show that the communication complexities of some structured sparse matrix-vector multiplications and of the global communications on the underlying supercomputer are of the order P^(1/2.5) and P^(4/5), respectively, where P is the number of processors; the experiments were carried out on a Dawning 5000A.
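One common way of rescheduling a Krylov method to reduce global synchronizations is to reorder its recurrences so that inner products previously computed in separate steps can be batched into a single reduction. The serial NumPy sketch below marks where the single allreduce would occur in a distributed version; it illustrates this general idea only and is not the paper's exact rescheduled algorithm.

```python
import numpy as np

def fused_dots(r, w):
    """Compute (r, r) and (r, w) together.

    A distributed version would need ONE allreduce on this length-2
    buffer instead of two separate global reductions, i.e. one global
    synchronization instead of two. Serial stand-in for the idea.
    """
    local = np.array([r @ r, r @ w])
    # In MPI, this single buffer would go through one MPI_Allreduce:
    # comm.Allreduce(MPI.IN_PLACE, local, op=MPI.SUM)
    return local[0], local[1]

rng = np.random.default_rng(0)
r = rng.standard_normal(100)
w = rng.standard_normal(100)
rr, rw = fused_dots(r, w)
```

Since a global reduction costs roughly O(log P) latency per call, halving the number of reductions per iteration directly reduces the synchronization term that dominates at large processor counts P.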

    Geomatics Applications to Contemporary Social and Environmental Problems in Mexico

    Trends in geospatial technologies have led to the development of powerful new analysis and representation techniques that involve processing of massive datasets, some unstructured, some acquired from ubiquitous sources, and some others from remotely located sensors of different kinds, all of which complement the structured information produced on a regular basis by governmental and international agencies. In this chapter, we provide both an extensive review of such techniques and an insight into the applications of some of these techniques in various study cases in Mexico at various scales of analysis: from regional migration flows of highly qualified people at the country level and the spatio-temporal analysis of unstructured information in geotagged tweets for sentiment assessment, to more local applications of participatory cartography for policy definition jointly between local authorities and citizens, and an automated method for three-dimensional (3D) modelling and visualisation of forest inventorying with laser scanner technology.
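As a toy illustration of the kind of sentiment assessment over geotagged tweets mentioned above, the sketch below aggregates a lexicon-based sentiment score per region. The lexicon, regions, and tweets are invented stand-ins for illustration, not the chapter's data or method.

```python
# Toy sentiment lexicon: word -> polarity score (hypothetical values).
LEXICON = {"good": 1, "great": 1, "happy": 1, "bad": -1, "unsafe": -1}

# Hypothetical geotagged tweets as (region, text) pairs.
tweets = [
    ("CDMX", "great food and happy streets"),
    ("CDMX", "bad traffic today"),
    ("Jalisco", "unsafe roads"),
]

def sentiment_by_region(tweets):
    """Sum per-word lexicon scores for each tweet, aggregated by region."""
    scores = {}
    for region, text in tweets:
        s = sum(LEXICON.get(w, 0) for w in text.lower().split())
        scores[region] = scores.get(region, 0) + s
    return scores

print(sentiment_by_region(tweets))  # {'CDMX': 1, 'Jalisco': -1}
```

A spatio-temporal version would additionally bucket the aggregation by time window and map the regional scores, which is the kind of analysis the chapter surveys.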