
    An Empirical Investigation Of The Cognitive Fit Of Selected Process Model Diagramming Techniques

    This paper describes an empirical investigation of the cognitive fit (defined as the degree to which a particular diagramming technique is representative of a problem space) of four process modeling techniques: data flow diagrams (DFD), process maps (PM), flowcharts (FC), and Resources, Events, and Agents (REA) diagrams. Experimental results indicated some positive associations between three of the techniques (PM, DFD, and REA) and scores on the questions hypothesized to pertain to each. Contrary to the hypotheses, PM and DFD outperformed FC.

    Efficient Software Implementation of Stream Programs

    The way we use computers and mobile phones today requires large amounts of processing of data streams. Examples include digital signal processing for wireless transmission, audio and video coding for recording and watching videos, and noise reduction for phone calls. These tasks can be performed by stream programs: computer programs that process streams of data. Stream programs can be composed of other stream programs. The components of a composition are connected in a network, i.e. the output streams of one component are sent as input streams to other components. The components that perform the actual computation are called kernels. They can be described in different styles and programming languages. There are also formal models for describing the kernels and the networks. One such model is the actor machine. This dissertation evaluates how the actor machine facilitates creating efficient software implementations of stream programs. The evaluation covers four aspects: (1) analyzability of its structure, (2) generality in the languages and styles it can express, (3) efficient implementation of kernels, and (4) efficient implementation of networks. The dissertation demonstrates all four aspects through the implementation and evaluation of a stream-program compiler based on actor machines.
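    To make the composition idea above concrete, here is a minimal sketch in Python (an assumption; the dissertation is not tied to Python, and the kernel names are illustrative): kernels are written as generator functions that consume one stream and produce another, and a network is formed by feeding one kernel's output stream into the next.

```python
# Illustrative sketch only: kernels as Python generators, a network as
# a chain of kernels. Names (scale, moving_average, pipeline) are
# hypothetical, not taken from the dissertation.

def scale(stream, factor=2.0):
    """Kernel: multiply every sample by a constant factor."""
    for sample in stream:
        yield sample * factor

def moving_average(stream, window=3):
    """Kernel: simple noise reduction via a sliding-window mean."""
    buf = []
    for sample in stream:
        buf.append(sample)
        if len(buf) > window:
            buf.pop(0)
        yield sum(buf) / len(buf)

def pipeline(source, *kernels):
    """Network: connect kernels so each consumes the previous output stream."""
    stream = source
    for kernel in kernels:
        stream = kernel(stream)
    return stream

out = pipeline(iter([1.0, 2.0, 4.0, 8.0]), scale, moving_average)
print(list(out))  # [2.0, 3.0, 4.666..., 9.333...]
```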

    A maximal clique based multiobjective evolutionary algorithm for overlapping community detection

    Detecting community structure has become an important technique for studying complex networks. Although many community detection algorithms have been proposed, most of them focus on separated communities, where each node can belong to only one community. However, in many real-world networks, communities often overlap with each other, so developing overlapping community detection algorithms becomes necessary. Along this avenue, this paper proposes a maximal clique based multiobjective evolutionary algorithm for overlapping community detection. In this algorithm, a new representation scheme based on the introduced maximal-clique graph is presented. Since the maximal-clique graph takes a set of maximal cliques of the original graph as its nodes, and two maximal cliques are allowed to share nodes of the original graph, overlap is an intrinsic property of the maximal-clique graph. Owing to this property, the new representation scheme allows multiobjective evolutionary algorithms to handle the overlapping community detection problem in a way similar to separated community detection, which simplifies the optimization problem. As a result, the proposed algorithm can detect overlapping community structure with higher partition accuracy and lower computational cost than existing ones. Experiments on both synthetic and real-world networks validate the effectiveness and efficiency of the proposed algorithm.
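    As a concrete illustration of the maximal-clique graph described above, here is a minimal sketch using Python with networkx (an implementation choice assumed here, not specified by the paper): each maximal clique of the original graph becomes a node, and two clique-nodes are adjacent when the cliques share at least one original vertex, which is exactly where overlap enters the representation.

```python
import networkx as nx

def maximal_clique_graph(G):
    """Build the maximal-clique graph: one node per maximal clique of G,
    with an edge between two clique-nodes whenever the cliques share at
    least one vertex of the original graph (the source of overlap)."""
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    MC = nx.Graph()
    MC.add_nodes_from(range(len(cliques)))
    for i in range(len(cliques)):
        for j in range(i + 1, len(cliques)):
            if cliques[i] & cliques[j]:  # shared original vertices
                MC.add_edge(i, j)
    return MC, cliques

# Two triangles sharing vertex 2 yield two maximal cliques that overlap,
# so the maximal-clique graph has two nodes joined by one edge.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)])
MC, cliques = maximal_clique_graph(G)
print(sorted(sorted(c) for c in cliques))  # [[0, 1, 2], [2, 3, 4]]
print(list(MC.edges))                      # [(0, 1)]
```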

    Towards a fully automated computation of RG-functions for the 3-d O(N) vector model: Parametrizing amplitudes

    Within the framework of the field-theoretical description of second-order phase transitions via the 3-dimensional O(N) vector model, accurate predictions for critical exponents can be obtained from (resummation of) the perturbative series of Renormalization-Group functions, which are in turn derived, following Parisi's approach, from the expansions of appropriate field correlators evaluated at zero external momenta. Such a technique was fully exploited 30 years ago in two seminal works of Baker, Nickel, Green and Meiron, which led to knowledge of the β-function up to the 6-loop level; they succeeded in obtaining a precise numerical evaluation of all needed Feynman amplitudes in momentum space by lowering the dimensionality of each integration with a cleverly arranged set of computational simplifications. Extending this computation is not straightforward, owing both to the factorial proliferation of relevant diagrams and to the increasing dimensionality of their associated integrals; in any case, this task can reasonably be carried out only within an automated environment. On the road towards the creation of such an environment, we here show how a strategy closely inspired by that of Nickel and coworkers can be stated in algorithmic form and successfully implemented on the computer. As an application, we plot the minimized distributions of residual integrations for the sets of diagrams needed to obtain RG-functions to the full 7-loop level; they provide a good estimate of the computational effort that will be required to improve the currently available estimates of critical exponents.
    Comment: 54 pages, 17 figures and 4 tables
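    For orientation (a schematic only; normalizations of the coupling and of the coefficients differ between conventions, and no explicit values are given in this abstract), the object being computed has the form of a perturbative series in the renormalized coupling g, whose loop-order coefficients carry the diagrammatic input discussed above:

```latex
% Schematic beta-function for the 3-d O(N) model in the
% fixed-dimension (Parisi-style) scheme. With a conventional
% normalization of g the low-order terms are fixed, and each
% higher coefficient b_k(N) sums the k-loop Feynman amplitudes.
\beta(g) = -g + g^{2} + \sum_{k \ge 3} b_k(N)\, g^{k},
\qquad \beta(g^{*}) = 0 .
```

    Critical exponents then follow from the (resummed) series for the other RG functions evaluated at the nontrivial fixed point g*.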

    Interchanging Discrete Event Simulation Process Interaction Models Using The Web Ontology Language - OWL

    Discrete event simulation development requires significant investments in time and resources. Descriptions of discrete event simulation models are associated with world views, including the process interaction orientation. Historically, these models have been encoded using high-level programming languages or special-purpose, typically vendor-specific, simulation languages. These approaches complicate simulation model reuse and interchange. The current document-centric World Wide Web is evolving into a Semantic Web that communicates information using ontologies. The Web Ontology Language (OWL) was used to encode a Process Interaction Modeling Ontology for Discrete Event Simulations (PIMODES). The PIMODES ontology was developed using ontology engineering processes. Software was developed to demonstrate the feasibility of interchanging models from commercial simulation packages using PIMODES as an intermediate representation. The purpose of PIMODES is to provide a vendor-neutral open representation to support model interchange. Model interchange enables reuse and provides an opportunity to improve simulation quality, reduce development costs, and reduce development times.
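    For flavour, here is a hypothetical sketch (in Python with rdflib; the namespace, class names, and instance are invented for illustration and are not the actual PIMODES vocabulary) of how a process-interaction model element can be encoded as vendor-neutral OWL triples:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

# Hypothetical namespace; the real PIMODES IRI is not given in the abstract.
PIM = Namespace("http://example.org/pimodes#")

g = Graph()
g.bind("pim", PIM)
g.bind("owl", OWL)

# Declare illustrative ontology classes and one model instance, showing
# how a simulation-model element becomes tool-neutral RDF/OWL triples.
g.add((PIM.Entity, RDF.type, OWL.Class))
g.add((PIM.Queue, RDF.type, OWL.Class))
g.add((PIM.Queue, RDFS.subClassOf, PIM.Entity))

g.add((PIM.checkoutQueue, RDF.type, PIM.Queue))
g.add((PIM.checkoutQueue, RDFS.label, Literal("Checkout queue")))

print(g.serialize(format="turtle"))
```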

    Throughput-driven Partitioning of Stream Programs on Heterogeneous Distributed Systems

    Graph partitioning is an important problem in computer science and is NP-hard; in practice it is usually solved using heuristics. In this article we introduce the use of graph partitioning to partition the workload of stream programs in order to optimise throughput on heterogeneous distributed platforms. Existing graph partitioning heuristics are not adequate for this problem domain, so we present two new heuristics that capture the problem space of graph partitioning for stream programs. The first algorithm is an adaptation of the well-known Kernighan-Lin algorithm, called KL-Adapted (KLA), which is relatively slow. The second is the Congestion Avoidance (CA) partitioning algorithm, which performs reconfiguration moves optimised for our problem type. We compare both KLA and CA with the generic meta-heuristic Simulated Annealing (SA). All three methods achieve similar throughput results in most cases, but with significant differences in calculation time. For small graphs KLA is faster than SA, but KLA is slower for larger graphs. CA, on the other hand, is always orders of magnitude faster than both KLA and SA, even for large graphs, which makes CA potentially useful for re-partitioning systems at runtime.
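    For context, the generic Simulated Annealing baseline the article compares against can be sketched as follows (a minimal illustration in Python; the move and the cost function are stand-ins and are not the paper's KLA or CA heuristics):

```python
import math
import random

def sa_partition(nodes, num_parts, cost, iters=10000, t0=1.0, alpha=0.999):
    """Generic simulated annealing over assignments node -> partition.
    `cost` maps an assignment dict to a scalar to minimise (e.g. an
    estimate of inverse throughput on the target platform)."""
    assign = {n: random.randrange(num_parts) for n in nodes}
    cur_cost = cost(assign)
    best, best_cost = dict(assign), cur_cost
    t = t0
    for _ in range(iters):
        n = random.choice(nodes)            # move: reassign one node
        old = assign[n]
        assign[n] = random.randrange(num_parts)
        new_cost = cost(assign)
        if (new_cost <= cur_cost or
                random.random() < math.exp((cur_cost - new_cost) / t)):
            cur_cost = new_cost             # keep the (possibly uphill) move
            if new_cost < best_cost:
                best, best_cost = dict(assign), new_cost
        else:
            assign[n] = old                 # undo the rejected move
        t *= alpha                          # geometric cooling schedule
    return best, best_cost
```

    A throughput-oriented cost would typically penalise the most heavily loaded processor together with the communication over cut edges, though the article's exact objective is not given in this abstract.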

    Geometric Path-Planning Algorithm in Cluttered 2D Environments Using Convex Hulls

    Routing or path planning is the problem of finding a collision-free path in an environment usually scattered with multiple objects. Finding the shortest route in a planar (2D) or spatial (3D) environment has a variety of applications, such as robot motion planning, navigating autonomous vehicles, routing of cables, wires, and harnesses in vehicles, and routing of pipes in chemical process plants. The problem is often decomposed into two main sub-problems: geometric modeling and representation of the workspace, and optimization of the path. Geometric modeling and representation of the workspace are paramount in any path planning problem, since they build the data structures and provide the means for solving the optimization problem. The optimization aspect of path planning involves satisfying constraints, the most important of which is avoiding intersections with the interior of any object, while optimizing one or more criteria. The most common criterion is to minimize the length of the path between a source and a destination point of the workspace, though other criteria, such as minimizing the number of links or curves, can also be taken into account. Planar path planning is mainly about modeling the workspace of the problem as a collision-free graph, which is later searched for the optimal path using network optimization techniques such as branch-and-bound or search algorithms such as Dijkstra's. Previous methods for constructing the collision-free graph explore the entire workspace of the problem, which usually produces unnecessary information whose only effect is to increase the time complexity of the algorithm, significantly hurting efficiency. For example, the fastest known algorithm for constructing the visibility graph, the most common model of the collision-free space, in a workspace with a total of n vertices has a time complexity of O(n²). In this research, the 2D workspace of the problem is first modeled using the tessellated format of the objects in CAD software, which facilitates handling of free-form objects. Then, an algorithm is developed to construct the collision-free graph of the workspace using the convex hulls of the intersecting obstacles. The proposed algorithm focuses only on the portion of the workspace involved in the straight line connecting the source and destination points. In the worst case, where all the objects of the workspace intersect, the algorithm yields a time complexity of O(n log(n/f)), with n being the total number of vertices and f the number of objects. The collision-free graph is then searched for the shortest path between the two given nodes using Dijkstra's algorithm.
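    The final search step is standard; here is a minimal sketch of Dijkstra's algorithm in Python over a weighted adjacency list (in the application above, the nodes would be the vertices of the collision-free graph and the weights the Euclidean lengths of its edges):

```python
import heapq

def dijkstra(adj, source, target):
    """Shortest path on a weighted graph given as adj[u] = [(v, w), ...].
    Returns (distance, path) or (inf, []) if target is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:                     # reconstruct source -> target path
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in adj.get(u, []):         # relax all outgoing edges of u
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# Tiny example: a square with a diagonal shortcut.
adj = {"s": [("a", 1.0), ("b", 2.5)], "a": [("b", 1.0)], "b": [("t", 1.0)]}
print(dijkstra(adj, "s", "t"))  # (3.0, ['s', 'a', 'b', 't'])
```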