
    Investigating the trade-off between the effectiveness and efficiency of process modeling

    Despite recent efforts to improve the quality of process models, we still observe considerable variation in quality between models. This paper focuses on the syntactic quality of process models and on how it is achieved. To this end, a dataset of 121 modeling sessions was investigated. By going through each of these sessions step by step, a separate ‘revision’ phase was identified for 81 of them. Next, by cutting the modeling process off at the start of the revision phase, a partial process model was exported for these modeling sessions. Finally, each partial model was compared with its corresponding final model in terms of time, effort, and the number of syntactic errors made or solved, in search of a possible trade-off between the effectiveness and efficiency of process modeling. Based on the findings, we give a provisional explanation for the differences in syntactic quality between process models.
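    To make the comparison concrete, the following is a minimal sketch (not the authors' tooling) of how a partial model exported at the start of the revision phase could be compared with its final model; the snapshot fields are hypothetical proxies for the time, effort, and error counts mentioned above.

        from dataclasses import dataclass

        @dataclass
        class ModelSnapshot:
            """Summary of a (partial or final) process model; field names are illustrative."""
            elapsed_s: float      # modeling time elapsed when the snapshot was taken
            operations: int       # number of modeling operations performed (effort proxy)
            syntax_errors: int    # syntactic errors present in the model

        def revision_tradeoff(partial: ModelSnapshot, final: ModelSnapshot) -> dict:
            """Extra time and effort spent in the revision phase versus errors resolved."""
            return {
                "extra_time_s": final.elapsed_s - partial.elapsed_s,
                "extra_operations": final.operations - partial.operations,
                "errors_resolved": partial.syntax_errors - final.syntax_errors,
            }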

    How Advanced Change Patterns Impact the Process of Process Modeling

    Process model quality has been an area of considerable research effort. In this context, correctness-by-construction as enabled by change patterns provides promising perspectives. While the process of process modeling (PPM) based on change primitives has been thoroughly investigated, little is known about the PPM based on change patterns. In particular, it is unclear which set of change patterns should be provided and how the available change pattern set impacts the PPM. To obtain a better understanding of the latter, as well as of the (subjective) perceptions of process modelers, the arising challenges, and the pros and cons of different change pattern sets, we conduct a controlled experiment. Our results indicate that process modelers face similar challenges irrespective of the change pattern set used (a core pattern set versus an extended pattern set, which adds two advanced change patterns to the core set). The extended change pattern set, however, is perceived as more difficult to use and yields higher mental effort. Moreover, our results indicate that the more advanced patterns were used only to a limited extent and were frequently applied incorrectly, thus lowering the potential benefits of an extended pattern set.
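    As a toy illustration (not the experimental setup of the paper) of why change patterns can support correctness-by-construction, the sketch below contrasts low-level change primitives with a single 'serial insert' pattern; the model representation and the pattern name are simplified assumptions.

        class ProcessModel:
            """Toy process model: a set of nodes and directed sequence flows."""
            def __init__(self):
                self.nodes = {"start", "end"}
                self.flows = {("start", "end")}

            # Change primitives: low level, can leave the model temporarily unsound.
            def add_node(self, n): self.nodes.add(n)
            def add_flow(self, a, b): self.flows.add((a, b))
            def remove_flow(self, a, b): self.flows.discard((a, b))

            # Change pattern "serial insert": one correctness-preserving step that
            # places a new node on an existing flow, keeping the model connected.
            def serial_insert(self, new_node, a, b):
                assert (a, b) in self.flows, "pattern only applies to an existing flow"
                self.remove_flow(a, b)
                self.add_node(new_node)
                self.add_flow(a, new_node)
                self.add_flow(new_node, b)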

    New Theory of Flight

    We present a new mathematical theory explaining the fluid mechanics of subsonic flight, which is fundamentally different from the existing boundary layer-circulation theory of Prandtl–Kutta–Zhukovsky formed 100 years ago. The new theory is based on our new resolution of d’Alembert’s paradox, showing that slightly viscous bluff body flow can be viewed as zero-drag/lift potential flow modified by 3d rotational slip separation, arising from a specific separation instability of potential flow, into turbulent flow with nonzero drag/lift. For a wing, this separation mechanism maintains the large lift of potential flow generated at the leading edge at the price of small drag, resulting in a lift-to-drag quotient of about 15–20 for a small propeller plane at cruising speed with Reynolds number Re ≈ 10^7 and for a jumbojet at take-off and landing with Re ≈ 10^8, which allows flight at affordable power. The new mathematical theory is supported by computed turbulent solutions of the Navier–Stokes equations with a slip boundary condition as a model of the observed small skin friction of a turbulent boundary layer, which always arises for Re > 10^6, in close accordance with experimental observations over the entire range of angles of attack, including stall, using a few million mesh points for a full wing-body configuration.
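    For reference, the two dimensionless quantities quoted above are the standard lift-to-drag ratio and Reynolds number (U is the flow speed, c a characteristic length such as the wing chord, ν the kinematic viscosity, and C_L, C_D the lift and drag coefficients):

        \frac{L}{D} = \frac{C_L}{C_D} \approx 15\text{--}20, \qquad Re = \frac{U c}{\nu}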

    Radio Emission and Particle Acceleration in SN 1993J

    The radio light curves of SN 1993J are found to be well fit by a synchrotron spectrum, suppressed by external free-free absorption and synchrotron self-absorption. A standard r^-2 circumstellar medium is assumed and found to be adequate. The magnetic field and number density of relativistic electrons behind the shock are determined. The strength of the magnetic field argues strongly for turbulent amplification behind the shock. The ratio of the magnetic to thermal energy density behind the shock is ~0.14. Synchrotron and Coulomb cooling dominate the losses of the electrons. The injected electron spectrum has a power-law index of -2.1, consistent with diffusive shock acceleration, and the number density scales with the thermal electron energy density. The total energy density of the relativistic electrons, if extrapolated to gamma ~ 1, is ~5x10^-4 of the thermal energy density. The required free-free absorption is consistent with previous calculations of the circumstellar temperature of SN 1993J, T_e ~ (2-10)x10^5 K. The relative importance of free-free absorption, Razin suppression, and synchrotron self-absorption for other supernovae is briefly discussed. Guidelines for the modeling and interpretation of VLBI observations are given.
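    A minimal sketch of this kind of parameterized light-curve model is shown below; the normalizations and the exact power-law exponents are illustrative assumptions (not the fitted values for SN 1993J), but the structure, an optically thin synchrotron spectrum suppressed by synchrotron self-absorption and by external free-free absorption in an r^-2 wind, follows the description above.

        import numpy as np

        def radio_flux(nu_ghz, t_days, s0=1.0, k_ssa=1.0, k_ff=1.0,
                       alpha=0.6, beta=-0.7, nu0=5.0, t0=100.0):
            """Toy model flux (arbitrary units) at frequency nu_ghz [GHz] and age t_days."""
            # Optically thin synchrotron emission, declining with time.
            s_thin = s0 * (nu_ghz / nu0) ** (-alpha) * (t_days / t0) ** beta
            # Synchrotron self-absorption optical depth (falls with frequency and time).
            tau_ssa = k_ssa * (nu_ghz / nu0) ** (-(alpha + 2.5)) * (t_days / t0) ** (-2.5)
            # Free-free absorption in an r^-2 circumstellar wind (tau ~ nu^-2.1 t^-3).
            tau_ff = k_ff * (nu_ghz / nu0) ** (-2.1) * (t_days / t0) ** (-3.0)
            ssa = np.where(tau_ssa > 1e-10, (1.0 - np.exp(-tau_ssa)) / tau_ssa, 1.0)
            return s_thin * ssa * np.exp(-tau_ff)

        # Example: 5 GHz light curve at a few epochs (days after explosion).
        print(radio_flux(5.0, np.array([30.0, 100.0, 300.0])))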

    The origin and evolution of syntax errors in simple sequence flow models in BPMN

    How do syntax errors emerge? What is the earliest moment at which potential syntax errors can be detected? Which evolution do syntax errors go through during modeling? A provisional answer to these questions is formulated in this paper, based on an investigation of a dataset containing the operational details of 126 modeling sessions. First, a list of the different potential syntax errors is composed. Second, a classification framework is built to categorize the errors according to their certainty and severity during modeling (i.e., in partial or complete models). Third, the origin and evolution of all syntax errors in the dataset are identified. These data are then used to collect a number of observations, which form a basis for future research.
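    A minimal sketch of such a classification framework is given below; the category names and fields are hypothetical (the paper classifies errors by certainty and severity, but its exact labels are not reproduced here).

        from dataclasses import dataclass, field
        from enum import Enum
        from typing import List, Optional

        class Certainty(Enum):
            POTENTIAL = "potential"   # may still be resolved before the model is complete
            CERTAIN = "certain"       # definitely an error in the complete model

        class Severity(Enum):
            WARNING = "warning"
            ERROR = "error"

        @dataclass
        class SyntaxErrorRecord:
            """Tracks one syntax error from its origin through its evolution."""
            error_type: str                       # e.g. a dangling sequence flow
            origin_step: int                      # modeling operation at which it appeared
            certainty: Certainty
            severity: Severity
            resolved_step: Optional[int] = None   # None if still present in the final model
            evolution: List[str] = field(default_factory=list)  # states across the session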

    Spectral/hp element methods: recent developments, applications, and perspectives

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate C0-continuous expansions. By increasing the polynomial order p, high-precision solutions and fast convergence can be obtained; in particular, under certain regularity assumptions, an exponential reduction in the approximation error between the numerical and exact solutions can be achieved. The method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to apply the spectral/hp element method to more complex science and engineering applications are discussed.
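    The exponential (spectral) convergence under p-refinement can be illustrated with a small, self-contained example (a single element and a smooth target function; this is not the full spectral/hp element discretization):

        import numpy as np
        from numpy.polynomial import legendre

        # Smooth function on [-1, 1]; its Legendre expansion converges exponentially.
        def f(x):
            return np.exp(np.sin(np.pi * x))

        x = np.linspace(-1.0, 1.0, 2001)
        fx = f(x)

        for p in (2, 4, 8, 16, 32):
            coeffs = legendre.legfit(x, fx, p)          # least-squares fit up to degree p
            err = np.max(np.abs(legendre.legval(x, coeffs) - fx))
            print(f"p = {p:2d}   max error = {err:.3e}")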