
    Prototyping the Semantics of a DSL using ASF+SDF: Link to Formal Verification of DSL Models

    A formal definition of the semantics of a domain-specific language (DSL) is a key prerequisite for verifying the correctness of models specified using such a DSL and of transformations applied to these models. For this reason, we implemented a prototype of the semantics of a DSL for specifying systems consisting of concurrent, communicating objects. Using this prototype, models specified in the DSL can be transformed to labeled transition systems (LTSs). Transforming models to LTSs allows us to apply existing tools for visualization and verification to the models with little or no further effort. The prototype is implemented using the ASF+SDF Meta-Environment, an IDE for the algebraic specification language ASF+SDF, which offers efficient execution of the transformation as well as the ability to read models and produce LTSs without any additional pre- or post-processing.
    Comment: In Proceedings AMMSE 2011, arXiv:1106.596
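
    The prototype itself is written in ASF+SDF, which the abstract does not reproduce; purely as an illustration of the target side of the transformation, the sketch below (in C++, with a hypothetical successors function and integer state encoding, not the paper's actual rewrite rules) enumerates the reachable behaviour of a toy model of communicating objects into an LTS that generic visualization and verification tools could consume.

        // Illustrative only: a minimal labeled transition system (LTS) container.
        // The names and state encoding are hypothetical; the paper's prototype
        // is written in ASF+SDF, not C++.
        #include <cstdio>
        #include <map>
        #include <queue>
        #include <set>
        #include <string>
        #include <vector>

        struct LTS {
            int initial = 0;
            // transitions[s] = list of (label, target state)
            std::map<int, std::vector<std::pair<std::string, int>>> transitions;
        };

        // Hypothetical successor function for a model of communicating objects:
        // given a state, return the labeled steps it can take.
        std::vector<std::pair<std::string, int>> successors(int state) {
            if (state == 0) return {{"obj1!msg", 1}};
            if (state == 1) return {{"obj2?msg", 2}};
            return {};  // terminal state
        }

        // Explore the reachable states breadth-first and record the LTS, which
        // could then be exported (e.g. in a standard LTS exchange format) for
        // existing visualization and verification tools.
        LTS explore() {
            LTS lts;
            std::queue<int> work;
            std::set<int> seen{0};
            work.push(0);
            while (!work.empty()) {
                int s = work.front(); work.pop();
                for (auto& [label, t] : successors(s)) {
                    lts.transitions[s].push_back({label, t});
                    if (seen.insert(t).second) work.push(t);
                }
            }
            return lts;
        }

        int main() {
            LTS lts = explore();
            for (auto& [s, steps] : lts.transitions)
                for (auto& [label, t] : steps)
                    std::printf("(%d, \"%s\", %d)\n", s, label.c_str(), t);
        }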

    Object orientation and visualization of physics in two dimensions

    We present a generalized framework for cellular/lattice-based visualizations in two dimensions, built on state-of-the-art computing abstractions. Our implementation takes the form of a library of reusable functions written in C++ which hides complex graphical programming issues from the user and mimics the algebraic structure of physics at the Hamiltonian level. Our toolkit is not just a graphics library but an object-oriented analysis of physical systems that disentangles separate concepts in a faithful analytical way. It could be rewritten in other languages such as Java and extended to three-dimensional systems straightforwardly. We illustrate the usefulness of our analysis with implementations of spin films (the two-dimensional XY model with and without an external magnetic field) and a model for diffusion through a triangular lattice.
    Comment: 12 pages, 10 figures
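
    The library's actual classes are not listed in the abstract; the following is a minimal, self-contained C++ sketch of the kind of system it targets, assuming a hypothetical XYFilm class with a nearest-neighbour XY Hamiltonian and a Metropolis update, not the toolkit's real API.

        // Two-dimensional XY model sketch; class and function names are
        // hypothetical, not the library's API.
        // H = -J * sum_<ij> cos(theta_i - theta_j) - h * sum_i cos(theta_i)
        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>

        struct XYFilm {
            int L;                      // lattice is L x L with periodic boundaries
            double J, h;                // coupling and external field
            std::vector<double> theta;  // spin angles, row-major

            XYFilm(int L, double J, double h) : L(L), J(J), h(h), theta(L * L, 0.0) {}

            double& at(int x, int y) { return theta[((y + L) % L) * L + (x + L) % L]; }

            // Energy change of rotating site (x, y) to angle t (nearest neighbours only).
            double deltaE(int x, int y, double t) {
                double old = at(x, y), dE = 0.0;
                const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    double nb = at(x + dx[k], y + dy[k]);
                    dE += -J * (std::cos(t - nb) - std::cos(old - nb));
                }
                dE += -h * (std::cos(t) - std::cos(old));
                return dE;
            }
        };

        int main() {
            const int L = 32;
            XYFilm film(L, 1.0, 0.0);  // zero external field
            std::mt19937 rng(42);
            std::uniform_real_distribution<double> angle(0.0, 6.283185307179586);
            std::uniform_real_distribution<double> unit(0.0, 1.0);
            const double T = 0.9;  // temperature in units of J / k_B
            for (int sweep = 0; sweep < 100; ++sweep)
                for (int y = 0; y < L; ++y)
                    for (int x = 0; x < L; ++x) {
                        double t = angle(rng);
                        double dE = film.deltaE(x, y, t);
                        if (dE <= 0.0 || unit(rng) < std::exp(-dE / T))
                            film.at(x, y) = t;  // Metropolis acceptance
                    }
            std::printf("theta(0,0) after 100 sweeps: %f\n", film.at(0, 0));
        }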

    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. To obtain this abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
    Comment: 10 pages
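
    The abstract states the measure only informally; as a loose illustration of the quantities involved, and not necessarily the paper's exact formulation, the maximal entropy of an alphabet and a generic cost-benefit ratio for a single processing step from alphabet Z_i to Z_{i+1} could be written as:

        % Illustrative notation only, not necessarily the paper's exact definitions.
        % A processing step maps an input alphabet Z_i to an output alphabet Z_{i+1}.
        \[
          \mathcal{H}_{\max}(\mathbb{Z}) = \log_2 |\mathbb{Z}| ,
          \qquad
          \text{entropy reduction of step } i :\;
          \mathcal{H}(\mathbb{Z}_i) - \mathcal{H}(\mathbb{Z}_{i+1}) ,
        \]
        \[
          \frac{\text{Benefit}}{\text{Cost}}
          \;=\;
          \frac{\bigl(\mathcal{H}(\mathbb{Z}_i) - \mathcal{H}(\mathbb{Z}_{i+1})\bigr) \;-\; \text{potential distortion}}
               {\text{cost of the step (computation time, cognitive load, etc.)}} .
        \]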

    The Need to Support Data Flow Graph Visualization of Forensic Lucid Programs, Forensic Evidence, and their Evaluation by GIPSY

    Lucid programs are data-flow programs that can be represented visually as data-flow graphs (DFGs) and composed visually. Forensic Lucid, a Lucid dialect, is a language to specify and reason about cyberforensic cases. It includes the encoding of the evidence (representing the context of evaluation) and the modeling of the crime scene, in order to validate claims against the model and perform event reconstruction, potentially within large swaths of digital evidence. To aid investigators in modeling the scene and evaluating it, instead of typing a Forensic Lucid program, we propose to extend the design and implementation of Lucid DFG programming to Forensic Lucid case modeling and specification, to enhance the usability of the language, the system, and its behavior. We briefly discuss the related work on visual programming and DFG modeling in an attempt to define and select one approach, or a composition of approaches, for Forensic Lucid based on criteria such as previous implementation, wide use, and formal backing in terms of semantics and translation. In the end, we solicit the readers' constructive opinions, feedback, comments, and recommendations within the context of this short discussion.
    Comment: 11 pages, 7 figures, index; extended abstract presented at VizSec'10 at http://www.vizsec2010.org/posters; short paper accepted at PST'1
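
    No concrete DFG representation is given in the abstract; the snippet below is a hypothetical C++ sketch (illustrative node labels, not GIPSY's or Forensic Lucid's actual data structures) of a minimal data-flow graph emitted in DOT form so that a generic graph viewer can render it.

        // Hypothetical data-flow graph (DFG) representation, for illustration only.
        #include <cstdio>
        #include <string>
        #include <vector>

        struct DFGNode {
            std::string label;        // e.g. an operator or an evidence stream
            std::vector<int> inputs;  // indices of nodes whose output flows in
        };

        // A tiny graph: two evidence observations feeding one combining operator.
        int main() {
            std::vector<DFGNode> dfg = {
                {"obs_stream_1", {}},
                {"obs_stream_2", {}},
                {"combine_evidence", {0, 1}},
            };
            // Emit the graph in DOT; a visual editor would go the other way,
            // from the drawn diagram back to a program.
            std::printf("digraph dfg {\n");
            for (size_t i = 0; i < dfg.size(); ++i)
                for (int in : dfg[i].inputs)
                    std::printf("  \"%s\" -> \"%s\";\n",
                                dfg[in].label.c_str(), dfg[i].label.c_str());
            std::printf("}\n");
        }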

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the most recent event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical-path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state, the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions, and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state-space planner, with admissible estimates, exploration engines, and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
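
    As a rough sketch of the critical-path idea described above, and assuming a sequentially ordered plan whose precedence edges all point backwards in the sequence, earliest start times for a parallel, time-stamped schedule can be computed in a single linear pass; the action names, durations, and precedence relations below are invented for illustration and are not taken from MIPS.

        // Illustrative critical-path scheduling of a sequential plan; not MIPS's code.
        #include <algorithm>
        #include <cstdio>
        #include <string>
        #include <vector>

        struct Action {
            std::string name;
            double duration;
            std::vector<int> preds;  // indices of earlier actions it must follow
        };

        // One pass over the sequentially ordered actions: because every precedence
        // edge points backwards in the sequence, earliest start times can be fixed
        // in order, yielding a parallel (time-stamped) schedule.
        std::vector<double> schedule(const std::vector<Action>& plan) {
            std::vector<double> start(plan.size(), 0.0);
            for (size_t i = 0; i < plan.size(); ++i)
                for (int p : plan[i].preds)
                    start[i] = std::max(start[i], start[p] + plan[p].duration);
            return start;
        }

        int main() {
            std::vector<Action> plan = {
                {"load(truck, pkg1)", 2.0, {}},
                {"load(truck, pkg2)", 2.0, {}},       // independent of action 0
                {"drive(truck, A, B)", 5.0, {0, 1}},  // needs both packages loaded
            };
            std::vector<double> start = schedule(plan);
            for (size_t i = 0; i < plan.size(); ++i)
                std::printf("%.1f  %s\n", start[i], plan[i].name.c_str());
        }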