
    The Need to Support of Data Flow Graph Visualization of Forensic Lucid Programs, Forensic Evidence, and their Evaluation by GIPSY

    Lucid programs are data-flow programs: they can be represented as data flow graphs (DFGs) and composed visually. Forensic Lucid, a Lucid dialect, is a language to specify and reason about cyberforensic cases. It includes the encoding of the evidence (representing the context of evaluation) and the modeling of the crime scene, in order to validate claims against the model and perform event reconstruction, potentially within large swaths of digital evidence. To help investigators model the scene and evaluate it without typing a Forensic Lucid program, we propose to extend the design and implementation of Lucid DFG programming to Forensic Lucid case modeling and specification, enhancing the usability of the language and the system. We briefly discuss related work on visual programming and DFG modeling, in an attempt to define and select one approach, or a composition of approaches, for Forensic Lucid based on criteria such as prior implementation, wide use, and formal backing in terms of semantics and translation. In closing, we solicit the readers' constructive opinions, feedback, comments, and recommendations within the context of this short discussion.
    Comment: 11 pages, 7 figures, index; extended abstract presented at VizSec'10 at http://www.vizsec2010.org/posters ; short paper accepted at PST'1
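    To make the DFG idea concrete, the following is a minimal Java sketch of how a data-flow program might be represented as a graph of nodes and evaluated by pulling values through it. All class and operator names here are hypothetical illustrations; they are not the GIPSY or Forensic Lucid implementation.

```java
import java.util.*;

// Minimal sketch of a data-flow graph (DFG): nodes are constants or
// operators, edges are the input lists. Names are hypothetical and do
// not reflect the actual GIPSY/Forensic Lucid codebase.
abstract class DfgNode {
    final List<DfgNode> inputs = new ArrayList<>();
    abstract int evaluate();
}

class Constant extends DfgNode {
    private final int value;
    Constant(int value) { this.value = value; }
    @Override int evaluate() { return value; }
}

class Operator extends DfgNode {
    private final java.util.function.IntBinaryOperator op;
    Operator(java.util.function.IntBinaryOperator op, DfgNode left, DfgNode right) {
        this.op = op;
        inputs.add(left);
        inputs.add(right);
    }
    @Override int evaluate() {
        return op.applyAsInt(inputs.get(0).evaluate(), inputs.get(1).evaluate());
    }
}

public class DfgDemo {
    public static void main(String[] args) {
        // (2 + 3) * 4, composed as a graph rather than typed as text
        DfgNode sum = new Operator((a, b) -> a + b, new Constant(2), new Constant(3));
        DfgNode prod = new Operator((a, b) -> a * b, sum, new Constant(4));
        System.out.println(prod.evaluate()); // prints 20
    }
}
```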

    CSP design model and tool support

    The CSP paradigm is known as a powerful concept for designing and analysing the architectural and behavioural parts of concurrent software. Although the theory of CSP is chiefly of use to mathematicians, the programming language occam, which is derived from CSP, is useful in engineering practice. Nowadays, the concept of occam/CSP can be applied to almost every object-oriented programming language. This paper describes a tree-based description model and prototype tool that elevate the use of occam/CSP concepts to the design level and perform code generation to Java, C, C++, and machine-readable CSP at the level of implementation. The tree-based description model can be used to browse through the generated source code. The tool is a kind of browser that is able to assist modern workbenches (like Borland Builder, Microsoft Visual C++ and 20-SIM) in coding concurrency. The tool guides the user through the design trajectory using support messages and several semantic and syntax rule checks. The machine-readable CSP can be read by FDR, enabling more advanced analysis of the design. Early experiments with the prototype tool show that the browser concept, combined with the tree-based description model, enables a user-friendly way to create a design using the CSP concepts and benefits. The design tool is available from our URL, http://www.rt.el.utwente.nl/javapp.
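    For readers unfamiliar with the occam/CSP channel concept the tool builds on, here is a small Java sketch of a CSP-style rendezvous channel: occam's `ch ! x` (write) and `ch ? x` (read) both block until the other side arrives. This illustrates the concept only; it is not the code the paper's tool generates.

```java
import java.util.concurrent.SynchronousQueue;

// CSP-style rendezvous channel: put() and take() on a SynchronousQueue
// each block until the partner process is ready, mirroring occam's
// synchronous channel communication.
public class CspChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    channel.put(i); // blocks until the consumer takes it
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    System.out.println("received " + channel.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```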

    Improving Usability of Interactive Graphics Specification and Implementation with Picking Views and Inverse Transformations

    Specifying and programming graphical interactions are difficult tasks, notably because designers have difficulty expressing the dynamics of the interaction. This paper shows how the MDPC architecture improves the usability of the specification and the implementation of graphical interaction. The architecture is based on the use of picking views and inverse transforms from the graphics to the data. With three examples of graphical interaction, we show how to express them with the architecture, how to implement them, and how this improves programming usability. Moreover, we show that it enables implementing graphical interaction without a scene graph. This kind of code avoids errors due to cache consistency management.
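    The inverse-transform idea can be sketched in a few lines of Java: instead of walking a scene graph to find what was clicked, invert the data-to-screen transform and look the click up directly in data coordinates. This is an illustration of the principle under assumed coordinates, not the MDPC implementation.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;

// Picking via an inverse transform: map a screen click back to the
// data space instead of hit-testing a scene graph.
public class InversePickingDemo {
    public static void main(String[] args) throws NoninvertibleTransformException {
        // Data-to-screen transform: translate by (50, 50), then scale by 10
        AffineTransform dataToScreen = new AffineTransform();
        dataToScreen.translate(50, 50);
        dataToScreen.scale(10, 10);

        // A mouse click at screen coordinates (75, 95)
        Point2D screenPoint = new Point2D.Double(75, 95);

        // Apply the inverse transform to recover data coordinates
        Point2D dataPoint = dataToScreen.inverseTransform(screenPoint, null);
        System.out.printf("data coordinates: (%.1f, %.1f)%n",
                dataPoint.getX(), dataPoint.getY()); // (2.5, 4.5)
    }
}
```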

    Towards a compendium of process technologies: The jBPT library for process model analysis

    This paper presents the idea of a compendium of process technologies, i.e., a concise but comprehensive collection of techniques for process model analysis that support research on the design, execution, and evaluation of processes. The idea originated from observations on the evolution of process-related research disciplines. Based on these observations, we derive design goals for a compendium. Then, we present the jBPT library, which addresses these goals by means of an implementation of common analysis techniques in an open source codebase.
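    As a flavour of the kind of analysis technique such a library collects, here is a generic Java sketch of reachability analysis over a directed process graph. The classes and method names are illustrative only and are not the actual jBPT API.

```java
import java.util.*;

// Generic sketch of one common process-model analysis: reachability
// over a directed graph of tasks (breadth-first search). Not jBPT code.
public class ProcessGraphDemo {
    private final Map<String, List<String>> successors = new HashMap<>();

    void addEdge(String from, String to) {
        successors.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Which tasks can eventually follow 'start'?
    Set<String> reachableFrom(String start) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {
            for (String next : successors.getOrDefault(queue.poll(), List.of())) {
                if (seen.add(next)) queue.add(next);
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        ProcessGraphDemo model = new ProcessGraphDemo();
        model.addEdge("receive order", "check stock");
        model.addEdge("check stock", "ship goods");
        model.addEdge("check stock", "reject order");
        System.out.println(model.reachableFrom("receive order"));
        // [check stock, ship goods, reject order]
    }
}
```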

    Static Enforcement of Role-Based Access Control

    We propose a new static approach to Role-Based Access Control (RBAC) policy enforcement. The static approach we advocate includes a new design methodology, for applications involving RBAC, which integrates the security requirements into the system's architecture. We apply this new approach to policies restricting calls to methods in Java applications. We present a language to express RBAC policies on calls to methods in Java, a set of design patterns which Java programs must adhere to for the policy to be enforced statically, and a description of the checks made by our static verifier.
    Comment: In Proceedings WWV 2014, arXiv:1409.229
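    To illustrate the general shape of role-restricted method calls in Java, here is a minimal sketch using an annotation to attach a required role to a method. The @RequiresRole annotation and role names are hypothetical; the paper defines its own policy language and design patterns that a static verifier checks without running the program.

```java
import java.lang.annotation.*;

// Hypothetical role annotation: marks which role a caller must hold.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequiresRole {
    String value();
}

class PayrollService {
    @RequiresRole("manager")
    void approveRaise(String employee) {
        System.out.println("raise approved for " + employee);
    }

    @RequiresRole("clerk")
    void printPayslip(String employee) {
        System.out.println("payslip printed for " + employee);
    }
}

public class RbacDemo {
    public static void main(String[] args) throws NoSuchMethodException {
        // A verifier (static or, as here, reflective) can read the policy
        // attached to each method and check call sites against it.
        RequiresRole policy = PayrollService.class
                .getDeclaredMethod("approveRaise", String.class)
                .getAnnotation(RequiresRole.class);
        System.out.println("approveRaise requires role: " + policy.value());
    }
}
```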

    Extending snBench to Support Hierarchical and Configurable Scheduling

    In systems that must support multiple applications with varied temporal requirements, it is useful to let application-specific policies manage resources accordingly. However, there is a tension between this goal and the desire to control and police possibly malicious programs. The Java-based Sensor Execution Environment (SXE) in snBench presents a situation where such considerations add value to the system. Multiple applications can be run by multiple users with varied temporal requirements, some real-time and others best-effort. This paper outlines and documents an implementation of a hierarchical and configurable scheduling system with which different applications can be executed using application-specific scheduling policies. At the same time, the system administrator can define fairness policies between applications, which are imposed upon the system. Additionally, to ensure forward progress of system execution in the face of malicious or malformed user programs, an infrastructure for execution using multiple threads is described.
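    The hierarchical idea can be sketched briefly in Java: a top-level fairness policy picks the next application, and each application orders its own tasks with its own policy (e.g., earliest deadline first versus plain FIFO). The names and policies below are assumptions for illustration; this is not the snBench SXE code.

```java
import java.util.*;

// Hierarchical scheduling sketch: round-robin fairness between
// applications at the top level, application-specific ordering below.
public class HierarchicalSchedulerDemo {
    record Task(String name, int deadline) {}

    public static void main(String[] args) {
        // Per-application queues, each with its own scheduling policy
        Queue<Task> realTimeApp =
                new PriorityQueue<>(Comparator.comparingInt(Task::deadline));
        Queue<Task> bestEffortApp = new ArrayDeque<>(); // plain FIFO

        realTimeApp.add(new Task("sensor-read", 5));
        realTimeApp.add(new Task("alarm-check", 2));
        bestEffortApp.add(new Task("log-upload", 100));

        // Top-level fairness policy: round-robin between applications
        List<Queue<Task>> apps = List.of(realTimeApp, bestEffortApp);
        int turn = 0;
        while (apps.stream().anyMatch(q -> !q.isEmpty())) {
            Queue<Task> app = apps.get(turn % apps.size());
            Task task = app.poll();
            if (task != null) System.out.println("run " + task.name());
            turn++;
        }
    }
}
```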

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of the system; it relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens, however, loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both for deriving the topics that run through the system and for clustering them. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is defined based on the flow of data between identifiers: a module is represented as a dependency graph where the nodes correspond to identifiers and the edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that introducing contexts for identifiers improves the quality of the modularization of the software systems. Both context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that topics inferred from the contextual representations are more meaningful than those inferred from the plain representation of the documents. The proposed approach of introducing context models for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization, and topic analysis.
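    The second context model's structure can be sketched in Java as a graph whose nodes are identifiers and whose edges record data flowing between them. The identifiers and dependencies below are hand-picked for illustration; a real tool would extract them from the parsed source, and none of this is the paper's implementation.

```java
import java.util.*;

// Sketch of a module as an identifier dependency graph: an edge
// id <- src means data flows from identifier 'src' into 'id'.
public class IdentifierGraphDemo {
    private final Map<String, Set<String>> dependsOn = new LinkedHashMap<>();

    void addDependency(String identifier, String source) {
        dependsOn.computeIfAbsent(identifier, k -> new LinkedHashSet<>()).add(source);
    }

    public static void main(String[] args) {
        IdentifierGraphDemo graph = new IdentifierGraphDemo();
        // e.g. "total = price * quantity" yields two data dependencies
        graph.addDependency("total", "price");
        graph.addDependency("total", "quantity");
        graph.addDependency("invoice", "total");
        graph.dependsOn.forEach((id, sources) ->
                System.out.println(id + " <- " + sources));
    }
}
```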