    Metamodel Instance Generation: A systematic literature review

    Modelling, and thus metamodelling, have become increasingly important in Software Engineering through the use of Model Driven Engineering. In this paper we present a systematic literature review of instance generation techniques for metamodels, i.e. the process of automatically generating models from a given metamodel. We start by presenting a set of research questions that our review is intended to answer. We then identify the main topics related to metamodel instance generation techniques, and use these to initiate our literature search. This search resulted in the identification of 34 key papers in the area, each of which is reviewed and discussed in detail here. The outcome is that we are able to identify a knowledge gap in this field, and we offer suggestions as to some potential directions for future research. Comment: 25 pages
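
    The survey itself contains no code, but the task it reviews, automatically generating models that conform to a metamodel, is easy to illustrate. The sketch below is a generic random generator over a toy two-class metamodel; the `METAMODEL` encoding and the `generate` helper are hypothetical and taken from no reviewed paper.

```python
import random

# Toy metamodel: each class declares typed attributes and containment
# references. The encoding is illustrative, not any reviewed tool's API.
METAMODEL = {
    "Library": {"attrs": {"name": str}, "children": {"books": "Book"}},
    "Book":    {"attrs": {"title": str, "pages": int}, "children": {}},
}

def sample_value(ty):
    """Sample a random attribute value of the requested type."""
    return random.randint(1, 500) if ty is int else f"v{random.randint(0, 99)}"

def generate(cls, depth=2):
    """Generate one random instance conforming to the toy metamodel."""
    spec = METAMODEL[cls]
    inst = {"_class": cls}
    for name, ty in spec["attrs"].items():
        inst[name] = sample_value(ty)
    if depth > 0:  # bound recursion so containment nesting terminates
        for ref, child_cls in spec["children"].items():
            inst[ref] = [generate(child_cls, depth - 1)
                         for _ in range(random.randint(0, 3))]
    return inst

print(generate("Library"))
```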

    The Topology ToolKit

    This system paper presents the Topology ToolKit (TTK), a software platform designed for topological data analysis in scientific visualization. TTK provides a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping, or through direct, dependency-free C++ to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting. This algorithm guarantees combinatorial consistency across the topological abstractions supported by TTK and, importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure that supports time-efficient, generic traversals, self-adjusts its memory usage on demand for input simplicial meshes, and implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture which guarantees memory-efficient, direct access to TTK features while still offering researchers powerful and easy bindings and extensions. TTK is open source (BSD license), and its code, online documentation, and video tutorials are available on TTK's website.
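
    TTK's implementation is C++/VTK and is not reproduced here. As a self-contained illustration of one abstraction from the list above, the following sketch computes the 0-dimensional persistence pairs of a 1D scalar field with the classic union-find "elder rule": each local minimum births a component, and a merge kills the younger one. This is a minimal teaching sketch, not TTK code.

```python
def persistence_pairs_1d(f):
    """0-dimensional persistence of a 1D scalar field (union-find).

    Sweep vertices by increasing value: a component is born at each
    local minimum and dies when it merges into an older one (elder
    rule). Returns (birth, death) pairs; the global minimum survives
    forever and is paired with +inf.
    """
    order = sorted(range(len(f)), key=lambda i: f[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for v in order:
        parent[v], birth[v] = v, f[v]
        for u in (v - 1, v + 1):           # neighbours already swept
            if u in parent:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                if birth[young] < f[v]:    # skip zero-persistence pairs
                    pairs.append((birth[young], f[v]))
                parent[young] = old
    pairs.append((min(f), float("inf")))   # essential class
    return pairs

print(persistence_pairs_1d([0.0, 3.0, 1.0, 4.0, 0.5]))
# [(1.0, 3.0), (0.5, 4.0), (0.0, inf)]
```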

    GIS and urban design

    Although urban planning has used computer models and information systems since the 1950s, and architectural practice has recently restructured around computer-aided design (CAD) and computer drafting software, urban design has hardly been touched by the digital world. This is about to change as very fine scale spatial data relevant to such design becomes routinely available, as 2-dimensional GIS (geographic information systems) become linked to 3-dimensional CAD packages, and as other kinds of photorealistic media are increasingly being fused with such software. In this chapter, we present the role of GIS in urban design, outlining what current desktop software is capable of and showing how various new techniques can be developed which make such software highly suitable as a basis for urban design. We first outline the nature of urban design and then present ideas about how various software might form a tool kit to aid its process. We then look in turn at: utilising standard mapping capabilities within GIS relevant to urban design; building functional extensions to GIS which measure local scale accessibility; providing sketch planning capability in GIS; and linking 2-d to 3-d visualisations using low-cost net-enabled CAD browsers. We finally conclude with some speculations on the future of GIS for urban design across networks, whereby a wide range of participants might engage in the design process digitally but remotely.
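
    As a concrete illustration of the "functional extensions to GIS which measure local scale accessibility" mentioned above, the following sketch computes a simple cumulative-opportunity accessibility index on a small street network with networkx. The graph, travel times, and shop counts are invented, and this is not the chapter's software.

```python
import networkx as nx

# Street network sketch: nodes are junctions, edge weights are walking
# times in minutes, and `shops` counts opportunities at each node.
# All values are invented for illustration.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 2.0), ("b", "c", 3.0), ("c", "d", 4.0), ("b", "d", 6.0),
])
shops = {"a": 0, "b": 2, "c": 5, "d": 1}

def accessibility(graph, origin, cutoff=5.0):
    """Cumulative opportunities reachable within `cutoff` minutes."""
    reach = nx.single_source_dijkstra_path_length(
        graph, origin, cutoff=cutoff, weight="weight")
    return sum(shops[n] for n in reach)

for node in sorted(G):
    print(node, accessibility(G, node))
```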

    Political Text Scaling Meets Computational Semantics

    During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured just by leveraging information about word frequencies in the documents under study. We challenge this traditional view and present a new, semantically aware text scaling algorithm, SemScale, which combines recent developments in computational linguistics with unsupervised graph-based clustering. We conduct an extensive quantitative analysis over a collection of speeches from the European Parliament in five different languages and from two different legislative terms, and show that a scaling approach relying on semantic document representations is often better at capturing known underlying political dimensions than the established frequency-based (i.e., symbolic) scaling method. We further validate our findings through a series of experiments focused on text preprocessing and feature selection, document representation, scaling of party manifestos, and a supervised extension of our algorithm. To catalyze further research on this new branch of text scaling methods, we release a Python implementation of SemScale with all included data sets and evaluation procedures. Comment: Updated version, accepted for Transactions on Data Science (TDS).
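
    The authors' released implementation is the reference; purely as an illustration of what "graph-based" scaling can mean, the sketch below places documents on one latent dimension by building a similarity graph over (assumed pre-computed) semantic document vectors and taking the Fiedler eigenvector of its normalized Laplacian. It is not the SemScale algorithm.

```python
import numpy as np

def graph_scale(doc_vecs):
    """Place documents on one latent dimension via a similarity graph.

    Generic graph-based scaling, not the SemScale implementation:
    cosine similarities -> positive affinity matrix -> normalized
    Laplacian -> Fiedler (second-smallest) eigenvector as positions.
    """
    X = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    W = np.exp(X @ X.T)                   # positive affinities
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)        # ascending eigenvalues
    return eigvecs[:, 1]                  # Fiedler vector

# Two synthetic "camps" of documents with pre-computed 50-d vectors.
rng = np.random.default_rng(0)
docs = np.vstack([rng.normal(0, 1, (3, 50)), rng.normal(2, 1, (3, 50))])
print(np.round(graph_scale(docs), 3))     # signs separate the camps
```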

    Integration and mining of malaria molecular, functional and pharmacological data: how far are we from a chemogenomic knowledge space?

    The organization and mining of malaria genomic and post-genomic data is highly motivated by the necessity to predict and characterize new biological targets and new drugs. Biological targets are sought in a biological space designed from the genomic data of Plasmodium falciparum, but also using the millions of genomic records from other species. Drug candidates are sought in a chemical space containing the millions of small molecules stored in public and private chemolibraries. Data management should therefore be as reliable and versatile as possible. In this context, we examined five aspects of the organization and mining of malaria genomic and post-genomic data: 1) the comparison of protein sequences, including compositionally atypical malaria sequences; 2) the high-throughput reconstruction of molecular phylogenies; 3) the representation of biological processes, particularly metabolic pathways; 4) versatile methods to integrate genomic data, biological representations, and functional profiling obtained from X-omic experiments after drug treatments; and 5) the determination and prediction of protein structures and their molecular docking with drug candidate structures. Progress toward a grid-enabled chemogenomic knowledge space is discussed. Comment: 43 pages, 4 figures; to appear in Malaria Journal.
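
    Of the five aspects listed, the first (protein sequence comparison) is the easiest to make concrete. The sketch below is plain Biopython pairwise alignment with BLOSUM62 scoring, not the article's pipeline; the sequences are toy values, and real P. falciparum work would also need to handle the low-complexity, compositionally biased regions the article mentions.

```python
from Bio import Align
from Bio.Align import substitution_matrices

# Plain Biopython pairwise alignment, shown only to make aspect (1)
# concrete; sequences and scoring choices are toy values.
aligner = Align.PairwiseAligner()
aligner.mode = "local"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10.0
aligner.extend_gap_score = -0.5

seq_a = "MKNNNNNNNTLSEEDKELLKQ"   # low-complexity insert, as in many
seq_b = "MKTLSEADKELLRQ"          # P. falciparum proteins
best = aligner.align(seq_a, seq_b)[0]
print(best.score)
print(best)
```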

    A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge

    We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE we want to automatically identify the type of logical relation between two input texts; in particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models, in order to increase the quality and accuracy of results. For RTE we use first-order logical inference, employing model-theoretic techniques and automated reasoning tools. The inference is supported by problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from other, more experimental sources, e.g., manually defined presupposition resolutions or axiomatized general and common-sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition determining the correctness and traceability of results. Comment: 25 pages, 10 figures
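
    The system couples deep parsing with first-order provers and large knowledge sources; none of that fits in a listing. The toy sketch below only illustrates the role background knowledge plays in RTE: the text and hypothesis are hand-translated into atoms, and a WordNet-style hypernymy chain supplies the axioms. The predicates and rules are hypothetical.

```python
# Toy illustration of knowledge-supported entailment, not the paper's
# system. Text: "Rex is a poodle and barks."  Hypothesis: "Rex is an
# animal."  Background knowledge: poodle -> dog -> animal (hypernymy).
facts = {("poodle", "rex"), ("barks", "rex")}
rules = [
    (("poodle", "?x"), ("dog", "?x")),     # every poodle is a dog
    (("dog", "?x"), ("animal", "?x")),     # every dog is an animal
]

def forward_chain(facts, rules):
    """Saturate facts under unary Horn rules (p, ?x) -> (q, ?x)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (p, _), (q, _) in rules:
            for pred, arg in list(facts):
                if pred == p and (q, arg) not in facts:
                    facts.add((q, arg))
                    changed = True
    return facts

hypothesis = ("animal", "rex")
print("entailed:", hypothesis in forward_chain(facts, rules))  # True
```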

    Variable-free exploration of stochastic models: a gene regulatory network example

    Finding coarse-grained, low-dimensional descriptions is an important task in the analysis of complex, stochastic models of gene regulatory networks. This task involves (a) identifying observables that best describe the state of these complex systems and (b) characterizing the dynamics of the observables. In a previous paper [13], we assumed that good observables were known a priori, and presented an equation-free approach to approximate coarse-grained quantities (i.e., effective drift and diffusion coefficients) that characterize the long-time behavior of the observables. Here we use diffusion maps [9] to extract appropriate observables ("reduction coordinates") in an automated fashion; these involve the leading eigenvectors of a weighted Laplacian on a graph constructed from network simulation data. We present lifting and restriction procedures for translating between physical variables and these data-based observables. These procedures allow us to perform equation-free, coarse-grained computations characterizing the long-term dynamics through the design and processing of short bursts of stochastic simulation initialized at appropriate values of the data-based observables. Comment: 26 pages, 9 figures
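
    The diffusion-map construction referred to above is compact enough to sketch: assuming simulation states are collected as rows of a matrix, a Gaussian kernel gives a weighted graph, and the leading non-trivial eigenvectors of its Markov normalization serve as the data-based observables. This is a generic sketch, not the paper's code; the bandwidth `eps` and the toy data are invented.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Data-based observables ("reduction coordinates") from rows of X.

    Gaussian kernel -> weighted graph -> row-normalized Markov matrix
    -> leading non-trivial eigenvectors. A generic sketch of the
    construction described above, not the paper's implementation.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / eps)                    # weighted graph kernel
    P = K / K.sum(axis=1, keepdims=True)     # Markov transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    idx = np.argsort(-eigvals.real)
    return eigvecs.real[:, idx[1:1 + n_coords]]  # skip constant mode

# Toy "simulation data": noisy samples from a circle; the recovered
# coordinates parameterise the underlying one-dimensional structure.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
print(diffusion_map(X, eps=0.5).shape)       # (200, 2)
```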

    Supporting decision-making in the building life-cycle using linked building data

    Interoperability is a long-standing challenge in the domain of architecture, engineering and construction (AEC), and diverse approaches have already been presented for addressing it. This article looks into the possibility of addressing the interoperability challenge in the building life-cycle with a linked data approach. An outline is given of how linked data technologies tend to be deployed, thereby working towards a "more holistic" perspective on the building, or towards a large-scale web of "linked building data". From this overview, and the associated use case scenarios, we conclude that the interoperability challenge cannot be "solved" using linked data technologies, but that it can be addressed: information exchange and management can be improved, but a pragmatic usage of technologies is still required in practice. Finally, we give an initial outline of some anticipated use cases in the building life-cycle in which the usage of linked data technologies may generate advantages over existing technologies and methods.
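
    A minimal sketch of what "linked building data" looks like in practice, using rdflib and SPARQL. The instance data is invented, and the vocabulary is assumed to be the W3C LBD community's Building Topology Ontology (BOT); check its current specification before relying on specific terms.

```python
from rdflib import Graph

# Invented instance data using (assumed) BOT terms: a building with
# one storey containing two spaces.
TTL = """
@prefix bot: <https://w3id.org/bot#> .
@prefix ex:  <https://example.org/building#> .

ex:house   a bot:Building ;  bot:hasStorey ex:floor1 .
ex:floor1  a bot:Storey ;    bot:hasSpace  ex:kitchen, ex:hall .
ex:kitchen a bot:Space .
ex:hall    a bot:Space .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Which spaces does each storey of the building contain?
q = """
PREFIX bot: <https://w3id.org/bot#>
SELECT ?storey ?space WHERE {
    ?building a bot:Building ; bot:hasStorey ?storey .
    ?storey bot:hasSpace ?space .
}
"""
for storey, space in g.query(q):
    print(storey, "->", space)
```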

    Development of a client interface for a methodology independent object-oriented CASE tool : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University

    The overall aim of the research presented in this thesis is the development of a prototype CASE Tool user interface that supports the use of arbitrary methodology notations for the construction of small-scale diagrams. This research is part of the larger CASE Tool project, MOOT (Massey's Object Oriented Tool). MOOT is a meta-system with a client-server architecture that provides a framework within which the semantics and syntax of methodologies can be described. The CASE Tool user interface is implemented in Java so that it is as portable as possible and has a consistent look and feel. It has been designed as a client to the rest of the MOOT system (which acts as a server), and a communications protocol has been designed to support the interaction between the CASE Tool client and a MOOT server. The user interface design of MOOT must support all possible graphical notations; no assumptions about the types of notations a software engineer may use can be made. MOOT therefore provides a specification language called NDL for the definition of a methodology's syntax. Hence, the MOOT CASE Tool client described in this thesis is a shell that is parameterised by NDL specifications. The flexibility provided by such a high level of abstraction presents significant challenges in terms of designing effective human-computer interaction mechanisms for the MOOT user interface. Functional and non-functional requirements of the client user interface have been identified and applied during the construction of the prototype. A notation specification that defines the syntax for Coad and Yourdon OOA/OOD has been written in NDL and used as a test case. The thesis includes the iterative evaluation and extension of NDL resulting from the prototype development. The prototype has shown that the current approach to NDL is efficacious, and that the syntax and semantics of a methodology description can successfully be separated. The developed prototype has shown that it is possible to build a simple, non-intrusive, efficient, yet flexible, usable, and helpful interface for meta-CASE tools. The development of the CASE Tool client, through its generic, methodology-independent design, has provided a pilot with which future ideas may be explored.
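
    The abstract does not give NDL's concrete syntax, so the sketch below only illustrates the architectural idea of a client shell parameterised by a notation description: the shell interprets whatever specification it is given and knows nothing about any particular methodology. The dictionary format and all names are hypothetical.

```python
# Hypothetical notation spec of the kind an NDL description might
# denote: element kinds mapped to rendering styles. The format is
# invented for illustration; it is not NDL.
COAD_YOURDON = {
    "class":       {"shape": "rounded-box", "compartments": 3},
    "inheritance": {"shape": "arc-arrow", "connects": ("class", "class")},
}

def render(spec, element_kind, label):
    """Describe one diagram element according to the loaded spec."""
    style = spec[element_kind]
    return f'{label}: {style["shape"]} ({element_kind})'

# The shell itself knows nothing about Coad and Yourdon; it just
# interprets whatever notation spec the server sends it.
print(render(COAD_YOURDON, "class", "Customer"))
```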