
    Virtual Mutation Analysis of Relational Database Schemas

    Relational databases are a vital component of many modern software applications. Key to the definition of the database schema, which specifies what types of data will be stored in the database and the structure in which the data is to be organized, are integrity constraints. Integrity constraints are conditions that protect and preserve the consistency and validity of data in the database, preventing data values that violate their rules from being admitted into database tables. They encode logic about the application concerned and, like any other component of a software application, need to be properly tested. Mutation analysis is a technique that has been successfully applied to integrity constraint testing, seeding database schema faults of both omission and commission. Yet, as for traditional mutation analysis for program testing, it is costly to perform, since the test suite under analysis needs to be run against each individual mutant to establish whether or not it exposes the fault. One overhead incurred by database schema mutation is the cost of communicating with the database management system (DBMS). In this paper, we seek to eliminate this cost by performing mutation analysis virtually on a local model of the DBMS, rather than on an actual, running instance hosting a real database. We present an empirical evaluation of our virtual technique revealing that, across all of the studied DBMSs and schemas, the virtual method yields an average time saving of 51% over the baseline.
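    The idea of replacing DBMS round-trips with a local constraint model can be seen in miniature below. This is a minimal sketch, not the authors' tool: schema mutants are generated by dropping constraints, and each test's accept/reject outcome is evaluated against an in-memory model rather than a live database. All names (Schema, admits, mutants) are illustrative assumptions.

```python
# Virtual mutation analysis sketch: model DBMS accept/reject behaviour locally.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Schema:
    table: str
    not_null: frozenset   # columns declared NOT NULL
    unique: frozenset     # columns declared UNIQUE

def admits(schema: Schema, rows: list[dict]) -> bool:
    """Local model of the DBMS: would it accept this batch of rows?"""
    for col in schema.not_null:
        if any(r.get(col) is None for r in rows):
            return False
    for col in schema.unique:
        values = [r.get(col) for r in rows if r.get(col) is not None]
        if len(values) != len(set(values)):
            return False
    return True

def mutants(schema: Schema):
    """Faults of omission: drop one integrity constraint at a time."""
    for col in schema.not_null:
        yield replace(schema, not_null=schema.not_null - {col})
    for col in schema.unique:
        yield replace(schema, unique=schema.unique - {col})

# A tiny "test suite": each case is a batch of rows to insert.
schema = Schema("users", frozenset({"email"}), frozenset({"email"}))
tests = [
    [{"email": "a@x"}, {"email": "a@x"}],   # duplicate email
    [{"email": None}],                      # missing email
    [{"email": "a@x"}, {"email": "b@x"}],   # valid data
]

all_mutants = list(mutants(schema))
# A mutant is killed when some test observes different behaviour on the
# mutant than on the original schema -- no DBMS communication required.
killed = sum(
    any(admits(m, rows) != admits(schema, rows) for rows in tests)
    for m in all_mutants
)
print(f"mutation score: {killed}/{len(all_mutants)}")
```

    In the virtual setting, the per-mutant cost is a few in-memory checks instead of a network round-trip and a transaction per test case, which is where the reported time saving comes from.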

    Software engineering and middleware: a roadmap (Invited talk)

    The construction of a large class of distributed systems can be simplified by leveraging middleware, which is layered between network operating systems and application components. Middleware resolves heterogeneity and facilitates communication and coordination of distributed components. Existing middleware products enable software engineers to build systems that are distributed across a local-area network. State-of-the-art middleware research aims to push this boundary towards Internet-scale distribution, adaptive and reconfigurable middleware, and middleware for dependable and wireless systems. The challenge for software engineering research is to devise notations, techniques, methods and tools for distributed system construction that systematically build and exploit the capabilities that middleware delivers.
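    The layering the talk describes can be illustrated with a small sketch, not tied to any particular middleware product: application components call a uniform invocation API, and the middleware hides transport and data-format heterogeneity underneath it. The names (Transport, InProcessJsonTransport, Middleware) are illustrative assumptions.

```python
# Middleware layering sketch: components see invoke(), not sockets or encodings.
import json
from typing import Any, Callable, Protocol

class Transport(Protocol):
    def send(self, payload: bytes) -> bytes: ...

class InProcessJsonTransport:
    """Stands in for the network layer; serialises requests to JSON."""
    def __init__(self, handlers: dict[str, Callable[..., Any]]):
        self.handlers = handlers
    def send(self, payload: bytes) -> bytes:
        msg = json.loads(payload)
        result = self.handlers[msg["op"]](*msg["args"])
        return json.dumps({"result": result}).encode()

class Middleware:
    """Uniform invocation layer between application components and transport."""
    def __init__(self, transport: Transport):
        self.transport = transport
    def invoke(self, op: str, *args: Any) -> Any:
        request = json.dumps({"op": op, "args": list(args)}).encode()
        return json.loads(self.transport.send(request))["result"]

# The calling component neither knows how the call travels nor how it is
# encoded; swapping the transport would not change this line.
mw = Middleware(InProcessJsonTransport({"add": lambda a, b: a + b}))
print(mw.invoke("add", 2, 3))   # -> 5
```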

    Design of the shared Environmental Information System (SEIS) and development of a web-based GIS interface

    The Shared Environmental Information System (SEIS) is a collaborative initiative of the European Commission (EC) and the European Environment Agency (EEA) that aims to establish an integrated and shared EU-wide environmental information system together with the Member States. SEIS presents the European vision on environmental information interoperability. It is a set of high-level principles and workflow processes that organise the collection, exchange, and use of environmental data and information, aimed to:
    • Modernise the way in which information required by environmental legislation is made available to Member States or EC instruments;
    • Streamline reporting processes and repeal overlapping or obsolete reporting obligations;
    • Stimulate similar developments at international conventions;
    • Standardise according to INSPIRE where possible; and
    • Introduce the spatial data infrastructure (SDI) principle EU-wide.
    SEIS is a system and workflow of operations that offers technical capabilities geared to meet these expectations. In that respect, SEIS shows the way and sets up the workflow effectively in a standardised way (e.g., INSPIRE) to:
    • Collect data from spatial databases, in situ sensors, statistical databases, earth observation readings (e.g., EOS, GMES) and marine observations, using standard data transfer protocols (ODBC, SOS, ftp, etc.);
    • Harmonise collected data (including data checks/data integrity) according to best practices proven to perform well, according to the INSPIRE Directive 2007/2/EC (1) Annexes I, II and III, plus the INSPIRE Implementing Rules for data not specified in the above-mentioned Annexes;
    • Harmonise collected data according to WISE (Water Information System for Europe) or Ozone-web;
    • Process, aggregate and harmonise data so as to extract information in a format understandable by wider audiences (e.g., Eurostat, enviro-indicators);
    • Document information to fulfil national reporting obligations towards EU bodies (e.g., the JRC, EEA, DG ENV, Eurostat); and
    • Store and publish information for authorised end-users (e.g., citizens, institutions).
    This paper presents the development and integration of the SEIS-Malta Geoportal. The first section outlines EU regulations on the INSPIRE and Aarhus Directives. The second covers the architecture and implementation of the SEIS-Malta Geoportal. The third discusses the results and the successful implementation of the Geoportal.
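    The collect-then-harmonise steps above can be sketched concretely. Below is a minimal, hedged example of fetching observations from an OGC Sensor Observation Service (SOS) endpoint and normalising records into one tabular form; the endpoint URL, offering and property names are hypothetical placeholders, not part of SEIS-Malta.

```python
# Collect data over a standard protocol (OGC SOS 2.0 KVP) and harmonise it.
import requests

SOS_ENDPOINT = "https://example.org/sos"   # placeholder endpoint

def get_observations(offering: str, observed_property: str) -> str:
    """Issue a GetObservation request and return the raw O&M response."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
    response.raise_for_status()
    return response.text   # XML to be parsed before harmonisation

def harmonise(records: list[dict]) -> list[dict]:
    """Toy harmonisation: common field names plus basic integrity checks."""
    clean = []
    for r in records:
        value = r.get("value")
        if value is None:                      # data-integrity check
            continue
        clean.append({
            "station_id": r.get("station") or r.get("id"),
            "parameter": r.get("parameter", "").lower(),
            "value": float(value),             # assumes values share one unit
            "timestamp": r.get("time"),
        })
    return clean
```

    In a geoportal setting, the harmonised records would then be aggregated and published through map and catalogue services for authorised end-users.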

    Active artefact management for distributed software engineering

    We describe a software artefact repository that provides its contents with some awareness of their own creation. "Active" artefacts are distinguished from their passive counterparts by their enriched meta-data model, which reflects the work-flow process that created them, the actors responsible, the actions taken to change the artefact, and various other pieces of organisational knowledge. This enriched view of an artefact is intended to support re-use of both software and the expertise gained when creating the software. Unlike other organisational knowledge systems, the meta-data is intrinsically part of the artefact and may be populated automatically from sources including existing data-format-specific information, user-supplied data and records of communication. Such a system is of increased importance in the world of "virtual teams", where transmission of vital organisational knowledge, at best difficult, is further constrained by the lack of direct contact between engineers and differing development cultures.
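    A minimal sketch of such an enriched meta-data model, assuming a simple event-recording design rather than the paper's actual repository, might look as follows; all class and field names are illustrative.

```python
# "Active" artefact sketch: metadata about process, actors and actions is
# part of the artefact itself and is appended as work happens.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    actor: str            # who acted on the artefact
    action: str           # e.g. "created", "reviewed", "refactored"
    workflow_step: str    # the process stage that triggered the action
    timestamp: datetime
    note: str = ""        # organisational knowledge: why it was done

@dataclass
class ActiveArtefact:
    name: str
    content: bytes
    history: list[ChangeRecord] = field(default_factory=list)

    def record(self, actor: str, action: str, workflow_step: str, note: str = "") -> None:
        """Append a change record; could equally be fed from tool or comms logs."""
        self.history.append(ChangeRecord(actor, action, workflow_step,
                                         datetime.now(timezone.utc), note))

# Usage: a remote team member later sees not just the code, but the process
# context and expertise behind it.
artefact = ActiveArtefact("parser.py", b"def parse(): ...")
artefact.record("alice", "created", "design-spike", "prototype grammar for v2 API")
artefact.record("bob", "reviewed", "code-review", "approved; flagged error handling")
for change in artefact.history:
    print(change.actor, change.action, change.workflow_step, "-", change.note)
```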

    On the Optimization of Visualizations of Complex Phenomena

    The problem of perceptually optimizing complex visualizations is a difficult one, involving perceptual as well as aesthetic issues. In our experience, controlled experiments are quite limited in their ability to uncover interrelationships among visualization parameters, and thus may not be the most useful way to develop rules-of-thumb or theory to guide the production of high-quality visualizations. In this paper, we propose a new experimental approach to optimizing visualization quality that integrates some of the strong points of controlled experiments with methods more suited to investigating complex highly-coupled phenomena. We use human-in-the-loop experiments to search through visualization parameter space, generating large databases of rated visualization solutions. This is followed by data mining to extract results such as exemplar visualizations, guidelines for producing visualizations, and hypotheses about strategies leading to strong visualizations. The approach can easily address both perceptual and aesthetic concerns, and can handle complex parameter interactions. We suggest a genetic algorithm as a valuable way of guiding the human-in-the-loop search through visualization parameter space. We describe our methods for using clustering, histogramming, principal component analysis, and neural networks for data mining. The experimental approach is illustrated with a study of the problem of optimal texturing for viewing layered surfaces so that both surfaces are maximally observable.
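    The search strategy can be sketched in a few lines: a genetic algorithm explores visualization parameter space with (here, simulated) human ratings as the fitness signal, and every rated solution is logged for later data mining. The parameter names, ranges and the rate() stand-in below are assumptions for illustration, not the study's actual settings.

```python
# Human-in-the-loop GA sketch over a visualization parameter space.
import random

PARAM_RANGES = {                       # hypothetical texturing parameters
    "texture_density": (0.0, 1.0),
    "texture_contrast": (0.0, 1.0),
    "top_surface_opacity": (0.2, 0.9),
}

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def rate(ind):
    """Stand-in for a participant's 1-10 rating of how well both layered
    surfaces can be seen; the real loop asks a human."""
    return 10 - 10 * abs(ind["top_surface_opacity"] - 0.5) \
              - 3 * abs(ind["texture_density"] - 0.6)

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}

def mutate(ind, prob=0.2):
    return {k: (random.uniform(*PARAM_RANGES[k]) if random.random() < prob else v)
            for k, v in ind.items()}

rated_database = []                    # kept for clustering/PCA/etc. later
population = [random_individual() for _ in range(20)]
for generation in range(10):
    scored = sorted(((rate(p), p) for p in population),
                    key=lambda s: s[0], reverse=True)
    rated_database.extend(scored)
    parents = [p for _, p in scored[:10]]          # keep the top half
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(20)]

best_score, best = max(rated_database, key=lambda s: s[0])
print(f"best rated parameters: {best} (score {best_score:.2f})")
```

    The accumulated rated_database, rather than only the final winner, is what the data-mining stage (clustering, histogramming, PCA, neural networks) would operate on.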