The transaction pattern through automating TrAM
Transaction Agent Modelling (TrAM) has demonstrated how the early requirements of complex enterprise systems can be captured and described in a lucid yet rigorous way. Using Geerts and McCarthy's REA (Resources-Events-Agents) model as its basis, the TrAM process captures the 'qualitative' dimensions of business transactions and business processes. A key part of the process is automated model-checking, for which Conceptual Graphs (CG) have proved beneficial: it enables models to retain high-level business concepts while providing a formal structure at that level that Use Cases lack. Using a conceptual catalogue informed by transactions, we illustrate the automation of a transaction pattern whose further specialisations yield a tested specification for system implementation, which we envisage as a multi-agent system in order to reflect the dynamic world of business activity. Such a system would furthermore be able to interoperate across business domains, as they would share the generalised transaction model (TM) as a pattern.
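As a rough illustration of the kind of structure an REA-based transaction pattern formalises, the following Python sketch is hypothetical: the class and field names are illustrative assumptions, not the TrAM formalisation. It pairs two dual economic events, each transferring a resource between agents, into one balanced transaction.

```python
from dataclasses import dataclass

# Hypothetical sketch of an REA-style transaction pattern; names and
# structure are illustrative assumptions, not taken from the TrAM papers.

@dataclass(frozen=True)
class Agent:
    name: str          # economic agent, e.g. a buyer or seller

@dataclass(frozen=True)
class Resource:
    description: str   # economic resource, e.g. goods or money

@dataclass(frozen=True)
class Event:
    agent_from: Agent  # agent giving up the resource
    agent_to: Agent    # agent receiving the resource
    resource: Resource

@dataclass(frozen=True)
class Transaction:
    """A transaction balances two dual events (the REA duality)."""
    give: Event        # e.g. seller transfers goods to buyer
    take: Event        # e.g. buyer transfers payment to seller

    def is_balanced(self) -> bool:
        # In a well-formed transaction the parties of the two dual
        # events mirror each other.
        return (self.give.agent_from == self.take.agent_to
                and self.give.agent_to == self.take.agent_from)

buyer, seller = Agent("buyer"), Agent("seller")
deal = Transaction(
    give=Event(seller, buyer, Resource("goods")),
    take=Event(buyer, seller, Resource("payment")),
)
assert deal.is_balanced()
```

Specialising such a generic pattern (e.g. for a particular industry's transactions) is what would yield the tested, domain-specific specification the abstract describes.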
TOPYDE: A Tool for Physical Database Design
We describe a tool for physical database design based on a combination of theoretical and pragmatic approaches. The tool takes as input a relational schema, the workload defined on the schema, and some additional database characteristics, and produces as output a physical schema. For the time being, the tool is tuned towards Ingres.
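The input/output contract described above can be sketched as follows; this is a hypothetical, naive illustration in Python (the type names, the greedy heuristic, and the `design` function are assumptions, not TOPYDE's actual interface or algorithm).

```python
# Hypothetical sketch of a physical-design tool's contract: relational
# schema + workload in, physical schema (index choices) out.
from dataclasses import dataclass, field

@dataclass
class Relation:
    name: str
    attributes: list[str]

@dataclass
class Query:
    relation: str
    filter_attrs: list[str]   # attributes used in selections/joins
    frequency: int            # how often the query runs

@dataclass
class PhysicalSchema:
    # Chosen secondary indexes per relation; Ingres-specific storage
    # structures (ISAM, B-tree, hash) are abstracted away here.
    indexes: dict[str, list[str]] = field(default_factory=dict)

def design(relations: list[Relation], workload: list[Query],
           max_indexes_per_relation: int = 2) -> PhysicalSchema:
    """Naive greedy design: index the attributes that the most
    frequent queries filter on."""
    schema = PhysicalSchema()
    for rel in relations:
        scores: dict[str, int] = {}
        for q in workload:
            if q.relation == rel.name:
                for a in q.filter_attrs:
                    scores[a] = scores.get(a, 0) + q.frequency
        best = sorted(scores, key=scores.get, reverse=True)
        schema.indexes[rel.name] = best[:max_indexes_per_relation]
    return schema
```

A real tool of this kind would of course weigh storage structures, update costs, and optimizer behaviour rather than raw filter frequency.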
A unified view of data-intensive flows in business intelligence systems: a survey
Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today's research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that remain to be addressed and showing how current solutions can be applied to address them.
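For readers unfamiliar with the batch side of this combination, here is a minimal sketch of the ETL pattern the survey takes as its starting point; the sources, fields, and warehouse structure are invented examples, not from the paper.

```python
# Minimal sketch of a batch extract-transform-load (ETL) flow; the
# source rows and target schema are invented for illustration.
from datetime import date

def extract() -> list[dict]:
    # In practice: pull from operational databases, files, APIs, ...
    return [
        {"sku": "A1", "qty": "3", "price": "9.99", "day": "2015-06-01"},
        {"sku": "B2", "qty": "1", "price": "4.50", "day": "2015-06-01"},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Cleanse and conform raw strings into an analysis-ready format.
    return [
        {
            "sku": r["sku"],
            "qty": int(r["qty"]),
            "revenue": round(int(r["qty"]) * float(r["price"]), 2),
            "day": date.fromisoformat(r["day"]),
        }
        for r in rows
    ]

def load(facts: list[dict], warehouse: list[dict]) -> None:
    # In practice: bulk-insert into a data warehouse fact table.
    warehouse.extend(facts)

warehouse: list[dict] = []
load(transform(extract()), warehouse)
print(warehouse[0]["revenue"])  # 29.97
```

The operational flows the abstract contrasts with this would run the same transform logic per record at query time instead of in periodic batches.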
Automated verification of refinement laws
Demonic refinement algebras are variants of Kleene algebras. Introduced by von Wright as a lightweight variant of the refinement calculus, their intended semantics are positively disjunctive predicate transformers, and their calculus lies entirely within first-order equational logic. Thus, for the first time, off-the-shelf automated theorem proving (ATP) becomes available for refinement proofs. We used ATP to verify a toolkit of basic refinement laws. Based on this toolkit, we then verified two classical complex refinement laws for action systems by ATP: a data refinement law and Back's atomicity refinement law. We also present a refinement law for infinite loops that was discovered through automated analysis. Our proof experiments not only demonstrate that refinement can effectively be automated; they also compare eleven different ATP systems and suggest that program verification with variants of Kleene algebras yields interesting theorem-proving benchmarks. Finally, we apply hypothesis-learning techniques that seem indispensable for automating more complex proofs.
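To give a flavour of the equational reasoning involved, the block below shows a representative fragment of Kleene-algebra laws of the kind such ATP systems manipulate (x, y, z range over algebra elements, * is finite iteration, and the order is defined from +). This is a standard textbook presentation chosen for illustration, not the paper's exact demonic-refinement-algebra axiomatisation, which notably weakens some semiring laws and adds a strong-iteration operator.

```latex
% Representative Kleene-algebra laws, of the kind handed to an ATP
% system; not the paper's exact axiomatisation.
\begin{align*}
  x + x &= x,             & x + y &= y + x,\\
  x(yz) &= (xy)z,         & x(y + z) &= xy + xz,\\
  1 + x\,x^{*} &= x^{*},  & 1 + x^{*}\,x &= x^{*},
\end{align*}
% together with Horn-style induction laws such as
\[
  xy \le y \;\Rightarrow\; x^{*}y \le y,
  \qquad\text{where } x \le y \;\Longleftrightarrow\; x + y = y .
\]
```

Because everything is first-order and (quasi-)equational, such laws can be fed directly to resolution- or paramodulation-based provers without any encoding of predicate-transformer semantics.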
Automating the multiprocessing environment
An approach to automating the programming and operation of tree-structured networks of multiprocessor systems is discussed. A conceptual, knowledge-based operating environment is presented, and requirements for two major technology elements are identified as follows: (1) an intelligent information translator is proposed for implementing information transfer between dissimilar hardware and software, thereby enabling independent and modular development of future systems and promoting language-independence of codes and information; (2) a resident system activity manager, which recognizes the systems' capabilities and monitors the status of all systems within the environment, is proposed for integrating dissimilar systems into effective parallel processing resources to optimally meet user needs. Finally, key computational capabilities that must be provided before the environment can be realized are identified.
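A toy sketch of the second element, the system activity manager, is given below in Python; the node structure, status fields, and greedy assignment are assumptions invented to illustrate the monitoring-and-integration idea, not the design described in the abstract.

```python
# Hypothetical sketch of a "resident system activity manager" for a
# tree-structured network of processors; names and the greedy policy
# are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    busy: bool = False
    children: list["Node"] = field(default_factory=list)

def idle_nodes(root: Node) -> list[Node]:
    """Walk the tree and collect processors available for work."""
    found = [] if root.busy else [root]
    for child in root.children:
        found.extend(idle_nodes(child))
    return found

def assign(root: Node, tasks: list[str]) -> dict[str, str]:
    """Greedily match pending tasks to idle processors."""
    pool = idle_nodes(root)
    plan = {}
    for task, node in zip(tasks, pool):
        node.busy = True
        plan[task] = node.name
    return plan

root = Node("host", children=[Node("cpu0"), Node("cpu1", busy=True)])
print(assign(root, ["fft", "render"]))  # {'fft': 'host', 'render': 'cpu0'}
```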
Challenges in Bridging Social Semantics and Formal Semantics on the Web
This paper describes several results of Wimmics, a research lab whose name stands for web-instrumented man-machine interactions, communities, and semantics. The approaches introduced here rely on graph-oriented knowledge representation, reasoning, and operationalization to model and support actors, actions, and interactions in web-based epistemic communities. The research results are applied to support and foster interactions in online communities and to manage their resources.
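As a small illustration of what graph-oriented modelling of community actors and interactions can look like, here is a self-contained Python sketch; the triple vocabulary (memberOf, knows, posted, answered) is invented for this example and is not Wimmics' ontology.

```python
# Illustrative triple-based graph of an online community; the
# vocabulary is an invented assumption, not Wimmics' own schema.
triples = {
    ("alice", "memberOf", "forum"),
    ("bob",   "memberOf", "forum"),
    ("alice", "knows",    "bob"),
    ("alice", "posted",   "question42"),
    ("bob",   "answered", "question42"),
}

def objects(subject: str, predicate: str) -> set[str]:
    """Query the graph: all o with (subject, predicate, o)."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("alice", "knows"))  # {'bob'}

# Simple reasoning step: members who interacted on the same resource.
interacted = {
    (s1, s2)
    for s1, p1, o1 in triples
    for s2, p2, o2 in triples
    if o1 == o2 and s1 != s2
       and p1 in {"posted", "answered"} and p2 in {"posted", "answered"}
}
print(interacted)  # {('alice', 'bob'), ('bob', 'alice')}
```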
Evolving NoSQL Databases Without Downtime
NoSQL databases like Redis, Cassandra, and MongoDB are increasingly popular because they are flexible, lightweight, and easy to work with. Applications that use these databases will evolve over time, sometimes necessitating (or preferring) a change to the format or organization of the data. The problem we address in this paper is: how can we support the evolution of high-availability applications and their NoSQL data online, without excessive delays or interruptions, even in the presence of backward-incompatible data format changes?
We present KVolve, an extension to the popular Redis NoSQL database, as a solution to this problem. KVolve permits a developer to submit an upgrade specification that defines how to transform existing data to the newest version. This transformation is applied lazily as applications interact with the database, thus avoiding long pause times. We demonstrate that KVolve is expressive enough to support substantial practical updates, including format changes to RedisFS, a Redis-backed file system, while imposing essentially no overhead in general use and minimal pause times during updates.
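A minimal sketch of the lazy-transformation idea follows; it is a plain-Python illustration under invented names (the dict-backed store, the `upgrades` table, and the version tags are assumptions), not KVolve's actual Redis implementation.

```python
# Minimal sketch of lazy data-format upgrades over a key-value store;
# illustrates the idea only, not KVolve's Redis code.
import json

store: dict[str, str] = {}   # stand-in for the NoSQL database

def put(key: str, value: dict, version: int) -> None:
    store[key] = json.dumps({"v": version, "data": value})

# Upgrade specification: version n -> transformation to version n+1.
upgrades = {
    1: lambda d: {"name": d["username"]},  # v1 {'username'} -> v2 {'name'}
}
CURRENT_VERSION = 2

def get(key: str) -> dict:
    """Read a value, lazily migrating it to the newest format."""
    rec = json.loads(store[key])
    while rec["v"] < CURRENT_VERSION:
        rec = {"v": rec["v"] + 1, "data": upgrades[rec["v"]](rec["data"])}
        store[key] = json.dumps(rec)   # write back the upgraded value
    return rec["data"]

put("user:1", {"username": "ada"}, version=1)
print(get("user:1"))   # {'name': 'ada'} -- upgraded on first access
```

Because each value is migrated only when touched, no global stop-the-world pass over the data is needed, which is what keeps pause times low for high-availability applications.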