Utilizing a 3D game engine to develop a virtual design review system
A design review process is where information is exchanged between designers and design reviewers to resolve potential design-related issues and to ensure that the interests and goals of the owner are met. Effective execution of design review minimizes potential errors or conflicts, reduces review time, shortens the project life-cycle, allows earlier occupancy, and ultimately translates into significant total project savings for the owner. However, current methods of design review still rely heavily on 2D paper-based formats, are sequential, and lack a central, integrated information base for efficient exchange and flow of information. There is thus a need for a new medium that allows 3D visualization of designs, collaboration among designers and design reviewers, and early, easy access to design review information. This paper documents the innovative use of a 3D game engine, the Torque Game Engine, as the underlying tool and enabling technology for a design review system, the Virtual Design Review System (VDRS), for architectural designs. Two major elements are incorporated: 1) a 3D game engine as the driving tool for the development and implementation of design review processes, and 2) a virtual environment as the medium for design review, where visualization of design and design review information is based on sound principles of GUI design. The development of the VDRS involves two major phases: first, the creation of the assets and the assembly of the virtual environment, and second, the modification of existing functions or the introduction of new functionality through programming of the 3D game engine to support design review in a virtual environment. The features included in the VDRS are database support, real-time collaboration across a network, viewing and navigation modes, 3D object manipulation, parametric input, a GUI, and organization of 3D objects.
ChimpCheck: Property-Based Randomized Test Generation for Interactive Apps
We consider the problem of generating relevant execution traces to test rich
interactive applications. Rich interactive applications, such as apps on mobile
platforms, are complex stateful and often distributed systems where
sufficiently exercising the app with user-interaction (UI) event sequences to
expose defects is both hard and time-consuming. In particular, there is a
fundamental tension between brute-force random UI exercising tools, which are
fully-automated but offer low relevance, and UI test scripts, which are manual
but offer high relevance. In this paper, we consider a middle way---enabling a
seamless fusion of scripted and randomized UI testing. This fusion is
prototyped in a testing tool called ChimpCheck for programming, generating, and
executing property-based randomized test cases for Android apps. Our approach
realizes this fusion by offering a high-level, embedded domain-specific
language for defining custom generators of simulated user-interaction event
sequences. What follows is a combinator library built on industrial strength
frameworks for property-based testing (ScalaCheck) and Android testing (Android
JUnit and Espresso) to implement property-based randomized testing for Android
development. Driven by real, reported issues in open source Android apps, we
show, through case studies, how ChimpCheck enables expressing effective testing
patterns in a compact manner. Comment: 20 pages, 21 figures, Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! 2017)
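ChimpCheck's combinator library is a Scala EDSL built on ScalaCheck; as a rough, hypothetical Python analogy (all names invented for illustration), the "middle way" between scripts and random exercising can be sketched as generator combinators that fix the scripted skeleton of a UI trace while randomizing the gaps:

```python
import random

# Illustrative sketch only -- not ChimpCheck's actual API, which is a
# Scala DSL over ScalaCheck/Espresso. A generator is a function
# rng -> list of simulated UI events; combinators compose them.

def click(widget_id):
    return lambda rng: [("click", widget_id)]

def type_text(widget_id, text):
    return lambda rng: [("type", widget_id, text)]

def seq(*gens):
    # Scripted part: fixed ordering of sub-generators.
    return lambda rng: [e for g in gens for e in g(rng)]

def choice(*gens):
    # Randomized part: pick one alternative per generated trace.
    return lambda rng: rng.choice(gens)(rng)

def repeat(gen, max_times):
    # Randomized repetition: 0..max_times occurrences.
    return lambda rng: [e for _ in range(rng.randint(0, max_times))
                        for e in gen(rng)]

# A scripted skeleton with randomized gaps: log in, then an arbitrary
# mix of browsing actions, then submit.
login_then_fuzz = seq(
    type_text("user_field", "alice"),
    click("login_button"),
    repeat(choice(click("next_page"), click("prev_page")), 5),
    click("submit"),
)

trace = login_then_fuzz(random.Random(42))
```

Each generated trace would then be replayed against the app while checking a property, which is the fusion of relevance (the scripted skeleton) and automation (the randomized choices) the abstract describes.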
The archive solution for distributed workflow management agents of the CMS experiment at LHC
The CMS experiment at the CERN LHC developed the Workflow Management Archive
system to persistently store unstructured framework job report documents
produced by distributed workflow management agents. In this paper we present
its architecture, implementation, deployment, and integration with the CMS and
CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster.
The system leverages modern technologies such as a document oriented database
and the Hadoop eco-system to provide the necessary flexibility to reliably
process, store, and aggregate on the order of 1M documents on a daily basis. We
describe the data transformation, the short and long term storage layers, the
query language, along with the aggregation pipeline developed to visualize
various performance metrics to assist CMS data operators in assessing the
performance of the CMS computing system. Comment: This is a pre-print of an article published in Computing and Software
for Big Science. The final authenticated version is available online at:
https://doi.org/10.1007/s41781-018-0005-
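The actual system runs on a document-oriented database plus Hadoop/Spark; the kind of group-and-reduce roll-up its aggregation pipeline performs over unstructured job reports can be sketched in plain Python (field names here are hypothetical, chosen only to illustrate the daily metric computation):

```python
from collections import defaultdict

# Hypothetical job-report documents; the real framework job reports are
# unstructured JSON stored in a document database and processed by Spark.
job_reports = [
    {"site": "T1_US_FNAL", "task": "reco", "cpu_hours": 12.5, "status": "success"},
    {"site": "T1_US_FNAL", "task": "reco", "cpu_hours": 3.0,  "status": "failed"},
    {"site": "T2_CH_CERN", "task": "gen",  "cpu_hours": 7.25, "status": "success"},
]

def aggregate_by_site(reports):
    # Group by site, then reduce to per-site totals and a failure rate,
    # mirroring the daily roll-ups shown to data operators.
    acc = defaultdict(lambda: {"cpu_hours": 0.0, "jobs": 0, "failed": 0})
    for r in reports:
        s = acc[r["site"]]
        s["cpu_hours"] += r["cpu_hours"]
        s["jobs"] += 1
        s["failed"] += r["status"] != "success"
    return {site: {**v, "failure_rate": v["failed"] / v["jobs"]}
            for site, v in acc.items()}

metrics = aggregate_by_site(job_reports)
```

At the 1M-documents-per-day scale the paper describes, the same map/reduce shape is what the Hadoop ecosystem executes in a distributed fashion.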
Architecture for Provenance Systems
This document covers the logical and process architectures of provenance systems. The logical architecture identifies key roles and their interactions, whereas the process architecture discusses distribution and security. A fundamental aspect of our presentation is its technology-independent nature, which makes it reusable: the principles that are exposed in this document may be applied to different technologies
An Exercise in Invariant-based Programming with Interactive and Automatic Theorem Prover Support
Invariant-Based Programming (IBP) is a diagram-based correct-by-construction
programming methodology in which the program is structured around its
invariants, which are formulated before the actual code. Socos is
a program construction and verification environment built specifically to
support IBP. The front-end to Socos is a graphical diagram editor, allowing the
programmer to construct invariant-based programs and check their correctness.
The back-end component of Socos, the program checker, computes the verification
conditions of the program and tries to prove them automatically. It uses the
theorem prover PVS and the SMT solver Yices to discharge as many of the
verification conditions as possible without user interaction. In this paper, we
first describe the Socos environment from a user and systems level perspective;
we then exemplify the IBP workflow by building a verified implementation of
heapsort in Socos. The case study highlights the role of both automatic and
interactive theorem proving in three sequential stages of the IBP workflow:
developing the background theory, formulating the program specification and
invariants, and proving the correctness of the final implementation. Comment: In Proceedings THedu'11, arXiv:1202.453
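The IBP workflow of writing the invariant before the code can be sketched in plain Python, with the verification conditions checked dynamically as assertions (Socos instead discharges them statically via PVS and Yices; the division example is mine, not from the paper):

```python
# IBP sketch: precondition, loop invariant, and postcondition are
# stated first, and the code between them must maintain them.

def divide(n, d):
    """Compute (q, r) with n == q*d + r and 0 <= r < d, for n >= 0, d > 0."""
    assert n >= 0 and d > 0                  # precondition
    q, r = 0, n
    while r >= d:
        assert n == q * d + r and r >= 0     # loop invariant (each "situation")
        q, r = q + 1, r - d
    assert n == q * d + r and 0 <= r < d     # postcondition
    return q, r
```

In Socos the analogues of these assertions become proof obligations over the invariant diagram; the heapsort case study in the paper plays the same game with the heap property as the central invariant.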
Abstracting object interactions using composition filters
It is generally claimed that object-based models are very suitable for building distributed system architectures, since object interactions follow the client-server model. To cope with the complexity of today's distributed systems, however, we think that high-level linguistic mechanisms are needed to effectively structure, abstract, and reuse object interactions. For example, the conventional object-oriented model does not provide high-level language mechanisms to model layered system architectures. Moreover, we consider the message-passing model of the conventional object-oriented model too low-level, because it can only specify object interactions that involve two partner objects at a time and because its semantics cannot be extended easily. This paper introduces Abstract Communication Types (ACTs), which are objects that abstract interactions among objects. ACTs make it easier to model layered communication architectures, to enforce invariant behavior among objects, to reduce the complexity of programs by hiding interaction details in separate modules, and to improve reusability through the application of object-oriented principles to ACT classes. We illustrate the concept of ACTs using the composition-filters model.
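The core idea of interposing filters between a message send and the receiving code can be rendered as a minimal Python sketch (the class and filter names are invented for illustration; the actual composition-filters model is richer, with input/output filter sets and declarative filter specifications):

```python
# Sketch of a composition-filters-style object: messages pass through an
# ordered list of filters before reaching any implementation, so the
# interaction logic lives in one reusable module, not in the objects.

_UNHANDLED = object()  # sentinel: filter declined the message

class DispatchFilter:
    # Routes selected message names to inner objects (layering/delegation).
    def __init__(self, routes):
        self.routes = routes
    def handle(self, msg, args):
        target = self.routes.get(msg)
        if target is None:
            return _UNHANDLED
        return getattr(target, msg)(*args)

class FilteredObject:
    # Every message send traverses the filter set in order.
    def __init__(self, *filters):
        self.filters = filters
    def send(self, msg, *args):
        for f in self.filters:
            result = f.handle(msg, args)
            if result is not _UNHANDLED:
                return result
        raise AttributeError(f"no filter accepted {msg!r}")

class Printer:
    def render(self, text):
        return f"<{text}>"

class Store:
    def __init__(self):
        self.items = []
    def save(self, text):
        self.items.append(text)
        return len(self.items)

# A layered object: rendering and persistence live in separate inner
# objects, composed purely through the filter specification.
doc = FilteredObject(DispatchFilter({"render": Printer(), "save": Store()}))
```

Because the routing table is data, an ACT-like abstraction can enforce cross-object protocols (logging, ordering, invariants) in the filter rather than in every participating class.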
CryptoMaze: Atomic Off-Chain Payments in Payment Channel Network
Payment protocols developed to realize off-chain transactions in payment
channel networks (PCNs) assume that the underlying routing algorithm transfers
the payment via a single path. However, a single path may not have sufficient
capacity to route a transaction, making it inevitable to split the payment
across multiple paths. If we run independent instances of the protocol on each
path, the execution may fail on some of the paths, leading to a partial
transfer of funds, and the payer has to reattempt the entire process for the
residual amount. We propose
a secure and privacy-preserving payment protocol, CryptoMaze. Instead of
independent paths, the funds are transferred from sender to receiver across
several payment channels responsible for routing, in a breadth-first fashion.
Payments are resolved faster at reduced setup cost, compared to existing
state-of-the-art. Correlation among the partial payments is captured,
guaranteeing atomicity. Further, a two-party ECDSA signature is used for
establishing scriptless locks among the parties involved in the payment, which
reduces space overhead compared to script-based locks. We provide a formal model
in the Universal Composability framework and state the privacy goals achieved
by CryptoMaze. We compare the performance of our protocol with the existing
single path based payment protocol, Multi-hop HTLC, applied iteratively on one
path at a time on several instances. It is observed that CryptoMaze requires
less communication overhead and lower execution time, demonstrating efficiency
and scalability. Comment: 30 pages, 9 figures, 1 table
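The atomicity requirement itself, that a split payment either completes on every channel or transfers nothing, can be illustrated with a toy lock/commit/rollback simulation (this is only the accounting skeleton, not CryptoMaze's cryptographic protocol; the class names are invented):

```python
# Toy model: funds are first locked on every channel of the split;
# the payment settles only if all locks succeed, otherwise every lock
# is rolled back, so a partial transfer is impossible.

class Channel:
    def __init__(self, capacity):
        self.capacity = capacity   # spendable balance on this channel
        self.locked = 0.0          # funds held by pending payments
    def lock(self, amount):
        if amount > self.capacity - self.locked:
            return False           # insufficient free capacity
        self.locked += amount
        return True
    def release(self, amount):     # rollback a pending lock
        self.locked -= amount
    def settle(self, amount):      # commit: funds actually move
        self.locked -= amount
        self.capacity -= amount

def pay_atomically(splits):
    # splits: list of (channel, amount) pairs covering the full payment.
    locked = []
    for ch, amt in splits:
        if not ch.lock(amt):
            for ch2, amt2 in locked:   # undo every earlier lock
                ch2.release(amt2)
            return False               # nothing transferred
        locked.append((ch, amt))
    for ch, amt in locked:             # all locks held: commit
        ch.settle(amt)
    return True
```

In the real protocol the "commit" step is enforced cryptographically via the correlated scriptless locks, so no coordinator has to be trusted to run the rollback honestly.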
Multilevel Contracts for Trusted Components
This article contributes to the design and verification of trusted
components and services. Contracts are defined at several levels to cover
different facets, such as component consistency, compatibility, or
correctness. The article introduces multilevel contracts and a
design+verification process for handling and analysing these contracts in
component models. The approach is implemented with the COSTO platform that
supports the Kmelia component model. A case study illustrates the overall
approach. Comment: In Proceedings WCSI 2010, arXiv:1010.233
BlogForever: D3.1 Preservation Strategy Report
This report describes preservation planning approaches and strategies recommended by the BlogForever project as a core component of a weblog repository design. More specifically, we start by discussing why we would want to preserve weblogs in the first place and what it is exactly that we are trying to preserve. We further present a review of past and present work and highlight why current practices in web archiving do not address the needs of weblog preservation adequately. We make three distinctive contributions in this volume: a) we propose transferable practical workflows for applying a combination of established metadata and repository standards in developing a weblog repository, b) we provide an automated approach to identifying significant properties of weblog content that uses the notion of communities and how this affects previous strategies, c) we propose a sustainability plan that draws upon community knowledge through innovative repository design
A Web Component for Real-Time Collaborative Text Editing
Real-time collaborative software allows physically separated people to cooperate by working on a shared application state, receiving updates from each other in real time. The goal of this thesis was to create a developer tool which would allow web application developers to easily integrate a collaborative text editor into their applications. In order to remain technology-agnostic and to utilize the latest web standards, this product was implemented as a web component, a reusable user interface component built with native web browser features.
The main challenge in developing a real-time collaboration tool is the handling of concurrent updates, which might conflict with one another. To tackle this issue, many consistency maintenance algorithms have been presented in the academic literature. Most of these techniques are variations of two main approaches: operational transformation and commutative replicated data types. In this thesis, we reviewed some of these methods and chose the GOTO operational transformation algorithm to be implemented in our component.
Besides selecting and implementing an appropriate consistency maintenance technique, the contributions of this thesis include the design of an easy-to-use application programming interface (API). Our solution also fulfills some practical requirements of group editors not covered by consistency maintenance theory, such as session management and cleaning of the message queue. The created web component succeeds in encapsulating the complexity related to concurrency control and the handling of joining peers in the client-side implementation, which allows the application logic to remain simple. This open-source product enables software developers to add a collaborative text editor to their web applications by broadcasting the updates provided by an event-based API to participating peers.
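The kernel of operational transformation, which GOTO builds on with history reordering, is that a concurrent remote operation is shifted against the locally applied one so that both replicas converge. A minimal sketch for concurrent inserts (my own simplification; a full implementation also needs delete handling and a site-ID tie-break for equal positions):

```python
# Two peers apply the same pair of concurrent inserts in opposite
# orders; transformation shifts positions so both converge.

def apply_op(text, op):
    kind, pos, s = op
    if kind == "ins":
        return text[:pos] + s + text[pos:]
    raise ValueError(kind)

def transform_ins(op_a, op_b):
    # Shift op_a rightward if the concurrent insert op_b landed at or
    # before its position. (Equal positions need a site-ID tie-break.)
    _, pos_a, s_a = op_a
    _, pos_b, s_b = op_b
    if pos_b <= pos_a:
        return ("ins", pos_a + len(s_b), s_a)
    return op_a

# Both peers start from "abc" and edit concurrently:
a = ("ins", 1, "X")   # peer A inserts "X" at index 1
b = ("ins", 2, "Y")   # peer B inserts "Y" at index 2

# Peer A applies its own op, then B's op transformed against it;
# peer B does the reverse. The replicas end up identical.
replica_a = apply_op(apply_op("abc", a), transform_ins(b, a))
replica_b = apply_op(apply_op("abc", b), transform_ins(a, b))
```

The session-management and message-queue machinery mentioned above exists precisely to guarantee that every peer eventually sees, and transforms against, the same set of concurrent operations.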