4,250 research outputs found

    On page-based optimistic process checkpointing

    Persistent object systems must provide some form of checkpointing to ensure that changes to persistent data are secured on non-volatile storage. When processes share or exchange modified data, mechanisms must be provided to ensure that they may be consistently checkpointed. This may be performed eagerly, by synchronously checkpointing all dependent data. Alternatively, optimistic techniques may be used, where processes are individually checkpointed and globally consistent states are found asynchronously. This paper examines two eager checkpointing techniques and describes a new optimistic technique. The technique is applicable in systems such as SASOS, where the notions of process and address space are decoupled.
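
    A minimal sketch of the optimistic idea summarized above, assuming a simple dependency-logging scheme (the paper's actual algorithm may differ): processes checkpoint independently, inter-process data sharing is logged, and a globally consistent set of checkpoints is searched for asynchronously.

        # Illustrative sketch only; names and the consistency test are assumptions.
        class Process:
            def __init__(self, pid):
                self.pid = pid
                self.epoch = 0                  # number of completed checkpoints

            def checkpoint(self):
                self.epoch += 1                 # state secured to non-volatile storage here
                return self.epoch

        dependencies = []                       # (sender, sender_epoch, receiver, receiver_epoch)

        def share(sender, receiver):
            # Receiver observes data modified after the sender's last checkpoint.
            dependencies.append((sender.pid, sender.epoch, receiver.pid, receiver.epoch))

        def consistent(line):
            # 'line' maps pid -> chosen checkpoint number. The line is inconsistent
            # if a receiver's chosen checkpoint reflects shared data that the
            # sender's chosen checkpoint does not cover (an orphan dependency).
            return all(not (s_ep >= line[s] and r_ep < line[r])
                       for s, s_ep, r, r_ep in dependencies)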

    ProofPeer - A Cloud-based Interactive Theorem Proving System

    ProofPeer strives to be a system for cloud-based interactive theorem proving. After illustrating why such a system is needed, the paper presents some of the design challenges that ProofPeer needs to meet to succeed. Contexts are presented as a solution to the problem of sharing proof state among the users of ProofPeer. Chronicles are introduced as a way to organize and version contexts.
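
    A minimal sketch of the context/chronicle idea as summarized above, assuming immutable contexts versioned in an append-only chronicle; the class and field names are illustrative, not ProofPeer's actual design.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Context:
            version: int
            parent: "Context | None"
            facts: frozenset                    # proof state visible in this context

        class Chronicle:
            """Organizes and versions contexts; users extend the tip, never mutate."""
            def __init__(self):
                self.versions = [Context(0, None, frozenset())]

            def extend(self, new_facts):
                tip = self.versions[-1]
                ctx = Context(tip.version + 1, tip, tip.facts | frozenset(new_facts))
                self.versions.append(ctx)
                return ctx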

    ShareCare: a study of databases within Q&A webapp context

    Bachelor's thesis in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, 2018. Advisor: Guillermo Blasco Jiménez. This work studies the different database families found in web applications: it describes each family, analyses their features, and surveys some of their common uses. The overview is reinforced by a web application created to exemplify the use cases that lead modern applications to use several kinds of databases. The purpose of this work is therefore to show how important it is to know the available options in terms of databases, and the considerations a developer should bear in mind in order to make a good choice when selecting a database to work with.
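
    A hedged sketch of the multi-database point for a Q&A application: durable, flexibly structured content in a document store, hot counters in a key-value store. The connection details, database names, and schema are assumptions for illustration, not taken from the ShareCare application.

        from pymongo import MongoClient        # document-database driver
        import redis                           # key-value-store driver

        mongo = MongoClient("mongodb://localhost:27017")
        kv = redis.Redis(host="localhost", port=6379)

        def post_question(qid, title, body, tags):
            # Flexible, nested documents suit user-generated content.
            mongo.sharecare.questions.insert_one(
                {"_id": qid, "title": title, "body": body, "tags": tags})

        def upvote(qid):
            # Vote counters change far more often than content; an in-memory
            # key-value store absorbs the write load cheaply.
            return kv.incr(f"votes:{qid}")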

    Big Data Analytics in Static and Streaming Provenance

    Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2016. With recent technological and computational advances, scientists increasingly integrate sensors and model simulations to understand spatial, temporal, social, and ecological relationships at unprecedented scale. Data provenance traces relationships of entities over time, thus providing a unique view of over-time behavior in the system under study. However, provenance can be overwhelming in both volume and complexity, and the forecasting potential of provenance now creates additional demands. This dissertation focuses on Big Data analytics of static and streaming provenance. It develops filters and a non-preprocessing slicing technique for in-situ querying of static provenance. It presents a stream-processing framework for online processing of provenance data at high receiving rates. While the former is sufficient for answering queries that are given prior to the application start (forward queries), the latter deals with queries whose targets are unknown beforehand (backward queries). Finally, it explores data mining on large collections of provenance and proposes a temporal representation of provenance that reduces the high dimensionality while effectively supporting mining tasks like clustering, classification, and association rule mining; the temporal representation can be applied to streaming provenance as well. The proposed techniques are verified through software prototypes applied to Big Data provenance captured from computer network data, weather models, ocean models, remote (satellite) imagery data, and agent-based simulations of agricultural decision making.
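
    A minimal sketch of the forward/backward query distinction on streaming provenance, assuming provenance arrives as (source, edge-type, target) triples; the representation and full-retention policy are illustrative assumptions, not the dissertation's framework.

        from collections import defaultdict

        class ProvenanceStream:
            def __init__(self, forward_queries):
                self.forward_queries = forward_queries   # name -> predicate, known up front
                self.edges = defaultdict(list)           # retained for backward queries

            def ingest(self, src, etype, dst):
                # Forward queries are given before the run, so each edge can be
                # tested in-flight as it arrives.
                for name, predicate in self.forward_queries.items():
                    if predicate(src, etype, dst):
                        print(f"{name}: {src} -{etype}-> {dst}")
                # Backward queries name their targets only after the fact, so
                # lineage must be retained (here fully; in practice, sliced).
                self.edges[dst].append((src, etype))

            def lineage(self, entity):
                """Backward query: everything the entity was derived from."""
                seen, stack = set(), [entity]
                while stack:
                    node = stack.pop()
                    for src, _ in self.edges.get(node, []):
                        if src not in seen:
                            seen.add(src)
                            stack.append(src)
                return seen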

    Obvious: a meta-toolkit to encapsulate information visualization toolkits. One toolkit to bind them all

    This article describes “Obvious”: a meta-toolkit that abstracts and encapsulates information visualization toolkits implemented in the Java language. It aims to unify their use and to postpone, until late in the development of a visual analytics application, the choice of which concrete toolkit(s) to use. We also report on the lessons learned while wrapping popular toolkits with Obvious, namely Prefuse, the InfoVis Toolkit, parts of Improvise, JUNG, and other data-management libraries. We show several examples of Obvious in use and of how the different toolkits can be combined, for instance by sharing their data models. We also show how Weka and RapidMiner, two popular machine-learning toolkits, have been wrapped with Obvious and can be used directly with all the other wrapped toolkits. We expect Obvious to start a co-evolution process: it is meant to evolve as more components of information visualization systems become consensual, and it is designed to help information visualization systems adhere to best practices, providing a higher level of interoperability and leveraging the domain of visual analytics.
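
    A minimal sketch of the meta-toolkit pattern described above: application code targets a neutral abstraction, and toolkit-specific adapters implement it, so the concrete-toolkit choice can be deferred. Obvious itself is a Java library; the Python classes below only illustrate the pattern and are not its API.

        from abc import ABC, abstractmethod

        class ObviousNetwork(ABC):
            """Neutral graph data model shared by all wrapped toolkits."""
            @abstractmethod
            def add_node(self, node_id): ...
            @abstractmethod
            def add_edge(self, src, dst): ...

        class DictNetwork(ObviousNetwork):
            # Stand-in for a real adapter (e.g., one wrapping Prefuse or JUNG).
            def __init__(self):
                self.adj = {}
            def add_node(self, node_id):
                self.adj.setdefault(node_id, set())
            def add_edge(self, src, dst):
                self.add_node(src)
                self.add_node(dst)
                self.adj[src].add(dst)

        def build_graph(network: ObviousNetwork):
            # Application code sees only the abstraction, so any adapter fits.
            network.add_edge("a", "b")
            network.add_edge("b", "c")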

    Analysis of methods

    Information is one of an organization's most important assets. For this reason, the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment, a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that multiple modeling methods are an integral part of any information systems environment, supporting the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The modeling methods included in this document are analyzed with the goal of including them in an integrated development support environment. To accomplish this, and to develop a method allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity Relationship, Data Flow Diagrams, and Structure Charts, for inclusion in an integrated development support environment.
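
    A minimal sketch of inter-methodology translation through a shared, neutral metamodel, the idea behind the integrated model database described above; the element kinds and the simplified IDEF(1x) rendering are illustrative assumptions.

        # Neutral metamodel element, stored once in the integrated database.
        class ModelElement:
            def __init__(self, kind, name, attrs):
                self.kind, self.name, self.attrs = kind, name, attrs

        def from_er_entity(name, attributes):
            """Import an Entity-Relationship entity into the neutral form."""
            return ModelElement("entity", name, list(attributes))

        def to_idef1x(element):
            """Export the neutral element as a (much simplified) IDEF(1x) entity box."""
            assert element.kind == "entity"
            return "\n".join([element.name, "-" * len(element.name)] + element.attrs)

        # One source of truth, two methodology views: integrity is maintained
        # because both representations are projections of the same element.
        customer = from_er_entity("Customer", ["customer_id", "name"])
        print(to_idef1x(customer))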

    Framework for dependency analysis of software artifacts

    This thesis reviews component-based systems, graph data representation and analysis, and the existing methods and tools for static analysis of component-based systems being developed at the Department of Computer Science at the University of West Bohemia in Pilsen, Czech Republic. Based on these findings, the result of the thesis is the design and implementation of a framework with emphasis on support for development in multiple programming languages and on the ability to process large datasets. The created framework can then serve to support research on component-based systems. The author proposes a generalization and extension of the framework for software artifact dependency analysis created as part of M. Hotovec's master's thesis. The framework's data storage model has also been analyzed with emphasis on graph databases; the ArangoDB database was eventually chosen as the storage solution, and a core library was implemented in Java to allow the development of framework tools. The resulting design decisions allow the framework to be used in a broader range of use cases, such as component compatibility extraction and verification, which has been demonstrated by replicating this functionality in a framework tool created as part of this thesis.
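
    A hedged sketch of the storage idea: software artifacts as vertices and dependencies as edges in ArangoDB, queried with a graph traversal. The thesis' core library is in Java; this Python sketch (using the python-arango driver) only illustrates the model, and the collection names, keys, and credentials are assumptions.

        from arango import ArangoClient

        client = ArangoClient(hosts="http://localhost:8529")
        db = client.db("deps", username="root", password="")

        if not db.has_graph("artifacts"):
            graph = db.create_graph("artifacts")
            vertices = graph.create_vertex_collection("artifact")
            edges = graph.create_edge_definition(
                edge_collection="depends_on",
                from_vertex_collections=["artifact"],
                to_vertex_collections=["artifact"])
        else:
            graph = db.graph("artifacts")
            vertices = graph.vertex_collection("artifact")
            edges = graph.edge_collection("depends_on")

        vertices.insert({"_key": "app", "language": "java"})
        vertices.insert({"_key": "lib", "language": "java"})
        edges.insert({"_from": "artifact/app", "_to": "artifact/lib"})

        # Transitive dependencies of 'app' via an AQL graph traversal.
        cursor = db.aql.execute(
            "FOR v IN 1..10 OUTBOUND 'artifact/app' depends_on RETURN v._key")
        print(list(cursor))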

    Anti-fragile ICT Systems

    This book introduces a novel approach to the design and operation of large ICT systems. It views the technical solutions and their stakeholders as complex adaptive systems and argues that traditional risk analyses cannot predict all future incidents with major impacts. To avoid unacceptable events, it is necessary to establish and operate anti-fragile ICT systems that limit the impact of all incidents, and which learn from small-impact incidents how to function increasingly well in changing environments. The book applies four design principles and one operational principle to achieve anti-fragility for different classes of incidents. It discusses how systems can achieve high availability, prevent malware epidemics, and detect anomalies. Analyses of Netflix’s media streaming solution, Norwegian telecom infrastructures, e-government platforms, and Numenta’s anomaly detection software show that cloud computing is essential to achieving anti-fragility for classes of events with negative impacts.
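
    One classic mechanism in the spirit of limiting incident impact is a circuit breaker that fails fast when a downstream dependency degrades, so a local fault cannot cascade; Netflix's streaming stack popularized the pattern. A minimal sketch follows; it illustrates the general idea only and is not taken from the book's four design principles.

        import time

        class CircuitBreaker:
            def __init__(self, failure_threshold=3, reset_after=30.0):
                self.failures = 0
                self.failure_threshold = failure_threshold
                self.reset_after = reset_after
                self.opened_at = None           # None means the circuit is closed

            def call(self, operation, fallback):
                if self.opened_at is not None:
                    if time.monotonic() - self.opened_at < self.reset_after:
                        return fallback()       # fail fast, protect the caller
                    self.opened_at = None       # half-open: probe the dependency
                    self.failures = 0
                try:
                    result = operation()
                    self.failures = 0           # healthy call closes the circuit
                    return result
                except Exception:
                    self.failures += 1
                    if self.failures >= self.failure_threshold:
                        self.opened_at = time.monotonic()
                    return fallback()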