    GeoCoin:supporting ideation and collaborative design with location-based smart contracts

    Design and HCI researchers are increasingly working with complex digital infrastructures, such as cryptocurrencies, distributed ledgers and smart contracts. These technologies will have a profound impact on digital systems and their audiences. However, given their emergent nature and technical complexity, involving non-specialists in the design of applications that employ these technologies is challenging. In this paper, we discuss these challenges and present GeoCoin, a location-based platform for embodied learning and speculative ideating with smart contracts. In collaborative workshops with GeoCoin, participants engaged with location-based smart contracts, using the platform to explore digital 'debit' and 'credit' zones in the city. These exercises led to the design of diverse distributed-ledger applications, for time-limited financial unions, participatory budgeting, and humanitarian aid. These results contribute to the HCI community by demonstrating how an experiential prototype can support understanding of the complexities behind new digital infrastructures and facilitate participant engagement in ideation and design processes.
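
    The abstract does not describe GeoCoin's implementation; purely as an illustration of the idea of location-based 'debit' and 'credit' zones, the following Python sketch (all names and the flat-earth distance check are hypothetical, not taken from the paper) tests whether a reported position falls inside a zone and applies that zone's credit or debit to a wallet balance.

        from dataclasses import dataclass

        @dataclass
        class Zone:
            # Hypothetical circular zone: centre (lat, lon), a radius expressed
            # in degrees, and the amount credited (positive) or debited (negative).
            lat: float
            lon: float
            radius: float
            amount: float

        def zones_containing(lat, lon, zones):
            # Crude flat-earth containment test, sufficient for a sketch.
            return [z for z in zones
                    if (lat - z.lat) ** 2 + (lon - z.lon) ** 2 <= z.radius ** 2]

        def settle(balance, lat, lon, zones):
            # Apply every matching zone's credit or debit to the wallet balance.
            for z in zones_containing(lat, lon, zones):
                balance += z.amount
            return balance

        # Example: one credit zone and one debit zone laid over a city grid.
        zones = [Zone(55.953, -3.188, 0.002, +5.0),   # 'credit' zone
                 Zone(55.950, -3.190, 0.002, -2.0)]   # 'debit' zone
        print(settle(10.0, 55.953, -3.188, zones))    # -> 15.0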

    A Comparative Analysis of STM Approaches to Reduction Operations in Irregular Applications

    As a recently consolidated paradigm for optimistic concurrency in modern multicore architectures, Transactional Memory (TM) can help exploit parallelism in irregular applications when data-dependence information is not available until run-time. This paper presents and discusses how to leverage TM to exploit parallelism in an important class of irregular applications, the class that exhibits irregular reduction patterns. To test and compare our techniques with other solutions, we implemented them in a software TM system called ReduxSTM, which acts as a proof of concept. Basically, ReduxSTM combines two major ideas: a sequential-equivalent ordering of transaction commits that ensures the correct result, and an extension of the underlying TM privatization mechanism to reduce unnecessary overhead due to reduction memory updates as well as unnecessary aborts and rollbacks. A comparative study of STM solutions, including ReduxSTM, and other more classical approaches to the parallelization of reduction operations is presented in terms of time, memory and overhead.
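
    For readers unfamiliar with the pattern, an irregular reduction accumulates into an array through a run-time subscript, so conflicts between iterations cannot be ruled out at compile time. The Python sketch below (hypothetical names; not the paper's ReduxSTM code) shows the classical privatization approach that the study compares against: each thread accumulates into a private copy, and the partial results are combined at the end.

        import threading

        def irregular_reduction_private(values, idx, n_bins, n_threads=4):
            # Classical privatized reduction: each thread owns a private
            # histogram, so the parallel phase is conflict-free by construction.
            chunks = [range(t, len(values), n_threads) for t in range(n_threads)]
            privates = [[0.0] * n_bins for _ in range(n_threads)]

            def worker(t):
                acc = privates[t]
                for i in chunks[t]:
                    acc[idx[i]] += values[i]      # subscript known only at run-time

            threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
            for th in threads: th.start()
            for th in threads: th.join()

            # Combination step: the overhead that a TM-based scheme such as the
            # one described above tries to keep low when updates are sparse or skewed.
            result = [0.0] * n_bins
            for acc in privates:
                for b, v in enumerate(acc):
                    result[b] += v
            return result

        # Example: sum five values into four bins chosen by a run-time subscript array.
        vals = [1.0, 2.0, 3.0, 4.0, 5.0]
        idx = [0, 3, 3, 1, 0]
        print(irregular_reduction_private(vals, idx, n_bins=4))   # -> [6.0, 4.0, 0.0, 5.0]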

    Incremental Consistency Guarantees for Replicated Objects

    Programming with replicated objects is difficult. Developers must face the fundamental trade-off between consistency and performance head on, while struggling with the complexity of distributed storage stacks. We introduce Correctables, a novel abstraction that hides most of this complexity, allowing developers to focus on the task of balancing consistency and performance. To aid developers with this task, Correctables provide incremental consistency guarantees, which capture successive refinements on the result of an ongoing operation on a replicated object. In short, applications receive both a preliminary result (fast, possibly inconsistent) as well as a final (consistent) result that arrives later. We show how to leverage incremental consistency guarantees by speculating on preliminary values, trading throughput and bandwidth for improved latency. We experiment with two popular storage systems (Cassandra and ZooKeeper) and three applications: a Twissandra-based microblogging service, an ad serving system, and a ticket selling system. Our evaluation on the Amazon EC2 platform with YCSB workloads A, B, and C shows that we can reduce the latency of strongly consistent operations by up to 40% (from 100ms to 60ms) at little cost (10% bandwidth increase, 6% throughput drop) in the ad system. Even if the preliminary result is frequently inconsistent (25% of accesses), incremental consistency incurs a bandwidth overhead of only 27%. (16 pages, 12 figures; to appear at OSDI '16.)
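
    The Correctables API itself is not reproduced in the abstract; as a rough sketch of incremental consistency guarantees, the following Python fragment (hypothetical names) returns a handle whose preliminary value is fast but possibly stale, while its final value is slower but consistent, so callers can speculate on the former and reconcile against the latter.

        import time
        from concurrent.futures import ThreadPoolExecutor

        class IncrementalRead:
            # Toy stand-in for an incremental-consistency handle: callers may
            # speculate on the preliminary value and confirm against the final one.
            def __init__(self, weak_read, strong_read, key):
                self._pool = ThreadPoolExecutor(max_workers=2)
                self.preliminary = self._pool.submit(weak_read, key)    # fast, may be stale
                self.final = self._pool.submit(strong_read, key)        # slow, consistent

        def weak_read(key):
            return ("possibly-stale-value", key)           # local replica read

        def strong_read(key):
            time.sleep(0.05)                               # simulated quorum round trip
            return ("fresh-value", key)

        r = IncrementalRead(weak_read, strong_read, "ticket:42")
        guess = r.preliminary.result()    # start speculative work with this result
        truth = r.final.result()          # reconcile (abort speculation) if it differs
        print(guess, truth)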

    Issues in digital preservation: towards a new research agenda

    Digital preservation has evolved into a specialized, interdisciplinary research discipline of its own, with significant growth in research capacity and results, but also in challenges. However, with this specialization and the formation of a dedicated subgroup of researchers active in the field, limits to the range of challenges being addressed can be observed. Digital preservation research tends to react to problems as they arise, fixing what is broken now, rather than proactively researching new solutions that may become applicable only after a few years of maturing. Recognising the benefits of bringing together researchers and practitioners with various professional backgrounds related to digital preservation, a seminar was organized at Schloss Dagstuhl, the Leibniz Center for Informatics (18-23 July 2010), with the aim of addressing current digital preservation challenges, with a specific focus on automation aspects in this field. The main goal of the seminar was to outline research challenges in digital preservation, providing a number of "research questions" that could be tackled immediately, e.g. in doctoral theses. The seminar also intended to highlight the need for the digital preservation community to reach out to IT research and other research communities outside the immediate digital preservation domain, in order to jointly develop solutions.

    An Agent-Based Simulation API for Speculative PDES Runtime Environments

    Agent-Based Modeling and Simulation (ABMS) is an effective paradigm for modeling systems that exhibit complex interactions, including for studying the emergent behavior of such systems. While ABMS has been used effectively in many disciplines, many successful models are still run only sequentially. Relying on simple and easy-to-use languages such as NetLogo limits the possibility of benefiting from more effective runtime paradigms, such as speculative Parallel Discrete Event Simulation (PDES). In this paper, we discuss a semantically-rich API that allows Agent-Based Models to be implemented in a simple and effective way. We also describe the critical points which should be taken into account to implement this API in a speculative PDES environment, so as to scale simulations up to distributed massively-parallel clusters. We present an experimental assessment showing how our proposal allows complicated interactions to be implemented with reduced effort, while delivering a non-negligible performance increase.
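
    The paper's actual API is not given in the abstract; the sketch below merely illustrates, in Python with hypothetical names, the kind of minimal agent interface such a layer could expose (callbacks for initialization, interaction, and migration), keeping agent state local so that a speculative PDES runtime could roll events back without the model writer's involvement.

        class Agent:
            # Hypothetical base class a model writer would subclass; whether events
            # run sequentially or speculatively is decided by the underlying runtime.
            def on_init(self, region):                 # placed into a region of the model
                pass
            def on_interact(self, neighbour, now):     # scheduled interaction with a neighbour
                pass
            def on_migrate(self, new_region, now):     # the agent moves to another region
                pass

        class Wanderer(Agent):
            def on_init(self, region):
                self.meetings = 0
                self.visits = {region: 1}
            def on_interact(self, neighbour, now):
                self.meetings += 1                     # purely local state: cheap to roll back
            def on_migrate(self, new_region, now):
                self.visits[new_region] = self.visits.get(new_region, 0) + 1

        # Driving the callbacks by hand, in place of the (speculative) event engine.
        w = Wanderer()
        w.on_init("north")
        w.on_interact(object(), now=3.0)
        w.on_migrate("south", now=7.5)
        print(w.meetings, w.visits)                    # -> 1 {'north': 1, 'south': 1}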

    Consistent and efficient output-streams management in optimistic simulation platforms

    Optimistic synchronization is considered an effective means for supporting Parallel Discrete Event Simulations. It relies on a speculative approach, where concurrent processes execute simulation events regardless of their safety, and consistency is ensured via proper rollback mechanisms upon the a-posteriori detection of causal inconsistencies along the events' execution path. Interactions with the outside world (e.g. generation of output streams) are a well-known problem for rollback-based systems, since the outside world may have no notion of rollback. In this context, approaches for allowing the simulation modeler to generate consistent output rely either on ad-hoc APIs (which must be provided by the underlying simulation kernel) or on temporarily suspending processing activities in order to wait for the final outcome (commit/rollback) associated with a speculatively-produced output. In this paper we present design indications and a reference implementation for an output-stream management subsystem which allows the simulation-model writer to rely on standard output-generation libraries (e.g. stdio) within code blocks associated with event processing. Further, the subsystem ensures that the produced output is consistent, namely associated with events that are eventually committed, and system-wide ordered along the simulation time axis. These features jointly provide the illusion of a classical (simple to deal with) sequential programming model, which spares the developer from being aware that the simulation program is run concurrently and speculatively. We also show, via an experimental study, how the design/development optimizations we present lead to limited overhead, so that the simulation run is carried out with near-zero or reduced output-management cost. At the same time, the delay for materializing the output stream (making it available for any type of audit activity) is shown to be fairly limited and constant, especially for good mixtures of I/O-bound vs CPU-bound behavior at the application level. Further, the whole output-stream management subsystem has been designed to provide scalability for I/O management on clusters.
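
    As a toy illustration of the mechanism described above (not the paper's reference implementation), the following Python sketch buffers any output produced while processing a speculative event and releases it to the real stream only once that event is known to commit, discarding it on rollback.

        import sys

        class SpeculativeOutput:
            # Buffer printf-style output per event; flush on commit, drop on rollback.
            def __init__(self, stream=sys.stdout):
                self.stream = stream
                self.pending = {}                      # event timestamp -> buffered lines

            def write(self, event_ts, text):
                self.pending.setdefault(event_ts, []).append(text)

            def commit(self, event_ts):
                # Commits are issued in simulation-time order, so releasing the
                # buffer here also yields output ordered along the time axis.
                for line in self.pending.pop(event_ts, []):
                    self.stream.write(line)

            def rollback(self, event_ts):
                self.pending.pop(event_ts, None)       # undone event: discard its output

        out = SpeculativeOutput()
        out.write(10.0, "event at t=10 fired\n")
        out.write(12.5, "event at t=12.5 fired\n")
        out.rollback(12.5)                             # causal inconsistency detected
        out.commit(10.0)                               # only the committed line is emitted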

    Towards a cyberinfrastructure for enhanced scientific collaboration

    A new generation of information and communication infrastructures, including advanced Internet computing and Grid technologies, promises to enable more direct and shared access to more widely distributed computing resources than was previously possible. Scientific and technological collaboration, consequently, is increasingly seen as critically dependent upon effective access to, and sharing of, digital research data, and of the information tools that allow data to be structured for efficient storage, search, retrieval, display and higher-level analysis. A recent (February 2003) report to the U.S. NSF Directorate of Computer and Information System Engineering urged that funding be provided for a major enhancement of computer and network technologies, thereby creating a cyberinfrastructure whose facilities would support and transform the conduct of scientific and engineering research. The articulation of this programmatic vision reflects a widely shared expectation that solving the technical engineering problems associated with the advanced hardware and software systems of the cyberinfrastructure will yield revolutionary payoffs by empowering individual researchers and increasing the scale, scope and flexibility of collective research enterprises. The argument of this paper, however, is that engineering breakthroughs alone will not be enough to achieve such an outcome; success in realizing the cyberinfrastructure’s potential, if it is achieved, will more likely be the result of a nexus of interrelated social, legal and technical transformations. The socio-institutional elements of a new infrastructure supporting collaboration (that is to say, its supposedly “softer” parts) are every bit as complicated as the hardware and computer software, and, indeed, may prove much harder to devise and implement. The roots of this latter class of challenges facing “e-Science” will be seen to lie in the micro- and meso-level incentive structures created by the existing legal and administrative regimes. Although a number of these same conditions and circumstances appear to be equally significant obstacles to commercial provision of Grid services in interorganizational contexts, the domain of publicly supported scientific collaboration is held to be the more hospitable environment in which to experiment with a variety of new approaches to solving these problems. The paper concludes by proposing several “solution modalities,” including some that could also be made applicable to fields of information-intensive collaboration in business and finance that must regularly transcend organizational boundaries.
