270 research outputs found

    Time Matters: Temporally Enacted Frame-Works in Narrative Accounts of Mediation

    Bateson's (1979) method of double description is utilized to examine narrative accounts of participants' mediation experiences, as a way to investigate significant change events. Comparing what changes to what remains more stable suggests that temporal differences are an indicator of contextualization, providing a framework for how meaning is made meaningful. Case studies of two of these structured interview transcripts are intensively analyzed, with triangulating measures of different logical type. Specifically, these include narrative analysis of key story points, temporal analysis of the frequency and distribution of in vivo codes to yield repetitive themes, and a modified lag analysis of codes in joint proximity to yield reliable thematic clusters. Results are integrated by means of grounded theory procedures of open and axial coding, arriving at semi-saturated categories dealing with temporal enactment of meaning-making. A lexicon of temporal devices for the social construction of common frames of reference between speaker and listener is developed. These are partitioned into three types of temporal progression (i.e., sequence, episodic structure, and co-occurrence) and three types of temporal duration (i.e., repetition, framing, and selection/deselection). Defining conditions and exemplars of each are provided, along with further permutations, including transposition, chained incidents, rival narratives, adjacency, inclusio, asymmetrical bracketing, and chiasm. These provide varied narrative solutions to address the limited attentional focus of a listener. An initial hypothesis—that longer duration meanings contextualize shorter—is given provisional support, in that it appears useful to construct and compare relative durations, with longer duration lying deeper in a hierarchy of logical types. A second hypothesis—that an increase in duration means an increase in perceived significance—is not sustained, in that deselection (and thereby decreasing a meaning's duration) can nonetheless be a significant vehicle for therapeutic change. The study amounts to building a set of tautological linkages that “time matters,” and mapping descriptive territories such as narrative accounts onto it, with resulting increments in explanatory understanding. It is shown how participants shaped their accounts via temporality, by selecting themes, contextualizing, repeating, grouping, ordering, and weaving into stories. The tautology is reflexively applied to itself, and avenues for future theoretical sampling are suggested.

    Advances in Large-Scale RDF Data Management

    One of the prime goals of the LOD2 project is improving the performance and scalability of RDF storage solutions so that the increasing amount of Linked Open Data (LOD) can be efficiently managed. Virtuoso has been chosen as the basic RDF store for the LOD2 project, and during the project it has been significantly improved by incorporating advanced relational database techniques from MonetDB and Vectorwise, turning it into a compressed column store with vectored execution. This has reduced the performance gap (“RDF tax”) between Virtuoso’s SQL and SPARQL query performance in a way that still respects the “schema-last” nature of RDF. However, lacking schema information, RDF database systems such as Virtuoso still cannot use advanced relational storage optimizations such as table partitioning or clustered indexes, and have to execute SPARQL queries with many self-joins to a triple table, which leads to more join effort than needed in SQL systems. In this chapter, we first discuss the new column store techniques applied to Virtuoso, the enhancements in its cluster parallel version, and show its performance using the popular BSBM benchmark at the unsurpassed scale of 150 billion triples. We finally describe ongoing work in deriving an “emergent” relational schema from RDF data, which can help to close the performance gap between relational-based and RDF-based storage solutions.
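The self-join cost mentioned above can be sketched concretely. A minimal illustration with hypothetical data: a star-shaped SPARQL basic graph pattern with three triple patterns over a single triple table requires two self-joins on the shared subject, whereas a relational table with one column per property would answer the same query with a single row lookup.

```python
# Sketch (hypothetical data): each extra triple pattern in a SPARQL star
# query adds one self-join over the triple table.

triples = [
    ("p1", "name", "Alice"), ("p1", "age", 30), ("p1", "city", "Oslo"),
    ("p2", "name", "Bob"),   ("p2", "age", 25),
]

def match(pred):
    """All subject -> object bindings for one triple pattern ?s pred ?o."""
    return {s: o for s, p, o in triples if p == pred}

# SPARQL: SELECT ?n ?a ?c WHERE { ?s :name ?n . ?s :age ?a . ?s :city ?c }
# Three triple patterns, hence two self-joins on ?s:
names, ages, cities = match("name"), match("age"), match("city")
result = [(names[s], ages[s], cities[s]) for s in names
          if s in ages and s in cities]
print(result)  # [('Alice', 30, 'Oslo')] -- p2 drops out: it has no city triple
```

A relational row `(Alice, 30, Oslo)` would make the joins disappear, which is the motivation for the "emergent" relational schema described in the abstract.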

    The Equality Multiplier

    Equality can multiply due to the complementarity between wage determination and welfare spending. A more equal wage distribution fuels welfare generosity via political competition. A more generous welfare state fuels wage equality further via its support to weak groups in the labor market. Together the two effects generate a cumulative process that adds up to an important social multiplier. We focus on a political economic equilibrium which incorporates this mutual dependence between wage setting and welfare spending. It explains how almost equally rich countries differ in economic and social equality among their citizens, and why countries cluster around different worlds of welfare capitalism: the Scandinavian model, the Anglo-Saxon model, and the Continental model. Using data on 18 OECD countries over the period 1976-2002 we test the main predictions of the model and identify a sizeable magnitude of the equality multiplier. We obtain additional support for the cumulative complementarity between social spending and wage equality by applying another data set for the US over the period 1945-2001.
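The cumulative process described above has the arithmetic of a standard feedback multiplier. A hedged illustration with made-up slopes (not the paper's estimates): if welfare generosity responds to wage equality with slope `a`, and wage equality responds to generosity with slope `b`, a unit equality shock is amplified by `1 / (1 - a*b)`.

```python
# Illustrative arithmetic only -- slopes a and b are hypothetical, not
# estimates from the paper. Two complementary responses amplify a unit
# shock into a geometric series with sum 1 / (1 - a*b).

a, b = 0.4, 0.5          # hypothetical response slopes; a*b < 1 for stability
shock = 1.0

# Iterate the loop: equality -> generosity -> equality -> ...
total, increment = 0.0, shock
for _ in range(100):
    total += increment
    increment *= a * b    # each round is damped by the product of slopes

multiplier = 1 / (1 - a * b)   # closed-form geometric sum
print(round(total, 6), round(shock * multiplier, 6))  # both 1.25
```

With these numbers the initial shock ends up 25% larger once the mutual reinforcement has played out, which is the sense in which "equality multiplies".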

    A pragmatic approach to semantic repositories benchmarking

    The aim of this paper is to benchmark various semantic repositories in order to evaluate their deployment in a commercial image retrieval and browsing application. We adopt a two-phase approach for evaluating the target semantic repositories: analytical parameters such as query language and reasoning support are used to select the pool of target repositories, and practical parameters such as load and query response times are used to select the best match to application requirements. In addition to utilising a widely accepted benchmark for OWL repositories (UOBM), we also use a real-life dataset from the target application, which provides us with the opportunity of consolidating our findings. A distinctive advantage of this benchmarking study is that the essential requirements for the target system, such as semantic expressivity and data scalability, are clearly defined, which allows us to claim a contribution to benchmarking methodology for this class of applications.
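The "practical parameters" phase described above boils down to timing two operations per candidate store: bulk load and per-query response. A minimal sketch of such a harness, using a hypothetical in-memory stand-in for the repository (a real run would target each semantic repository's own loading and SPARQL endpoints):

```python
# Sketch of the practical-parameters phase: measure load time and query
# response time. The "store" is a hypothetical in-memory stand-in.

import time

store = set()

def load(dataset):
    """Bulk-load the dataset and return the elapsed wall-clock time."""
    t0 = time.perf_counter()
    store.update(dataset)
    return time.perf_counter() - t0

def query(pred):
    """Evaluate one query and return (results, response time)."""
    t0 = time.perf_counter()
    hits = [t for t in store if pred(t)]
    return hits, time.perf_counter() - t0

dataset = [(f"s{i}", "type", "Image") for i in range(10_000)]
load_time = load(dataset)
hits, query_time = query(lambda t: t[0] == "s42")
print(len(store), len(hits))  # 10000 1
```

Running the same harness against each repository in the pool, on both UOBM and the application's real-life dataset, yields the comparable load/response numbers the second phase needs.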

    TPC-H Analyzed: Hidden Messages and Lessons Learned from an Influential Benchmark

    The TPC-D benchmark was developed almost 20 years ago, and even though its current existence as TPC-H could be considered superseded by TPC-DS, one can still learn from it. We focus on the technical level, summarizing the challenges posed by the TPC-H workload as we now understand them, which w…

    S3G2: a Scalable Structure-correlated Social Graph Generator

    Benchmarking graph-oriented database workloads and graph-oriented database systems is increasingly relevant in analytical Big Data tasks, such as social network analysis. In graph data, structure is not mainly found inside the nodes, but especially in the way nodes happen to be connected, i.e. structural correlations. Because such structural correlations determine join fan-outs experienced by graph analysis algorithms and graph query executors, they are an essential, yet typically neglected, ingredient of synthetic graph generators. To address this, we present S3G2: a Scalable Structure-correlated Social Graph Generator. This graph generator creates a synthetic social graph, containing non-uniform value distributions and structural correlations, and is intended as a testbed for scalable graph analysis algorithms and graph database systems. We generalize the problem by decomposing correlated graph generation into multiple passes that each focus on one so-called "correlation dimension", each of which can be mapped to a MapReduce task. We show that S3G2 can generate social graphs that (i) share well-known graph connectivity characteristics typically found in real social graphs, (ii) contain certain plausible structural correlations that influence the performance of graph analysis algorithms and queries, and (iii) can be quickly generated at huge sizes on common cluster hardware.
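The pass-per-correlation-dimension idea can be sketched in a few lines. This is a simplified single-machine illustration, not S3G2's actual MapReduce algorithm: each pass sorts the nodes along one attribute ("correlation dimension") and probabilistically connects nearby nodes, so attribute-similar nodes end up linked more often, producing a structural correlation. Attribute names and parameters are invented for the example.

```python
# Simplified sketch of pass-wise correlated edge generation (one pass per
# "correlation dimension"); the real S3G2 maps each pass to a MapReduce task.

import random
random.seed(7)

people = [{"id": i,
           "university": random.choice(["UvA", "VU", "TUD"]),
           "hobby": random.choice(["chess", "running", "music"])}
          for i in range(200)]

def correlation_pass(nodes, dim, window=3, p=0.5):
    """Sort by one dimension, then connect each node to close neighbours."""
    edges = set()
    ordered = sorted(nodes, key=lambda n: (n[dim], n["id"]))
    for i, n in enumerate(ordered):
        for m in ordered[i + 1:i + 1 + window]:
            if random.random() < p:           # non-uniform, local wiring
                edges.add((n["id"], m["id"]))
    return edges

# One pass per correlation dimension; their union is the social graph.
edges = correlation_pass(people, "university") | correlation_pass(people, "hobby")

same_univ = sum(1 for a, b in edges
                if people[a]["university"] == people[b]["university"])
print(len(edges), same_univ)
```

Because each pass only looks at a sorted window, it streams and parallelizes naturally, which is what makes the MapReduce mapping in the abstract work at huge scales.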

    CHEMSIMUL - A Program Package for Numerical Simulation of Chemical Reaction Systems.


    Substring filtering for low-cost linked data interfaces

    Recently, Triple Pattern Fragments (TPFs) were introduced as a low-cost server-side interface for cases where high numbers of clients need to evaluate SPARQL queries. Scalability is achieved by moving part of the query execution to the client, at the cost of elevated query times. Since the TPF interface purposely does not support complex constructs such as SPARQL filters, queries that use them need to be executed mostly on the client, resulting in long execution times. We therefore investigated the impact of adding a literal substring matching feature to the TPF interface, with the goal of improving query performance while maintaining a low server cost. In this paper, we discuss the client/server setup and compare the performance of SPARQL queries on multiple implementations, including Elasticsearch and a case-insensitive FM-index. Our evaluations indicate that these improvements allow for faster query execution without significantly increasing the load on the server. Offering the substring feature on TPF servers allows users to obtain faster responses for filter-based SPARQL queries. Furthermore, substring matching can be used to support other filters such as complete regular expressions or range queries.
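The feature described above can be sketched as follows. Data and interface are hypothetical; a production server would back the lookup with an index such as the case-insensitive FM-index or Elasticsearch mentioned in the abstract. The point is that the server filters literals before shipping triples, so the client no longer has to evaluate the FILTER over every matching triple itself.

```python
# Sketch of server-side case-insensitive substring matching for a
# triple-pattern interface (hypothetical data; a real server would use an
# index rather than a linear scan).

triples = [
    ("b1", "title", "Linked Data Fragments"),
    ("b2", "title", "A Pragmatic Approach"),
    ("b3", "title", "Scalable linked data interfaces"),
]

def substring_fragment(predicate, needle):
    """Triples whose literal object contains `needle`, case-insensitively."""
    n = needle.lower()
    return [t for t in triples if t[1] == predicate and n in t[2].lower()]

# SPARQL: SELECT ?s WHERE { ?s :title ?t . FILTER(CONTAINS(LCASE(?t), "linked")) }
hits = substring_fragment("title", "Linked")
print([s for s, _, _ in hits])  # ['b1', 'b3']
```

Without the feature, the client would have to download every `?s :title ?t` triple and apply the CONTAINS filter locally; with it, only the two matching triples cross the wire.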

    Sustainable breeding strategies for the Red Maasai sheep

    How can we conserve the Red Maasai sheep and increase its productivity to improve the livelihoods of livestock keepers?