
    Deconstructing the Consumption Function: New Tools and Old Problems

    In this paper, we analyse anew the relationship between aggregate income and consumption in the United Kingdom. Our analysis entails a close examination of the structure of the data, for which we employ a variety of spectral methods which depend on the concepts of Fourier analysis. We discover that fluctuations in the rate of growth of consumption tend to precede similar fluctuations in income, which contradicts a common supposition. We also highlight the difficulty of uncovering from the aggregate data a structural equation representing the behaviour of consumers.
    Keywords: Consumption function, Trend estimation, Seasonal adjustment, Spectral analysis
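
    A minimal sketch of the kind of cross-spectral lead/lag calculation described above, assuming quarterly growth-rate series for income and consumption; the function name, segment length, and synthetic data are illustrative and not taken from the paper.

```python
# Sketch: estimate the lead/lag between consumption and income growth from
# the phase of their cross-spectrum (illustrative, not the paper's code).
import numpy as np
from scipy.signal import csd

def lead_lag(income_growth, consumption_growth, nperseg=64):
    """Return frequencies, cross-spectrum phase, and the implied lead in periods.

    With SciPy's convention (csd conjugates its first argument), a positive
    lead at low frequencies means consumption growth precedes income growth;
    the sign convention is worth re-checking on real data.
    """
    f, pxy = csd(income_growth, consumption_growth, fs=1.0, nperseg=nperseg)
    phase = np.angle(pxy)
    with np.errstate(divide="ignore", invalid="ignore"):
        lead = np.where(f > 0, phase / (2 * np.pi * f), np.nan)
    return f, phase, lead

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = rng.standard_normal(256)                        # consumption growth (synthetic)
    y = np.roll(c, 2) + 0.5 * rng.standard_normal(256)  # income growth trails by 2 periods
    f, phase, lead = lead_lag(y, c)
    print(lead[1:6])  # low-frequency estimates, roughly +2 periods here
```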

    Mixed Tree and Spatial Representation of Dissimilarity Judgments

    Whereas previous research has shown that either tree or spatial representations of dissimilarity judgments may be appropriate, focussing on the comparative fit at the aggregate level, we investigate whether there is heterogeneity among subjects in the extent to which their dissimilarity judgments are better represented by ultrametric tree or spatial multidimensional scaling models. We develop a mixture model for the analysis of dissimilarity data that is formulated in a stochastic context and comprises a representation component and a measurement model component. The latter involves distributional assumptions on the measurement error and enables estimation by maximum likelihood. The representation component allows dissimilarity judgments to be represented either by a tree structure, by a spatial configuration, or by a mixture of both. In order to investigate the appropriateness of tree versus spatial representations, the model is applied to twenty empirical data sets. We compare the fit of our model with that of aggregate tree and spatial models, as well as with mixtures of pure trees and mixtures of pure spaces, respectively. We formulate some empirical generalizations on the relative importance of tree versus spatial structures in representing dissimilarity judgments at the individual level.
    Keywords: Multidimensional scaling; tree models; mixture models; dissimilarity judgments
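
    One plausible formalization of the subject-level mixture sketched above (the notation is ours, not the authors'): each subject's dissimilarities are generated either from ultrametric tree distances or from Euclidean MDS distances, with normal measurement error, and the likelihood is maximized over the mixing weight, the two distance configurations, and the error variance.

```latex
% Hypothetical notation: \delta^{(s)}_{ij} are subject s's dissimilarities,
% d^{T}_{ij} ultrametric tree distances, d^{S}_{ij} Euclidean MDS distances,
% \pi the mixing weight, \sigma^{2} the error variance, \phi the normal density.
\begin{equation}
L\bigl(\pi, d^{T}, d^{S}, \sigma^{2}\bigr)
  = \prod_{s=1}^{S}
    \Biggl[
      \pi \prod_{i<j} \phi\!\bigl(\delta^{(s)}_{ij};\, d^{T}_{ij}, \sigma^{2}\bigr)
      + (1-\pi) \prod_{i<j} \phi\!\bigl(\delta^{(s)}_{ij};\, d^{S}_{ij}, \sigma^{2}\bigr)
    \Biggr]
\end{equation}
```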

    A Domain Model for Transparency in Portuguese Cooperatives

    The aim of this chapter is to present a domain model that represents the informational needs of transparency (governance structure and accountability dimensions) in Portuguese cooperatives. A domain model is an abstract representation of a reality and a milestone in the development of a metadata application profile (MAP). A community of practice publishes linked open MAP-based data so that these data are interoperable; this means intelligent software/agents can aggregate the data, provide different types of visualizations, draw inferences from them, and ultimately enable new discoveries. The model was developed on the basis of information obtained from a focus group and from the analysis of the financial reports and websites of seven Portuguese cooperatives. The authors will continue to work on the domain model to include 1) other dimensions that also contribute to transparency in organizations and 2) other types of entities of the social economy (SE). The final aim is to define a model representing the transparency needs of all types of European SE entities.

    QB2OLAP: enabling OLAP on statistical linked open data

    Publication and sharing of multidimensional (MD) data on the Semantic Web (SW) opens new opportunities for the use of On-Line Analytical Processing (OLAP). The RDF Data Cube (QB) vocabulary, the current standard for statistical data publishing, however, lacks key MD concepts such as dimension hierarchies and aggregate functions. QB4OLAP was proposed to remedy this. However, QB4OLAP requires extensive manual annotation and users must still write queries in SPARQL, the standard query language for RDF, which typical OLAP users are not familiar with. In this demo, we present QB2OLAP, a tool for enabling OLAP on existing QB data. Without requiring any RDF, QB(4OLAP), or SPARQL skills, it allows semi-automatic transformation of a QB data set into a QB4OLAP one via enrichment with QB4OLAP semantics, exploration of the enriched schema, and querying with the high-level OLAP language QL that exploits the QB4OLAP semantics and is automatically translated to SPARQL.
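
    To make concrete what "automatically translated to SPARQL" amounts to, here is a rough sketch of the kind of aggregate (roll-up) query such a tool could emit and how it might be executed from Python; the endpoint URL, property IRIs, and level names are placeholders rather than QB2OLAP's actual output.

```python
# Sketch of a QB4OLAP-style roll-up expressed as SPARQL; the IRIs and the
# endpoint below are placeholders, not output produced by QB2OLAP.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX qb: <http://purl.org/linked-data/cube#>
SELECT ?country (SUM(?amount) AS ?total)
WHERE {
  ?obs a qb:Observation ;
       <http://example.org/dim/city>    ?city ;
       <http://example.org/meas/amount> ?amount .
  ?city <http://example.org/level/inCountry> ?country .   # roll-up: city -> country
}
GROUP BY ?country
"""

def run(endpoint="http://example.org/sparql"):
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["country"]["value"], row["total"]["value"])

if __name__ == "__main__":
    run()
```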

    Dimensional enrichment of statistical linked open data

    On-Line Analytical Processing (OLAP) is a data analysis technique typically used for local and well-prepared data. However, initiatives like Open Data and Open Government bring new and publicly available data on the web that are to be analyzed in the same way. The use of semantic web technologies in this context is especially encouraged by the Linked Data initiative. There is already a considerable amount of statistical linked open data sets published using the RDF Data Cube Vocabulary (QB), which is designed for these purposes. However, QB lacks some essential schema constructs (e.g., dimension levels) to support OLAP. Thus, the QB4OLAP vocabulary has been proposed to extend QB with the necessary constructs and be fully compliant with OLAP. In this paper, we focus on the enrichment of an existing QB data set with QB4OLAP semantics. We first thoroughly compare the two vocabularies and outline the benefits of QB4OLAP. Then, we propose a series of steps to automate the enrichment of QB data sets with specific QB4OLAP semantics, the most important being the definition of aggregate functions and the detection of new concepts during dimension hierarchy construction. The proposed steps form a semi-automatic enrichment method, which is implemented in a tool that enables the enrichment in an interactive and iterative fashion. The user can enrich the QB data set with QB4OLAP concepts (e.g., full-fledged dimension hierarchies) by choosing among the candidate concepts automatically discovered with the proposed steps. Finally, we conduct experiments with 25 users and use three real-world QB data sets to evaluate our approach. The evaluation demonstrates the feasibility of our approach and shows that, in practice, our tool facilitates and speeds up the enrichment process and guarantees correct results.
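
    One of the enrichment steps, discovering roll-up relationships for dimension hierarchies, can be illustrated with a simple functional-dependency test: a candidate property only defines a valid level-to-level roll-up if every child member maps to exactly one parent member. The sketch below operates on plain (child, parent) pairs and is our illustration, not the paper's implementation.

```python
# Sketch: accept a candidate roll-up property only if it is functional,
# i.e. every child member maps to exactly one parent member.
from collections import defaultdict

def is_functional_rollup(pairs):
    """pairs: iterable of (child, parent) members extracted for a candidate property."""
    parents = defaultdict(set)
    for child, parent in pairs:
        parents[child].add(parent)
    return all(len(ps) == 1 for ps in parents.values())

# city -> country is a valid roll-up; city -> airport is not (Berlin has two).
print(is_functional_rollup([("ex:Berlin", "ex:Germany"), ("ex:Porto", "ex:Portugal")]))  # True
print(is_functional_rollup([("ex:Berlin", "ex:TXL"), ("ex:Berlin", "ex:SXF")]))          # False
```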

    Scalable Model-Based Management of Correlated Dimensional Time Series in ModelarDB+

    To monitor critical infrastructure, high-quality sensors sampled at high frequency are increasingly used. However, as they produce huge amounts of data, only simple aggregates are stored. This removes outliers and fluctuations that could indicate problems. As a remedy, we present a model-based approach for managing time series with dimensions that exploits correlation within and among time series. Specifically, we propose compressing groups of correlated time series using an extensible set of model types within a user-defined error bound (possibly zero). We name this new category of model-based compression methods for time series Multi-Model Group Compression (MMGC). We present the first MMGC method, GOLEMM, and extend model types to compress time series groups. We propose primitives for users to effectively define groups for differently sized data sets and, based on these, an automated grouping method using only the time series dimensions. We propose algorithms for executing simple and multi-dimensional aggregate queries on models. Lastly, we implement our methods in the Time Series Management System (TSMS) ModelarDB (ModelarDB+). Our evaluation shows that, compared to widely used formats, ModelarDB+ provides up to 13.7 times faster ingestion due to high compression, 113 times better compression due to the adaptivity of GOLEMM, 630 times faster aggregates by using models, and close to linear scalability. It is also extensible and supports online query processing.
    Comment: 12 pages, 28 figures, and 1 table
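
    A much-reduced illustration of compression within a user-defined error bound: a greedy segmentation that extends a constant (mean-value) model for as long as every point in the segment can be represented within the bound. ModelarDB+ uses multiple model types and compresses groups of correlated series; this sketch covers only the single-series, single-model case and is not taken from the system.

```python
# Sketch: greedily segment one series into constant-value models, each
# guaranteed to stay within an absolute error bound (possibly zero).
def compress_constant(values, error_bound):
    segments = []  # (start_index, length, model_value)
    start = 0
    while start < len(values):
        lo = hi = values[start]
        end = start + 1
        while end < len(values):
            new_lo, new_hi = min(lo, values[end]), max(hi, values[end])
            if new_hi - new_lo > 2 * error_bound:  # midpoint could no longer satisfy the bound
                break
            lo, hi = new_lo, new_hi
            end += 1
        segments.append((start, end - start, (lo + hi) / 2))
        start = end
    return segments

print(compress_constant([1.0, 1.25, 0.75, 5.0, 5.5], error_bound=0.25))
# -> [(0, 3, 1.0), (3, 2, 5.25)]
```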

    Spatial Aggregation: Theory and Applications

    Visual thinking plays an important role in scientific reasoning. Based on research into automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking, imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input, and produces high-level descriptions of structure, behavior, or control actions. It computes a multi-layer structure of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators such as aggregation, classification, and localization to perform bidirectional mapping between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers (KAM, MAPS, and HIPAIR) in terms of the spatial aggregation generic operators by mixing and matching a library of commonly used routines.
    Comment: See http://www.jair.org/ for any accompanying files
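
    To make the paradigm's vocabulary concrete, here is a toy sketch of two of its generic operators: building a neighborhood graph over sampled field points and aggregating adjacent points with similar values into equivalence classes. The grid neighborhood, tolerance, and data below are arbitrary illustrative choices, not code from KAM, MAPS, or HIPAIR.

```python
# Toy sketch of spatial aggregation: build a neighborhood graph over field
# samples, then aggregate adjacent samples with similar values into classes.
def neighborhood_graph(field, radius=1.0):
    """field: dict mapping (x, y) -> value; returns adjacency lists."""
    graph = {p: [] for p in field}
    for p in field:
        for q in field:
            if p != q and abs(p[0] - q[0]) <= radius and abs(p[1] - q[1]) <= radius:
                graph[p].append(q)
    return graph

def aggregate(field, graph, tolerance=0.5):
    """Group neighboring points whose values differ by at most `tolerance`."""
    classes, seen = [], set()
    for p in field:
        if p in seen:
            continue
        cls, stack = [], [p]
        seen.add(p)
        while stack:
            cur = stack.pop()
            cls.append(cur)
            for q in graph[cur]:
                if q not in seen and abs(field[q] - field[cur]) <= tolerance:
                    seen.add(q)
                    stack.append(q)
        classes.append(cls)
    return classes

field = {(0, 0): 1.0, (1, 0): 1.2, (2, 0): 5.0, (3, 0): 5.1}
print(aggregate(field, neighborhood_graph(field)))
# -> two spatial aggregates: the low-valued region and the high-valued region
```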

    Parallel Simulations for Analysing Portfolios of Catastrophic Event Risk

    At the heart of the analytical pipeline of a modern quantitative insurance/reinsurance company is a stochastic simulation technique for portfolio risk analysis and pricing referred to as Aggregate Analysis. Aggregate Analysis supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value at Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Catastrophe eXcess of Loss (Cat XL, or Per-Occurrence XL), Aggregate XL, and contracts that combine these structures. In this paper, we explore parallel methods for aggregate risk analysis. A parallel aggregate risk analysis algorithm and an engine based on the algorithm are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and in C and CUDA for many-core GPUs. Performance analysis of the algorithm indicates that GPUs offer a cost-effective HPC solution for aggregate risk analysis. The optimised algorithm on the GPU performs a 1 million trial aggregate simulation with 1000 catastrophic events per trial on a typical exposure set and contract structure in just over 20 seconds, approximately 15 times faster than the sequential counterpart. This is sufficient to support the real-time pricing scenario in which an underwriter analyses different contractual terms and pricing while discussing a deal with a client over the phone.
    Comment: Proceedings of the Workshop at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2012, 8 pages
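
    A serial, much-reduced sketch of the aggregate analysis loop described above: simulate per-trial event losses, apply a per-occurrence XL layer and then an aggregate XL layer, and derive PML and TVaR from the trial totals. The severity distribution, layer terms, and trial counts are made-up placeholders; the paper's engine parallelises the per-trial work with OpenMP and CUDA.

```python
# Sketch of a (serial) aggregate risk simulation; the paper's engine
# parallelises the per-trial loop on multi-core CPUs and many-core GPUs.
import numpy as np

def aggregate_analysis(n_trials, events_per_trial,
                       occ_retention=1e6, occ_limit=5e6,
                       agg_retention=2e6, agg_limit=2e7, seed=0):
    rng = np.random.default_rng(seed)
    # Ground-up event losses per trial (lognormal severity is only a placeholder).
    losses = rng.lognormal(mean=12.0, sigma=1.5, size=(n_trials, events_per_trial))
    # Per-occurrence XL layer applied to each event.
    occ_recovered = np.clip(losses - occ_retention, 0.0, occ_limit)
    # Aggregate XL layer applied to each trial's total recovery.
    annual = occ_recovered.sum(axis=1)
    return np.clip(annual - agg_retention, 0.0, agg_limit)

def pml_and_tvar(trial_losses, prob=0.01):
    pml = np.quantile(trial_losses, 1.0 - prob)       # e.g. the 1-in-100 loss
    tvar = trial_losses[trial_losses >= pml].mean()   # mean loss at or beyond the PML
    return pml, tvar

if __name__ == "__main__":
    trials = aggregate_analysis(n_trials=10_000, events_per_trial=100)
    print(pml_and_tvar(trials))
```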