
    NoXperanto: Crowdsourced Polyglot Persistence

    This paper proposes NoXperanto, a novel crowdsourcing approach to querying data collections managed in polyglot persistence settings. The main contribution of NoXperanto is the ability to solve complex queries involving different data stores by exploiting queries from expert users (i.e., a crowd of database administrators, data engineers, domain experts, etc.), assuming that these users can submit meaningful queries. NoXperanto exploits the results of meaningful queries to facilitate forthcoming query answering. In particular, query results are used to: (i) help non-expert users work with the multi-database environment and (ii) improve the performance of the multi-database environment, which not only uses disk and memory resources but also relies heavily on network bandwidth. NoXperanto employs a layer that keeps track of the information produced by the crowd, modeled as a Property Graph and managed in a Graph Database Management System (GDBMS).
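    As a rough illustration of the crowd-knowledge layer described above (not code from the paper), the sketch below models expert queries and the data stores they touch as a property graph, using networkx as a stand-in for a full GDBMS; every node name, property, and query is hypothetical.

```python
# Minimal sketch (assumptions, not the paper's implementation): crowd-contributed
# queries and the polyglot data stores they touch, kept as a property graph.
import networkx as nx

g = nx.MultiDiGraph()

# Hypothetical nodes: one per data store in the polyglot setting,
# one per query submitted by an expert user.
g.add_node("store:orders_rdbms", kind="store", model="relational")
g.add_node("store:reviews_doc", kind="store", model="document")
g.add_node("query:q42", kind="query", author="expert_7",
           text="top products by revenue joined with review sentiment")

# Edges record which stores a query read, and the node keeps a pointer to the
# cached result, so later (non-expert) queries can reuse or be routed via it.
g.add_edge("query:q42", "store:orders_rdbms", relation="reads")
g.add_edge("query:q42", "store:reviews_doc", relation="reads")
g.nodes["query:q42"]["cached_result_ref"] = "results/q42.parquet"

# Reuse: find prior expert queries that already span both stores.
candidates = [n for n, d in g.nodes(data=True)
              if d.get("kind") == "query"
              and g.has_edge(n, "store:orders_rdbms")
              and g.has_edge(n, "store:reviews_doc")]
print(candidates)  # ['query:q42']
```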

    Query-Time Data Integration

    Today, data is collected in ever-increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated for by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we studied the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions are the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
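    The top-k entity augmentation idea lends itself to a small illustration. The sketch below is an assumption-laden stand-in for the thesis' algorithm: it enumerates source combinations that cover all requested entities, then greedily keeps k alternatives that use few sources and do not simply contain an already-chosen source set; the candidate sources and values are invented.

```python
# Toy sketch of top-k entity augmentation with diverse alternatives
# (hypothetical sources and values, not the thesis' method).
from itertools import combinations

# Hypothetical candidate values: source -> {entity: value}
candidates = {
    "src_a": {"Germany": 83.2, "France": 67.8},
    "src_b": {"Germany": 83.0, "France": 68.0, "Italy": 58.9},
    "src_c": {"Italy": 59.1},
}
entities = {"Germany", "France", "Italy"}

def covering_solutions(max_sources=2):
    """Yield source combinations that together provide every entity."""
    for r in range(1, max_sources + 1):
        for combo in combinations(candidates, r):
            covered = set().union(*(candidates[s].keys() for s in combo))
            if entities <= covered:
                yield set(combo)

def top_k_diverse(k=2):
    """Greedy selection: fewest sources first, and skip any solution that
    merely contains an already-chosen source set, so alternatives differ."""
    chosen = []
    for sol in sorted(covering_solutions(), key=len):
        if all(not prev <= sol for prev in chosen):
            chosen.append(sol)
        if len(chosen) == k:
            break
    return chosen

print(top_k_diverse())  # e.g. [{'src_b'}, {'src_a', 'src_c'}]
```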

    The Digital Classicist 2013

    This edited volume collects together peer-reviewed papers that initially emanated from presentations at Digital Classicist seminars and conference panels. This wide-ranging volume showcases exemplary applications of digital scholarship to the ancient world and critically examines the many challenges and opportunities afforded by such research. The chapters included here demonstrate innovative approaches that drive forward the research interests of both humanists and technologists while showing that rigorous scholarship is as central to digital research as it is to mainstream classical studies. As with the earlier Digital Classicist publications, our aim is not to give a broad overview of the field of digital classics; rather, we present here a snapshot of some of the varied research of our members in order to engage with and contribute to the development of scholarship both in the fields of classical antiquity and Digital Humanities more broadly

    Geospatial Information Research: State of the Art, Case Studies and Future Perspectives

    Geospatial information science (GI science) is concerned with the development and application of geodetic and information science methods for modeling, acquiring, sharing, managing, exploring, analyzing, synthesizing, visualizing, and evaluating data on spatio-temporal phenomena related to the Earth. As an interdisciplinary scientific discipline, it focuses on developing and adapting information technologies to understand processes on the Earth and human-place interactions, to detect and predict trends and patterns in the observed data, and to support decision making. The authors – members of the Geoinformatics division of DGK, the Committee on Geodesy of the Bavarian Academy of Sciences and Humanities, representing geodetic research and university teaching in Germany – have prepared this paper to point out future research questions and directions in geospatial information science. For the different facets of geospatial information science, the state of the art is presented and illustrated, mostly with the authors' own case studies. The paper thus shows which contributions the German GI community makes and which research perspectives arise in geospatial information science. The paper further demonstrates that GI science, with its expertise in data acquisition and interpretation, information modeling and management, integration, decision support, visualization, and dissemination, can help solve many of the grand challenges facing society today and in the future.

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies, and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    Google search and the mediation of digital health information: a case study on unproven stem cell treatments

    Google Search occupies a unique space within broader discussions of direct-to-consumer marketing of stem cell treatments in digital spaces. For patients, researchers, regulators, and the wider public, the search platform influences the who, what, where, and why of stem cell treatment information online. Ubiquitous and opaque, Google Search mediates which users are presented with what types of content when these stakeholders engage in online searches around health information. The platform also sways the activities of content producers and the characteristics of the content they produce. For those seeking and studying information on digital health, this platform influence raises difficult questions around risk, authority, intervention, and oversight. This thesis addresses a critical gap in the digital methodologies used in mapping and characterising that influence, as part of wider debates around algorithmic accountability within STS and digital health scholarship. By adopting a novel methodological approach to black-box auditing and data collection, I provide a unique evidentiary base for the analysis of ads, organic results, and the platform's mechanisms of influence on queries related to stem cell treatments. I explore the question: how does Google Search mediate the information that people access online about ‘proven’ and ‘unproven’ stem cell treatments? Here I show that, in spite of a general ban on advertisements for stem cell treatments, users continue to be presented with content promoting unproven treatments. The types, frequency, and commercial intent of results related to stem cell treatments shifted across user groups, including by geography and, more troublingly, for those impacted by Parkinson’s Disease and Multiple Sclerosis. Additionally, I find evidence that the technological structure of Google Search itself enables primary and secondary commercial activities around the mediation and dissemination of health information online. This suggests that Google Search’s algorithmically mediated rendering of search results – including both commercial and non-commercial activities – has critical implications for the present and future of digital health studies.
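    The black-box auditing approach can be pictured with a minimal, heavily hedged sketch. Nothing below comes from the thesis: fetch_serp() returns canned synthetic results in place of whatever instrumented collection the study actually used, and the persona fields and queries are invented; only the overall shape (fixed query set, simulated user groups, per-group tallies of ad vs. organic results and commercial intent) mirrors the comparison described above.

```python
# Illustrative sketch only: the general shape of a black-box search audit.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Persona:
    label: str      # e.g. "MS patient, UK"
    location: str   # geography simulated during collection
    condition: str  # health condition shaping the query set

def fetch_serp(query, persona):
    """Placeholder returning synthetic result dicts tagged by type and
    commercial intent; stands in for real per-persona collection."""
    base = [{"type": "organic", "commercial": False},
            {"type": "organic", "commercial": True}]
    if persona.condition in ("MS", "Parkinson's"):
        base.append({"type": "ad", "commercial": True})  # synthetic skew
    return base

def audit(personas, queries):
    """Tally result types per persona to compare what different users see."""
    tallies = {}
    for p in personas:
        counts = Counter()
        for q in queries:
            for r in fetch_serp(q, p):
                counts[(r["type"], r["commercial"])] += 1
        tallies[p.label] = counts
    return tallies

personas = [Persona("general user", "UK", "none"),
            Persona("MS patient", "UK", "MS")]
queries = ["stem cell treatment for multiple sclerosis"]
print(audit(personas, queries))
```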

    Data-Driven Models, Techniques, and Design Principles for Combatting Healthcare Fraud

    In the U.S., approximately $700 billion of the $2.7 trillion spent on healthcare is linked to fraud, waste, and abuse. This presents a significant challenge for healthcare payers as they navigate fraudulent activities from dishonest practitioners, sophisticated criminal networks, and even well-intentioned providers who inadvertently submit incorrect billing for legitimate services. This thesis adopts Hevner’s research methodology to guide the creation, assessment, and refinement of a healthcare fraud detection framework and recommended design principles for fraud detection. The thesis provides the following significant contributions to the field:
    1. A formal literature review of the field of fraud detection in Medicaid. Chapters 3 and 4 provide formal reviews of the available literature on healthcare fraud. Chapter 3 focuses on defining the types of fraud found in healthcare. Chapter 4 reviews fraud detection techniques in the literature across healthcare and other industries. Chapter 5 focuses on literature covering fraud detection methodologies utilized explicitly in healthcare.
    2. A multidimensional data model and analysis techniques for fraud detection in healthcare. Chapter 5 applies Hevner et al. to help develop a framework for fraud detection in Medicaid that provides specific data models and techniques to identify the most prevalent fraud schemes. A multidimensional schema based on Medicaid data and a set of multidimensional models and techniques to detect fraud are presented. These artifacts are evaluated through functional testing against known fraud schemes. This chapter contributes a set of multidimensional data models and analysis techniques that can be used to detect the most prevalent known fraud types.
    3. A framework for deploying outlier-based fraud detection methods in healthcare. Chapter 6 proposes and evaluates methods for applying outlier detection to healthcare fraud based on a literature review, comparative research, direct application to healthcare claims data, and known fraudulent cases. A method for outlier-based fraud detection is presented and evaluated using Medicaid dental claims, providers, and patients.
    4. Design principles for fraud detection in complex systems. Based on literature and applied research in Medicaid healthcare fraud detection, Chapter 7 offers generalized design principles for fraud detection in similar complex, multi-stakeholder systems.
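    As a toy illustration of outlier-based screening (not the thesis' multidimensional models), the sketch below flags providers whose average claim amount deviates strongly from their peer group using a simple z-score; the provider IDs, amounts, and threshold are synthetic assumptions.

```python
# Minimal z-score screen over hypothetical per-provider average claim amounts.
from statistics import mean, pstdev

claims = {"prov_01": 112.0, "prov_02": 98.5, "prov_03": 105.0,
          "prov_04": 410.0, "prov_05": 101.2, "prov_06": 96.8}

def zscore_outliers(values, threshold=2.0):
    """Flag keys whose value lies more than `threshold` standard
    deviations from the peer-group mean."""
    mu, sigma = mean(values.values()), pstdev(values.values())
    return {k: round((v - mu) / sigma, 2)
            for k, v in values.items()
            if sigma and abs(v - mu) / sigma > threshold}

print(zscore_outliers(claims))  # {'prov_04': 2.23} -> route to a review queue
```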

    Large-scale linked data integration using probabilistic reasoning and crowdsourcing

    We tackle the problems of semi-automatically matching linked data sets and of linking large collections of Web pages to linked data. Our system, ZenCrowd, (1) uses a three-stage blocking technique in order to obtain the best possible instance matches while minimizing both computational complexity and latency, and (2) identifies entities from natural language text using state-of-the-art techniques and automatically connects them to the linked open data cloud. First, we use structured inverted indices to quickly find potential candidate results from entities that have been indexed in our system. Our system then analyzes the candidate matches and refines them whenever deemed necessary using computationally more expensive queries on a graph database. Finally, we resort to human computation by dynamically generating crowdsourcing tasks in case the algorithmic components fail to come up with convincing results. We integrate all results from the inverted indices, from the graph database, and from the crowd using a probabilistic framework in order to make sensible decisions about candidate matches and to identify unreliable human workers. In the following, we give an overview of the architecture of our system and describe in detail our novel three-stage blocking technique and our probabilistic decision framework. We also report on a series of experimental results on a standard data set, showing that our system can achieve a 95% average accuracy on instance matching (as compared to the initial 88% average accuracy of the purely automatic baseline) while drastically limiting the amount of work performed by the crowd. The experimental evaluation of our system on the entity linking task shows an average relative improvement of 14% over our best automatic approach.
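    The probabilistic combination of algorithmic confidence and crowd votes can be sketched with a naive Bayes style log-odds update; this illustrates the general idea rather than ZenCrowd's actual decision framework, and the prior, votes, and worker reliabilities below are invented.

```python
# Rough sketch of combining a matcher's confidence with reliability-weighted
# crowd votes (illustration only, not ZenCrowd's model).
import math

def posterior_match(prior, worker_votes):
    """prior: automatic confidence that the candidate pair is a match.
    worker_votes: list of (vote_is_match, worker_reliability) pairs, where
    reliability is the estimated probability the worker answers correctly."""
    log_odds = math.log(prior / (1 - prior))
    for says_match, reliability in worker_votes:
        lr = reliability / (1 - reliability)          # likelihood ratio
        log_odds += math.log(lr) if says_match else -math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical candidate: index + graph stages gave 0.6; two reliable workers
# say "match", one near-random worker says "no match".
votes = [(True, 0.9), (True, 0.85), (False, 0.55)]
print(round(posterior_match(0.6, votes), 3))  # ~0.98 -> accept the match
```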