
    Skyline queries computation on crowdsourced-enabled incomplete database

    Data incompleteness has become a frequent phenomenon in many contemporary database applications such as web autonomous databases, big data, and crowd-sourced databases. Processing skyline queries over incomplete databases poses a number of challenges. Most importantly, the skylines derived from incomplete databases are themselves incomplete, with some values missing. Retrieving skylines with missing values is undesirable, particularly for recommendation and decision-making systems. Furthermore, running skyline queries on a database with incomplete data raises issues that affect query processing, such as the loss of the transitivity property of the skyline technique and cyclic dominance between tuples. The issue of estimating the missing values of skylines has been discussed and examined in the database literature. Most recently, several studies have suggested exploiting crowd-sourced databases to estimate the missing values by generating plausible values using the crowd. Crowd-sourced databases have proved to be a powerful solution for performing user-given tasks by integrating human intelligence and experience into task processing. However, task processing using the crowd incurs additional monetary cost and increases time latency, and it is not always possible to produce a satisfactory result that meets the user's preferences. This paper proposes an approach for estimating the missing values of the skylines by first exploiting the available data and utilizing the implicit relationships between attributes to impute the missing values locally. This process aims at reducing the number of values that must be estimated using the crowd to those for which local estimation is inappropriate. Intensive experiments on both synthetic and real datasets have been carried out. The experimental results show that the proposed approach for estimating the missing values of skylines over crowdsourced-enabled incomplete databases is scalable and outperforms existing approaches.
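
    As a hedged sketch of the problem this abstract describes (the data, names, and the convention of comparing only commonly known dimensions are illustrative assumptions, not taken from the paper), the following Python fragment shows how missing values break the transitivity of dominance, so a naive skyline computation can even come back empty:

    # Hypothetical sketch (not the paper's algorithm): pairwise dominance over tuples
    # with missing values (None). Tuples are compared only on dimensions known in BOTH
    # tuples -- a common convention for incomplete data; smaller values are better.

    def dominates(a, b):
        """True if a dominates b on the dimensions both tuples have present."""
        common = [i for i in range(len(a)) if a[i] is not None and b[i] is not None]
        return bool(common) and all(a[i] <= b[i] for i in common) \
                            and any(a[i] < b[i] for i in common)

    def naive_skyline(tuples):
        """Keep every tuple that is not dominated by some other tuple."""
        return [t for t in tuples
                if not any(dominates(u, t) for u in tuples if u is not t)]

    # Missing values make dominance cyclic: b beats a, c beats b, a beats c.
    a, b, c = (1, 9, None), (None, 2, 9), (9, None, 2)
    print(naive_skyline([a, b, c]))   # [] -- transitivity is lost, nothing survives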

    Crowdsourcing for Query Processing on Web Data: A Case Study on the Skyline Operator

    In recent years, crowdsourcing has become a powerful tool for bringing human intelligence into information processing. This is especially important for Web data, which, in contrast to well-maintained databases, is almost always incomplete and may be distributed over a variety of sources. Crowdsourcing makes it possible to tackle many problems that are not yet attainable using machine-based algorithms alone: in particular, it allows database operators to be executed on incomplete data, since human workers can provide missing values at runtime. As this can quickly become costly, elaborate optimization is required. In this paper, we showcase how such optimizations can be performed for the popular skyline operator for preference queries. We present some heuristics-based approaches and compare them to crowdsourcing-based approaches that use sophisticated optimization techniques, focusing especially on result correctness.
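
    The cost-aware optimization the authors motivate can be pictured, very loosely, as a greedy rule: crowdsource only missing cells, starting with the one that blocks the most comparisons. The sketch below is a hypothetical illustration of that general idea in Python, not the heuristics or optimization techniques evaluated in the paper; the data and the (tuple index, dimension) task format are invented:

    # Hypothetical sketch of the general cost-saving idea (not the paper's optimizer):
    # spend crowd questions only on missing cells, starting with the cell that blocks
    # the most pairwise comparisons.
    from collections import Counter

    def next_crowd_task(tuples):
        """Pick the (tuple index, dimension) whose missing value blocks the most comparisons."""
        votes = Counter()
        for i, a in enumerate(tuples):
            for j in range(i + 1, len(tuples)):
                b = tuples[j]
                for d in range(len(a)):
                    if a[d] is None and b[d] is not None:
                        votes[(i, d)] += 1      # knowing a[d] would unlock this dimension
                    if b[d] is None and a[d] is not None:
                        votes[(j, d)] += 1
        return votes.most_common(1)[0][0] if votes else None

    hotels = [(80, None, 2.5), (None, 120, 3.0), (90, None, 2.8), (70, 160, None)]
    print(next_crowd_task(hotels))   # the one cell worth paying the crowd for first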

    Learning To Scale Up Search-Driven Data Integration

    A recent movement to tackle the long-standing data integration problem is a compositional and iterative approach, termed “pay-as-you-go” data integration. Under this model, the objective is to immediately support queries over “partly integrated” data, and to enable the user community to drive integration of the data that relate to their actual information needs. Over time, data will be gradually integrated. While the pay-as-you-go vision has been well articulated for some time, only recently have we begun to understand how it can be manifested in a system implementation. One branch of this effort has focused on enabling queries through keyword-search-driven data integration, in which users pose queries over partly integrated data encoded as a graph, receive ranked answers generated from data and metadata that are linked at query time, and provide feedback on those answers. From this user feedback, the system learns to repair bad schema matches or record links. Many real-world issues of uncertainty and diversity in search-driven integration remain open; such tasks require a combination of human guidance and machine learning, and the challenge is how to make maximal use of limited human input. This thesis develops three methods to scale up search-driven integration through learning from expert feedback: (1) active learning techniques to repair links from small amounts of user feedback; (2) collaborative learning techniques to combine users’ conflicting feedback; and (3) debugging techniques to identify where data experts could best improve integration quality. We implement these methods within the Q System, a prototype of search-driven integration, and validate their effectiveness over real-world datasets.
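
    One common recipe behind learning to repair links from small amounts of user feedback is uncertainty sampling: ask the expert about the candidate links the current model is least sure about. The sketch below illustrates that generic recipe with scikit-learn; the feature construction, synthetic data, and function name are assumptions for illustration, not the Q System's implementation:

    # Hypothetical sketch of uncertainty sampling, a common active-learning recipe for
    # choosing which candidate record links to show an expert.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def most_uncertain_links(labeled, labels, pool, k=5):
        """Train on the few expert-labeled links, return the k pool indices closest to p = 0.5."""
        model = LogisticRegression().fit(labeled, labels)
        p = model.predict_proba(pool)[:, 1]      # probability each candidate link is correct
        return np.argsort(np.abs(p - 0.5))[:k]   # most uncertain first -> ask the expert

    rng = np.random.default_rng(0)
    labeled = rng.normal(size=(20, 3))           # stand-in similarity features per link
    labels = (labeled[:, 0] > 0).astype(int)     # stand-in expert verdicts
    pool = rng.normal(size=(200, 3))             # unlabeled candidate links
    print(most_uncertain_links(labeled, labels, pool))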

    User-centric knowledge extraction and maintenance

    An ontology is a machine-readable knowledge collection. A great deal of information is available only in a form intended for human consumption, so large general-knowledge ontologies are typically generated by tapping into this source with imperfect automatic extraction approaches that translate human-readable text into machine-readable semantic knowledge. This thesis provides methods for user-driven ontology generation and maintenance. In particular, this work consists of three main contributions: 1. An interactive, human-supported extraction tool, LUKe. The system extends an automatic extraction framework to integrate human feedback on extraction decisions and extracted information at multiple levels. 2. A document retrieval approach based on semantic statements, S3K. One application is retrieving documents that support extracted information in order to verify its correctness; another, in combination with an extraction system, is fact-based indexing of a document corpus that allows statement-based document retrieval. 3. A method for similarity-based ontology navigation, QBEES. The approach enables search by example: given a set of semantic entities, it returns the most similar entities with respect to their semantic properties, considering different aspects. All three components are integrated into a modular architecture that also provides an interface for third-party components.
    (German abstract, translated:) An ontology is a knowledge collection in machine-readable form. Since a wide range of information is available only in natural language, machine-readable ontologies are frequently produced by imperfect automatic methods that translate it into a machine-readable representation. This thesis presents methods for human support of the extraction process and for maintenance of the resulting knowledge bases, making three contributions: 1. First, an interactive extraction tool (LUKe) is presented; an existing extraction system is extended to integrate user corrections at different levels of the extraction and adapted to an example scenario. 2. Second, an approach (S3K) for document retrieval based on factual statements is described; it enables statement-based search for supporting passages, or for further information connected to those statements, in the document collections underlying the knowledge base. 3. Finally, QBEES, a similarity search over ontologies, is presented; QBEES enables the search for, and recommendation of, similar entities on the basis of the semantic properties they share with a set of entities given as an example. All individual components are, moreover, integrated into a modular overall architecture.
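
    QBEES-style search by example can be pictured as ranking entities by how many of the properties shared by the example entities they also carry. The following Python sketch is a deliberately simplified stand-in for that idea; the toy catalog, the scoring, and the function name are invented for illustration and are not the QBEES algorithm:

    # Hypothetical sketch of search by example over entity property sets, in the spirit
    # of (but not identical to) QBEES: rank candidates by the properties they share with
    # what the example entities have in common.

    def rank_by_example(examples, catalog, top=3):
        """Score each non-example entity by its overlap with the examples' common properties."""
        common = set.intersection(*(catalog[e] for e in examples))
        scores = {c: len(common & props) / len(common or {None})
                  for c, props in catalog.items() if c not in examples}
        return sorted(scores, key=scores.get, reverse=True)[:top]

    catalog = {
        "Marie Curie":     {"physicist", "chemist", "Nobel laureate", "person"},
        "Albert Einstein": {"physicist", "Nobel laureate", "person"},
        "Niels Bohr":      {"physicist", "Nobel laureate", "person"},
        "Leo Tolstoy":     {"writer", "person"},
    }
    print(rank_by_example(["Marie Curie", "Albert Einstein"], catalog))
    # ['Niels Bohr', 'Leo Tolstoy'] -- Bohr matches every aspect the examples share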

    Skyline Queries over Incomplete Data - Error Models for Focused Crowd-Sourcing

    No full text

    Advanced Location-Based Technologies and Services

    Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address recent advancements.

    Artificial intelligence and machine learning: current applications in real estate

    Thesis: S.M. in Real Estate Development, Massachusetts Institute of Technology, Program in Real Estate Development in conjunction with the Center for Real Estate, 2018. By Jennifer Conway. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 113-117).
    Real estate meets machine learning: real contribution or just hype? Creating and managing the built environment is a complicated task fraught with difficult decisions, challenging relationships, and a multitude of variables. Today's technology experts are building computers and software that can help resolve many of these challenges, some of them using what is broadly called artificial intelligence and machine learning. This thesis will define machine learning and artificial intelligence for the investor and real estate audience, examine the ways in which these new analytic, predictive, and automating technologies are being used in the real estate industry, and postulate potential future applications and associated challenges. Machine learning and artificial intelligence can and will be used to facilitate real estate investment in myriad ways, spanning all aspects of the real estate profession -- from property management, to investment decisions, to development processes -- transforming real estate into a more efficient and data-driven industry.

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of such manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold evolves through time, joint spatio-temporal modelling is needed in order to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a supernova.
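
    A minimal way to picture a first-order Markovian propagation of a spatial probabilistic model is to fit a Gaussian mixture to the particles of one snapshot and warm-start the fit for the next snapshot from it. The sketch below shows that idea with scikit-learn; the synthetic "particles", component count, and function name are assumptions for illustration, not the paper's model:

    # Hypothetical sketch of the general idea (not the paper's model): fit a Gaussian
    # mixture to the particles of one snapshot, then initialize the next snapshot's fit
    # from it, so the spatial model evolves as a first-order Markov process in time.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def propagate(snapshots, n_components=4):
        """Fit each time step initialized from the previous step's fitted mixture."""
        models = []
        for t, points in enumerate(snapshots):
            if t == 0:
                gmm = GaussianMixture(n_components=n_components, random_state=0)
            else:
                prev = models[-1]                 # previous snapshot's fitted model
                gmm = GaussianMixture(n_components=n_components,
                                      weights_init=prev.weights_,
                                      means_init=prev.means_,
                                      precisions_init=prev.precisions_)
            models.append(gmm.fit(points))
        return models

    rng = np.random.default_rng(1)
    snapshots = [rng.normal(loc=0.1 * t, size=(500, 3)) for t in range(5)]  # stand-in particles
    print(propagate(snapshots)[-1].means_)        # component centres after tracking the drift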